English Abstract
Many contests are held annually in hospitals, with summed scores used directly to rank examinees' performances. In contrast to classical test theory, this study applied Rasch (1960) analysis with the software Facets (Linacre, 2006) to a contest held to select the best head nurse, in order to detect unfair judges, to examine whether items and judges met the requirement of a unidimensional construct, and to estimate examinees' abilities after removing misfitting items and unfair judges. The annual contest to select the best head nurse was held at an academic medical center in spring 2006. Four nurse superintendents, serving as judges, appraised 10 candidates on 7 performance items using forced-ranking scores, coded 9 for the most excellent and 0 for the poorest. Five steps were carried out: (1) examining how well the data fit the Rasch model, (2) evaluating the impact of removing unexpected data, (3) distinguishing changes before and after using the fitted data, (4) detecting underlying latent characteristics of the awarded head nurse, and (5) summarizing the results of the achievement assessments. The results showed that (1) item 7 (whether the candidate fully executes superintendents' occasional tasks and commands) and judge 3 should be removed from the assessment because they misfit the Rasch model's expectations; (2) candidates 5 and 10 performed excellently in management and education respectively, candidate 4 performed poorest in communication and coordination, and judge 3's scoring distorted the data; and (3) items related to leadership and education were differentiated at a statistically significant level by linear regression, suggesting that would-be candidates pursuing the title of best head nurse in the following year should place further emphasis on these areas.
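Since Facets implements many-facet Rasch measurement, the analysis presumably rests on the standard many-facet rating model; a minimal sketch of that formulation follows, with the symbols B_n, D_i, C_j, and F_k introduced here purely for illustration rather than taken from the original abstract:

\[
\log\!\left(\frac{P_{nijk}}{P_{nij(k-1)}}\right) = B_n - D_i - C_j - F_k
\]

where P_{nijk} is the probability that candidate n is rated in category k rather than k-1 by judge j on item i, B_n is the candidate's ability, D_i the item's difficulty, C_j the judge's severity, and F_k the threshold between adjacent rating categories. Under this sketch, fit statistics (for example, infit and outfit mean squares reported by Facets) would flag items and judges whose observed ratings depart from the model's expectations, which is presumably the basis on which item 7 and judge 3 were removed.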