English Abstract
This article reports on the reliability of information literacy portfolio assessment in the primary grades. Three central questions were examined: (1) How reliable are the total scores of the information portfolios? (2) How reliable are the scores for the Big Six Skills? (3) What are the probable sources of unreliability; in particular, do teachers who rate their own students' work systematically score differently from outside raters? The framework of the information portfolio was based on the Big Six Skills approach. The assessment was based on a sample of approximately 70 portfolios integrating three learning areas for students in Grades 3 and 5: language arts, mathematics, and science and technology. Teachers were required to collect each student's portfolio over three semesters. The portfolio products in each learning area were rated by three raters: the student's classroom teacher, a teacher from the same primary school, and an external reviewer. Results indicated that most of the generalizability coefficients for total scores were above 0.75 across learning areas and semesters with a single rater. The dependability coefficients showed a similar pattern but were slightly lower in magnitude. Variance in the scores for each processing dimension was attributable to individual differences, the processing dimension, the interaction of individual differences and dimension, and the interaction of individual differences, dimension, and rater. The generalizability coefficients for the six-dimension task were around 0.5 to 0.6; these results suggest that instructional planning based on the score for a specific information skill should be questioned. In general, the correlations between the two school-based teachers were higher than those between classroom teachers and external raters. Classroom teachers tended to rate student portfolios higher than external raters did; however, these differences were small. This study shows that, given a solidly structured portfolio, teachers can reliably rate their own students' work.
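For readers less familiar with generalizability theory, the two reliability indices reported above can be sketched with the standard formulas for a persons-by-raters (p × r) design; this is a generic G-theory formulation under assumed variance-component notation, not an equation taken from the article itself:

\[
  E\rho^2 \;=\; \frac{\sigma^2_p}{\sigma^2_p + \dfrac{\sigma^2_{pr}}{n_r}},
  \qquad
  \Phi \;=\; \frac{\sigma^2_p}{\sigma^2_p + \dfrac{\sigma^2_r + \sigma^2_{pr}}{n_r}}
\]

Here \(\sigma^2_p\) is the variance among students (persons), \(\sigma^2_r\) the rater main-effect variance, \(\sigma^2_{pr}\) the person-by-rater interaction (plus residual) variance, and \(n_r\) the number of raters (one, in the single-rater results quoted above). Because the dependability coefficient \(\Phi\) also charges the rater main effect \(\sigma^2_r\) against the score, it can never exceed the generalizability coefficient \(E\rho^2\), which is consistent with the study's finding that the dependability coefficients followed a similar pattern at slightly lower values.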