Document Type

Article

Publication Date

2013

Abstract

Assessments have been used pervasively in classrooms by educators and researchers for many purposes, including the detection and study of the effects of educational innovations. Scholars have recurrently raised a concern about the interpretation and use of assessment scores: Are assessments sensitive enough to detect differences in student achievement? In this paper we examined the instructional sensitivity of two assessment item formats, open‐ended and multiple‐choice, to answer the research question: Do multiple‐choice and open‐ended formats differ in their instructional sensitivity? Using a pretest–posttest design with booklets containing pairs of items in the two formats, science items varying in instructional sensitivity were administered to 427 students taught by nineteen fifth‐grade teachers. We applied item‐specific coding systems to score student responses to the open‐ended items. Comparing the scores of the two item formats for seven items, preliminary results indicate that the psychometric indicators of instructional sensitivity based on open‐ended items were not comparable to those based on multiple‐choice items. In addition, students performed significantly worse on open‐ended items than on multiple‐choice items, showing a lack of understanding or incorrect understanding.
