There’s a lot of confusion among policy makers, educators, and the general public about states’ current options for their K–12 accountability assessment programs. Why? To explain the background and the reasons for that confusion, I recently published a paper called “Proficient, Eligible to Graduate, College-Ready? The mystery of achievement-level assessment results.” Here’s a quick summary of that paper.
Despite advances in computer-based and automated scoring of student work on academic assessments, large-scale testing programs (e.g., state educational assessments) still need humans to score students’ responses to higher-order, constructed-response test questions and performance tasks. Every few years, questions arise about the qualifications of the people hired to do this scoring. Testing companies typically hire thousands of temporary staff for the task, most through temp agencies. The job of these seasonal workers is to view images of student responses and assign a score to each.
Research spanning the last three decades has repeatedly shown that the nature of high-stakes accountability testing affects instruction. In the late 1980s and early 1990s, we learned that efficient (predominantly selected-response) external tests served as models for local testing and led to a narrowing of curriculum and instruction—an emphasis on a few school subjects and on low-level knowledge and skills. The authentic assessment era of the ’90s taught us a lot about the dos and don’ts of less efficient performance assessments. Unfortunately, that era was short-lived: efficiency again became a priority because of the volume of testing and reporting requirements associated with NCLB, and many states significantly reduced or dropped their non-multiple-choice assessment components.