
Assessment Insights

Are We Ignoring the Impact of High-Stakes Testing Again? Maybe We Can Get It Right in Science.

August 2015 / by Dr. Stuart Kahl


Research spanning the last three decades has repeatedly shown that the nature of high-stakes accountability testing impacts instruction. In the late 1980s and early 1990s, we learned that efficient (predominantly selected-response) external tests served as models for local testing and led to a narrowing of curriculum and instruction—an emphasis on a few school subjects and on low-level knowledge and skills. The authentic assessment era of the ’90s taught us a lot about the dos and don’ts of the less efficient performance assessments. Unfortunately, that era was short-lived as efficiency again became a priority because of the volume of testing and reporting requirements associated with NCLB. Many states significantly reduced or dropped their non-multiple-choice assessment components.

Tug-of-War over Performance Assessment

More recently, many have become concerned about the relatively poor performance of U.S. students on international tests and about the college and career readiness of our high school graduates. These issues have rekindled our interest in performance assessments that require students to demonstrate deeper learning and critical thinking. At the same time, interestingly, we are being pulled in the opposite direction by competing concerns about excessive testing time and over-testing of American students. Both of the major state assessment consortia, PARCC® and Smarter Balanced®, have retreated somewhat from their original plans for performance assessment, at least in part for those very reasons. Thus, our “next-generation” English language arts and mathematics assessments may not be so “next-generation” after all, except for their greater use of technology. They are still efficient tests with fairly restricted on-demand (as opposed to extended) performance components. And we’re even seeing negative reactions to these scaled-back tests among educators and non-educators, fueled not just by concerns about testing time, but also by politically charged views of the Common Core State Standards they measure.

Using Curriculum-Embedded Performance Assessments

My ideal accountability assessment system would involve curriculum-embedded performance assessments (CEPAs) and efficient, relatively short, end-of-year summative tests. A CEPA is an instructional unit consisting of a sequence of activities, some yielding student work demonstrating foundational knowledge for purposes of formative assessment, and some leading to scorable student work demonstrating deeper learning for summative purposes. Check out “Re-Balancing Assessment” by Hofman, Goodwin, and Kahl (2015) for ideas about how the two-component system could work.

Let Science Lead the Way

From our experience in performance assessment, we know that state involvement in the development of a locally administered CEPA component can ensure high-quality performance tasks, we know how to ensure consistency of scoring by multiple means, and we know how to produce and report results quickly. However, if current concerns and limitations mean that such a system is less likely to emerge soon in English language arts and mathematics, then perhaps the greater flexibility states have in science can enable new science assessments addressing the Next Generation Science Standards (NGSS*) to show the way. These standards, like the Common Core, lend themselves to project-based learning and performance assessment.

The NGSS integration of Disciplinary Core Ideas, Science and Engineering Practices, and Crosscutting Concepts, as well as effective STEM programs addressing these dimensions, reflects a commitment to deeper learning without shortchanging important foundational knowledge and skills. Along those lines, a locally administered CEPA could have students go online to learn the basics of heat transfer; work in teams to design, conduct, and report on an empirical investigation to determine which of two fabrics would be the better protection against the winter cold; and independently write an essay on how a home heating system relies on conduction, convection, and radiation to work.

Balancing Instructional Practice and Accountability

A few CEPAs like this, presented during the course of the year, would produce good evidence of deeper learning that, in combination with results of a brief end-of-year summative test, could satisfy accountability requirements and constitute good instructional practice. Here’s hoping that we take this opportunity to get it right, to strike a better balance in accountability assessment in the interest of influencing instruction that truly reflects the new standards.


*NGSS is a registered trademark of Achieve. Neither Achieve nor the lead states and partners that developed the Next Generation Science Standards were involved in the production of this product, nor do they endorse it.

PARCC® is a registered mark of Parcc, Inc.

Smarter Balanced® is a registered trademark of the Smarter Balanced Assessment Consortium.

Topics: NGSS, Accountability


Written by Dr. Stuart Kahl

As founder of Measured Progress, Dr. Stuart Kahl contributes regularly to the thought leadership of the assessment community. In recent years, his particular interests have included formative assessment, curriculum-embedded performance assessment, and new models for accountability assessment programs. The Association of Test Publishers (ATP) awarded Dr. Kahl the 2010 ATP Award for Professional Contributions and Service to Testing. He regularly publishes research papers and commentaries introducing and analyzing current issues and trends in education, speaks frequently at industry conferences, and serves as a technical consultant to various education agencies.