Delivering high-quality assessments that provide evidence of student understanding can be a challenge, especially when time and resources are stretched thin. Accurate and relevant assessments that help inform future instruction require a rigorous development process with several levels of review, but often district staff members simply don’t have that time and expertise. So, can you ensure quality assessments—even if they’re created quickly, with limited resources?
The answer is YES. When you boil it down, there are really five main features to keep in mind when building assessments.

1. Purpose of the test. We know that assessments drive decision making and that ultimately, when used at the right time and for the right purpose, assessments create a positive impact on student learning. But not all tests accomplish the same goals. Different assessments are designed to address different purposes and to support distinct types of decisions. A test that doesn’t return the information you need may not be a poor test; it may be the wrong test. Each component of a balanced assessment system serves a specific role in informing instruction and/or policy. How do you determine the best test for your needs? Check out the infographic.
2. Type of results you need. Once you’ve determined the purpose of the assessment, consider what decisions you want to make based on the data you receive. This will help you define your reporting categories, which in turn help define the test design.
3. Item types. Each item type—selected-response, constructed-response, extended-response, technology-enhanced, performance task—elicits a different kind of response from the student, and different item types support different levels of cognitive complexity. Within the constraints of your test, try to include item types that offer a variety of response modes and a range of complexity.
4. Item validity. It may sound circular, but an assessment needs to measure what it intends to measure for its stated purpose. The validity of an item refers to the extent to which the item demonstrably does so. Valid items created following evidence-centered design principles align accurately with the designated standards and elicit meaningful demonstrations of your students’ knowledge and skills.
5. Equity. Equity, or fairness, can be broken down into three different categories: cultural sensitivity, bias, and accessibility. High-quality items undergo multiple reviews for bias, sensitivity, and accessibility during the development process.
- Cultural sensitivity refers to the awareness that certain topics or objects are perceived and valued differently in different cultures.
- Bias refers to an unintended advantage or disadvantage to a group or groups of students based on construct-irrelevant features that can affect students’ responses to an item. An item may be biased if its content matter caters to or assumes familiarity with an experience or environment not equally familiar to all groups. A good item is free of bias.
- Accessibility means that an assessment evaluates actual learning outcomes rather than the speed, manual dexterity, vision, or hearing of the learner. An accessibility review and accessibility tools help ensure that students don’t encounter obstacles to demonstrating their knowledge.
If all of this seems a little overwhelming, don’t worry. Measured Progress can help! Experts from the Assessment Services team can work with you and your staff to create assessments that include the features described above, and then some. Here are a few ways the Assessment Services team can work with your district:
- Build test blueprints and select items for benchmark tests aligned to a district’s curriculum scope and sequence
- Design performance-based assessments that provide students with the opportunity to more authentically demonstrate learning of multidimensional standards
- Create customized pre- and post-tests for use as common district assessments
- Develop computer- and paper-based interim assessments to monitor student learning