| Title: |
The Sensitivity of Value-Added Estimates to Test Scoring Decisions |
| Language: |
English |
| Authors: |
Joshua B. Gilbert (ORCID 0000-0003-3496-2710); James G. Soland; Benjamin W. Domingue |
| Source: |
Educational Measurement: Issues and Practice. 2026 45(1). |
| Availability: |
Wiley. Available from: John Wiley & Sons, Inc. 111 River Street, Hoboken, NJ 07030. Tel: 800-835-6770; e-mail: cs-journals@wiley.com; Web site: https://www.wiley.com/en-us |
| Peer Reviewed: |
Y |
| Page Count: |
12 |
| Publication Date: |
2026 |
| Document Type: |
Journal Articles; Reports - Research |
| Descriptors: |
Value Added Models; Tests; Scoring; Item Response Theory; Computation; Error of Measurement |
| DOI: |
10.1111/emip.70011 |
| ISSN: |
0731-1745; 1745-3992 |
| Abstract: |
Value-added models (VAMs) are both common and controversial in education policy and accountability research. While the sensitivity of VAM results to model specification and covariate selection is well documented, the extent to which test scoring methods (e.g., mean scores vs. item response theory (IRT)-based scores) may affect value-added (VA) estimates is less studied. We examine the sensitivity of VA estimates to the scoring method using empirical item response data from 18 education datasets. We find that VA estimates can be sensitive to the choice of scoring method, holding students and items constant. While the various test scores are highly correlated, on average, using different scoring approaches leads to variation in VA percentile ranks of over 20 points, and more than 50% of teachers or schools are classified in multiple quartiles of the VA distribution. Dispersion in VA ranks is reduced with more complete item response data. Our findings suggest that consideration of both measurement error and model uncertainty is important for the appropriate interpretation of VAMs.
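The mechanism the abstract describes can be illustrated with a minimal sketch: two scoring rules applied to the exact same item responses can reorder aggregate rankings. This is not the authors' code or data; the responses, teacher labels, and item weights below are invented, and the weighted score is only a stand-in for an IRT-based score in which item discriminations act as weights.

```python
# Illustrative sketch (invented data): same students and items,
# two scoring rules, different teacher rankings.

# Item responses (1 = correct) for students of three hypothetical teachers.
responses = {
    "Teacher A": [[1, 1, 0, 0], [1, 1, 0, 0]],
    "Teacher B": [[0, 0, 1, 0], [0, 0, 1, 1]],
    "Teacher C": [[1, 0, 0, 0], [0, 1, 0, 0]],
}

def mean_score(pattern):
    """Scoring rule 1: mean score, all items weighted equally."""
    return sum(pattern) / len(pattern)

# Hypothetical item weights standing in for IRT discrimination parameters:
# later items count more, as a harder/more discriminating item might.
WEIGHTS = [0.5, 1.0, 2.0, 1.5]

def weighted_score(pattern):
    """Scoring rule 2: discrimination-weighted score (IRT-like proxy)."""
    return sum(w * x for w, x in zip(WEIGHTS, pattern)) / sum(WEIGHTS)

def teacher_ranking(score_fn):
    """Rank teachers by their students' average score, best first."""
    avgs = {t: sum(map(score_fn, pats)) / len(pats)
            for t, pats in responses.items()}
    return sorted(avgs, key=avgs.get, reverse=True)

print(teacher_ranking(mean_score))      # ['Teacher A', 'Teacher B', 'Teacher C']
print(teacher_ranking(weighted_score))  # ['Teacher B', 'Teacher A', 'Teacher C']
```

Teacher A's students answer only the low-weight items correctly, so A leads under mean scoring but falls behind B once the weighted rule is applied, mirroring the abstract's point that rank dispersion can arise from the scoring decision alone.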
| Abstractor: |
As Provided |
| Notes: |
https://doi.org/10.7910/DVN/AJDUN2 |
| Entry Date: |
2026 |
| Accession Number: |
EJ1498357 |
| Database: |
ERIC |