The impetus to adopt value-added assessment notwithstanding, critics of this approach make some good points. On a year-to-year basis the data can fluctuate widely. As the labor union-supported Economic Policy Institute reports, “One study found that across five large urban districts, among teachers who were ranked in the top 20% of effectiveness in the first year, fewer than a third were in that top group the next year, and another third moved all the way down to the bottom 40%.” Should administrators base high-stakes decisions such as hiring and firing on such unreliable data?
Furthermore, many teachers teach subjects that are not readily measured by standardized tests. Who’s to say that a student’s improved math scores shouldn’t be attributed to a challenging science teacher or that a student’s improved reading scores weren’t spurred by an engaging history teacher?
BYU economics professor Lars Lefgren’s research finds that most measurable teacher effects on test scores disappear rather quickly and that what parents most want are teachers who increase students’ satisfaction and excitement at learning.
Finally, value-added measurements raise the whole issue of whether test scores truly measure learning or just a teacher’s effectiveness in “teaching to the test.”
Supporters of value-added assessment have at least partial answers to these criticisms:
- Test scores aren’t a perfect measure, but they are the best measure we have of whether students are mastering the basic skills required in reading and mathematics.
- The data is most unreliable in the middle of the spectrum; it more accurately identifies the truly outstanding or truly terrible teachers at either end of the curve, especially when multi-year data is used.
- Most value-added supporters in fact advocate employing three-year moving averages, rather than year-to-year comparisons, and most also favor looking more leniently at the results for new teachers, many of whom voluntarily leave the profession if they feel unsuccessful.
- The demonstrated statistical relationship between value-added scores and other, more intensive and much more expensive methods of assessing teachers, while not as high as one might wish, is comparable to the correlation between SAT and ACT scores and college success, and to performance-based assessments in occupations as far-ranging as insurance sales, surgery, and professional baseball.
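The three-year moving average that supporters advocate can be illustrated with a short sketch. The function name and the sample scores below are hypothetical, used only to show how multi-year averaging smooths out single-year swings:

```python
def three_year_moving_average(scores):
    """Average each consecutive three-year window of yearly scores.

    `scores` is a list of a teacher's yearly value-added scores
    (hypothetical values). A rating for year N is the mean of years
    N-2, N-1, and N, which damps one-year fluctuations like those
    the Economic Policy Institute study describes.
    """
    return [sum(scores[i - 2:i + 1]) / 3 for i in range(2, len(scores))]

# Hypothetical yearly scores for one teacher: volatile year to year,
# but the three-year averages vary much less.
yearly = [0.9, 0.2, 0.7, 0.4, 0.6]
print(three_year_moving_average(yearly))
```

A teacher whose raw yearly scores swing between the top and bottom quintiles can still show a fairly stable multi-year average, which is why supporters prefer it to year-to-year comparisons.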
Moreover, most proponents argue that value-added data should be just one of the criteria used for evaluating teachers. Even when then-Washington, D.C., schools chancellor Michelle Rhee generated controversy by using this data to fire ineffective teachers, value-added measurements accounted for only 50 percent of the evaluation score.
Finally, it is possible to separate the issues of using value-added measurements for internal assessment and publishing this data for general consumption. (For a good summary of these arguments see the Brookings report cited above.)