Value-added teacher assessments: Part 2

Here in Utah, the State Office of Education has compiled the data it would need for value-added analysis, but it is not publishing this information, according to the Salt Lake Tribune, “because of the cost involved in purchasing an expanded software license.” The same article notes that value-added scores are “not being used to rate educators” and adds that “the state office has also stopped publishing annual reports of state test scores by school after lawmakers scrapped the U-PASS accountability system.”

Given the strain on Utah’s education budget, this may reflect a reasonable choice about how best to allocate scarce resources – or it may provide a convenient excuse to escape the furor that the LA Times publication decision ignited. At any rate, if we as parents, students, teachers and citizens are going to educate ourselves, we need to know what all the fuss is about.

So what is “value-added” assessment, anyway?

The answer can get quite technical – technical beyond my statistical understanding, for that matter. (See The Washington Post for a good, non-technical article about disputes over statistical methods. Those of you who live to interpret standard deviations and correlation coefficients might check out the Rand Corporation’s 2003 report, Evaluating Value-Added Models for Teacher Accountability.)

The Brookings Institution, a centrist think tank (although I should add, in the interests of full disclosure, that its scholars generally support using value-added measurements to assess teacher performance), provides the following reasonably user-friendly explanation of what is in fact a fairly complicated statistical technique:

“The latest generation of teacher evaluation systems seeks to incorporate information on the value-added by individual teachers to the achievement of their students. The teacher’s contribution can be estimated in a variety of ways, but typically entails some variant of subtracting the achievement test score of a teacher’s students at the beginning of the year from their score at the end of the year, and making statistical adjustments to account for differences in student learning that might result from student background or school-wide factors outside the teacher’s control. These adjusted gains in student achievement are compared across teachers.”
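
To make that description concrete, here is a minimal sketch in Python of one common variant. Everything in it is my own illustration, not any district’s actual model: the tiny made-up dataset, the single “low income” background factor, and the choice of ordinary least squares are all assumptions for demonstration purposes.

```python
# A minimal, illustrative value-added sketch (hypothetical data and model).
# Idea: predict each student's year-end score from the year-start score and
# a background factor, then treat the average leftover (residual) gain of a
# teacher's students as that teacher's estimated "value added."
import numpy as np

# Hypothetical students: fall score, spring score, background, teacher.
pre        = np.array([410., 520., 480., 600., 450., 530.])
post       = np.array([455., 540., 530., 640., 470., 590.])
low_income = np.array([1., 0., 1., 0., 1., 0.])
teacher    = np.array(["A", "A", "B", "B", "C", "C"])

# Fit post = b0 + b1*pre + b2*low_income by ordinary least squares.
X = np.column_stack([np.ones_like(pre), pre, low_income])
coef, *_ = np.linalg.lstsq(X, post, rcond=None)

# A student's residual is achievement beyond what the model predicts.
residual = post - X @ coef

# A teacher's value-added estimate: the mean residual of his or her students.
for t in sorted(set(teacher)):
    print(t, round(float(residual[teacher == t].mean()), 1))
```

Real systems control for many more factors and statistically shrink estimates built on small classes toward the average; those modeling choices are exactly where the disputes described in the Washington Post article arise.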

A number of converging forces have pushed value-added assessment to the top of the education reform agenda.

  • Multiple recent studies have demonstrated that teacher effectiveness is the most important determinant of student success *that is under the school system’s control*. The italics are deliberate. Frustrated teachers will point out that poverty, family disintegration, and the distraction of television, the Internet, and video games all contribute mightily to student failure. Fair enough. But teachers still matter. Indeed, as the Center for American Progress reports, “children from low-income families and children of color are disproportionately assigned to the least effective teachers, a finding that helps explain yawning gaps between average educational outcomes of groups defined by family income or ethnicity.”
  • At the same time, most current teacher evaluation systems produce essentially meaningless results. The New Teacher Project studied teacher evaluation systems in 12 districts across four states (Arkansas, Colorado, Illinois and Ohio) that ranged in “size, geographic location, evaluation policies and practices and overall approach to teacher management.” The study found that “in districts that use binary evaluation ratings (generally ‘satisfactory’ or ‘unsatisfactory’), more than 99 percent of teachers receive the satisfactory rating. Districts that use a broader range of rating options do little better; in these districts, 94 percent of teachers receive one of the top two ratings and less than 1 percent are rated unsatisfactory.”
  • States, in the throes of a budget crisis, laid off more than 70,000 teachers this fall, and as federal stimulus money ends, more teachers will likely face dismissal. Which ones? According to a report from the conservative American Enterprise Institute, “all of the 75 largest school districts in the nation use seniority as a factor in layoff decisions, and seniority is the sole factor in over 70 percent of these districts.” Yet a study that simulated the difference between using seniority and using value-added assessments for layoff decisions in Washington state (in other words, laying off the allegedly least effective teachers rather than the newest hires) found that “36% of those teachers who actually received layoff notices were estimated to be more effective than the average teacher who did not.” The study estimates that this difference in effectiveness costs students “between one-fifth and one-half of a school year or 2 to 4 months of student learning.” (A rough simulation of the seniority comparison appears just after this list.)
  • Finally, the data is now available. Whatever one thinks of No Child Left Behind — and there is plenty to criticize — it did require schools to collect a wealth of new information about both individual student and school performance.
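
Since the seniority-versus-effectiveness comparison above is the easiest of these points to misread, here is a rough simulation of it. The numbers are entirely hypothetical: I assume 1,000 teachers, 100 layoffs, and, for simplicity, that effectiveness is unrelated to seniority, none of which comes from the Washington study itself.

```python
# Illustrative only: seniority-based layoffs vs. teacher effectiveness.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_layoffs = 1000, 100

seniority     = rng.permutation(n_teachers)       # years of service
effectiveness = rng.normal(0.0, 1.0, n_teachers)  # value-added style score

# Seniority policy: lay off the most recently hired teachers.
laid_off = np.argsort(seniority)[:n_layoffs]
kept     = np.setdiff1d(np.arange(n_teachers), laid_off)

# Share of laid-off teachers more effective than the average retained teacher.
share = (effectiveness[laid_off] > effectiveness[kept].mean()).mean()
print(f"{share:.0%} of laid-off teachers beat the retained average")
```

Under these deliberately simple assumptions, roughly half of the laid-off teachers outperform the retained average; the Washington study’s 36 percent figure suggests that newer teachers are somewhat less effective on average, but far from uniformly so.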

So what do you think? We encourage responses from teachers, parents, and administrators — both those of you with a technical or professional background in this area and those who simply have something to say.

In my next post, I will look at what critics of value-added assessment say.

I can be contacted at MMcConnell@desnews.com.
