If there is anything approaching consensus in the contentious education reform debate, it’s that good teaching makes a significant, measurable difference in student performance. Yes, many teachers will argue that good parents make a bigger difference, and they may be right. But that doesn’t change the reality that holding all other factors constant – including uninvolved parents – good teachers add significantly more to student learning than poor teachers.
So how can we help teachers teach better?
Note how I phrase the question. So many discussions of teacher evaluation seem to circle around how we should train, hire, and fire teachers. I’d like to see changes in all three policies, but it still seems to me that the most important use of teacher evaluation is to help the teachers we already have improve.
So I was pleased that a just-published scholarly analysis suggests that well-designed evaluation systems can help mid-career teachers (whom some education reformers view as hopeless causes) improve their teaching and their students’ outcomes.
The study focused on Cincinnati’s Teacher Evaluation System. Here’s how the authors describe this approach:
During the yearlong TES process, teachers are typically observed in the classroom and scored four times: three times by an assigned peer evaluator—a high-performing, experienced teacher who previously taught in a different school in the district—and once by the principal or another school administrator. Teachers are informed of the week during which the first observation will occur, with all other observations unannounced. Owing mostly to cost, tenured teachers are typically evaluated only once every five years.
The evaluation measures dozens of specific skills and practices covering classroom management, instruction, content knowledge, and planning, among other topics. Evaluators use a scoring rubric based on Charlotte Danielson’s Enhancing Professional Practice: A Framework for Teaching, which describes performance of each skill and practice at four levels: “Distinguished,” “Proficient,” “Basic,” and “Unsatisfactory.” (See Table 1 for a sample standard.)
Both the peer evaluators and administrators complete an intensive TES training course and must accurately score videotaped teaching examples. After each classroom observation, peer evaluators and administrators provide written feedback to the teacher and meet with the teacher at least once to discuss the results. At the end of the evaluation school year, a final summative score in each of four domains of practice is calculated and presented to the evaluated teacher. Only these final scores carry explicit consequences. For beginning teachers (those evaluated in their first and fourth years), a poor evaluation could result in nonrenewal of their contract, while a successful evaluation is required before receiving tenure. For tenured teachers, evaluation scores determine eligibility for some promotions or additional tenure protection, or, in the case of very low scores, placement in a peer assistance program with a small risk of termination.
And the results?
We find suggestive evidence that the effectiveness of individual teachers improves during the school year when they are evaluated. Specifically, the average teacher’s students score 0.05 standard deviations higher on end-of-year math tests during the evaluation year than in previous years, although this result is not consistently statistically significant across our different specifications.
These improvements persist and, in fact, increase in the years after evaluation (see Figure 1). We estimate that the average teacher’s students score 0.11 standard deviations higher in years after the teacher has undergone an evaluation compared to how her students scored in the years before her evaluation. To get a sense of the magnitude of this impact, consider two students taught by the same teacher in different years who both begin the year at the 50th percentile of math achievement. The student taught after the teacher went through the TES process would score about 4.5 percentile points higher at the end of the year than the student taught before the teacher went through the evaluation.
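As a sanity check on that arithmetic (my own illustration, not part of the study): if test scores are roughly normally distributed, a 0.11 standard-deviation gain moves a student at the 50th percentile up by about 4.4 percentile points, which lines up with the authors' "about 4.5" figure. A minimal sketch using the standard normal CDF:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard normal cumulative distribution function via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

effect_sd = 0.11                      # estimated post-evaluation gain, in standard deviations
start = normal_cdf(0.0) * 100        # student begins at the 50th percentile
end = normal_cdf(effect_sd) * 100    # same student shifted up by 0.11 SD

print(f"{start:.0f}th percentile -> {end:.1f}th percentile "
      f"(gain of {end - start:.1f} points)")
```

The small gap between 4.4 and the reported 4.5 presumably reflects rounding or a non-normal score distribution in the actual data; the point is only that the order of magnitude checks out.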
Note that this evaluation system does not rely heavily on test-score data, but rather on classroom observation and evaluator feedback. (I think test-score data is a valuable additional tool, but this is NOT a test-based approach.) Most, but not all, of the evaluating is done by experienced teachers from OTHER schools, who can presumably be more objective. All evaluation visits but the first are unannounced. And the system costs a lot of money.
So what do you think?