Getting teacher evaluations right

This morning’s Deseret News ran a very interesting article, with the headline, “A better way to grade teachers: Grading on how teachers promote student learning rather than test scores.”

Actually, the headline is a little misleading, since test scores ARE included in the evaluation. To quote from the article:

According to preliminary findings released in January, the best way to predict whether a teacher’s future students will make academic gains is to combine test data from last year’s students with two different sources of information on teacher and student behavior in the classroom: rigorous classroom observations and student surveys. Teachers who scored well on all three measures were not simply “teaching to the test.”

http://www.deseretnews.com/article/765614274/A-better-way-to-grade-teachers-Grading-on-how-teachers-promote-student-learning-rather-than-test.html

“Scored”? I pulled up the study’s findings, and, sure enough, tests are still important in judging effectiveness. But the project does rely on more sophisticated tests.

To investigate validity, we use two types of achievement gains. Given concerns over the quality of current state tests, many have worried that those teachers who achieve large gains on the state tests are simply coaching children to take tests, not teaching the underlying concepts. For this reason, the MET project administered two assessments to supplement the state test results: the Balanced Assessment in Mathematics (BAM) and the open-ended version of the Stanford 9 (SAT9 OE) reading test. The BAM test was designed to measure students’ conceptual understanding of math topics. The open-ended version of the Stanford 9 provides a series of reading passages (like the existing state ELA tests) but, unlike most state tests, asks students to write short-answer responses to questions testing their comprehension, rather than asking them to choose an answer in multiple-choice format.

http://www.metproject.org/downloads/MET_Gathering_Feedback_Research_Paper.pdf

I’ve mentioned before that my experience teaching Advanced Placement classes has given me more respect for “teaching to the test” . . . when the test requires students to analyze documents, write a coherent essay, and master some genuine content.

But please understand that I’m not quibbling with the article, and that I hope you’ll read on. Because as the Deseret News reporter notes, good evaluations rely more heavily on two other measures: rigorous classroom observations and, drum roll please, STUDENT evaluations.

“Rigorous” observations translate into repeated observations (four times a year in Memphis, which is using this approach) by “different trained observers, including people with no personal relationship to the teacher.” Understand that this goes way beyond the mostly perfunctory in-house evaluations that teachers usually receive.

But what really intrigued me about this proposed new evaluation method is its third leg: student evaluations.

Again, from the Deseret News article:

Across the country, educators are hesitant about student surveys being used to formally evaluate their work. Tara Black, a first-grade teacher, says she uses surveys with her students all the time to find out what is working and what she needs to improve. She likes using surveys to improve her teaching, but she calls the idea of surveys being used to punish her “not fun.” Other teachers worry that student surveys could lead to popularity contests among teachers.

The trick, according to the MET project researchers, is to ask the right questions. Instead of eliciting students’ personal opinions of a teacher, the student survey they tested, called the TRIPOD Student Survey, focuses on specific, observable student and teacher behaviors within the classroom.

In Memphis, students will be surveyed twice a year with the TRIPOD survey.

I want to learn more about the survey methodology, but my own experience suggests that student evaluations can be very helpful.

One of the unexpected bonuses of teaching concurrent enrollment classes through Salt Lake Community College and Utah Valley University was that both institutions required students to evaluate my classes (anonymously). I learned a lot from these evaluations, and made some changes every year to reflect what I learned.

This may sound like a minor example, but several students commented that I spoke too quickly. I asked the class, flat out, if this was true. Yup, it was true. One of my braver students, a Hispanic young man who spoke good English but struggled with absorbing economics at McConnell speed, agreed to start giving me water-skiing signals. A thumb down meant I needed to slow down the boat. Thanks, Miguel!

I was also pleasantly surprised to learn that students valued (note that I say valued, not liked) my daily comprehension quizzes. One question asked them which elements of the class improved their learning, and an astonishing number identified these quizzes. This reinforced one of my strongest beliefs, which is that students don’t resent work if they think they’re actually learning something from it.

How these student evaluations work with, say, second graders, I’m far from certain. But obviously these researchers are trying to come up with, and then test-drive, some answers.

I’ve blogged several times about ways that Utah could most effectively deploy additional educational resources. Better tests and more rigorous evaluations won’t come cheap. I’m pretty sure they won’t be popular, either, especially when better tests reveal more discouraging, not more encouraging, results. (Read any student essays lately?)

Anyway, kudos to the Deseret News for publishing this interesting story.

One comment

  1. Dan Rothstein

    One of the best measures of effective teaching is the kind of questioning that is going on in the classroom. Part of that might focus on teacher questions, but it is important to also see if students are asking their own questions. In our book, Make Just One Change: Teach Students to Ask Their Own Questions (Harvard Education Press: 2011), we have reported on the transformational power of using simple methods to build students’ question-asking skills. Their questions should definitely be part of a more complete picture of assessing teacher effectiveness.
