Merry Christmas, everyone. I’ve finally got most of my leftovers simmering in soup pots, so it’s back to blogging.
I’ve written before about Utah’s decision to withdraw from the Smarter Balanced Assessment Consortium – a decision driven mostly by growing concern over the Common Core standards.
I had, and still have, mixed feelings about Utah’s abandonment of this project. Adopting the standards and abandoning the tests strikes me as a very odd compromise: why pursue curriculum changes that generate expense and bother without pursuing serious accountability? Moreover, the Consortium was supposedly developing more in-depth and meaningful tests, which in my view are sorely needed. Finally, a common test potentially offers economies of scale and comparability among states.
But the promise of more meaningful tests may be fading anyway, as the Consortium faces some testing realities. To use some of the economics lingo that I teach my students, the opportunity costs of testing are high. Economists measure the cost of a good or service by what we must forego, or give up, to acquire that good or service. Testing can help teachers tailor and improve instruction – but it also devours instructional time.
As Education Week reports,
A group that is developing tests for half the states in the nation has dramatically reduced the length of its assessment in a bid to balance the desire for a more meaningful and useful exam with concerns about the amount of time spent on testing.
The decision by the Smarter Balanced Assessment Consortium reflects months of conversation among its 25 state members and technical experts and carries heavy freight for millions of students, who will be tested in two years. The group is one of two state consortia crafting tests for the Common Core State Standards with $360 million in federal Race to the Top money.
From an original design that included multiple, lengthy performance tasks, the test has been revised to include only one such task in each subject—mathematics and English/language arts—and has been tightened in other ways, reducing its length by several hours.
The final blueprint of the assessment, approved by the consortium last month, now estimates it will take seven hours in grades 3-5, 7½ hours in grades 6-8, and 8½ hours in grade 11.
The article continues (I’m quoting at length since non-subscribers may not be able to gain access to the full article):
The evolution of the Smarter Balanced assessment showcases a persistent tension at the heart of the purpose of student testing, some experts say.
“Is it about getting data for instruction? Or is it about measuring the results of instruction? In a nutshell, that’s what this is all about,” said Douglas J. McRae, a retired test designer who helped shape California’s assessment system. “You cannot adequately serve both purposes with one test.”
That’s because the more-complex, nuanced items and tasks that make assessment a more valuable educational experience for students, and yield information detailed and meaningful enough to help educators adjust instruction to students’ needs, also make tests longer and more expensive, Mr. McRae and other experts said.
What Smarter Balanced did, he said, was to compromise on obtaining data to guide instruction in order to produce a test that measures the results of instruction. As a strong supporter of accountability, that’s an approach Mr. McRae supports. It’s also crucial to have data that guide day-to-day instruction, he said, but that should come from separate formative and interim tests.
Ouch. Surely we need tests both to measure whether students are learning – accountability – and to help teachers improve the effectiveness of instruction. I wonder, however, if we can rely on one big-ticket test to do both, especially when teachers may not receive information early enough to affect instruction (at least with this year’s students). Isn’t it possible that smaller, more frequent tests would be more helpful for instruction . . . while end-of-year tests would provide useful accountability measures? Just a thought.