Since I’ve frequently voiced qualified support for “data-driven” educational policy, I feel especially obliged to post Rick Hess’s warning that data is “no deus ex machina”:
Data expose inequities, create transparency, and help drive organizational improvement.
But something is amiss. Many educators regard talk of data-based decision-making as an external imposition, sensing new obligations and what they see as a push to narrow schooling to test scores and graduation rates. Districts remain hidebound and bureaucratic, with precious few looking like data-informed learning organizations. And the data—which are relatively crude, consisting mostly of reading and math scores—are unequal to the heavy weight they’re asked to bear.
Despite these challenges, enthusiasts continue to make sweeping claims about the restorative power of data. Too often, as we talk to policymakers, system leaders, funders, advocates, and vendors, we get a whiff of deus ex machina, the theatrical trick of having a god drop from the heavens to miraculously save the day. (The phrase’s literal meaning is “god from the machine.”) Like a Euripides tragedy in which an unforeseen development bails out the playwright who has written himself into a corner, would-be reformers too often suggest that this wonderful thing called “data” is going to resolve stubborn, long-standing problems.
He then offers up a brief history of education’s search for magic data. The many testing skeptics among my readers will especially like this bit:
Consider the IQ test, created to help sort new recruits mobilized for World War I. The U.S. government asked elite psychology professors to develop a system for gauging intelligence. In hindsight, some of the results were unreliable. In one analysis, testing expert H. H. Goddard identified 83 percent of Jews, 80 percent of Hungarians, and 79 percent of Italians as “feeble-minded” (Mathews, 2006). In one 1921 study, Harvard researcher Robert Yerkes concluded that “37 percent of whites and 89 percent of negroes” could be classified as “morons” (Gould, 1981, p. 227). Yerkes had no concerns about the results because the tests were “constructed and administered” to address potential biases and were “definitely known to measure native intellectual ability” (Graham, 2005, p. 48).
Statistical methods have improved since then, but I’m not sure the hubris has receded much. I still think that value-added data offers useful (though not sufficient) guidance for teachers, parents, administrators, and students . . . but I find Rick Hess’s warning useful all the same.