Data-driven education: some cautionary tales

Since I’ve frequently voiced qualified support for “data-driven” educational policy, I feel especially obliged to post Rick Hess’s warning that data is “no deus ex machina”:

Data expose inequities, create transparency, and help drive organizational improvement.
But something is amiss. Many educators regard talk of data-based decision-making as an external imposition, sensing new obligations and what they see as a push to narrow schooling to test scores and graduation rates. Districts remain hidebound and bureaucratic, with precious few looking like data-informed learning organizations. And the data—which are relatively crude, consisting mostly of reading and math scores—are unequal to the heavy weight they’re asked to bear.
Despite these challenges, enthusiasts continue to make sweeping claims about the restorative power of data. Too often, as we talk to policymakers, system leaders, funders, advocates, and vendors, we get a whiff of deus ex machina, the theatrical trick of having a god drop from the heavens to miraculously save the day. (The phrase’s literal meaning is “God in the machine.”) Like a Euripides tragedy in which an unforeseen development bails out the playwright who has written himself into a corner, would-be reformers too often suggest that this wonderful thing called “data” is going to resolve stubborn, long-standing problems.

http://www.aei.org/article/education/k-12/leadership/data-no-dues-ex-machina/

He then offers up a brief history of education’s search for magic data. The many testing skeptics among my readers will especially like this bit:

Consider the IQ test, created to help sort new recruits mobilized for World War I. The U.S. government asked elite psychology professors to develop a system for gauging intelligence. In hindsight, some of the results were unreliable. In one analysis, testing expert H. H. Goddard identified 83 percent of Jews, 80 percent of Hungarians, and 79 percent of Italians as “feeble-minded” (Mathews, 2006). In one 1921 study, Harvard researcher Robert Yerkes concluded that “37 percent of whites and 89 percent of negroes” could be classified as “morons” (Gould, 1981, p. 227). Yerkes had no concerns about the results because the tests were “constructed and administered” to address potential biases and were “definitely known to measure native intellectual ability” (Graham, 2005, p. 48).

Statistical methods have improved since then, but I’m not sure the hubris has receded much. I still think that value-added data offers useful (though not sufficient) guidance for teachers, parents, administrators, and students . . . but I find Rick Hess’s warning worth heeding.

3 comments

  1. Carolyn Sharette

    Data is like medical diagnostics – it doesn’t do anything to improve the condition; it can only inform us of the status of the “patient.” But it is still very important – vital, even. Granted, it is a tricky thing to know how to “read” data and to discern what it is really telling us. This is where education, as an “industry,” needs a great deal of advancement and could use some assistance.

    In my experience, and coming from the medical field (my first degree is in Nursing), education rarely attracts people who are excited about the scientific processes needed to gather valid data, who know how to read and analyze it, and who know how to make decisions based upon it. It would be helpful if we could borrow some great scientific minds skilled in this area and have them help out in the education arena.

    • Mary McConnell

      I’d go a step further and say that educators are often a little data-phobic. That’s partly, of course, because data can be abused or wielded very crudely. I’m hoping, though, that greater insistence on data – from legislators, parents, and the Department of Education – will encourage teachers and school districts to invest in better data acquisition and interpretation.

  2. Stephanie Sawyer

    Data is what it is – a measurement of something. The problem is that various groups try to attribute a cause to the data, which is where policy-making and politics can easily go off the rails.
    Assuming that the test generating the data is valid, we know – as Carolyn says – that there is a problem. But too many hidden variables go unmeasured by any test, preventing one from even establishing a correlation with something else, much less attributing causation.
