Note that the study makes it very clear that not all charters are successful, and indeed that average charter school performance is no more impressive than average public school performance – in other words, not very impressive at all.
The author focuses, instead, on charters that consistently, and dramatically, improve the performance and prospects of disadvantaged students.
I listed these practices in my last post, and I want to return to one of them today. But first, let me share this interesting description of the top-performing charters:
These [the report lists several successful schools] and other charter schools have used their freedom to develop an array of innovative practices. For instance, the Bronx Charter School for the Arts believes that participation in the arts is a catalyst for academic and social success, and therefore integrates art into almost every aspect of the school experience and prompts students to use art as a language to express their thoughts and ideas. On the other end of the spectrum, YES Prep students in Houston log hundreds of volunteer hours through “service learning opportunities” that are integrated into the curriculum. There are also a number of so-called “No Excuses” schools—such as KIPP Infinity, the Harlem Children’s Zone Promise Academies, and the Democracy Prep Public Schools—that emphasize frequent student assessments, dramatically increased instructional time, parental pledges of involvement, aggressive human capital practices, a “broken-window” theory of discipline (where schools address even smaller behavioral infractions with the intent of preventing larger ones), and a relentless focus on math and reading achievement (Carter 2000, Thernstrom and Thernstrom 2004, Whitman 2008).
Note that “innovation” encompasses a very wide variety of approaches – indeed, approaches that would seem to contradict one another. This should sound a warning note to those who seek to reform education by standardizing it.
But this list also raises the question: what does an arts school have in common with a “back to basics”, “no excuses” school?
One answer, according to the author, is that they both use “student data to drive instruction”:
Data can drive more-personalized and more-efficient learning, allowing both teachers and students to track progress and to make sure that each individual student is on an appropriate path. Assessments can be used to adjust everything from tutoring to student goals. To achieve this, schools should conduct regular assessments of students every four to six weeks. More in-depth assessments could be given several times a year, and teachers could meet with students individually to discuss and set goals after each assessment.
Administrators will need to equip schools with the necessary technology, such as scanners and software, to quickly and easily input student test data into a central database. This database should be available to teachers and administrators, and provide information on student achievement along a variety of vectors.
Ah, that “D” word again. Before you barrage me with the usual “tests are killing education” comments, stop and think about how this model of data collection and assessment differs from a once-a-year standardized test. First, these schools insist on more testing, not less, but it’s testing with a clear instructional purpose. Feedback comes quickly enough, and frequently enough, to enable a teacher to change course, fine-tune instructional strategies, or even just back up and revisit a lesson that hasn’t been adequately learned. Lots of people get access to this data – students, parents, teachers, administrators – but they’re expected to use this information to teach, learn, or coach more effectively, not to “grade” teachers or schools.
I’m still guessing that data-driven education isn’t going to be popular with many teachers, because there’s no getting around two uncomfortable realities: Constant data collection soaks up valuable classroom time, and widespread data sharing shines a spotlight on our effectiveness. Still, I think that many teachers would be much more open to gathering student performance data if we got that information soon enough to make a difference, and if administrators used this data as a tool and not as a club.
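For readers curious what the report's “central database” might amount to in practice, here is a minimal sketch, in Python, of the assess-review-reteach cycle it describes: record scores every few weeks, watch each student's trend, and flag students whose latest result suggests a lesson needs revisiting. Everything here – the class name, the methods, the 70-point mastery threshold, the sample students – is hypothetical illustration, not anything from the report or any real school system.

```python
# Toy model of the report's assessment database (all names and
# thresholds are hypothetical, chosen only for illustration).
from collections import defaultdict


class AssessmentTracker:
    """Stores per-student scores from a recurring assessment cycle."""

    def __init__(self):
        # student -> subject -> list of scores, oldest first
        self.scores = defaultdict(lambda: defaultdict(list))

    def record(self, student, subject, score):
        """Log one assessment result for a student."""
        self.scores[student][subject].append(score)

    def trend(self, student, subject):
        """Change between the two most recent scores, or None if
        the student hasn't been assessed at least twice."""
        s = self.scores[student][subject]
        return s[-1] - s[-2] if len(s) >= 2 else None

    def needs_reteach(self, subject, threshold=70):
        """Students whose latest score fell below the mastery
        threshold -- candidates for revisiting the lesson rather
        than moving on."""
        return [student for student, subjects in self.scores.items()
                if subjects[subject] and subjects[subject][-1] < threshold]


tracker = AssessmentTracker()
tracker.record("Ana", "math", 62)
tracker.record("Ana", "math", 75)   # improved after reteaching
tracker.record("Ben", "math", 58)
print(tracker.trend("Ana", "math"))   # 13
print(tracker.needs_reteach("math"))  # ['Ben']
```

The point of the sketch is the feedback loop, not the code: because results land in one place within days rather than months, a teacher can see Ana's upward trend and Ben's struggle while there is still time in the year to act on both.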