Editor’s Page: Failure to Thrive
How much longer can parents wait for schools to improve?
Photo by Linny Morris
This month, we present the fourth installment of our “Grading the Public Schools” report card. From the beginning, we thought it important to measure more than simply student performance on math and reading tests. Each school is judged on five measures: math scores and reading scores, combined with satisfaction scores for teachers, parents and students, for an overall score.
We can’t say there hasn’t been any improvement over the past five years. In 2003, the first year we published the chart, the average score was 49 out of a possible 100. This means that, if we weren’t grading on a curve, the average school would have earned an F.
In this year’s report card, the average score climbed to 57.3, which, if we’re not grading on a curve, would still be an F. But closer to a D than ever! At this rate of improvement, we won’t see the average school earning higher than 80 until 2023.
How much longer can parents wait for improvements? This year’s high school graduates are leaving a system little better than it was when they were in the eighth grade. If it really takes us until 2023 before we can declare that the public schools are uniformly pretty good, we’ll have let yet another generation slip by.
This doesn’t have to be the case. For example, this month we profile Washington Middle School as one of the most improved campuses, a school that shot up 66 places in our rankings since 2006. Another of the most improved schools in the system rocketed 175 places since 2006, rising from a D- to a B+.
But the average is still determined by a system that can’t seem to nurture excellence at that speed in every neighborhood.
To its credit, the school system surveys teachers, parents and students every other year with its “School Quality Survey” (SQS), asking them about everything from how safe people feel on campus to how appropriately homework is assigned. These surveys can be viewed in full at http://arch.k12.hi.us/school/sqs/sqs.html; I encourage you to take a look.
We refer to the parent, teacher and student numbers in our report card as “satisfaction” scores. It might be more accurate to call them “approval ratings,” because each number comes from a specific measure we extract from the SQS. For teachers, the score represents the percentage who say they would send their own child to the school at which they teach. For parents, it’s the percentage who would recommend their child’s school to other parents. For students, it’s a bit trickier—it’s the percentage of students who disagree with the statement, “If I could, I would go to a different public school.”
Through these key statements, the people who learn and work at a given school speak directly to us to endorse, or not, their own neighborhood schools.
While the average score in the report card has barely budged, the DOE budget has grown, even while enrollment has dropped. Since 1999, government spending on the DOE has grown more than 150 percent, to its current $2.34 billion. That’s more than $13,000 a year for each of its 178,369 students.
We’ve asked this before, rhetorically, but perhaps the DOE might consider putting this question to parents on its School Quality Survey: “If the state just gave you that $13,000 to spend on your own child’s education as you see fit, would you continue to send your child to a public school?”
If you’re wondering how it is that a department meant to serve the public can be so resistant to change, this issue also contains a stunning essay by Randy Roth, “Politics in Hawai‘i: Is Something Broken?” Here, the co-author of the groundbreaking 1997 critique on Bishop Estate, “Broken Trust,” shows how Hawai‘i’s political class seems to have a habitual aversion to transparency and accountability, no matter what it sets its hand to—school reform, Act 221, grant distribution, judicial selection, etc. As Roth describes things, state leaders consistently skate by without consequence, even when major investigations turn up serious allegations of wrongdoing.
What these two features in this month’s issue suggest to me is a local government that’s better at measuring and investigating its failure to perform than it is at fixing its performance. If we don’t like what those measurements add up to, we always have another option: Different rulers.