Your jeremiad on the Standards of Learning testing makes five basic points and concludes that it’s time to “scrap the SOLs and move on.” I’d like to mostly agree with your five points and suggest some different conclusions.
SOL Results Have Guided Home Purchase Decisions, With the Effect of Harming Low-Performing Schools
That is almost certainly true.
There is a plethora of websites dedicated to reporting score results – even one from the Department of Education. And surely one reason there are few school-age children in my Richmond neighborhood and many in your Henrico enclave is the quality of the schools.
Would you then have parents rely upon word-of-mouth rather than actual data in selecting a place to raise a family? Would you conceal the performance of an appalling middle school from the parents who are paying the taxes to support its gross failure to educate their children?
It’s not the job of parents to harm their kids by sending them to an awful school in order to improve the school. It’s the job of the school board and the Board of Education to fix the awful schools. And it is far past time to hold the school, the school board, and the education board accountable if they don’t deal with problems such as the example in the link above.
To do that we must have a quality measurement. As one of your commenters pointed out, the education establishment loves to measure inputs: money, facilities, credentials. But those things don’t measure productivity. The SOL surely is imperfect, but it’s the only productivity measure we have at hand. We should be thinking of ways to deal with its imperfections, not relapsing into a system whose only quality measures do not in fact measure quality.
The SOL Penalizes Poverty
It’s clear that children from economically disadvantaged households underperform their more affluent peers on the SOL tests. It’s also clear that some school systems with large ED populations perform better (or worse, e.g., Richmond) than others.
Thus, the bare SOL pass rate is not a fair measure of school quality.
Should we then abandon measurement of educational quality or should we look to improve the measure?
A student growth percentile expresses how much progress a student has made relative to the progress of students whose achievement was similar on previous assessments.
A student growth percentile complements a student’s SOL scaled score and gives his or her teacher, parents and principal a more complete picture of achievement and progress. A high growth percentile is an indicator of effective instruction, regardless of a student’s scaled score.
And a low SGP tells us that the kid didn’t learn much in comparison to the other students who started in the same place. See this for a discussion of the way the SGP illuminated the awful productivity of some of Richmond’s teachers.
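The SGP idea – ranking a student’s current score only against students who started from a similar place – can be sketched in a few lines. This is a hypothetical illustration only: the scores are invented, and the simple score-band grouping and percentile-rank formula here stand in for Virginia’s actual SGP model, which uses a more sophisticated statistical method.

```python
def growth_percentile(students, student_id, band_width=10):
    """Percentile rank of one student's current score among peers
    whose prior scores fell within band_width points of the student's.
    Hypothetical scheme for illustration, not the official SGP model."""
    target = students[student_id]
    # Peers: everyone (including the student) with a similar starting score
    peers = [s for s in students.values()
             if abs(s["prior"] - target["prior"]) <= band_width]
    below = sum(1 for s in peers if s["current"] < target["current"])
    equal = sum(1 for s in peers if s["current"] == target["current"])
    # Standard percentile-rank formula: count of lower scores plus half of ties
    return 100.0 * (below + 0.5 * equal) / len(peers)

# Invented scores: four students with similar priors but different growth
scores = {
    "A": {"prior": 400, "current": 450},
    "B": {"prior": 405, "current": 420},
    "C": {"prior": 398, "current": 410},
    "D": {"prior": 402, "current": 430},
}
```

On these invented numbers, student A (the most growth among similar starters) lands at a high percentile and student C at a low one, even though all four began at roughly the same place – which is exactly the signal a raw pass rate cannot provide.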
Yet our Board of “Education” has abandoned the SGP. See this (scroll down to Part F) for a discussion of their bogus reasons.
Bottom line: We know the SOL is unfair to less affluent kids. We know how to correct for that effect but our Board of “Education” doesn’t want to use the tool that does that. Is that a problem with the SOL or a problem with the Board?
Schools Cheat
There is a long and ugly history of Virginia schools cheating to improve SOL scores. See this for a particularly cynical example. Go here and search for “cheat” to be inundated with data on the subject.
But is this a problem with the SOL testing or with the schools and the Board of Education?
If we take it that it’s crucial to have a measure of educational output, then we’ll have to be prepared to deal with attempts to skew the data. The (unacceptable) alternative is to forget about measuring teaching quality and to let far too many children be harmed by inadequate teaching.
Teaching to the Test
You, and others, condemn the SOL because it encourages teaching to the test. In fact that is not a problem; it is part of the reason for having the SOL.
Consider the alternative: If every school and, in some measure, every teacher gets to decide what shall be taught and what shall be on the exam, there can be no uniform measure of educational quality and no accountability (there’s that word again!).
Indeed, the Department of Education literature is rife with discussion of “aligning the curriculum” to the Standards of Learning. Thus, if we have appropriate standards, we get schools that teach the appropriate material.
In terms of your example: If we think irregular polygons are important, we can put them in the standards and in the test; that will ensure that the schools will try to teach irregular polygons.
In short, teaching to the test is not a bug; it is a feature.
Accountability Remains Elusive
I like to talk about the argyle sock effect: If you go into an organization and find the employees wearing argyle socks, you know that the boss wears argyle socks. First corollary: You know it’s a good grocery store if you see the manager handling checkout or corralling carts.
The first step for improving education is evaluating how well the teachers teach. Yet the current system does not work, and the Board of “Education” has abandoned its tool, the SGP, that can work. Indeed, its standards for evaluation are a crock.
The Board of Education is wearing white socks with its dress shoes.
Accountability starts at the top: If the Governor were serious about improving the schools, he would fire the current members of the Board of Education and replace them with people who understand accountability.
The Governor has not done so. The Governor is wearing white socks with his dress shoes.
As I said at the start, you have correctly identified major flaws in the current system for measuring student achievement. But I think your solution would take us back to the Bad Old Days when there was no objective measure that could allow us to hold public school teachers accountable for the job they do with our children.
That said, thanks for shining some Bacon’s Rebellion light on this important subject.
John Butcher publishes Cranky’s Blog and is a frequent contributor to Bacon’s Rebellion.