••• Education •••

Consequences, not spending, drive changes in student achievement


(This column is written by KPI scholar David Dorsey, who taught in Kansas public schools for more than a decade.)

Recently I showed how the Supreme Court’s belief in a relationship between education spending and student achievement is fundamentally flawed. Throughout the Gannon case – the Court’s Gannon VI decision is expected any time – the Court has compared statewide averages of state assessment scores to average per-pupil spending over several years and inferred a causal relationship between spending and achievement. However, a look at expenditures at the district level within a single year provides contrary evidence: similar per-pupil spending does not produce similar state assessment results.

The Court has focused on what it calls the “constitutionally funded” 3-year period of 2008-2010 and the years that followed, after the Legislature reduced base state aid per-pupil (BSAPP) due to the economic collapse. There is no dispute that state assessment scores increased during those “constitutionally funded” years. The Court also asserted in Gannon V that “student proficiency levels…appeared to have steadily regressed after the 2011-2012 school year through 2015-2016.” The opinion even included a chart of state assessment scores for that 5-year period to make its point. For the sake of this discussion, I will suspend criticism of the Court’s feeble foray into statistical analysis, which compared the results of two completely different types of assessments and reporting methods without any effort to vertically align them.

The question for this examination is: Why did test scores track that up-and-down trajectory?

The answer can be summed up in a single word: consequences.

The justices who will ultimately determine the fate of the Gannon case are far removed from the testing realities of that time period. I was actually there – in the trenches, so to speak – to witness what really happened. I was integrally involved in the state assessment process through the 2014 testing sessions. I can say unequivocally that it was the impact of (negative) consequences, followed by the absence of any consequences, that drove how schools approached state assessments during those years.

The reason scores went up: No Child Left Behind (NCLB). The reason scores went down: the end of No Child Left Behind.

NCLB and consequences

NCLB was most famous (or infamous) for its goal of having 100% of all students proficient in math and reading by 2014. Everyone knew the goal was unattainable; nevertheless, schools were held accountable for achieving annual intermediate steps toward that goal. That was known as Adequate Yearly Progress (AYP). If a school failed to meet AYP goals – calculations of which are too detailed for this discussion – there could be consequences for the school, district, and state. Potential losses of dollars and/or students to other districts, among other sanctions, were written into the law. Believe me, schools took these tests very seriously. (I will spare you the endless anecdotes that I recall.) The education community was frequently chastised for “teaching to the test,” which might have some validity. But an alternative view is that the behavior was a way to avoid negative consequences, which is a completely rational – even expected – response.

Is it any wonder that as the NCLB threshold ratcheted upward, so did assessment results on state-controlled tests?

The end of NCLB and the absence of consequences

Part 1. The waiver

Congressional gridlock kept NCLB from being reauthorized to address the “100%” requirement, so Education Secretary Arne Duncan established a waiver system that allowed states to opt out of NCLB requirements. The fact is, the waiver was primarily a ruse to get states to adopt Common Core standards. Kansas was one of 45 states that took the bait, and with the stroke of a pen, state assessment consequences vanished. And as could be expected, when the consequences vanished, so did our serious approach to preparing students for the test. (I will also spare you those anecdotes.)

Part 2. NCLB was replaced with the Every Student Succeeds Act (ESSA)

In the twilight of the Obama administration, Congressional Republicans and Democrats finally agreed on a replacement for NCLB. The ESSA pulled back most of the federal NCLB requirements and left assessment consequences up to each state. Not surprisingly, the Kansas State Board of Education and KSDE have not only eliminated consequences for poor assessment performance, they now approach state assessments as a perfunctory exercise. The goal now is for state assessments to be minimally intrusive to the school. Kansas Education Commissioner Randy Watson summarizes that approach in this video to the Kansas education community.

Put succinctly, as soon as the consequences went away, test scores started to fall.

A 2005 article by Stanford’s Eric Hanushek puts this concept in broader context. Studying school accountability and student performance, Hanushek identified the difference between accountability and consequences in this passage: “Just reporting results has minimal impact on student performance and that the force of accountability comes from attaching consequences such as monetary awards or takeover threats to school performance.” He found that accountability leads to higher student achievement, but “the impact holds only for states attaching consequences to performance. States that simply provide information through report cards without attaching consequences to performance do not get significantly larger impacts than those with no accountability.” (emphasis added)

Nothing alters human behavior like the perception of consequences or the lack thereof; it’s in our very nature. Once the role of consequences during the time frame of the Supreme Court’s focus is understood, the variations in state assessment scores come as no surprise.

That leaves this reality: the correlation between changes in student achievement as measured by state assessment results and changes in education funding during those years is nothing more than a coincidence.

Regardless of what the Court decides in the coming Gannon VI opinion, the Legislature has already committed hundreds of millions of additional taxpayer dollars to education in order to placate the Supreme Court. Don’t expect the coincidence of the past to repeat itself.