The Supreme Court heard oral arguments on May 22 from the State Solicitor and plaintiffs’ attorney in what will lead to the sixth Supreme Court decision in the eight-year-old Gannon saga. A decision will presumably be handed down before June 30.
Front and center in the arguments is whether the state legislature’s most recent attempt to satisfy the Court, SB 423, will pass constitutional muster among the five justices deciding the case. The decision will ultimately be based on whether the Court feels the additional $525 million over the next five years is sufficient to improve educational outcomes of students, defined by the justices as state assessment results.
In the 2018 WestEd education cost study commissioned by the Legislature, the researchers defined outcomes as a function of costs/spending. A response published by KPI was critical of both the methodology and the recommendations from the WestEd study. As it turns out, the cost study itself was moot because the Legislature seemingly chose to disregard it while forging SB 423, a fact acknowledged by the state’s solicitor during oral arguments.
However, the Court established as a “finding of fact” during the Gannon marathon that there is not only a correlation between spending (defined as per-pupil expenditures) and outcomes, but a causal relationship. Furthermore, the Court has continually recognized that relationship as the driving force in determining constitutionality. The justices have made it abundantly clear that the volume of money devoted to education will ultimately determine constitutionality. Their perspective can be summed up like this:
-When education was “constitutionally funded” in the 3-year period from 2008-2010, state assessment scores increased.
-When education was not “constitutionally funded” in the years since 2011, state assessment scores decreased.
There is an elemental limitation in looking at the data this way: it considers the relationship between spending and outcomes only as it changes over time – as if change over time were the only way to investigate the two variables. The analysis to date has always treated the entirety of Kansas students as a single, lumped variable, coupled it with per-pupil spending, and compared the two over a series of years.
The question has always been:
How are the outcomes of the students of Kansas as a whole, defined as state assessment results, different over a period of years?
But the question could also be asked this way:
How are the outcomes of students among the school districts in Kansas, defined as state assessment results, different within a single year?
If there is indeed a relationship between spending and outcomes, it should not only manifest itself across time, but also, for lack of a better word, in a snapshot in time. To my knowledge, the spending/outcomes relationship has never been investigated this way.
Until now.
One way of determining whether even a correlation exists between spending and outcomes (setting aside the question of causality) is to compare the outcomes of districts that spend nearly the same dollars per pupil. If it is true that spending drives outcomes, as the Supreme Court claims and on which WestEd based its research, then districts that spend the same should have similar outcomes on state assessments.
However, it would be disingenuous not to consider at least one other factor that contributes to disparities in state assessment scores: income-based achievement gaps.
So recognizing that caveat, this inquiry asks the basic question: Do school districts with similar spending and similar income demographics have similar state assessment outcomes?
Before investigating that question, it should be clearly understood that this is not meant to be a formal research study with data subjected to rigorous econometric-type analysis. It is merely a preliminary effort to explore the relationship between spending and outcomes to see if that relationship seems to be manifested when comparing districts within the same year. In judicial terms, this would be considered a preliminary hearing. The purpose is to raise questions that may lead to further, more rigorous analysis.
It is also paramount to understand that there are great disparities in per-pupil spending across the state. In the 2016-17 school year, per-pupil spending ranged from $31,727 in Cunningham to $8,308 in Elkhart.* It’s not surprising that Cunningham, located just west of Wichita, is a small district of about 150 students; tiny districts lack the economies of scale that hold down per-pupil costs. Whether spending in excess of $30,000 per pupil is acceptable at any level is a valid concern, but not one that will be addressed here.
Low-income student percentages, defined as the share of students who qualify for free or reduced-price school lunches, also vary widely. In the Kansas City district, 85.1% of students qualified in 2016-17, while at the other end of the scale, in the Johnson County Blue Valley district, only 8.2% qualified.
Given those differences, for purposes of this inquiry, districts are considered “similar” if they are:
-Within two percentage points of each other in low-income population and
-Within three percent of each other in per-pupil spending.
The accompanying table compares the 2017 state assessment math scores for 16 pairs of districts that meet those two criteria. Please note that Level 2 and above state assessment scores are reported because the Supreme Court has determined that performance Level 2 is its measuring stick for constitutionality purposes. Also, this is not, nor is it meant to be, an exhaustive list of districts that fit the two criteria. The table displays those districts that also had a difference in math scores exceeding 10 percentage points. Furthermore, some pairs of districts are similar in size and some vary substantially in student population. That is not a factor in this analysis because the Court has never considered school district size in its declaration of a spending/achievement relationship.
The first pair of districts, Paradise and Rolla, spent nearly the same per pupil and had very similar percentages of low-income students. Yet 81.1% of Paradise students scored Level 2 or higher in math, while only 61.4% of Rolla students did.
*All data from KSDE. Per-pupil spending is all spending by a district NOT including bond and interest.
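The pairing criteria above can be sketched in a few lines of code. This is only an illustration: the district records below are hypothetical placeholders (except the Paradise/Rolla math scores cited in the text), and “within three percent” is assumed here to mean relative to the larger spending figure.

```python
def similar(a, b):
    """Return True if two district records meet both similarity criteria."""
    # Within two percentage points of each other in low-income population
    low_income_close = abs(a["low_income_pct"] - b["low_income_pct"]) <= 2.0
    # Within three percent of each other in per-pupil spending
    # (assumed relative to the larger figure)
    spending_close = (abs(a["spend_pp"] - b["spend_pp"])
                      / max(a["spend_pp"], b["spend_pp"])) <= 0.03
    return low_income_close and spending_close

# Hypothetical records: spending and low-income figures are illustrative only;
# the math scores are the Paradise/Rolla figures cited in the text.
districts = [
    {"name": "Paradise", "spend_pp": 15000, "low_income_pct": 45.0, "math_lvl2_pct": 81.1},
    {"name": "Rolla",    "spend_pp": 15300, "low_income_pct": 44.2, "math_lvl2_pct": 61.4},
]

# Flag "similar" pairs whose math-score gap exceeds 10 percentage points,
# as the table does.
flagged = []
for i, a in enumerate(districts):
    for b in districts[i + 1:]:
        if similar(a, b) and abs(a["math_lvl2_pct"] - b["math_lvl2_pct"]) > 10:
            flagged.append((a["name"], b["name"]))
```

With real KSDE data loaded into `districts`, the same loop would reproduce the table’s selection of district pairs.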
What does this information mean?
As stated above, this is not an exhaustive list of similar districts. There are also several examples of similar districts with math score differences of less than 10 percentage points. But for the purposes of this inquiry, it is only important to realize that significant differences in outcomes unquestionably occur when district spending is the same, even when accounting for income-based achievement gaps. The data in this table provide new evidence disputing the Court’s spending/outcomes relationship when taking a parallax view.
At the very least, a look at the data this way calls for further investigation.
During Gannon oral arguments in September 2016, Justice Biles said this to plaintiffs’ attorney Alan Rupe concerning the reporting of student achievement: “You should have trademarked…your statement about averages hiding the problem.” Justice Biles was exactly right. Throughout Gannon, the Court has been analyzing averages – yearly averages of state assessment scores – to make the connection between spending and outcomes. And true to his words, those averages hide the real problem: an incorrectly drawn relationship between spending and outcomes, a problem exposed when the data are viewed differently, as presented here.
So how does this fit into the Gannon case? In a nutshell, the Court claims that during the “constitutionally” funded three-year period of 2008-2010, in the No Child Left Behind years, state assessment scores were on an upward trajectory. When base state aid per pupil was reduced due to the Great Recession, state assessment scores also trended downward. The justices have concluded that money was the causal factor in scores both going up and going down.
Since the evidence here disputes the Court’s claim that money is the cause of differential achievement, what else could explain the changes in test scores during and after the No Child Left Behind years? Obviously, something else was going on. As an elementary teacher intimately involved in testing throughout the arc of the No Child Left Behind period, I can attest that other forces were at work – forces that have been overlooked because they are difficult, if not impossible, to conveniently quantify.
That is the subject of the next article. Stay tuned.