
Improving Indiana's Accountability Scoring System

  • emilytroyer2019
  • Apr 1, 2019
  • 6 min read

Updated: Apr 16, 2019

Statistics is a powerful tool. Let's use it to actually improve education in Indiana.


Popular statistical-analysis news outlet FiveThirtyEight accused the Supreme Court of being "allergic to math" after the oral arguments of Gill v. Whitford, a partisan gerrymandering case. The truth of that statement can be debated, but the underlying sentiment remains clear: modern quantitative analysis is few and far between in many branches of government. Education is no exception.


When the federal government and state governments assess their schools to ensure students are being properly educated, they essentially hack together a few things that directly indicate a school's efficacy. Think: absenteeism, performance on standardized tests, growth in performance for all students, and performance and growth on those tests for disadvantaged subgroups. Schools are given accountability grades based on this assessment. “If a school gets a bad grade, so what?” you say. And to this I would answer: “So a LOT.”


At a federal level, schools risk losing funding. At a state level, schools could lose funding, be shut down, or be forced to merge with another school. Read: fewer resources for students, more difficult learning environments for students, lost jobs for teachers and administrators. And maybe a merger or a shutdown could be good for students, but it would certainly be tough on them. Students at risk of dropping out, or who live in particularly tumultuous homes, would have it the toughest.


Enter our dear friend: statistics.


Through an analysis of all 50 states, it became clear that not one uses regression to hold constant the external variables that influence its measures of a school's efficacy. The variables I listed above are indications, not predictors, of student performance. Schools are being graded on indications of how they're doing, but what if things outside the schools' control are causing trends in those indications?


I hope you've read that and at least thought, "Well, yeah, obviously other things besides what a school does influence students." If not, well... now you know. External factors such as socioeconomic status, parent involvement, race, gender, urban-rural locale, and the like all contribute to a student's performance on standardized tests, the indicator from above on which my analysis focuses.


Perhaps another thing you may be thinking is, "So what? Schools should still be held to high standards. Our children's education is no small matter." This is true, completely true and totally undeniable. In fact I agree so wholeheartedly that I want to distill the school's performance down into exactly what it can control when it comes to student performance, and then hold it accountable to that standard. This will mitigate the issue of conflating external factors with school effectiveness when assigning a school an accountability grade.


But we don't live in a perfect world. It is incredibly difficult to disentangle external environmental influences from internal ones, and both from inherent ability. And it's extremely controversial to talk about individual ability versus the power of a positive environment. I won't comment on that, but I will comment on the fact that some combination of external factors affects a student's performance. Period. Let's use statistics to begin to account for these things outside the school's domain, and credit schools for their true(r) level of effectiveness.


And use statistics I have! Holding constant a proxy for socioeconomic status (the number of students on free and reduced-price lunch in a given school's third grade), race, Hispanic ethnicity, and gender, I used OLS regression to predict the weighted average ISTEP score for third grades in Indiana. The difference between the actual and predicted score more accurately reflects the portion of student performance attributable to the school.
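For readers curious what this looks like in practice, here is a minimal sketch of the approach. The data, coefficients, and variable names are all made up for illustration (the real analysis lives in the thesis); the point is only the mechanics: regress scores on the external factors, then treat the residual (actual minus predicted) as the truer school-level differential.

```python
import numpy as np

# Hypothetical school-level data: each row is one school's third grade.
# frl is the SES proxy (share on free/reduced-price lunch); the other
# columns are demographic shares. All values here are simulated.
rng = np.random.default_rng(0)
n = 200
frl = rng.uniform(0, 1, n)
white = rng.uniform(0, 1, n)
hispanic = rng.uniform(0, 0.5, n)
female = rng.normal(0.5, 0.05, n)
# Simulated ISTEP scores driven by external factors plus noise
score = 480 - 60 * frl + 20 * white - 10 * hispanic + rng.normal(0, 15, n)

# OLS: regress the weighted average score on the external factors,
# with an intercept column so they are "held constant"
X = np.column_stack([np.ones(n), frl, white, hispanic, female])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
predicted = X @ beta

# The residual is the part of performance the external factors do not
# explain: a truer indication of the school's own contribution
differential = score - predicted
```

With an intercept in the model, the residuals average to zero across all schools, so a positive differential means a school beat what its external circumstances predicted, and a negative one means it fell short.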


Granted, my model is quite rudimentary and explains only 57% of the variance in the dependent variable, but it still tells us more about how well a school is doing compared to how it should be doing. The maps below show schools' differences between their actual and predicted ISTEP scores through county-level measures of central tendency. The purpose of these maps is simply to give baseline representations of true(r) school performance based on the analysis I performed in my thesis, "Identifying Exceptionally Performing Third Grade in Indiana: Who Is to Blame?"


Hover your mouse around to see how each county is doing. How is yours?



Yeah, Huntington County looks like it is doing pretty badly. But pay special attention to the number of observations in each county. Huntington has only one observation, meaning only one third grade in the county was tested in 2015-2016. And look at Spencer County, in dark green at the bottom of Indiana: they did really well. Even their minimum differential is positive. But they have only 7 observations. The number of students varies from school to school, and so does the number of schools in each county. This makes it difficult to compare measures of central tendency across counties, because counties with fewer schools, and schools with fewer test-takers, have much bouncier averages. Read: these counties are subject to larger swings from year to year. Maybe next year, Huntington County will be a lighter red, and Spencer County a lighter green.


Take a look, too, at the top left corner: Lake County. They have 94 observations, a relatively large number, and an average that puts them right in the middle. But their spread is enormous: the minimum difference between actual and predicted score is -33 points, and the maximum difference is 90 points! Here, too, the average can be a bit misleading. The county on average may be neutral, but it clearly has some exceptional performers. The minimum and maximum data points help in judging how representative the average is. This map is mostly helpful for getting a general sense of how well schools in a certain area perform.


The same is true for the next map, which depicts the median differences between schools' actual and predicted ISTEP scores: they are subject to large fluctuations. Huntington County stands out in dark red on this map as well, due to its single observation.



But you can see that, generally, the median map shows much more nuance. That is because outliers cannot drag a median up or down; it is simply the middle number in the distribution of differences. Of the three maps in this post, this one tells you the most about a county's third grades. How does your county do now?


I threw the last map in here because I made it and it is still somewhat interesting. I rounded the differences in each county to the tens place to ensure that a mode exists, but that left me with a very small number of bins, so the values are not particularly precise. You can see that Shelby County has the best mode difference, but again only 7 schools. Crawford County has the worst mode difference, but it has only 5 schools. Still, you can see that many counties have multiple large differences, which is concerning for the 2015-2016 score reports.
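The county-level summaries behind all three maps boil down to grouping each school's differential by county and taking the mean, median, and (rounded) mode. Here is a hedged sketch with entirely made-up values; the county names echo the discussion above, but none of these numbers come from the actual data.

```python
from collections import defaultdict
from statistics import mean, median, multimode

# Hypothetical (county, actual-minus-predicted) pairs, one per tested
# third grade. Values are illustrative, not the real differentials.
diffs = [
    ("Huntington", -38.0),
    ("Spencer", 12.0), ("Spencer", 5.0), ("Spencer", 20.0),
    ("Lake", -33.0), ("Lake", 2.0), ("Lake", 90.0), ("Lake", -1.0),
]

by_county = defaultdict(list)
for county, d in diffs:
    by_county[county].append(d)

for county, values in sorted(by_county.items()):
    # Round to the tens place before taking the mode, as in the last map
    rounded = [round(v, -1) for v in values]
    print(
        f"{county}: n={len(values)}, mean={mean(values):.1f}, "
        f"median={median(values):.1f}, mode={multimode(rounded)}, "
        f"min={min(values)}, max={max(values)}"
    )
```

Even this toy version shows the caveats from the maps: a one-observation county's "average" is just that single school, and a county with a huge min-to-max spread can still average out to neutral.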



In an ideal world, you would be able to see each individual school's difference. Maybe I will add a map like that after I improve my Python skills a bit more.


Regardless, take this idea of regression and result-oriented visualization, and imagine that we apply it in every state. Imagine that we collect more variables over longer periods of time. Imagine the impact this could have on discerning responsibility and performance of particular classrooms and administrations.


***


Let's say that after all this math and regression nonsense, this new accountability grading system is much more accurate: schools that receive failing grades really are doing poorly because of factors within their control, like teacher efficacy, that affect test scores. Let's also say that poor Eagle Creek Elementary School got its fourth F in a row, received probationary status, and couldn't raise that F in the following year. The Indiana State Board of Education (ISBE) steps in to take control, firing and *trying* to replace the faculty and administration it deems to be the issue. Are they actually fixing anything, or are they making the problems worse?


According to The Atlantic’s Emily Richmond, “[school] takeovers… often focus on reorganizing the central office, which can have little direct impact on whether schools do a better job of meeting the needs of struggling students, critics contend. Another potential problem: handing over control of the schools to an administrator who might have little or no experience in education."


How does the ISBE get better teachers, administrators, and staff? Where do they search for them? How do we, the taxpaying public, ensure that the ISBE is committed to finding the best people for the job? And even if they are committed, what do we do when there is no one better to replace them because the system has driven out all of the most qualified people? Now that, that is a problem for a separate blog post.


If you would like access to the full thesis, please contact emilytroyer_2019@depauw.edu.
