A Potential Problem with BPS's Quality Measurement
As discussed in a previous post on measuring quality, BPS is currently using only MCAS scores to measure the quality of schools when evaluating potential new student assignment plans. They're calling the measure a quadrant analysis. For each MCAS test, BPS looks at two measures. One is a performance measure based on the percentage of students who fell into the "advanced" or "proficient" categories. The other is a growth measure using the Student Growth Percentile (SGP) calculated by the state. SGP measures how much each student "grew" academically relative to other students in the state. This is an important measurement because it captures what a student actually learned in a year, and it levels the playing field somewhat with respect to demographics: SGP is based on how much each student learned during the year rather than largely measuring what the student already knew.
Using these two measures, BPS puts each school in one of four quadrants on a chart (see the "quadrant analysis" link above for the 2012 charts). Schools with above average performance are placed on the right-hand half of the chart. Schools with above average student growth are on the top half. So the higher and farther to the right a school is on the chart, the better it's doing.
BPS then assigns a number to each school. Schools in the upper-right quadrant get a one. Schools in the lower-left get a three. Schools in the other two quadrants get a two. Lower scores indicate higher quality. This yields four numbers from one to three, one for each MCAS test used (English Language Arts and Math) in each year included (2011 and 2012). These four numbers are averaged to create the single number that is being used to measure quality.
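The scoring procedure above can be sketched in a few lines of Python. The school's four (performance, growth) pairs and the district averages here are hypothetical placeholders, not actual BPS figures:

```python
# A minimal sketch of the quadrant scoring described above.
# All numbers below are illustrative, not real BPS data.

def quadrant_score(performance, growth, mean_performance, mean_growth):
    """1 = above average on both, 3 = below on both, 2 otherwise."""
    above_perf = performance >= mean_performance
    above_growth = growth >= mean_growth
    if above_perf and above_growth:
        return 1
    if not above_perf and not above_growth:
        return 3
    return 2

# Four (percent proficient/advanced, SGP) pairs for one school:
# ELA and Math, for 2011 and 2012.
scores = [(62, 55), (58, 60), (65, 48), (60, 52)]
mean_perf, mean_sgp = 50, 50  # hypothetical district averages

rating = sum(quadrant_score(p, g, mean_perf, mean_sgp)
             for p, g in scores) / len(scores)
print(rating)  # 1.25 -- three quadrant-1 results and one quadrant-2
```

Note how the final rating depends only on which side of the average each score lands on, which is exactly the property the rest of this post takes issue with.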
The problem is that while the two measurements used to place schools in quadrants can vary widely, BPS boils each one down to two values: above average or below average. This means there's no distinction between an exceptional school and one that is slightly above average on a particular measurement. At the same time, there can be a very large distinction between a slightly above average school and a slightly below average one. This can cause distortions and make it hard to rank schools accurately or to measure relative differences in quality among schools.
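The distortion is easy to see with hypothetical SGP values. With a district average of 50, two schools two points apart but on opposite sides of the average land in different quadrants, while two schools 39 points apart on the same side are treated identically:

```python
# Illustration of the breakpoint problem, using hypothetical SGP values.
mean_sgp = 50

def above_average(sgp, mean=mean_sgp):
    """The only thing the quadrant analysis retains about a score."""
    return sgp >= mean

print(above_average(51), above_average(49))  # True False: nearly identical schools split
print(above_average(51), above_average(90))  # True True: very different schools lumped together
```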
As an example, the image below shows three schools that were included in the quadrant analysis. The y-axis is the student growth percentile. The x-axis is the percentage of students scoring proficient or advanced. The four dots for each school represent their Math and English scores for 2011 and 2012. Keep in mind that dots farther to the right and closer to the top of the chart indicate better scores. Take a look and see which schools you think are better.
School A is the Curley, which is the 5th ranked school based on the quadrant analysis. School B is the Kilmer, ranked 28th, and School C is the Sarah Greenwood, ranked 10th. Why does the Kilmer rank so much lower? Two of its scores fall a little below average for student growth, but its overall performance is excellent. Notice that the Kilmer's worst performance ranking (percent of students scoring proficient or advanced) is much better than the best rankings for the other two schools. But the other schools are still mostly above average, so the Kilmer gets no credit for this difference.
The problem is exacerbated in the latest BPS analysis, which designates every school as either a quality school or a non-quality school. This means that even a small change in a school's ranking can move it from one category to the other.
A better way to create the quality ranking from the same raw data would be to take each of the numbers that goes into the current rating and calculate how many standard deviations above or below the mean it falls. These numbers could be averaged to create a single score for each school. This score would still be measuring the same growth and performance factors relative to the district average. But it would also make a distinction between small differences and large ones, without having any artificial breakpoints.
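Here is a sketch of that standard-deviation approach, using three schools with made-up numbers (labeled A, B, and C; not the real schools or data from the chart above). For each measure, every school's value is converted to a z-score against the district mean, and each school's z-scores are then averaged:

```python
# A sketch of the standard-deviation alternative described above.
# School values are hypothetical, not actual BPS data.
from statistics import mean, pstdev

# Each school's values for the same four measures, e.g.
# (ELA performance, ELA SGP, Math performance, Math SGP).
schools = {
    "A": [62, 55, 60, 52],
    "B": [90, 47, 88, 49],
    "C": [55, 58, 53, 59],
}

n_measures = 4
z_scores = {name: [] for name in schools}
for i in range(n_measures):
    column = [vals[i] for vals in schools.values()]
    mu, sigma = mean(column), pstdev(column)
    for name, vals in schools.items():
        # How many standard deviations above/below the district mean.
        z_scores[name].append((vals[i] - mu) / sigma)

# One continuous rating per school: the average of its z-scores.
ratings = {name: mean(zs) for name, zs in z_scores.items()}
print(ratings)
```

Unlike the quadrant numbers, these ratings are continuous, so a school far above average on one measure is rewarded proportionally instead of being treated the same as a school barely above average.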
Also, BPS shouldn't try to choose an arbitrary cutoff point to designate a school as a quality school. No matter what point they choose, it will split schools that are essentially of the same quality. Instead, the analysis should look at the weighted probability of getting into a higher or lower quality school. For example, if a student has a 40% chance of getting into a school rated 1.0 and a 60% chance of getting into a school rated 3.0, that student's weighted probability would be 2.2, calculated as follows: (0.4 x 1.0) + (0.6 x 3.0). This measure would allow us to see differences in access to quality without making arbitrary distinctions.
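The weighted-probability calculation from the example above is just a probability-weighted average of the school ratings a student might end up with:

```python
# The weighted-probability example from the text:
# 40% chance at a school rated 1.0, 60% chance at one rated 3.0.

def weighted_quality(chances):
    """chances: (probability, quality_rating) pairs; probabilities sum to 1."""
    return sum(p * q for p, q in chances)

print(round(weighted_quality([(0.4, 1.0), (0.6, 3.0)]), 2))  # 2.2
```

The same function extends naturally to a student with chances at any number of schools, which is what makes it useful for comparing access to quality across assignment plans.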