The Ofqual scandal was the tip of the iceberg. Time to restore control

by Helen Mountfield and Josh Simons / October 23, 2020
The students who stood outside the Department for Education this summer protesting the Ofqual algorithm did something few before have managed: they made algorithms political. They showed the unfair human impact of “predictive” technologies. Instead of passively describing how AI will reshape our world, these students injected human agency into the debate. They reminded us that we must choose what we ask AI tools to do for us. We must decide how we want algorithms to reshape our world, not reshape our world to suit algorithms.
Two things went wrong in the Ofqual case, both of which suggest further struggles to come. First, using statistical prediction from past data to make decisions about individuals’ futures often has inequitable consequences. The Ofqual algorithm adjusted the individual grades awarded to students using data about the past grades of other students in the same subject and school, to predict how they would have performed had Covid-19 not cancelled the exams. To students from underperforming schools, often in low-income areas, this felt like having their destiny predetermined, because it stripped them of any chance of doing better than their predecessors. Exceptions are by definition statistically unlikely, but from the perspective of an individual, unlikely is still possible.
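To see why this disempowers outliers, consider a toy sketch of rank-based grading from historical distributions. This is not Ofqual’s actual model; the function name, grades, and numbers below are all illustrative assumptions, chosen only to show how tying this year’s grades to a school’s past results caps what any individual can achieve.

```python
# Toy sketch (NOT Ofqual's actual model): assign this year's grades by
# mapping each student's within-school rank onto the same percentile of
# the school's historical grade distribution. All names and data are
# illustrative assumptions.

def grades_from_history(historical_grades, ranked_students):
    """historical_grades: past grades for this school and subject, best first.
    ranked_students: this year's students, ordered by teacher-assessed rank."""
    n_hist = len(historical_grades)
    n_now = len(ranked_students)
    assigned = {}
    for rank, student in enumerate(ranked_students):
        # Map the student's rank position onto the historical distribution.
        idx = round(rank * (n_hist - 1) / max(n_now - 1, 1))
        assigned[student] = historical_grades[idx]
    return assigned

# A school whose past cohorts never earned an A* can never produce one
# under this scheme, however strong this year's top-ranked student is.
history = ["A", "B", "B", "C", "C", "D"]
print(grades_from_history(history, ["Asha", "Ben", "Chloe"]))
```

Under this scheme, the ceiling and floor of a student’s result are fixed entirely by what earlier cohorts achieved: the individual’s own work can change their rank, but not the menu of grades available to them.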
The Ofqual algorithm wasn’t “biased”; it was simply trained on data that reflects real and persistent disparities in educational attainment. The “model” didn’t in itself “disadvantage young people from poorer families,” as one minister put it; the model reflected the fact that young people from poorer families are disadvantaged, and assumed that this past would determine their future too, whatever their individual aptitude or effort as judged by their own actual work. Using school-based averages to assign individual grades disempowered the very pupils whose futures rested on their capacity to buck the trend.
The second problem was about accountability. As members of the Institute for the Future of Work’s Equality Task Force, whose report, Mind the gap: How to fill the equality and AI accountability gap in an automated world, will be published next week, we have spent the past year exploring how to ensure that those who build and use data-driven technologies are held to account for the impact of these tools. We have witnessed few processes as confusing as that by which Ofqual and the government built and deployed the A-level algorithm, and read few…