Technology

We need an Accountability for Algorithms Act

The Ofqual scandal was the tip of the iceberg. Time to restore control

October 23, 2020
Photo: Dominika Zarzycka/NurPhoto/PA Images

The students who stood outside the Department for Education this summer protesting the Ofqual algorithm did something few before have managed: they made algorithms political. They showed the unfair human impact of “predictive” technologies. Instead of passively describing how AI will reshape our world, these students injected human agency into the debate. They reminded us that we must choose what we ask AI tools to do for us. We must decide how we want algorithms to reshape our world, not reshape our world to suit algorithms.

Two things had gone wrong in the Ofqual case, both of which suggest further struggles to come. First, using statistical prediction from past data to make decisions about individuals’ futures often has inequitable consequences. The Ofqual algorithm adjusted the individual grades awarded to students using data about the past grades of other students in the same subject and school, to predict how they would have performed had Covid-19 not cancelled the exams. To students from underperforming schools, often in low-income areas, this felt like having their destiny predetermined, because it removed from their control the possibility of doing better than their predecessors. While exceptions are by definition statistically unlikely, from the perspective of an individual, unlikely is still possible.

The Ofqual algorithm wasn’t “biased”; it was simply trained on data that reflects real and persistent disparities in educational attainment. The “model” didn’t in itself “disadvantage young people from poorer families,” as one minister put it; the model reflected the fact that young people from poorer families are in fact disadvantaged, and assumed that this disadvantage would shape their futures too, regardless of their individual aptitude or effort as judged by their own past work. Using school-based averages to assign individual grades disempowered pupils whose futures rested on their capacity to buck the trend.
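To make the mechanics concrete, here is a deliberately toy sketch in Python. It is not Ofqual’s actual model, which was considerably more elaborate, and the names and numbers are hypothetical; it shows only the core move of ranking pupils and fitting them to their school’s historical grade distribution, so that a pupil predicted an A* in a school whose past results top out at a B cannot be awarded more than a B.

    # Illustrative sketch only: a drastically simplified stand-in for
    # school-level statistical moderation, NOT Ofqual's actual model.
    from collections import Counter

    GRADES = ["U", "E", "D", "C", "B", "A", "A*"]  # worst to best

    def moderate_grades(teacher_predictions, school_history):
        """Re-assign grades so this cohort matches the school's historical
        grade distribution, ranking pupils by teacher-predicted grade."""
        # Rank pupils from strongest to weakest prediction (stable sort,
        # so ties keep their submitted order).
        ranked = sorted(teacher_predictions,
                        key=lambda pupil: GRADES.index(pupil[1]),
                        reverse=True)
        # Build the target distribution from past cohorts: if 20% of the
        # school's past pupils got a B, the top 20% of this cohort get a B.
        n = len(ranked)
        history_counts = Counter(school_history)
        quota = []
        for grade in reversed(GRADES):  # best grades first
            share = history_counts[grade] / len(school_history)
            quota.extend([grade] * round(share * n))
        quota = (quota + ["U"] * n)[:n]  # pad or trim if rounding misfires
        return [(name, quota[i]) for i, (name, _) in enumerate(ranked)]

    # A pupil predicted A* in a school whose history tops out at B can be
    # awarded nothing better than that historical ceiling:
    history = ["B"] * 2 + ["C"] * 5 + ["D"] * 3  # no past A or A* results
    pupils = [("outlier", "A*"), ("p2", "C"), ("p3", "C"),
              ("p4", "D"), ("p5", "D")]
    print(moderate_grades(pupils, history))
    # [('outlier', 'B'), ('p2', 'C'), ('p3', 'C'), ('p4', 'D'), ('p5', 'D')]

However well the individual “outlier” might have performed, the school’s past ceiling becomes theirs.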

The second problem was about accountability. As members of the Institute for the Future of Work’s Equality Task Force, whose report, Mind the gap: How to fill the equality and AI accountability gap in an automated world, will be published next week, we have spent the past year exploring how to ensure those who build and use data-driven technologies are held to account for the impact of these tools. We have witnessed few processes as confusing as that by which Ofqual and the government built and deployed the A-level algorithm, and read few documents as long-winded and impenetrable as Ofqual’s report. People will not, and should not, accept algorithmic decisions unless the technology is built through fair, open and participatory processes.

The lesson we learned was this: as a society, we spend too much time prophesying what AI will do to us and not enough time thinking about what laws and regulations we should put in place to ensure those who build and use AI are responsible for its outcomes. As the UK leaves the European Union, we face choices that combine geopolitics and AI governance: should the UK continue to apply the GDPR? If not, how should we do things differently, and with what consequences? How should UK equality and anti-discrimination law capture cases like the Ofqual algorithm? Should it follow European or American models?

It is hard to overstate the importance of these questions. They matter not just for innovation and productivity, but for how we exert control over the technologies that will shape our future.

We believe the UK should grasp the opportunity to develop a new Accountability for Algorithms Act. This would establish a unified framework for the obligations and duties of public and private sector bodies when building, testing, deploying and monitoring data-driven decision-making tools. The act would draw on the best ideas from the US, such as proposals in draft Senate legislation and think tank reports to treat big tech platforms as public utilities, and from Europe, such as the GDPR and the impending Digital Services Act, and apply them in a UK context.

Consider one example: ongoing accusations that Facebook’s advertising delivery system discriminates on the basis of race and gender. Appropriate legislation would impose positive duties on private and public bodies to consider how best to advance equality in the design and deployment of AI. This would not only help prevent such systems from compounding existing disadvantage; it would also create powerful incentives for companies like Facebook to explore how to harness machine learning and AI to advance equality. Facebook would be required to monitor and report on the impact of its systems on different social groups and, where necessary, make reasonable adjustments to reduce persistent inequities.
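What such monitoring might look like in practice can be sketched simply. The check below is a hypothetical illustration, not anything Facebook runs or the proposed act prescribes: it compares the rate at which an ad is actually delivered to each group’s eligible audience, and flags any group served at well below the best-served group’s rate.

    # Hypothetical sketch of the kind of disparity monitoring such a duty
    # might require; the metric and threshold are assumptions for
    # illustration, not a prescribed standard.

    def delivery_rates(impressions_by_group, audience_by_group):
        """Share of each group's eligible audience actually shown an ad."""
        return {g: impressions_by_group[g] / audience_by_group[g]
                for g in audience_by_group}

    def disparity_report(rates, threshold=0.8):
        """Flag groups delivered the ad at below `threshold` times the
        best-served group's rate (a demographic-parity-style check)."""
        best = max(rates.values())
        return {g: {"rate": round(r, 3),
                    "ratio_to_best": round(r / best, 3),
                    "flagged": r / best < threshold}
                for g, r in rates.items()}

    # Hypothetical delivery numbers for a single job ad:
    rates = delivery_rates({"group_a": 4200, "group_b": 1500},
                           {"group_a": 10000, "group_b": 10000})
    print(disparity_report(rates))
    # group_b sees the ad at roughly 36% of group_a's rate -> flagged

A report like this is only the starting point for the “reasonable adjustments” the duty would then require; the point is that the impact becomes measurable, reportable and contestable.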

Algorithms invite us to think harder about what we want decision-making systems to achieve and what obligations we wish to impose on those who build them. The Black Lives Matter movement has reminded us of an important lesson. “The opposite of ‘racist’ isn’t ‘not racist’,” argues the writer Ibram X Kendi, “it is ‘anti-racist’.” Or as Justice Sonia Sotomayor of the US Supreme Court put it: “The way to stop discrimination on the basis of race is to speak openly and candidly on the subject of race,” and to develop and apply the law “with eyes open to the unfortunate effects of centuries of racial discrimination.”

In the age of algorithms, it is more important than ever to hold those with power answerable for how they make decisions. As the UK contemplates its post-Brexit future, it’s time to start the serious political and policy debates about how we use AI to build a better society. A new Accountability for Algorithms Act would not only give effect to the much-touted levelling-up agenda; it would help to restore a sense of human control at a moment when so many of us feel it is sorely lacking.


Helen Mountfield is a QC at Matrix Chambers and Principal of Mansfield College, Oxford. She chairs the Institute for the Future of Work's Equality Task Force. Josh Simons is a graduate fellow at the Edmond J Safra Centre for Ethics, Harvard, and a member of the Equality Task Force.