Politics

Human oversight is crucial for automated decision-making. So why is it being reduced?

The plan to scrap Article 22 of the GDPR will mean decisions that change people’s lives could be made solely by computers

December 06, 2021
Students protesting after their A-level results were decided by Ofqual's algorithm. Photo: Jacky Chapman / Alamy Stock Photo

Government decisions can be life-changing for those they affect, and with ever-greater frequency they are made by computers. A-level results, investigations into sham marriages, and fraud checks on disabled benefits claimants were all decided—at least initially—by systems with algorithms at their heart. The Home Office’s new plan for immigration will use an algorithm to “identify and block the entry of those who present a threat to the UK.” If this is not managed carefully, it does not take a huge leap of imagination to foresee a damaging hike in racial profiling at our borders.

The brave new world of automated decision-making (ADM) promises greater efficiency and lower costs—but it comes with inherent risks. Incorrect or discriminatory data fed into the system can result in irrational feedback loops that reinforce existing biases. The algorithms that process that data are often highly complex and can be virtually impossible for lay people to understand. Human oversight is crucial. That is why the government’s current proposal to get rid of Article 22 of the General Data Protection Regulation (GDPR) is so concerning.

Article 22 provides that a person shall have the right “not to be subject to a decision based solely on automated processing” which produces “legal” or “similarly significant” effects on that person. Article 22—or something like it—is essential for ensuring human oversight in public decision-making. But this important safeguard could be jettisoned as part of the overhaul of data protection law proposed by the Department for Digital, Culture, Media and Sport (DCMS) in its recent consultation, “Data: a new direction.” You can read Public Law Project’s response to the consultation here.

Admittedly, Article 22 as currently drafted is not perfect, and there is a strong case for reform, but reform that makes oversight stronger, not weaker. The problem is that Article 22 is open to a very narrow interpretation, under which many, if not most, ADM systems would fall outside its scope. Lilian Edwards, Rebecca Williams and Reuben Binns give this example: in summer 2020, Ofqual developed a standardisation model to allocate GCSE and A-level grades to students whose education was disrupted by the Covid-19 pandemic. Ofqual argued that decisions made using this model were not solely automated, because some of the inputs were generated by teachers. But virtually all real-world ADM systems use human-generated inputs in some form, so on Ofqual's interpretation the scope of Article 22 would be very limited.

In the face of widespread criticism, Ofqual's model was scrapped, so the issue became moot. Even so, the example raises the concern that the drafting of Article 22 renders it unfit for purpose. The terms used in Article 22, especially "a decision based solely on automated processing," need to be clarified to ensure its broad practical application.

The line between “solely” and “partially” automated can be difficult to draw. One complication is the problem of automation bias: a well-established psychological phenomenon whereby people put too much trust in computers. This could mean that officials over-rely on automated decision support systems and fail to exercise meaningful review of an algorithm’s outputs. What looks like partial automation may in practice amount to full automation.

Article 22, properly defined, should in practice prohibit automated decision-making where, due to automation bias or for any other reason, the human official is merely rubber-stamping a score, rating or categorisation determined by the computer. It should require genuine human oversight, rather than a token gesture.

Alongside the proposal to remove Article 22, DCMS has made a number of other worrying proposals—including removing the requirement for organisations to undertake Data Protection Impact Assessments and limiting people’s ability to find out about how their data is being used—which would erode the transparency, accountability and protections currently enshrined in data protection law.

One glimmer of hope is the proposal for compulsory transparency reporting on the use of ADM systems in government. This could be a positive step. The stressful and exhausting experiences of disabled people subject to a secret benefits fraud detection algorithm have shown how important transparency can be. However, there are two important caveats. First, for the proposal to offer meaningful transparency, a sufficient level of information must be provided. This might include a version of the system that people can run and test for themselves, as well as an explanation of how it works. Second, transparency alone is not enough. Among other things, there must also be accountability for the use of ADM systems. There should be adequate avenues for people to challenge their development and deployment, together with effective enforcement mechanisms and the possibility of sanctions.

In order to reap the benefits offered by ADM technology while mitigating its very real risks, a robust data protection regime is vital. Compulsory transparency reporting might be a step in the right direction but, overall, DCMS's proposals would take us the wrong way. ADM systems should not be a black box. We need to know what is going on inside.