Time to let in the light on the government’s secret algorithms

Automated decision making is too important to be conducted in the dark

March 02, 2022
Photo: Guy Corbishley / Alamy Stock Photo

Government-by-algorithm is on the rise. Automation is used in contexts ranging from policing to welfare to immigration. So why don’t people know more about it? The answer is simple: government departments are keeping us in the dark.

When human civil servants make decisions that affect our rights and entitlements, the process they must follow is clearly defined by legislation, case law and published policy and guidance. But the same transparent rules and constraints are not being applied when ministers and civil servants delegate their decision making to robots.

Standards of procedural fairness have developed in this country over centuries and are now well established in public law: a person affected by a government decision should know in advance how the process will operate, and so how to prepare for and participate in it. And once the decision has been made, an affected person should be able to find out what that decision is. The courts have recognised that there are also good arguments for giving the reasons behind a decision. Clearly, where a decision relates to a person’s rights, there is a particularly strong duty to explain it. 

Giving reasons helps decision makers to show that they are acting fairly, rationally and for lawful purposes. So we at the Public Law Project (PLP) are asking the government to make sure that the principles of procedural fairness are not jettisoned as algorithmic decision making is increasingly adopted.

PLP has looked at numerous algorithms being used by government departments, particularly the Home Office and the Department for Work and Pensions. The algorithms we have studied don’t follow the government’s own rules for how decisions should be arrived at. In our experience, most people do not know how the system operates: the algorithm’s rules or criteria are not published, unlike the equivalent policies or guidance used in an analogue world. Nor do they know what information has been considered, or the reasons for the decision. Most people do not even know that an algorithm has been used in deciding their case. In our view, this contravenes the basic rules of procedural fairness and undermines good governance.

One such algorithm is the Home Office’s tool for detecting suspected “sham marriages”—marriages entered into to circumvent immigration rules. The information we have pieced together suggests that when a foreign national intends to get married, their information is processed by the Home Office and assessed against eight risk factors unknown to the individual. Couples who fit these risk factors are flagged for investigation. Home Office investigations are unpleasant and intrusive, and the individuals involved do not know that they have been identified by an algorithm—let alone which criteria they were assessed against or why they are being investigated.

What is most troubling is that the Home Office is refusing to provide clear information about its sham marriages algorithm, including in response to our FOI requests. Keeping the workings of the algorithm secret is not sustainable. The opacity around this tool—and many others currently in operation—breaches fundamental public law standards. We are pursuing access via an appeal to the Information Commissioner’s Office. But given the proliferation of government algorithms, transparency obligations urgently need to apply across the board.

The Cabinet Office is currently piloting an optional Algorithmic Transparency Standard. Lifting the veil on automated systems is essential for ensuring that the robots work in a fair and non-discriminatory way. But PLP is seriously concerned that the Standard does not go far enough. 

Why? There are two major problems. First, participation is not compulsory. Under the Cabinet Office plan, the Home Office could choose to keep the details of its sham marriages algorithm out of the public eye. PLP calls for transparency obligations to be put on a statutory footing. Failure to do so will, we believe, mean that the use of algorithms remains largely opaque. 

Second, even if the Home Office chooses to participate, the Standard does not ask for sufficient detail. We would get some high-level information, but not enough about the nuts and bolts. And people really need to understand the nuts and bolts if they are going to challenge automated systems when they go wrong. This is just straightforward, bog-standard procedural fairness. 

Procedural fairness is a key component of the rule of law. There is no logical reason to abandon it now, and a high democratic price will be paid if we do. Failing to apply the same high standards to robots as we do to human decision makers is leading us into a legal mess.