Politics

The A-Level disaster is an unnerving sign of things to come

The obvious injustices of the algorithm resulted in a swift government U-turn. But there are many other discriminatory technologies shaping our lives—and they remain hidden

August 20, 2020
Secretary of State for Education Gavin Williamson in his office at the Department of Education in Westminster, London, following the announcement that A-level and GCSE results in England will now be based on teachers' assessments of their students, unless

Do you remember being told by teachers at school that if you worked hard consistently and turned up on exam day, you would get the grades you deserved? For the school leavers of 2020, and especially for exceptional students at poor-performing schools in England, this advice was cruelly made redundant, not only by the virus that prevented them from sitting their A-level exams, but also by Ofqual, the government regulator responsible for determining grades in their place.

We live in an algorithmic age, in which humans increasingly turn to algorithms to solve problems or complete tasks at the expense of meaningful human interaction and understanding. Calculating A-level grades proved no different this year. That is, until the government made a U-turn in favour of teacher-assessed grades, recognising that the downgrading of nearly 40 per cent of grades was causing personal distress for students and further eroding the nation’s declining trust in government. Ironically, by initially backing the algorithm and robotically defending the awarding system as “robust,” the government made itself look inhumane and insensitive to the very real, life-changing consequences the catastrophe had for individual students.
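Ofqual’s standardisation reportedly combined each school’s historical results with teachers’ rank ordering of pupils. The deliberately simplified sketch below is an illustration of that general idea, not the real model: when a school’s past results cap how many top grades its pupils can receive, an outstanding student at a historically weak school is downgraded regardless of ability.

```python
# Illustrative sketch only: NOT Ofqual's actual model.
# Grades are "moderated" by forcing a school's results to match its
# historical grade distribution, walking the teacher's rank order.

def moderate(ranked_students, historical_distribution):
    """Hand out grades (best first) according to the school's past results."""
    grades = []
    i = 0
    for grade, count in historical_distribution:
        for _ in range(count):
            if i < len(ranked_students):
                grades.append((ranked_students[i], grade))
                i += 1
    return grades

# A school that historically produced no A grades:
history = [("A", 0), ("B", 1), ("C", 3), ("D", 2)]
students = ["outstanding pupil", "p2", "p3", "p4", "p5", "p6"]

for name, grade in moderate(students, history):
    print(name, "->", grade)
# The outstanding pupil receives a B at best, however well
# they would have performed in the exam itself.
```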

Because the victims of this episode of algorithmic discrimination were young, people of all ages declared it generationally unfair: Labour Party leader Keir Starmer argued that it was “robbing a generation of their future.” We witnessed this year’s school leavers being cheated of the chance to control their destinies, which were left instead in the hands of an emotionally absent algorithm. In this instance, the government stepped in to put things right.

But the age of algorithms is already here. For these students, and perhaps for us all, this is just a taste of what is to come. We live in a digital ecosystem shot through with algorithmic discrimination. So why are we not collectively outraged by the rising level of automated discrimination that hinders our chances at every age and in every aspect of our lives?

Such discrimination is widespread and takes many forms. One common example is the use of applicant tracking systems (ATS) to decide which candidates reach the interview stage of a hiring process. These systems can inadvertently reject candidates on the basis of biased data, or screen out those with certain psychological traits, all before a candidate’s CV has even been seen by a human being, as the sketch below illustrates.
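To make the mechanism concrete, here is a minimal, hypothetical keyword screen (the keywords, scoring rule and threshold are invented for illustration, not taken from any real product): any proxy attribute baked into the rule silently excludes candidates before a recruiter ever reads their CV.

```python
# Hypothetical ATS-style screen. Keywords, scoring and threshold
# are invented for illustration, not drawn from any real system.

# "rugby club" stands in for a proxy attribute with no bearing on
# job performance: the kind of signal that quietly encodes bias.
REQUIRED_KEYWORDS = {"python", "leadership", "rugby club"}

def screen(cv_text: str, threshold: int = 2) -> bool:
    """Return True if the CV advances to a human reviewer."""
    text = cv_text.lower()
    score = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return score >= threshold

cv = "Ten years of Python, led a team of eight, chess champion."
print(screen(cv))  # False: rejected before any human reads the CV
```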

According to Jamie Susskind, algorithmic injustice in the hiring process means that more advertisements for high-income jobs are shown to men than to women. Would this be the case if positive discrimination were applied to candidates from groups that have previously been discriminated against? Such judgments ought to be made by humans, working alongside algorithms, during the first stage of the recruitment process. Giving algorithms the autonomy and authority to restrict employment opportunity and entry into an organisation may save time in sifting candidates, but it can also stifle innovation in an organisation that would otherwise benefit from a candidate who rises above homogeneous expectations.

The government’s A-level catastrophe, and its potential breach of anti-discrimination legislation, is not the only recent example of algorithms proving problematic for the government. Earlier this month the Home Office agreed to stop using an opaque algorithm to help decide visa applications, which critics have described as “institutionally racist.” The algorithm automatically processed visa applicants’ information and red-flagged applicants from a select group of nations, all without a human seeing their applications first. It had been in use since 2015. In those five years, how many people, each with their own story, went unheard and were discriminated against? How many could have enriched British society had they not been refused entry on the basis of their nationality and, as critics allege, the colour of their skin?

Since the 1980s, algorithms have also determined credit scores for loans and who should receive them, a far cry from earlier lending decisions, which were traditionally grounded in personal relationships. The direct consequence of this, according to Paul Collier and John Kay in their book Greed is Dead: Politics After Individualism, was the 2008 global financial crisis. Putting life-changing decisions in the hands of algorithms has consequences for us all if we fail to leave space for individual attention and moral engagement, traits that computers can never effectively supply.

Imagine how different your life could be had an algorithm been used to determine your own opportunities. The A-level results fiasco is a vivid reminder of the limitations of decision-making without human connection. Algorithms have their benefits and their place, but humans’ ability to innovate and advance despite error and adversity should always remain at the forefront when decisions are made about our individual futures.