Technology

It's time we faced up to AI's race problem

As facial recognition is rolled out across our cities, we can no longer ignore the racial bias embedded in such technology

August 14, 2019
Use of facial recognition technology by police forces has shed light on the problem. Photo: Prospect composite

In the last few months, we have heard how facial recognition systems are being rolled out across London. King’s Cross station already has cameras; Canary Wharf is considering a trial of facial recognition technology soon. Two police forces, London’s Metropolitan Police and the South Wales Police, have trialled facial recognition systems on unsuspecting citizens without their explicit consent (leaflets and signs were provided to inform passers-by). Many supermarkets and bars have also installed these cameras, and while people might assume they are just low-resolution analogue cameras, many are sophisticated facial recognition systems that are connected to the internet and can be accessed and controlled remotely.

While such moves may herald a new era of personalised shopping, deter shoplifting and aid public safety and security, they raise concerns about privacy. Aside from existential questions about ‘big brother’ monitoring, there is little discussion of the racial and gender bias these technologies will perpetuate, and particularly of how this will affect people of colour. Most of the companies and organisations installing these systems are unaware of how racially biased they can be.

Facial recognition software is not free of error. Research in the US co-authored by the FBI suggests that these systems may be least accurate for African Americans, women, and young people aged 18 to 30. In 2015, Google had to apologise after its image-recognition photo app labelled a photograph of a group of African American people as “gorillas.” Joy Buolamwini, founder of the Algorithmic Justice League, found that robots at the MIT Media Lab, where she worked, did not recognise her dark skin, and that she had to wear a white mask in order to be recognised.

Along with Timnit Gebru, a scientist at Microsoft Research, she studied the performance of three leading facial recognition systems, measuring how well they could classify the gender of people with different skin tones. Microsoft’s software misclassified darker-skinned women 21 per cent of the time, while IBM’s and Megvii’s error rates were nearly 35 per cent. All three had error rates below 1 per cent for lighter-skinned men.
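To make that kind of audit concrete, here is a minimal illustrative sketch in Python, using made-up records rather than the researchers’ actual data or code, of how error rates can be broken down by demographic group:

from collections import defaultdict

# Hypothetical audit records: (demographic group, true gender, predicted gender)
records = [
    ("darker-skinned female", "female", "male"),
    ("darker-skinned female", "female", "female"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
]

errors = defaultdict(int)   # misclassifications per group
totals = defaultdict(int)   # faces audited per group
for group, truth, predicted in records:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

# Report the error rate for each group separately
for group, total in totals.items():
    print(f"{group}: {100 * errors[group] / total:.0f}% error rate")

A single overall accuracy figure would hide the gap; only when the results are disaggregated in this way does the disparity between groups become visible.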


"A 14-year-old black schoolboy was fingerprinted after being misidentified"


We are already seeing some of the repercussions in the UK. Facial recognition was introduced in 2014 on the Police National Database (PND), which includes around 13 million faces. Eight trials carried out in London by the Metropolitan Police between 2016 and 2018 resulted in a 96 per cent rate of “false positives,” where the software wrongly alerts police that a person passing through the scanning area matches a photo on the database. The system’s particular problem with racial profiling came to light after a 14-year-old black schoolboy was fingerprinted having been misidentified.

At an April 2014 meeting, Durham Police Chief Constable Mike Barton noted that “ethnicity can have an impact on search accuracy,” and asked the Canadian company managing the police’s facial image database to investigate the issue (subsequent minutes do not mention a follow-up). An assessment by Cardiff University researchers found that the effect of racial discrimination was not tested during the trial evaluation period, and a February 2019 interim report by the Biometrics and Forensics Ethics Group’s Facial Recognition Working Group, an advisory body to the government, highlighted concerns about the lack of ethnic diversity in datasets.

Automated systems can be racist because they imitate the bias and prejudice of the society around them, of the designers who build them, and of the datasets their algorithms are trained on. There are two primary facets to implicit bias in technology: first, how data and design reflect and mirror the biases existing in the real world; and second, how technology in turn contributes to those biases in the real world.

Ethnic minorities are three times as likely as white people to have been thrown out of or denied entry to a restaurant, bar or club. In one survey, 38 per cent of respondents from ethnic minorities said they had been wrongly suspected of shoplifting, compared with 14 per cent of white people. I can personally vouch for the terror, shame and confusion this kind of racial profiling and prejudice can cause.

The datasets that AIs are trained on reflect the same ingrained biases that lead to such statistics. Bias can occur at the data collection stage (if, for instance, sample sets are not diverse) or can be introduced in the way data is structured and labelled. People who live on the margins of society, or who are routinely under-represented in other areas of public life, are more likely to be missed.

If these systems are trained on existing mugshot databases, which are likely to include a disproportionate number of black and other minority individuals, then they are highly likely to have inbuilt racial bias. The use of such facial recognition systems will therefore replicate societal bias, reinforcing and compounding it. It will be yet another mechanism contributing to the racial profiling of people of colour.