As facial recognition is rolled out across our cities, we can no longer ignore the racial bias embedded in such technology
by Pragya Agarwal / August 14, 2019
In the last few months, we have heard how facial recognition systems are being rolled out across London. King's Cross station has cameras already; Canary Wharf is considering a trial of face recognition technologies soon. Two police forces, London's Metropolitan Police and the South Wales Police, have trialled facial recognition systems on unsuspecting citizens, without their explicit consent (leaflets and signs were provided to inform passers-by). Many supermarkets and bars have also installed these cameras, and while people might assume they are just low-resolution analogue CCTV, many carry sophisticated face recognition technologies that can be accessed and controlled remotely and connected to the internet.
While heralding a new era in personalised shopping, and potentially reducing shoplifting and aiding public safety and security, such moves raise concerns about privacy. Aside from existential questions about "big brother" monitoring, there is little discussion of the racial and gender bias these technologies will perpetuate, and particularly of how this will affect people of colour. Most of the companies and organisations installing these systems are unaware of how severely racially biased they can be.
Facial recognition software is not free of error. FBI co-authored research in the USA suggests that these systems may be least accurate for African Americans, women, and young people aged 18 to 30. In 2015, Google had to apologise after its image-recognition photo app initially labelled a photograph of a group of African American people as “gorillas.” Joy Buolamwini, the founder of the Algorithmic Justice League, found that the robots at the MIT Media Lab where she worked did not recognise her dark skin, and that she had to wear a white mask in order to be recognised.
Along with Timnit Gebru, a scientist at Microsoft Research, she studied the performance of three leading face recognition systems, testing how accurately each could classify the gender of people with different skin tones. Microsoft's software showed an error rate of 21 percent for darker-skinned women, while IBM's and Megvii's rates were nearly 35 percent. All three had error rates below one percent for light-skinned males.
“A 14-year-old black schoolboy was fingerprinted after being misidentified”
We are already seeing some of the repercussions in the…