Technology

The dangerous rise of policing by algorithm

Minority Report-style predictive policing is not the answer to problems of police bias

March 19, 2021
Hotspots: a police crime map of Birmingham. Credit: Alamy

In the wake of George Floyd’s killing in America, there was a wave of renewed criticism of the police. In the US and the UK, deep mistrust fed into calls for budget cuts—“Defund the Police” was the rallying cry. Black Lives Matter UK placed police reform at the centre of the movement’s demands, emphasising the urgency of tackling racial discrimination in the criminal justice system. This week the heavy-handed response to protests after the murder of Sarah Everard has once again put the police—and their potential biases—under the spotlight.

In response to BLM protests last summer, providers of algorithmic policing technologies took the chance to pitch their products as more cost-effective, less “prejudiced” solutions that could “innovate” policing. US predictive policing company PredPol (now rebranded Geolitica) said the following: “Geolitica was founded on the audacious premise that we could help make the practice of policing better… By ‘better’ we mean providing less bias, more transparency, and more accountability.” The statement continues: “We believe that the starting point is data: objective, agreed-upon facts that can be used to guide the discussion.”

What should we make of this claim to scientific objectivity? How does AI impact the role of police forces within local communities? And how do we situate the rise of algorithmic policing against claims of structural bias and racial discrimination?  

Geolitica is widely deployed across the US and the technology has also been used by the Met Police and police forces in Kent. It provides officers with a map of their local area which directs them where to patrol. The map shows a number of red squares—each roughly 150m by 150m—that highlight specific street corners, buildings or intersections forecast as “crime hotspots.” The software is similar to technologies currently being used, or trialled for future use, by police forces across the UK. If it sounds similar to PreCrime, the specialised police department in the Tom Cruise film Minority Report, that’s because it has some sinister similarities.
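At its core, the forecasting logic behind such maps is a count of where crimes have been recorded before. The sketch below is a minimal illustration of that idea, assuming a simple grid-and-count approach with invented coordinates; Geolitica’s actual model is proprietary and considerably more elaborate.

```python
from collections import Counter

CELL_SIZE_M = 150  # each grid square is roughly 150m by 150m, as described above

def to_cell(easting_m, northing_m):
    """Snap a geocoded incident report to the grid square that contains it."""
    return (int(easting_m // CELL_SIZE_M), int(northing_m // CELL_SIZE_M))

def forecast_hotspots(past_incidents, top_n=10):
    """Rank grid squares by how many recorded incidents fall inside them.

    past_incidents is a list of (easting_m, northing_m) points taken from
    historical crime records, which is exactly why the output mirrors past
    recording and patrol patterns rather than "objective" future crime.
    """
    counts = Counter(to_cell(e, n) for e, n in past_incidents)
    return counts.most_common(top_n)  # these become the red squares on the map

# Three invented incident locations, two of them in the same square:
print(forecast_hotspots([(120, 90), (130, 75), (900, 400)], top_n=2))
```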

As police officer numbers fell by 20,000 between 2010 and 2016, there was a wider turn towards “Police Science”—adopting technological systems and data-driven solutions to support decision-making. In recent years, the police have adopted a wide range of other scientific and technological tools: body cameras, facial recognition software, mobile fingerprint scanners, mobile phone extraction, automatic number plate recognition, speaker identification and social media monitoring. As our big data society continues to expand, so too do the databases feeding algorithmic programs. The Home Office is currently creating a policing “super-database,” bringing the Police National Computer (PNC) and the Police National Database (PND) onto one mega-platform.

In a report called Policing by Machine, the human rights group Liberty found that in 2018 at least 14 police forces in the UK had used or were planning to use predictive policing software. Several forces are also involved in a £48m Home Office-funded project called the National Data Analytics Solution (NDAS), which intends to analyse vast quantities of data from force databases, social services, the NHS and schools to calculate where officers can be most effectively used.

Part of the appeal of “intelligence-led” policing is technology’s guise of impartiality. Patrick Williams of Manchester Metropolitan University explores the “seduction of technology” within policing. He argues that, throughout society, science and technology assume a certain authority based on claims of objectivity. Within law enforcement, proponents of data-driven policing claim that algorithmic programs offer the opportunity to sidestep unconscious biases: the mathematics coded into facial recognition technologies, for example, would inform stop and search rather than the prejudice of an individual police officer.

In light of rising criticism from a range of sources, including Black Lives Matter UK and Shadow Justice Secretary David Lammy, police forces are beginning to feel the need to project a kind of police “objectivity.” Jackie Wang, author of Carceral Capitalism, argues that “by appealing to ‘fact’ and recasting policing as a neutral science, algorithmic policing attempts to solve the police’s crisis of legitimacy.” She continues: “the rebranding of policing in a way that foregrounds statistical impersonality and symbolically removes the agency of individual officers is a clever way to cast police activity as neutral, unbiased, and rational.” But Wang suggests this is nothing but a rebranding: data-driven solutions contain the same biases that have always existed within policing.

Take the Harm Assessment Risk Tool (HART), developed in 2016 by Durham Constabulary. It is a form of algorithmic policing that informs officers whether or not a suspect should be kept in custody, by calculating how likely a detained person is to reoffend based on past criminal history, as well as characteristics such as age, gender and postcode (with postcode being a significant factor in “community deprivation”). It was fed with historical data from 104,000 custody events recorded by Durham Constabulary between 2008 and 2012. Because that training data reflects who the police arrested and detained in the past, risk assessment algorithms merely recriminalise those who have already been criminalised, creating a feedback loop.
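To see how the feedback loop arises, here is a deliberately simplified sketch of a custody risk tool. The classifier, feature names and data are invented and stand in for HART rather than reproducing it; the point is that the label being predicted is re-arrest, which is a record of past police activity, not of offending.

```python
# Illustrative only: a generic classifier trained on made-up "custody records".
from sklearn.ensemble import RandomForestClassifier

# Each row: [prior offences, age, postcode deprivation index]
# Label: 1 if the person was re-arrested within two years, else 0.
# Crucially, the label records who the police caught again, not who offended.
X_history = [
    [5, 22, 8], [0, 45, 2], [3, 30, 7], [1, 60, 1],
    [4, 19, 9], [0, 35, 3], [2, 27, 6], [0, 50, 2],
]
y_history = [1, 0, 1, 0, 1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_history, y_history)

# A new detainee from a heavily policed postcode inherits that history:
new_detainee = [[1, 24, 8]]
print(model.predict_proba(new_detainee)[0][1])  # estimated "risk" of re-arrest
```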

HART has been designed to overestimate the risk that detained individuals pose. According to Sheena Urwin, responsible for Durham Constabulary’s adoption of the AI, “The worst error would be if the model forecasts low and the offender turned out high.” As a result, the system exaggerates the likelihood of reoffending. Framed as minimising risk, this strategy amplifies bias because it increases the chances of false positives. In many ways, these tools perpetuate a discriminatory system instead of “reforming” it.
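The consequence of that design choice can be shown in a few lines. The scores and thresholds below are invented, not Durham’s; the point is simply that treating a missed “high-risk” case as the worst error pushes the decision threshold down, and the number of false positives up.

```python
def classify(risk_score, threshold):
    """Label a detainee "high risk" if their score clears the threshold."""
    return "high risk" if risk_score >= threshold else "low risk"

# Invented scores for five people who, in fact, did not go on to reoffend.
scores_of_people_who_did_not_reoffend = [0.35, 0.42, 0.55, 0.48, 0.61]

for threshold in (0.7, 0.4):  # a neutral cut-off versus "err on the side of caution"
    false_positives = sum(
        classify(score, threshold) == "high risk"
        for score in scores_of_people_who_did_not_reoffend
    )
    print(f"threshold {threshold}: {false_positives} of 5 wrongly flagged")
```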

Geolitica claims to avoid the bias programmed into individual risk assessment tools by excluding personal characteristics. Its data is primarily based on location. But Geolitica perpetuates racial and class discrimination by proxy. As people from BAME communities are disproportionately more likely to be arrested, the algorithm wrongly assumes that the areas in which they live are areas where there is more crime—where what is considered “criminal” has an intentionally narrow scope. If the data also accounted for locations where white collar crime was committed, the algorithm would advise police officers to patrol different locations. Indeed, this is the point made by the White Collar Crime Early Warning System, a parody project that predicts financial crime. When historical data is used to predict future crime, existing biases are codified: postcodes become proxies for race.
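A toy simulation makes the proxy effect concrete. Every number below is invented: two areas have an identical underlying rate of offending, but the one that starts with more recorded arrests attracts more patrols, and therefore generates still more records.

```python
# Historical arrest records, not underlying crime: area A was patrolled more.
recorded_arrests = {"area_A": 100, "area_B": 20}
TRUE_OFFENDING_RATE = 0.05  # identical in both areas by construction

for year in range(5):
    # Patrols are allocated in proportion to past recorded arrests...
    total = sum(recorded_arrests.values())
    patrols = {area: 1000 * n / total for area, n in recorded_arrests.items()}
    # ...and more patrols mean more of the same offending gets recorded.
    for area, officers in patrols.items():
        recorded_arrests[area] += int(officers * TRUE_OFFENDING_RATE)

print(recorded_arrests)  # the gap widens even though offending never differed
```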

The rise of Police Science does not signal a new era for policing. Rather, predictive policing technologies present officers with a picture of how police have responded to crime in the past, including all the pre-existing patterns of exclusion. If police forces previously discriminated on the basis of race, the algorithm will advise them to continue to do so. Biased data in, biased data out. Despite the claims made for it, policing by algorithm is not impartial. As Ruha Benjamin writes in Race After Technology: “Crime prediction algorithms should more accurately be called crime production algorithms.”