The Met police's decision to use facial recognition not only harms our right to privacy—it damages our democracy, too

Both the police and technology companies talk about public "consent" in their work. But what if express consent no longer has to be sought?

February 03, 2020
A camera being used during trials at Scotland Yard for the new facial recognition system, as police in London will soon start using facial recognition cameras for the first time.

We live in an age of data. The average website shares information about your clicks with dozens of third-party companies. Businesses and concert organisers use your phone’s Bluetooth and Wi-Fi signals to count, track, and collect smartphone information from crowds. And, of course, we upload content to Facebook, Twitter, Instagram, and a myriad of other websites.

It’s almost inevitable, then, that a company like Clearview AI would come into existence. As reported by the New York Times, Clearview is a US technology company that scrapes photos from social media sites and uses algorithms to build a giant searchable database of faces.

Clearview's service allows users to upload an image of a person and search the app's database of three billion images for public photos of the same person. The technology has already been used by numerous police departments, which have spent tens of thousands of dollars on it, though the New York Times notes that many representatives of sites used by Clearview "said their policies prohibit such scraping."

A new age of policing

The revelation that police forces are using cutting-edge identification technology should not be surprising. Last month, the Metropolitan Police said it would be rolling out live facial recognition technology in select areas of the capital based on “bespoke” lists to "locate serious offenders."

Not only are there valid debates to be had about the accuracy of such systems and concerns about racial profiling; there are also major questions about what happens when the technology industry becomes intertwined with the police.

Policing in the UK has, at its heart, been philosophically legitimated by public consent. From 1829, every new police officer in the United Kingdom was issued "General Instructions" stating: “The power of the police to fulfil their functions and duties is dependent on public approval of their existence, actions and behaviour and on their ability to secure and maintain public respect.”

In September last year, the Metropolitan Police revealed that it had been supplying images for a facial recognition database that was then used to scan people visiting a local estate between 2016 and 2018. The force initially denied any involvement with the scheme, but later admitted that denial was “incorrect.” The estate’s developer, Argent, issued a statement saying “The system was used only to help the Metropolitan Police and British Transport Police prevent and detect crime in the neighbourhood and ultimately to help ensure public safety.”

Public consent

When the police rely on covert technologies that scrape data without the knowledge of the people (or companies) it comes from, it’s necessary to ask how we “consent” to being governed when our consent is never sought. We’ve learned to accept this governance, seduced by an ever-present “I Agree” button that stands between us and a social media account, a shopping site, and many other ubiquitous services. How can a user give informed consent when they’re asked to accept a 6,000-word, three-part agreement from their smartwatch?

One would think that the most reasonable solution to this issue would be stronger legislation against facial recognition, or at least a public dialogue, so people could better understand such projects and their ramifications. Unfortunately, these efforts would ultimately fall short for two reasons. The first is that law enforcement can easily bypass the “consent/dissent” binary.

In the United States, police have used consumer goods to perform surveillance. Last July, Vice's technology and science vertical Motherboard reported on a secret agreement between Amazon’s home security company Ring and police in Lakeland, Florida (one of at least 200 departments that have partnered with Ring), which showed that police were encouraged to push locals to adopt Ring’s video doorbell system. In return, police were to be given free Ring products, access to a map of all active Ring cameras in a neighborhood (with exact locations hidden), and the ability to speak directly to camera owners and request footage, all without a warrant. In response to the findings, the Lakeland Police told Motherboard that it did not distribute the cameras that Ring provided to the department, while Amazon Ring said “Through these partnerships, we are opening up the lines of communication between community members and local law enforcement and providing app users with important crime and safety information directly from the official source.”

An unending mission

The second is that legislation to curb facial recognition would struggle to cover the full range of surveillance systems. Facial recognition is concerning, but if it is blocked, law enforcement could simply move to tracking how you walk, a technique in development since 2000. If that is shut down, it could be your heartbeat, measured via long-range biometric tracking already available to the Pentagon. After that, it could be your smartphone’s MAC address, a unique identifier for any device that connects to a network. Privacy advocates have to fight each of these numerous battles; advocates of surveillance, however, need only one victory to win the war.

Legislation and transparency can be a solution, but they need to operate at a much more fundamental level and on an international scale. The state, a citizen’s only real counterweight to rich multinational corporations, should ensure stronger defences are in place to stop its own law enforcement from exploiting undemocratic practices, and should threaten substantial fines against the companies that build such systems. While the march of “innovation” may be inevitable, it should be subject to laws and regulations that think ahead instead of playing catch-up. At a time when the boundaries of consent feel blurrier than ever, we ought to guard against its being dismantled, piece by piece.