Statistics is all about uncertainty—and with AI there is always the chance of error

by Hannah Fry / December 13, 2018
After two full decades of our data being harvested, algorithms analysing our habits and humanity ceding control to machines, it is only in the last 12 months that we have finally started to see the very best—and worst—of a future entangled with technology. Artificial intelligence hype went into overdrive, at the same time as fears about the algorithmic manipulation of politics and the smothering of privacy reached fever pitch.
The Cambridge Analytica scandal in March revealed how a private company had been using Facebook newsfeeds to spread misinformation, manipulate emotions and advantage one political candidate over another—and a highly controversial candidate at that. It was a wake-up call for two reasons. “Algorithm” used to be a word you’d associate with computer scientists or mathematicians like me. But here was a story which brought home, first, just how far algorithms have permeated the fragile structure of our society; and, secondly, just how dramatic the unintended consequences of a badly thought through algorithm can be—when millions of people are under its spell.
Facebook wasn’t the only tech giant to be panned this year for failing to think through the implications of its inventions. Back in 2016, Amazon started selling Rekognition, a facial recognition tool, to police forces. It was the same technology used by Sky News during its coverage of the 2018 Royal Wedding to spot famous faces in the crowd, but when deployed by law enforcement in a city like Orlando, an early adopter of the tech, it could use the feed from a network of cameras to track an individual across the city.
This summer, the American Civil Liberties Union (ACLU) led a coalition of two dozen civil rights groups in calling on Amazon to stop providing “surveillance systems” to the government. Cue everyone with an interest in Amazon’s future, from shareholders to employees, calling for the tech giant to stay away from law enforcement.
There are serious concerns about this kind of technology—and it’s not just the question of snooping, it’s the question of how well it actually works. To illustrate, in July the ACLU ran the software on the faces of 535 members of Congress and checked them against a database of 25,000 criminal mugshots. If the algorithm were working perfectly, it would have uncovered the truth—that none of the members of Congress were in the criminal database. Instead, it found…
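The statistics behind that test are worth spelling out. With 535 faces checked against 25,000 mugshots, the software makes millions of face-pair comparisons, so even a tiny per-comparison false-match rate produces real false hits. The sketch below makes that arithmetic explicit; the false-match rates used are purely illustrative assumptions, not Amazon Rekognition’s actual figures.

```python
# Illustrative sketch of the base-rate problem in face matching.
# Since none of the probe faces are genuinely in the gallery,
# every match the system returns is a false one.

def expected_false_matches(n_probes, n_gallery, false_match_rate):
    """Expected number of spurious hits across all face-pair comparisons."""
    comparisons = n_probes * n_gallery
    return comparisons * false_match_rate

# The ACLU test: 535 members of Congress vs 25,000 mugshots
# = 13,375,000 face-pair comparisons in total.
for fmr in (1e-5, 1e-6, 1e-7):  # hypothetical per-pair false-match rates
    hits = expected_false_matches(535, 25_000, fmr)
    print(f"false-match rate {fmr:.0e}: ~{hits:.1f} false matches")
```

Even at a false-match rate of one in a million, the arithmetic predicts roughly a dozen innocent people flagged as criminals—which is the uncertainty this article is about.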