When it comes to technology and security, politicians are caught between a dream and a nightmare

by Tom Chatfield / January 17, 2015
Police officers stand in front of a house in which two terror suspects were shot on 15th January in Verviers, Belgium ©OLIVER BERG/dpa

Yesterday’s headlines give a pretty good idea of what a successful surveillance strategy looks like: “Two dead in Belgium as police foil ‘grand scale’ terrorist plot”, “Belgium Thwarts Terror Plot.” When information is accurate, timely and actionable, it can and does save lives.

Good news around terrorism is rare. Most prevention success stories can’t be boasted about; the worst possible news, as in the case of the Paris attacks, blots out all other events while begging the most painful of questions: what might have prevented this from happening—and why wasn’t it done?

In this sense, the Prime Minister deserves some sympathy for his comments on 12th January that “in extremis, it has been possible to read someone’s letter, to listen to someone’s call, to mobile communications… The question remains: are we going to allow a means of communications where it simply is not possible to do that? My answer to that question is: no, we must not. The first duty of any government is to keep our country and our people safe.”

In every practical sense, however, the argument is alarming and ill-conceived. While the case in favour of encryption shouldn’t need rehearsing again here, there remain questions on its flip side that are trickier to dismiss. Given that introducing government-mandated security vulnerabilities into every piece of software used in a country—from messenger services like WhatsApp to every potentially secure installable program for every operating system in use—is a daft idea, what might an intelligent digital surveillance strategy look like? When it comes to technology and security, politicians are caught between a dream and a nightmare.
On the one hand, there’s the seductive dream of algorithmic anticipation: a total knowledge vortex in which every future wrongdoing is flagged up, every suspect constantly monitored, every atrocity pre-empted. On the other hand, there’s the nightmare of untraceable technology facilitating any crime you care to mention: attacks planned with perfect impunity, drugs and guns and bombs freely traded.

Neither of these scenarios quite reflects reality (although the second comes closest), but it’s easy to see the political incentives they embody. The capacity covertly to access information is an unignorable asset in fighting the good fight, while the existence of perfectly inaccessible communications seems an unacceptable liability. What is a government to do?

For a start, there’s a need to be precise about what actually works. Any intelligence-led security success, like the Belgian operation, is a victory of signal over noise. Three heavily-armed gunmen—all Belgian nationals—had been under surveillance for a fortnight, apparently following their return from fighting in Syria. Identifying and tracking them entailed the cross-referencing of expert knowledge and leads with the bugging of their cars and homes, followed by the rapid and carefully co-ordinated deployment of substantial manpower. Fifteen other suspected jihadis were arrested in Belgium and France at the same time—all under the aegis of surveillance and search warrants.

Unfortunately, the dream of big data snooping pulls in quite another direction. Most modern security services are drowning in data and noise: in watch-lists of people numbering in the thousands, in false alarms and red herrings and disinformation. Acquiring information is one thing. But understanding that information and turning it into knowledge is a massively labour-intensive process—and one whose effectiveness can diminish exponentially with scale.
The more draconian you’re prepared to be in your approach, of course, the more broadly you can define risk and suspicion. Fighting technology with technology is a seductive approach for authoritarian regimes precisely because it’s so suitable for creating new categories of the guilty or dangerous or merely inconvenient (China makes its arrests of those deemed to be spreading “fraudulent messages” online ten thousand at a time; but it’s hardly the only offender). Once you’ve built a system whose very purpose is to calculate degrees of suspicion, you’re in the business of manufacturing suspects for crimes that haven’t yet been committed; and, like all systems, it will gather a momentum of its own.

The question of security also becomes rather different when you step back from exceptional events and their usefulness in justifying any action or expenditure. At the moment, most security services are more interested in keeping digital backdoors open than in helping ordinary people to protect their privacy and rights: anything to prevent the next horror. The question of what keeping us safe means, however, has to be asked within the larger context of freedoms and responsibilities. From what and whom are we to be protected, and at what price?

While the dream may be some total algorithmic knowledge of earthly evil—and the nightmare a dark net of untraceable whispers and plots—the reality is that we are likely to be helped only by putting specific political events and motivations, and an understanding of consequences, at the heart of any debate on security. How and why are people like the Kouachi brothers being radicalised? Tracing the place of technology in such a process is important; knowing the digital geography and networks of influence to monitor, and how best to monitor them, is vital.
But such a strategy is at its most robust when tech isn’t treated as black or white magic, when oversight is extended rather than diminished to match the growing power of the tools both sides can deploy—and the price of failure isn’t making every citizen a suspect.