New technology helps our enemies as well as us and raises new questions about providing security and preserving freedom
by John Sawers / January 21, 2016
Can we stop a Paris-style attack happening in London? The honest answer is yes—most of the time.
As MI6 Chief, my top priority was identifying terror attacks against Britain planned from abroad. Working with intelligence partners in the United States and the Middle East, we had significant successes. You do not know about the attacks we prevented because they did not occur, and we don’t talk about them. Why give our enemies clues to how we stop them?
When I joined MI6, I was trained to spot people tracking me by tapping my phone, intercepting radio communications or following me by car or on foot. Today those techniques are used against terrorist suspects, supported by technologies like face or footstep recognition. But you have to know which people pose a threat—and first, you have to find them.
One method we use is the new science of data analytics. Every time you use your mobile, post a Tweet, shop online, drive past a CCTV camera, tap your Oyster card, or watch a YouTube cat video, you create data. Everything you do digitally—everything anyone does—makes these data oceans bigger, richer and deeper.
“These days, terrorists are scheming in cyberspace. If terror suspects are operating on the internet, it is essential that the police and security services have the legal power to track them”
So we dive into these data oceans and look for patterns. We search for snippets of information that warrant a closer look. Then we have to work out who, among several thousand possible extremist sympathisers, might launch an attack in Britain next week.
We need to follow suspects wherever they go. If a terror suspect enters a pub, it is reasonable, if not vital, that the police and security services have the legal power to enter and monitor him or her there. These days, terrorists are scheming in cyberspace. If terror suspects are operating on the internet, it is essential that the police and security services have the legal power to track them online and identify who they are communicating with.
As citizens, we want maximum privacy and maximum security. Unbreakable encryption is at the centre of the argument. Intelligence agencies focus on security; technology companies focus on privacy. They each accuse the other of ignoring the public interest they are protecting, but both have a point. We want world-class encryption to keep our data secure. But terrorists and extremists use this encryption against us, keeping their identities and communications secret. There is nothing new here. Every technological advance—guns, cars, telephones—has quickly been used by the enemies of society. And like these advances, unbreakable encryption cannot be uninvented.
The big technology companies have a crucial role—and a unique responsibility—in building the security that keeps us free and safe. We trust them in part because they are private. Co-operation is much preferable to legislation. The next step is for all parties to collaborate on a way forward to benefit from new technologies while doing what we can to stop those who would do us harm. This kind of co-operation between public and private sectors is needed in free societies where security underpins our privacy, private enterprise and liberal democracy.
How, though, should we set clear limits on how the state can acquire data?
Say that you do not trust the government and intelligence agencies, but you also do not want to live in fear of terrorism. You grudgingly accept that agencies need to look at internet data to find and track terrorist networks. Then you sit down to devise laws and come up with something like the following.
First, privacy is the norm. Exceptions are allowed only when a minister decides that intrusion is necessary. Second, while agencies can look for patterns in data, high-level authorisation is needed to track individuals. Next, those doing the work must be tightly vetted and alarms should go off over improper searches. Then, while we should share intelligence with other governments, we use extreme caution if they have a bad human rights record. Finally, there must be oversight by MPs and judges, frequent spot-checks, and checks and balances on every level.
“Is it better to shut down this material, even if you drive it deeper into the dark web?”
Guess what? That is more or less what we have now.
There is rarely a good time for these debates. New laws rushed through after a major attack will not strike a wise, principled balance. Fortunately, that is not the case with the new Investigatory Powers Bill before parliament, which is based on the recommendations of David Anderson QC, the Independent Reviewer of Terrorism Legislation, and is designed to strike such a balance.
When you put all the powers of the agencies into one codified legal framework, the overall package might look ominous, if not alarming. Do our agencies really need to be able to do all this?
Some people also ask: if state surveillance did not stop the Paris attacks, what good is it? But, to make an analogy, no goalkeeper has a perfect record. Even the finest can be beaten by a top-class shot or a freakish deflection. That does not make them a bad goalkeeper, or the idea of goalkeeping redundant.
I do not want to downplay reasonable concerns. But technologies that empower us also empower our enemies. We can track down people like Mohammed Emwazi, known as “Jihadi John.” But you and your children are only a few clicks away from people who use 3D printers to create replica guns, those who make synthetic drugs, or from Islamic State (IS) and al-Qaeda and their propaganda.
This presents an acute dilemma. Is it better to shut down this ghastly material, even if you drive it deeper into the dark web? Or should we accept that this poison is in society’s bloodstream and quietly watch what is happening and who might be infected?
Those in the intelligence and security services face this dilemma all the time. You can trust the skill and restraint of the people working day and night to protect you. Or you can further limit their powers—and pray the people working day and night to destroy our societies do not hit you, your family or your town.
Today’s security requires the use of technology to guarantee huge areas of freedom for all of us, by making difficult compromises on the margins. This is not an attack on privacy, but the only way to safeguard it while combatting the enemies of free society.
Technology is changing foreign policy as well. In 1982, under President Hafez al-Assad—the father of Syria’s current President, Bashar al-Assad—the Syrian army attacked Hama, Syria’s fourth largest city, to put down an Islamist uprising. They killed over 20,000 people—three times the death toll of Srebrenica. The attack went on for weeks, but barely any news seeped out. When it did, global reaction was muted. There was little public pressure and it suited most governments to look away.
Compare that with the reaction to Malaysia Airlines flight MH17, shot down over Ukraine in July 2014. Swarms of amateurs and experts from all over the world took to the internet. Drawing on satellite imagery and other open-source material, they pinpointed the probable launch point of the missile, the type of missile used, and the likely people responsible. They punched big holes in the official story coming from Moscow and pointed the finger of guilt at Moscow-backed separatists.
“What may be needed is real leadership, taking people along a path that is tough, slow and unpopular to achieve a greater goal”
In BloombergView, James Gibney called this a “citizen-driven open-source intelligence revolution.” In October, bombs fell on a Médecins Sans Frontières hospital in Afghanistan. Crowd-sourced investigation quickly forced the US to accept responsibility. All very admirable—but the immediacy and transparency of today’s technology is giving our leaders serious problems.
The first problem is time. Events and disasters now come thick and fast. The 24/7 media cycle and incessant clamour of the internet puts politicians under pressure to respond quickly. Often, their actions are aimed at shutting up their noisiest critics. Yet what may be needed is real leadership, taking people along a path that is tough, slow and unpopular to achieve a greater goal.
Take Syria again. In 2011, Syrians demonstrated against Bashar al-Assad’s rule and he turned the army on them. The west was torn, but did not intervene. Then in 2013, he used chemical weapons against his people in a breach of international conventions. This war crime demanded a swift, strong response: it was vital to hold the line against these weapons. The British government took a clear position that military action was required, but chose to seek approval from parliament. Reflecting public unease about another Middle East intervention, parliament said no. President Barack Obama then had doubts whether he could act without the support of Congress. This left the west in a hopeless position, “demanding” the departure of Assad without tackling him.
Since then, Syria’s civil war has created space for the rise of IS, who pose the worst terrorist threat in living memory. Syrian refugees are coming in unmanageable numbers, undermining European solidarity. And now Russia is involved, unconstrained by democratic pressures or concern for civilian casualties, using air power and missiles to prop up the dismal Assad regime.
We all share some responsibility for these grim outcomes. But when timelines are so short and technology gives a deafeningly loud voice to all sorts of critics, well-intentioned or not, thinking strategically becomes next to impossible in a modern democracy.
In the wake of the Paris attacks, we need a strategy to help the Syrian people and remove IS from its strongholds. A new diplomatic process for Syria has begun, but its outcome will be shaped by the strength of forces on the ground. If we want moderates to have a voice, we need to support them militarily.
The second problem is trust. Technology makes us all more accountable. Scandals such as MPs’ expenses or media phone tapping are healthy exposures of abuse. But examples like these can lead to unbridled cynicism, in which anything secret is a cover-up.
Yet patient diplomacy relies on confidentiality. For years, the Iran nuclear talks were stuck. Both the US and Iran faced forces at home rejecting compromise. Then the Obama Administration made a sustained effort with Iran through secret meetings in Oman. It led to a breakthrough and then an agreement. At times, transparency has to sit back and give diplomacy a chance.
The final problem is disruptive change. Every leader, good or bad, wants to reap the benefit of new technologies and big data. But what if today’s technology is too disruptive for free societies, making democracies look weak or uncertain?
In contrast, autocratic or oppressive systems may avoid the worst disruptions. They are already skilled at closing down debate and manipulating public opinion. And they do not worry about transparency, so they can think strategically and act decisively. No country is more strategic than China. I have met some of China’s leaders and they plan in decades, even centuries: they are surprised that we don’t. As we saw in Ukraine and now Syria, President Vladimir Putin is using his power to create new realities. Autocratic states may start to look stronger, more effective, more orderly than democracies.
But, for all their fumbling, scandal and confusion, democracies have one huge advantage. They are flexible and open. They embrace new ideas and opportunities. It is our greatest strength. Yet we cannot take success for granted. We are at a moment in history like the industrial revolution. Who will get first mover advantage, as Britain did in the 18th and 19th centuries?
Societies that master big data will enjoy a head start, whether they are democratic or not. They will lead the way in artificial intelligence and robotics, reaping benefits in health and education simply by knowing more. They will adjust faster to change. Nations that veer away from new technology will fall behind, and radical new inequalities in wealth and power will emerge.
Soon, self-learning computers will start displacing people. Scientists like Stephen Hawking urge us to consider the ethical implications of this now, rather than wait until they are upon us. We need to work through the implications for our politics, too. To make technology support our freedoms won over centuries, and not erode them, we must think ahead, and not leave the next generation with a stark choice between security or freedom.
Back in 1973, I went to university to study physics. Computer science looked too hard, so I took philosophy modules instead. That led me into my career in foreign affairs and security. I drew on my nuclear physics when negotiating with the Iranians—much to their surprise. Foreign policy and intelligence work have echoes of physics: both are about the balance of forces, momentum, pressure, optics. And parallel worlds. Or things that work on one scale but not another.
My whole career has been geared around the issues of freedom and security. Neither can be absolute or guaranteed, and each depends on the other. Oppressive security undermines freedom. But freedoms evaporate if there is no security we can rely on to uphold them.
The longer-term issues raised by new technologies for our societies and political systems are much greater and more profound than the short-term trade-offs needed to combat terrorism. No one knows where technology will take us. In a free society we have the advantage of dynamism and flexibility. We are going to need that to ensure these technologies are harnessed to reinforce both freedom and security. We don’t want to wake up one day and discover that new technology has pushed us in a direction we never wanted to take.
DH Lawrence once wrote: “If only we could have two lives. The first in which to make one’s mistakes, which seem as if they have to be made. And the second in which to profit by them.” With new technologies, perhaps we will soon have that luxury.