
The new ethics of war

What happens when our military machines are not only unmanned but autonomous?
May 17, 2017

On Wednesday 25th October 1854, the 4th and 13th Light Dragoons, 17th Lancers and 8th and 11th Hussars combined to create a cavalry unit known as the Light Brigade. Led by James Brudenell, the seventh Earl of Cardigan, they undertook an action so disastrous that it entered the annals of heroism and—courtesy of Alfred Tennyson—poetry. A series of mistakes sent the Light Brigade, with the earl galloping at its head, charging the length of a valley directly into the mouths of more than 50 cannon and 20 battalions of Russian infantry. It is a part of the Crimean War just as memorable as Florence Nightingale walking the wards of Scutari Hospital, shedding the beams of her lamp into the painful nights.

The Light Brigade was shot to pieces; there were 278 dead, missing and wounded; 335 horses were killed; only 195 men survived with their mounts—less than half the force. Cardigan, who miraculously survived despite galloping the length of the “Valley of Death” in both directions, hacking at Russian troops as he went, afterwards took himself aboard his yacht in Balaclava harbour and had a champagne dinner.

Marshal Pierre Bosquet, a French commander who witnessed the action, famously remarked, “C’est magnifique, mais ce n’est pas la guerre: c’est de la folie” (“It is magnificent, but it is not war: it is madness”).

This incident offers a number of startling contrasts 163 years on, over a century and a half in which war has continually evolved. Back then, in the mid-19th century, a group of lords, pursuing the traditional aristocratic occupation of war-making, were leading men born to the plough, the sheepfold and the forge—or, increasingly, the factory—into the mouths of cannon.

In 1914, the British Army was still divided on class grounds between officers and men. But the First World War was not a galloping war: weapons had changed, and machine guns and cavalry did not mix. Tanks were introduced late on in that conflict, but their full impact on the battlefield was only felt in the Second World War, where their mobility shaped battles in the Western Desert and on the Eastern Front, and—of course—their speed enabled the Wehrmacht’s initial Blitzkrieg.

Vietnam then brought the helicopter to the frontline; it has been integral to troop movement and offensive operations ever since. In the battles fought across South-East Asia in the 1960s, technology took a more than usually sinister turn when the jungles were sprayed with Agent Orange to strip their foliage in the hope of revealing Viet Cong troops and supply lines. The television-reported Gulf wars of 1990–91 and 2003–11 showed a new version of the soldier—a man wearing armour once more, but highly technologised, wired up, in full communication with comrades and commanders, donning night-vision goggles and carrying weapons of incomparably greater power than his predecessors, and indeed his opponents.

War, then, has changed in dramatic respects, technologically and, consequently, in character too. But in other fundamental respects it is as it ever was: people killing other people. As Theodor Adorno said, thinking of the development of the spear into the guided missile: “We humans have grown cleverer over time, but not wiser.” Every step of this evolution has raised its own ethical questions, but the next twist in the long story of war could well be autonomous machines killing people—a change that may demand a more profound rethink than any required before.

As well as posing their own particular ethical problems, past advances in military technology have—very often—inspired attempts at an ethical solution too. The 1868 Declaration of St Petersburg outlawed newly invented bullets that split apart inside a victim. The 1899 Hague Conference outlawed aerial bombardment even before heavier-than-air flight had become possible—it had in mind the throwing of grenades from balloons. After the First World War, chemical weapons were outlawed, and following the Second World War much energy was devoted to attempts at banning or limiting the spread of nuclear weapons. And when Bashar al-Assad used chemical weapons against his own people in Syria, Donald Trump responded with a retaliatory airstrike.

So, just as the continuing evolution of the technology of combat is nothing new, nor is the attempt to regulate its grim advance. But such attempts to limit the threatened harm have often proved to be futile. For throughout history, it is technology that has made the chief difference between winning and losing in war—the spear and the atom bomb both represent deadly inventiveness prompted by emergency and danger. Whoever has possessed the superior technology has tended to prevail, which—if it then falls to the victors to enforce the rules—points to some obvious dilemmas and difficulties.

War could become an entirely new kind of phenomenon, both technologically and ethically—and that change is already underway. The world is familiar with military hunter-killer drones such as the Predator and the Reaper, used in Afghanistan, the border territories of Pakistan and in Iraq, to “find, fix and finish” (as the military patois has it) human targets. These devices suggest a future of war in which the fighting is done by machines increasingly free of human control.

There are implications for the perceived justice of combat, too. Asymmetric warfare, in which small groups of insurgents can frustrate larger and better-equipped conventional forces, has traditionally challenged the presumption that the army with the best kit is bound to win. Through drones, however, the technologists are finding new ways to lock in the advantages of the powerful—they can be used for surveillance and offensive engagement in circumstances and terrains where conventional forces are disadvantaged and the risk to human life is too great. The badlands of the Afghanistan-Pakistan border provide a classic example of where they best do their work. Able to stay aloft for long periods, hard to defend against, difficult to detect and formidably armed, they are effective weapons that put no operating personnel at risk—a very desirable situation for those who wield them.

The fact that drones are controlled from thousands of miles away by operators sitting safely before a screen seems to make them more sinister, less “fair” and less right. Intensifying the distaste is the connection between drones and video games, with the military actively seeking those with gaming experience to pilot unmanned aircraft. In particular, the move from violent video games to the dreadful reality of killing actual human beings seems to cast a deeper moral shadow over their use, trivialising the deaths caused, and making cold and unfeeling the acts and actors that cause them.

One is reminded of the global press reaction to the first aerial bombing that took place in 1911, when an Italian airman threw grenades out of his plane onto Ottoman troops in North Africa. There was outrage at the “unsporting” nature of the venture on the grounds that the victims were unable to retaliate. This was quickly proved wrong: Ottoman troops shot down an Italian aircraft the following week, with rifle fire. Less than 40 years later the British and Americans were dropping hundreds of tons of high explosives on German and later Japanese civilian populations nightly.

The drone reprises an ancient form of killing that has always recommended itself: it embodies the same principle as stoning to death—distancing the killer from the victim at a sanitary remove. In this it does not represent any great ethical break with the recent past: much the same could be said of high-level carpet bombing. Indeed, and somewhat paradoxically, drone activity is at the less bad end, if there can be such a thing, of causing death from the air. It is more selective and precise, and therefore marginally less likely to cause collateral damage than conventional bombing.

The ethical twist, however, comes from the seemingly inhuman nature of drones—the deadly machine without a person in it, faceless, and remorselessly homing in on its target. This is a prompt for extra dislike.

Yes, RAF bomber pilots during the Second World War were detached from their victims purely by distance. Yes, too, they released huge volumes of bombs while never touching their victims or sharing the same space, and perhaps that served as a sop to conscience. Those pilots, however, were themselves in danger: they could crash or be shot down.

By contrast, the screen-gazers who steer their drones to targets have the advantage of guaranteed safety as well as the stone-thrower’s remove. If only one force in a conflict faces physical danger, are we now in the era of one-sided war?

The history of drones is surprisingly long. They have been an important part of many air forces for decades, with Unmanned Aerial Vehicles (UAVs) undertaking tasks considered “too dull, dirty or dangerous” for human beings. UAVs were in rudimentary use as early as the First World War, when they served as target practice. During both the First and Second World Wars they also served as flying bombs, before becoming decoys and surveillance devices in the Arab-Israeli Yom Kippur war of 1973. In Vietnam, they undertook more than 3,000 reconnaissance missions.

But it was only with the development of global positioning technology and the miniaturisation of these systems that it became possible to deploy and control a remote aircraft at extreme distance. After 2001, military UAVs increasingly became central to US operations in the Middle East and Afghanistan in hunter-killer roles. The Predator drone became operational in 1995, the Reaper in 2007; since then they have grown in number to constitute almost a third of US military aircraft strength, and have been used in missions around the world.

Drones over Afghanistan are remotely operated from bases in the US such as Creech Air Force Base near Las Vegas. In the terminology of remote warfare, drones are described as “human-in-the-loop” weapons; that is, devices controlled in real time by humans. Another development is “human-on-the-loop” systems, which are capable of selecting and attacking targets autonomously, though with human oversight and the ability to override them. The technology causing most concern is “human-out-of-the-loop” systems: completely autonomous devices programmed to seek, identify and attack targets without any human oversight after the initial programming. At this point the ethical questions become even more acute.

The more general term used to designate all such systems is “robotic weapons,” and for the third kind “lethal autonomous weapons” (LAWs) or—colloquially—killer robots, which take full charge of where and whom to shoot. The acronym LAWs is chillingly ironic. Expert opinion has it that they could be in operation before the middle of the 21st century. It is obvious what kind of concerns they raise. The idea of delegating life-and-death decisions to unsupervised armed machines is inconsistent with humanitarian law, especially given the potential to put everyone and everything in their field of operation at risk, including non-combatants. Anticipating these dangers, human rights organisations are campaigning to ban LAWs in advance.

"Drones embody the same principle as stoning—distancing the killer from the victim at a sanitary remove"
International humanitarian law already has provisions that outlaw the deployment of weapons that could be particularly injurious, especially to non-combatants. LAWs are not mentioned in the founding documents, but the implication of the appended agreements is clear. They provide that novel weapons systems, or modifications of existing ones, should be examined for consistency with humanitarian law. One of the immediate problems with LAWs is whether they could be programmed to conform to the principle of discrimination: that is, to be able to distinguish between justified military targets and everything else. Could they be programmed to make a fine judgment about whether it is necessary for them to deploy their weapons? If so, could they be programmed to adjust their activity so that it is proportional to the circumstances in which they operate?

An affirmative answer to these questions requires artificial intelligence to be developed to a point where analysis of battlefield situations, and decisions about how to respond to them, is not merely algorithmic but has the quality of evaluation that, in human beings, turns on affective considerations. What this means is best explained by considering neurologist Antonio Damasio’s argument that if an almost purely logical individual such as Star Trek’s Spock really existed, he would be a poor reasoner because he would lack an emotional dimension to his thinking. A machine would need subtle programming to make decisions in the way humans do. In particular, creating a machine analogue of compassion would be a remarkable achievement; but a capacity for compassion is one of the features that intelligent application of humanitarian principles requires. Grasping what a person intends or desires by interpreting their actions is another distinctive human skill. Is that soldier surrendering, calling for help, or threatening? To programme killer robots with such capacities would be yet another remarkable achievement.

And who would be held accountable if a LAW went haywire? Would it be the military’s most senior commanders? The programmers? The manufacturers? The government of the state using them? Identifiable accountability is an important feature of humanitarian protection in times of conflict, because it imposes some restraint on what is done by militaries, and lack of clarity about it or its absence affords too much licence.

Does talk of LAWs sound like science fiction? In 2004 the US Navy produced a planning paper on Unmanned Undersea Vehicles (UUVs) saying that although “admittedly futuristic in vision, one can conceive of scenarios where UUVs sense, track, identify, target, and destroy an enemy—all autonomously.” Human Rights Watch quotes a US Air Force document predicting that “by 2030 machine capabilities will have increased to the point where humans will have become the weakest component in a wide array of systems and processes.” The UK Ministry of Defence estimated in 2011 that artificial intelligence, “as opposed to complex and clever automated systems,” could be achieved in five to 15 years, and that fully autonomous combat aircraft might be available by 2025.

While attention is fixed on the idea of futuristic killer robots, it is easy to forget that fully automated weapons systems are already in service. One example is the US Navy’s Phalanx system, which, in the Navy’s words, is “capable of autonomously performing its own search, detect, evaluation, track, engage and kill assessment functions.” Another is Israel’s Iron Dome system, which automatically intercepts incoming rockets and shells fired from Israel’s neighbours and the Palestinian territories. Both Israel and South Korea—currently on guard as North Korea threatens more nuclear tests—have automated sentry systems whose heat and motion sensors inform human monitors back at base if they have detected people in their vicinity. How long before they shoot autonomously too?

In short, no one doubts the feasibility of automated systems, even if it is not yet clear how human-like decision-making can be programmed into them. At present the governments of countries developing unmanned weapons and robotic systems say that they have no intention of allowing their use without human supervision. But one knows what can happen to the best intentions.

Even as I write these words there must be researchers developing devices whose role in future wars we cannot yet anticipate. So long as war continues, there will be a race to gain technological advantages over real and putative enemies. Drones are only one of many places where the frontiers of technology are pushing forward; cyber-warfare is another. The reliance on computing in managing almost every aspect of military activity makes the targeting of army and navy command-and-control systems an effective alternative to bombing installations. According to reports, the US has targeted both the North Korean and Iranian nuclear programmes with cyberattacks. Where the aim is to unleash chaos or undermine civilian morale, there is an increasingly wide range of civil networks—from power suppliers to policing organisations—whose targeting may be more effective than bombing industrial centres or civilian populations. Technology may drag more of modern life into the domain of war, with who knows what ethical implications.

Efforts at regulation can do some good, as the partial successes of the chemical weapons ban and the nuclear non-proliferation regime suggest. More generally, however, belligerent technology cannot be contained. We might do better if we turned our minds instead to how to reduce outbreaks of war overall.

War seems to be far more a matter of how we arrange ourselves politically than an outcome of human nature. Anger, aggression and a willingness on occasion to fight are human characteristics, but the overwhelming evidence of cooperation and mutual interest in our essentially social species puts this feature of human psychology in its place. We can all be selfish, we can all be generous; we can all be kind and sometimes unkind; but look around at the streets and buildings of any city and you see the marks of mutuality and cooperation more enduringly displayed—bridges, schools, hospitals, civilisation itself—even if the marks of humanity’s less appetising sides are visible too. It is nation states and tribes, the organised groupings of people, between which war flares up. Ties of trade and cooperation across borders are thus prophylactics against war, as Richard Cobden observed in the 19th century, Thomas Paine in the late 18th, and others even earlier. The United Nations and—yes—the European Union are examples of such cooperation, which is one reason why the question of what would happen if the EU fell apart presses so hard.


Beyond the promotion of amity and international concord, we should think about how to deinstitutionalise war. The raising, training and supplying of military forces is a given in almost all states, as if it were as natural as breathing. It is government defence contracts that typically provide the means of advancing the technology of war, and these play an important part in economies. Military personnel are respected, honoured, applauded: quite rightly, in cases where the defence of the nation was achieved by their courage. But the encouragement of positive social attitudes to those whose business is war results in the idea of war being built into the DNA of a society and its economy. Not for nothing was US President Dwight D Eisenhower, a former general, moved—at the height of the Cold War—to warn against the “military-industrial complex.”

War is romanticised in novels, films and in reportage. See, for example, the macho posturing shown in the name of the US’s $16m “Moab” or “Mother of all bombs,” a powerful device that was recently deployed against Islamic State in Afghanistan. War is cosmeticised; television news broadcasts do not show the blood and guts or the blown-apart children. How about truth as an antidote to war?

Technology is almost bound to keep reinventing the form of war. It may be harder to change that than to reduce the incidence of war itself. If there is one key to the entire question of war, it is justice. A fair world would be a far less conflicted one. In seeking to contain the damage done by the next generation of military technology, it may be more fruitful to reflect upon that than to dream up rules that aim—against all experience—to hold back the technological tide. For if drones kill and maim human beings, it will be because of human motivations and through very human means. So let us use human insight into our own frailties to control how many drones get dispatched. Let us wean our economies and societies off the addiction to the idea that things military are a commonplace necessity.

And let us be hard-nosed about it: only the presence of bad people elsewhere requires us to be ready for our own defence; but war as an instrument of anything other than defence is totally, completely, humanly, morally unacceptable: a crime of the blackest kind on the part of those who cause it.

The work of ending war is in hand; it is long and arduous. It takes, and will continue to take, even more resolution, courage and determination than it takes to declare war. That is a fact. It is where the real heroism of the human species will be displayed.

Meanwhile we are still having to live with war, and therefore have a battle to fight: to prevent it whenever possible, to limit it if not, to press for humanitarian restraint when it happens, to hold warmakers to account, to argue and educate against it always.

As the technologies of war grow ever more sophisticated and destructive, so the truth enunciated by John F Kennedy comes ominously closer: that if we do not end war, it will end us. Perhaps one day it will.

Or perhaps one day everything considered here will be the stuff of old and outdated things, as witchcraft or astrology might now be—past nonsense, from irrational times, when folly too often reigned. I hope so.

This article is drawn from Grayling's new book, "War: An Enquiry," published by Yale