
We are on the cusp of one of the most dangerous arms races in human history

Fully autonomous weapons threaten to outpace our ethical frameworks for what is permitted in war

September 06, 2021
The MQ-9 Reaper. Do we trust AI enough to put it in control of hunter-killer drones? US Department of Defense / Alamy Stock Photo

War has always been a driver of technological development. Indeed, it might well be the driver of civilisation itself, having over the course of history made greater levels of social organisation necessary to muster, equip, train and organise fighting forces, together with the central authority needed to raise taxes to pay for it all. Often the factor that tips a conflict one way or the other is the extent to which one side can deploy superior weaponry. Improvements to the spear, the metallurgy of swords and armour, rapid-fire small arms, aircraft and logistical equipment have all contributed to military victories.

The aim of conflict is as it ever was: to destroy or degrade the enemy’s capacity and will to fight, at every level from the individual soldier to the economic and political system behind them. But war has changed in character, and this evolution is prompting new and urgent ethical questions, particularly in relation to remote unmanned military machines. Surveillance and hunter-killer drones such as the Predator and the Reaper have become commonplace on the modern battlefield, and their continued use suggests—perhaps indeed presages—a future of war in which the fighting is done by machines independent of direct human control. This scenario prompts great anxieties.

Almost all technological advances in weaponry bring new ethical problems. The St Petersburg Declaration of 1868 outlawed bullets, then newly invented, that expanded and fragmented on penetrating a victim’s body to increase their incapacitating effect. Move on 150 years and this style of bullet is now widely available in gun shops in the United States. The Hague Convention of 1899—before heavier-than-air flight was possible, note—outlawed aerial bombardment, such as throwing grenades or dropping bombs from manned balloons and dirigibles. Chemical weapons such as mustard gas were outlawed after the First World War, and since the Second World War numerous attempts have been made to ban—or at least limit—the spread of nuclear weapons. These are all examples of futility. The most difficult kind of race to stop, or even slow, is a weapons race. The development of military technologies is the purest example of the law that “what can be done will be done if it brings advantage to those who can do it” (which I have elsewhere dubbed “Grayling’s Law”).

Unmanned drones, used in terrains and circumstances where conventional forces are at a disadvantage, are among the more recent developments. Yet this is not an issue at the margins of warfare—serious analysts suggest that if 9/11 had happened just a few years later, there might not have been a Nato invasion of Afghanistan at all: it might have been possible to do everything using drones.

Paradoxically, drone activity is at the less bad end, if there can be such a thing, of causing death from the air. It is more selective, more precisely targeted, and therefore marginally less likely to cause collateral damage than conventional bombing. The seemingly inhuman nature of drone operations—a deadly, faceless, remotely controlled, unmanned machine, weighed down with missiles, remorselessly homing in on its target—prompts extra dislike; yet it reprises a form of killing that anciently recommended itself, embodying the same principle as stoning to death: placing the killer at a sanitary remove from the victim. Not touching the victim, not being physically nearby, is a sop to the conscience. Drone pilots in locations such as Creech Air Force Base near Las Vegas have the advantage over bomber pilots of guaranteed safety, as well as the stone-thrower’s remove.

In the terminology of remote warfare, most drones are described as “human-in-the-loop” weapons—that is, devices controlled by humans who select targets and decide whether to attack them. Other “human-on-the-loop” systems are capable of selecting and attacking targets autonomously, but with human oversight and the ability to override them. Examples include the Phalanx CIWS air-defence system used by the US, British, Australian and Canadian navies and described as “capable of autonomously performing its own search, detect, evaluation, track, engage and kill assessment functions,” and Israel’s Iron Dome system, which intercepts rockets and shells fired from Palestinian territory.

Where the ethical battle is hottest, however, is in relation to “human-out-of-the-loop” systems: completely autonomous devices operating on land, under the sea or in the air, programmed to seek, identify and attack targets without any human oversight after the initial programming and launch. The more general term for these systems is “robotic weapons,” and for the attacking kind, “lethal autonomous weapons” (LAWs). There is a widespread view that they could be in standard operational service before the mid-21st century. Hundreds of billions of dollars are being invested in their development by the US, China, Russia and the UK.

The areas of concern over their use are clear. The idea of delegating life-and-death decisions to unsupervised armed machines is inconsistent with humanitarian law, given the potential danger that they would pose to everyone and everything in their field of operation, including non-combatants. Anticipating the dangers and seeking to pre-empt them by banning LAWs before they become widely operational is the urgently preferred option of human rights activists. The “Campaign to Stop Killer Robots,” run by Human Rights Watch, has been successful not just in raising public concern but in marshalling support for a ban; at the time of writing 31 states, the European Parliament, the UN secretary-general, thousands of AI experts and scientists, and nearly two-thirds of people polled on the issue support an outright ban.


International humanitarian law already contains provisions that outlaw the deployment of certain weapons and tactics, especially those that could be injurious to non-combatants. LAWs are not mentioned because they did not exist at the time the documents were drafted, but the intentions and implications of the various appended agreements and supplementary conventions are clear enough. They provide that novel weapons systems, or modifications of existing ones, should be examined for their consistency with the tenor of humanitarian law.

One of the immediate questions with LAWs is whether they could be programmed to conform to the principle of distinction: that is, whether they would reliably be able to distinguish between justified military targets and everything else. Could they be programmed to make a fine judgment about whether it is necessary to deploy their weapons? If so, could they be programmed to adjust their activity so that it is proportional to the circumstances they find themselves in? Distinction, necessity and proportionality are key principles in the humanitarian law of conflict, and in each case flexible, nuanced, experienced judgment is at a premium. Could an AI program instantiate the capacity for such judgment?

This would require AI to be developed to a point where battlefield analysis and decisions about how to respond to threats are not merely algorithmic but have the quality of evaluation that, in human beings, turns on affective considerations. This is best explained by recalling neuroscientist Antonio Damasio’s argument that if a purely logical individual such as Star Trek’s Mr Spock really existed, he would be a poor reasoner, because of the lack of an emotional dimension to his thought. A machine would need subtle programming to make decisions consistent with humanitarian considerations. Creating a machine analogue of compassion, for example, would be a remarkable achievement; but a capacity for compassion is one of the features that discriminating application of humanitarian principles requires.

Someone might reply that human emotions are just what should not be required on the battlefield; machines would be less erratic because they would never be emotionally conflicted, making them swifter, more decisive and less error-prone than most—if not all—humans. But the question is whether we wish the decision-maker in a battle zone to be this way, given that the capacity to read intentions and to interpret behaviour and body language is key to conforming to humanitarian law. These are psychological skills that humans develop early in life and apply in mainly unconscious ways. To program killer robots with such capacities would be yet another remarkable achievement.


What is at issue here is something beyond facial recognition AI, which already concerns human rights activists because of its surveillance and privacy implications. The “something beyond” is the capacity of such systems to read and interpret emotions in faces. Emotion recognition is offered by Microsoft, Amazon and IBM as a feature of their facial recognition software, with obvious benefits to marketing campaigns and to advertisers monitoring responses to their products.
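To make the concern concrete, here is a minimal sketch of how an application might query one such service, using Amazon’s Rekognition detect_faces call, which returns per-face emotion labels with confidence scores. The function name, image source and confidence threshold are illustrative assumptions rather than any vendor’s recommended pattern, and the scores describe a facial expression, not a state of mind or an intention.

```python
# A minimal sketch, assuming the AWS SDK for Python (boto3) and valid credentials.
# It asks Amazon Rekognition to analyse faces in a local image and returns, for
# each detected face, the emotion labels reported above a confidence threshold.
import boto3


def read_emotions(image_path: str, min_confidence: float = 80.0) -> list[list[str]]:
    """Return, per detected face, the emotion labels scored above min_confidence."""
    client = boto3.client("rekognition")
    with open(image_path, "rb") as f:
        response = client.detect_faces(
            Image={"Bytes": f.read()},
            Attributes=["ALL"],  # "ALL" includes the Emotions attribute
        )
    results = []
    for face in response["FaceDetails"]:
        # Each face carries a ranked list of emotion labels (HAPPY, ANGRY, FEAR...)
        # with confidence scores. The scores describe the expression the model
        # sees, not the person's actual feelings or intentions.
        labels = [
            emotion["Type"]
            for emotion in face["Emotions"]
            if emotion["Confidence"] >= min_confidence
        ]
        results.append(labels)
    return results


if __name__ == "__main__":
    # Hypothetical usage with an illustrative file name:
    print(read_emotions("crowd_photo.jpg"))
```

Even in this benign commercial setting, the output is only a confidence-weighted guess about an expression; the leap from such a label to a judgment about threat or intent is where the danger discussed below lies.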

But it is also claimed that the ability of machines to read emotions has more general applications, from identifying potential terrorist threats to road safety. The risk of deliberate misuse, not least in racial profiling, is obvious; less obvious is the prospect of fatal mistakes—for example, someone being shot to death because a system identifies them as an immediate terrorist threat on the basis of what it interprets as their emotional state.

It is surely inevitable that such technology will be used in LAWs to identify enemy combatants and read their intentions. Proponents will argue that such systems might be more reliable than humans in battle situations, where our judgment can be affected by noise, confusion and anxiety. The answer to this, in turn, is to ask a simple question: if a person smiles, does it invariably and infallibly mean that they are happy? Are outward signals such as a smile or an extended hand of friendship reliable guides to an emotional state or an intention? Consider what any normal person would say in reply.

Another question concerns who would be held accountable if LAWs went haywire and killed everyone they encountered, irrespective of who or what they were and what they were doing. Would it be the military’s most senior commanders? The programmers? The manufacturers? The government of the state using them? Identifiable accountability is an important feature of humanitarian protection in times of conflict, because it imposes some restraint on what is done by militaries. Ambiguity or outright absence of it affords too much licence.

At the time of writing, the governments of countries developing unmanned weapons and robotic systems still officially maintain that they have no intention of allowing their use without human supervision. But one knows what can happen to good intentions. And while we know that the rise of LAWs is inevitable, we have no sense of the limits of their further development or application: policing demonstrations? Conducting warfare in space? The science-fiction imagination, so often anticipating scientific fact, has no obvious boundaries here.

It could be that systems will be developed that render weapons ineffective—this would be the ultimate stop to war, given that most people would rather make peace than try to kill one another. Is that a pipe dream? In every house connected to an electricity supply there is a trip switch which, when the system overloads or there is a short circuit, turns the power off. A trip switch for weapons of every kind would be wonderful. To some extent—so far—the mutual risks of nuclear warfare have acted, through deterrence, as a trip switch against that kind of war; other kinds of trip switch against other kinds of war might yet be possible.

Yet even as these words are written, there are doubtless researchers developing devices whose role in the future of war we do not yet know and cannot even anticipate. So long as war continues, so long will there be a race to gain technological advantages over real and putative enemies. LAWs, drones the size of mosquitoes, cyberspace marauders, sonic and laser weapons, violence in space, interdictions of the means of economic and personal life, indeed approximations of almost anything that our imaginations can offer, are almost certainly in development now. The question for the world is whether it is simply going to let this process unfold at its present breakneck speed, with scarcely any effort to limit and control the horrendous consequences that could follow.