Do we need robot law? The question might seem to belong more to science fiction than to reality, but technological advances have made it a pressing one. At a British Academy debate in London on 31st January, the chair, Guardian journalist Hannah Devlin, began with an arresting anecdote.
Last year, a man in a Tesla in self-driving mode was killed when the car collided with a truck that neither the vehicle nor its driver saw. Was the technology to blame, or the human being? And if the authorities don’t have complete access to the technology, how can they adjudicate?
Patrick Haggard FBA, Professor of Cognitive Neuroscience at University College London, framed the question of robot responsibility. “From a cognitive point of view, there are two things you need: you need to know what you’re doing; and you need to know what the result of your actions will be.” As a child, when you hit someone, for example, you realise that they will be upset. “A sense of agency is crucial for regulating our behaviour.” Robots, for now, lack this sense of agency.
Could robots learn agency? Haggard thought that, with advances in deep learning and large databases, this could happen. But it would create further problems. When humans learn as children, they proceed by “trial and error”: they have parents to guide them, and are usually too small to do much harm. Are we prepared to accept robots making the same mistakes? What happens when an intelligent system running a car makes an error and misses an approaching truck, for example?
We will reach a point when “robotics” is deeply embedded within society, and that will mean we need to “fundamentally change the way society works.”
Professor Noel Sharkey, Emeritus Professor of Artificial Intelligence and Robotics at the University of Sheffield, outlined this brave new world. Amazon is already trialling robot deliveries; in New Zealand you can get a pizza delivered by drone. Sharkey himself has a robot vacuum cleaner, and believed that before long everyone in the room would have one.
There is very little joined-up thinking on these issues, he said. Who is going to control all these devices flying around our heads? And many more jobs will be lost than will be created; we have to prepare for the economic and social consequences of that. (As an aside, Sharkey said the United States would probably be better off with a robot president right now.) But while we need more laws and regulation, he said, we must be careful not to stifle innovation.
Susanne Beck, Professor of Criminal Law and Legal Philosophy at the University of Hannover, set out the issues precisely. In which areas of life do we accept machine autonomy? A machine is likely to be more efficient but less empathetic than a human being. So where efficiency is key, in traffic-light systems, for example, there is little problem. But what about in fields such as “education, medical healthcare and welfare”? To work that out, she said, we need “a social debate such as the one we’re having.”
Her second point was that transferring decision-making to machines leads to a diffusion of liability. If something goes wrong, can we hold the person who made the robot responsible? But if we do, then the machines aren’t really autonomous, and therefore not much use in the first place. “Of course we could institute an insurance or damages system—but I doubt these solutions would be sufficient.” Will we need a new legal entity, the “electronic person”?
Thirdly, could you programme robots to obey the law? Even if you could, how would they apply it in specific contexts where social and cultural knowledge is required, not just abstract programmable rules? It gets tougher still with moral norms, which tend to change more quickly than the law does. “We will need some kind of robot law to work out the complex discussions.”
Roger Bickerstaff, a Partner at Bird & Bird, was adamant that we don’t need laws directed at robots. He cited the example of a Swiss art group that created an autonomous machine that was “capable of crawling over the web.” It was given 100 dollars to spend and bought illegal drugs and surveillance equipment. Police then “arrested” the computer. “But since a computer doesn’t have any ethical understanding, it can’t be subject to any criminal consequences.”
Still, there needs to be a discussion. If a robot in the workplace learns by example, what would happen if it started harassing a female employee in imitation of one of its colleagues? Who would be held responsible?
Bickerstaff also raised the issue of job losses. Driverless cars alone might cost three million jobs. That’s a lot of voters, which means the kind of disruption to democracy we have seen recently will only increase. “I lived through the dismantling of the steel and coal industry. It’s risky stuff.” We shouldn’t just be looking out for maximum profitability: we need to think about how AI could lead to huge inequalities. We need something like a modern equivalent of the Victorian Factory Acts. Perhaps, he said, we need a “more Luddite approach to these issues.”