Does AI pose a threat to society?

Automation could offer massive opportunities but we must be wary

March 20, 2017

“Scientists fear revolt of killer robots.” When he read that headline, Alan Winfield, Professor of Robot Ethics at the Bristol Robotics Laboratory, was shocked. Speaking at a British Academy debate in Bristol, Winfield said he had told the journalist who wrote the story that discussing robot ethics was simply a case of scientists behaving responsibly. He then spent several days on national radio explaining that scientists do not, in fact, fear a revolt of killer robots.

Winfield worked with the British Standards Institute to draft the world’s first guide to the ethical design of robots and robotic systems. He has also contributed to Ethically Aligned Design: A Vision for Prioritising Human Well-Being with AI and Autonomous Systems. The purpose of such documents, he said, was to “bake ethics in from the very beginning of the process of developing these systems.” Think about passenger planes: the reason we trust them is that aviation is a highly regulated industry with an “amazing safety record.” When it comes to driverless cars, it is important that we have the equivalent of the Civil Aviation Authority.

Does AI pose a threat to society? Not an “existential” one, of course, said Winfield, but we do need to worry about the down-to-earth questions raised by our “present-day, rather unintelligent AI”: the systems deciding on our loan applications and controlling our central heating. The bigger question is: after automation, how can we ensure the wealth created by robotics and AI is shared by all of society?

Maja Pantic, Professor of Affective and Behavioural Computing at Imperial College London, made the point that artificial intelligence is already changing society. Cashiers are disappearing in favour of self-scan kiosks; more of us are ordering groceries over the internet. “The whole concept of supermarkets and buying stuff in shops may disappear.” As Google comes to dominate, libraries might become a thing of the past. So why the particular anxiety about automated cars? Much of it has come from the United States, where a large number of men are employed as truckers. In a cash-strapped NHS, Pantic added, we could have automated carers, or diagnosis by a computer. Some might object: “How will the doctor ever know what is going on with me?” But our technology for understanding human facial behaviour is now “very subtle.” With dementia, for example, powerful cameras that capture 60-100 frames per second can detect the facial symptoms of the disease faster than the human eye can.

Christian List, Professor of Political Science and Philosophy at the LSE, acknowledged that there were enormous opportunities, so if he sounded “a bit negative” it was because the debate required him to focus on the challenges. “The replacement of some types of jobs by new technology is nothing new—think about the industrial revolution or automated assembly-line production… But, in the past the replacement of one set of jobs tended to be offset by the creation of another set of jobs.” With automation it may be different: it is an open question whether the new jobs created will replace the old ones, and we will need to mitigate the problems of winners and losers. “As a society we need to think hard about more egalitarian schemes that fairly distribute the fruits of the productivity growth which we can achieve through automation.” List suggested that a universal basic income could be a fruitful path to go down.

As for longer-term problems: “I am not very worried about those horror scenarios involving artificial super-intelligent killer robots that might battle against humanity.” More challenging is the idea of devolving responsibility to robots. “What makes someone or something a moral agent?” When an AI system causes harm, who should be held responsible? Whoever conceived of it and designed the software? The manufacturer? The operator? Perhaps the system itself? “This is a question that arises now already: whenever a self-driving car causes a serious accident, we have to figure out who is responsible for that.” It will be an important job for philosophers, computer scientists, lawyers and society at large to think about how to refine our moral codes and concepts to deal with the moral challenges that arise in the context of AI technology.

Samantha Payne, COO and Co-Founder of Open Bionics, argued that the killer-robot mantra of the popular press fuels our paranoia, and that such a narrative is more of a threat to societal progress than robotics itself. “I am incredibly excited by the potential AI has to drastically impact healthcare,” for example. Automation has been happening for years, and AI and robotics can be used as tools to increase productivity and decrease costs across a number of sectors: manufacturing, retail, agriculture, transportation, data processing. But there are still areas within these sectors where automation can’t work: managing others, applying expertise, setting goals, and interpreting results.

She went on to say that in 2013 Amazon had 1,000 robots working in its warehouses; by 2016 it had 45,000. What has happened to the human workers those robots replaced? How do they now generate an income? This is obviously concerning: a McKinsey report suggested that half of today’s work activities could be automated by 2055.

Yet the advantages could be immense. “AI can crunch vast amounts of structured data and identify patterns far faster than humans can. The possibility of marrying the expertise of a doctor with the power of machine-learning has clinicians very excited. Multiple studies have already produced findings where machines have had a significantly higher success rate of identifying skin cancers, breast cancer, and lung cancer compared with human doctors.”

Her advice? Avoid tabloid hyperbole. “I think it’s really important for us to keep open minds and be critical of what we’re reading in the news.”