AI research raises profound questions—but answers are lacking
by Vincent Conitzer / May 4, 2016
A humanoid robot equipped with artificial intelligence helps a teacher with a science class at Keio University Kindergarten in Shibuya Ward, Tokyo, on 25th January 2016 ©Miho Ikeya/AP/Press Association Images
The idea of artificial intelligence (AI) has captured our collective imagination for decades. Can behaviour that we think of as intelligent be replicated in a machine? If so, what consequences could this have for society? And what does it tell us about ourselves as human beings?

Besides being a topic in science fiction and popular philosophy, AI is also a well-established area of scientific research. Many universities have AI labs, usually in the computer science department. The feats accomplished in such research have been far more modest than those depicted in the movies, but the gap between reality and fiction has been closing. For example, self-driving cars are now on the roads in some places. The world outside academia has taken note, and technology companies are in fierce competition over the top AI talent. Meanwhile, there is growing public worry about where this is all headed.
Most of the technical progress on AI is reported at scientific conferences on the subject. These conferences have been running for decades and are attended by a community of devoted researchers. But in recent years, they have started to attract a broader mix of participants. At the 2016 conference of the Association for the Advancement of Artificial Intelligence, held in Phoenix in February, one speaker was more controversial than any other in recent memory: Nick Bostrom, a philosopher who directs the Future of Humanity Institute at Oxford University.
Bostrom made waves with his 2014 book Superintelligence. In it, he contemplates the prospect that we may soon build AIs that exceed human capabilities, and considers how to ensure that the outcome is in our best interest. A key concern is an “intelligence explosion”: if we are intelligent enough to build a machine more intelligent than ourselves then, so the thinking goes, that machine would in turn be capable of building something even more intelligent, and so on.
The phrase “technological singularity” is sometimes used to describe that scenario. Will humanity be left in the dust, or even wiped out? Public figures including Elon Musk, Stephen Hawking, and Bill Gates have also warned of the risks of superintelligent AI. Last year, Musk donated $10m to the Boston-based Future of Life Institute to set up a grant programme with the aim of keeping AI beneficial to humans. In February, Atefeh Riazi, the Chief Information Technology Officer at the United Nations, joined the chorus emphasising the risks of AI.