Technology

Artificial intelligence: where's the philosophical scrutiny?

AI research raises profound questions—but answers are lacking

May 04, 2016
The humanoid robot NAO (centre), equipped with artificial intelligence, helps a tutor teach a science class at Keio University Kindergarten in Shibuya Ward, Tokyo, on Jan. 25, 2016. NAO, together with two other robots, attracted the curiosity of the children.

The idea of Artificial Intelligence has captured our collective imagination for decades. Can behaviour that we think of as intelligent be replicated in a machine? If so, what consequences could this have for society? And what does it tell us about ourselves as human beings? Besides being a topic in science fiction and popular philosophy, AI is also a well-established area of scientific research. Many universities have AI labs, usually in the computer science department. The feats accomplished in such research have been far more modest than those depicted in the movies. But the gap between reality and fiction has been closing. For example, self-driving cars are now on the roads in some places. The world outside academia has taken note, and technology companies are in fierce competition over the top AI talent. Meanwhile, there is a growing public worry about where this is all headed.

Most of the technical progress on AI is reported at scientific conferences on the subject. These conferences have been running for decades and are attended by a community of devoted researchers. But in recent years, they have started to attract a broader mix of participants. At the 2016 conference of the Association for the Advancement of Artificial Intelligence, held in Phoenix in February, one speaker was more controversial than any other in recent memory: Nick Bostrom, a philosopher who directs the Future of Humanity Institute at Oxford University.

Bostrom made waves with his 2014 book Superintelligence. In it, he contemplates the problem that we may soon build AIs that exceed human capabilities, and considers how to ensure that the result will be in our best interest. A key concern is that of an “intelligence explosion”: if we are intelligent enough to build a machine more intelligent than ourselves then, so the thinking goes, that machine in turn would be capable of building something even more intelligent, and so on.

The phrase “technological singularity” is sometimes used to describe that scenario. Will humanity be left in the dust, or even wiped out? Public figures including Elon Musk, Stephen Hawking, and Bill Gates have also warned of the risks of superintelligent AI. Last year, Musk donated $10m to the Boston-based Future of Life Institute to set up a grant programme with the aim of keeping AI beneficial to humans. In February, Atefeh Riazi, the Chief Information Technology Officer at the United Nations, joined the chorus emphasizing the risks of AI.

So far, concerns have mostly been raised by people outside the core AI community. Some researchers cautiously agree with some of the points; others dismiss them. After Bostrom’s talk, a number of researchers complained on social media that giving him this forum lent him undeserved credibility. Others emphasized open-mindedness but, as far as I saw, fell short of endorsing his ideas. Most in the AI community, though, simply shrugged and continued with their research as usual. Why? Do AI researchers just not care about the future of humanity?

I think the answer lies in the history of AI research, which took off in the 1950s. Early work was very promising, leading to excitement, optimism and high expectations. But its limitations soon became apparent: approaches that produced impressive results on small, toy examples simply would not scale to the real world, which is messy and ambiguous. (AI researchers struggle to this day to make their programs robust enough to handle it.) The resulting disappointment led to an “AI winter”: the discipline got a bad reputation in academia and funding was reduced. AI researchers yearned for their work to be scientifically rigorous and respected, and learned to be careful.

Some researchers sought distance from the term “AI.” For example, many of those working on machine learning—in which computers learn automatically how to make predictions and decisions from data—no longer identify themselves as AI researchers. Those who have stuck with the term have often focused on more narrowly defined technical problems. These problems are important roadblocks in AI, but solving them has often led to more immediate benefits. Work on automated planning and scheduling systems, for instance, has been used on the Hubble Space Telescope.

An introductory AI course will typically spend a little time on philosophy, such as John Searle’s “Chinese Room” thought experiment. Here, someone who has no knowledge of the Chinese language sits alone in a room with an incredibly detailed step-by-step manual—in effect, a computer program—for manipulating Chinese characters. Questions are written down in Chinese and slipped under the door. The person consults the manual and follows its instructions to draw other characters and slip them back out. The manual is so good that, from the outside, it appears that there is someone, or something, inside that understands Chinese. But is there really? And if not, how can a computer, which operates in much the same way, have any real understanding?
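
To make the analogy concrete, here is a deliberately crude, purely illustrative sketch in Python. The rule table and its entries are invented for this example; Searle’s imagined manual would of course be far more sophisticated than a lookup table, but the essential point survives the simplification.

    # Illustrative sketch only: the "manual" as a table of rules mapping
    # input symbols to output symbols. The entries are made up; the point
    # is that the program shuffles symbols it does not understand.
    rule_book = {
        "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I am fine, thanks."
        "今天天气好吗？": "今天天气很好。",  # "Is the weather nice today?" -> "The weather is very nice."
    }

    def person_in_room(question):
        # Follow the manual mechanically; no comprehension is involved anywhere.
        return rule_book.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(person_in_room("你好吗？"))  # prints a fluent-looking Chinese reply

From the outside the answers can look fluent; on the inside there is only mechanical rule-following, which is exactly the worry Searle raises.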

While some AI professors enjoy posing such conundrums in their first few lectures, a typical course—my own included—will quickly move on to technical material that can be used to create programs. After all, such courses are generally computer science, not philosophy. Similarly, very little of the research presented at any major AI conference is philosophical in nature. Most of it comes in the form of technical progress—a better algorithm for solving an established problem, say. This is where AI researchers believe they can make useful progress and win respect in the eyes of their scientific peers, whether they feel the philosophical problems are important or not.

"The AI community generally eschews speculation about the deep future and is more comfortable engaging with more concrete and tangible problems"
All this explains some of the reluctance of the AI community to engage with the superintelligence debate. It has fought hard to establish itself as a respected scientific discipline, overcoming outside bias and its own early overstatements. The mindset is that anything perceived as unsubstantiated hype, or as being outside the realm of science, is to be avoided at all costs. Tellingly, in a panel after Bostrom’s talk, Oren Etzioni, director of the Allen Institute for Artificial Intelligence in Seattle, drew supportive laughter when he pointed out that Bostrom’s talk was blissfully devoid of any data—though Etzioni acknowledged that this was inherent in the problem.

At the same panel Thomas Dietterich, a computer science professor at Oregon State University and President of the Association for the Advancement of Artificial Intelligence, expressed scepticism that an intelligence explosion of the kind Bostrom describes would happen. The AI community generally eschews speculation about the deep future and is more comfortable engaging with more concrete and tangible problems, such as autonomous weapons (those that can act without human intervention), or the unemployment that will ensue from AI replacing human workers. The latter was, in fact, the topic of the panel.

Another issue is that many AI researchers, perhaps unlike the general public, believe there are still essential components to be found before something like the superintelligent AI of Bostrom’s book could emerge. Many of the problems that were once considered benchmarks—say, beating human champions at chess—have been solved using special-purpose techniques that, while impressive, could not immediately be used to solve other problems in AI. This suggests that the “hard problems” of AI lie elsewhere and are perhaps still unidentified. (It has also led AI researchers to lament that “once we solve something, it’s not considered AI anymore.”)

So while recent breakthroughs, such as Google DeepMind’s AI learning to play old Atari games surprisingly well, may have the public worried, AI researchers are not unduly troubled. That being said, these results are certainly impressive to the AI community as well, not least because this time there are common techniques—now referred to as “deep learning”—underlying not only the Atari results, but also progress in speech and image recognition. (Consider the problems that Apple solves to get Siri to understand what you said, or that Facebook tackles to recognise faces in photos automatically.) Researchers had previously attacked these problems with separate special-purpose techniques. DeepMind’s AlphaGo program, which recently won a series of Go games against Lee Sedol, one of the best human Go players, also has deep learning at its core.

The line of research that led to the deep learning breakthroughs had been largely dismissed by most AI and machine learning researchers, before the few who tenaciously stuck with it started producing impressive results. So our predictions about how AI will progress can be wide of the mark even in the short term. Accurately predicting all the way to, say, the end of the century seems impossible. If we go equally far into the past, we end up at a time before Alan Turing’s 1936 paper that laid the theoretical foundation for computer science. This, too, makes it difficult for mainstream AI researchers to connect with those raising concerns about the future. Some disaster scenarios, such as those related to asteroid strikes or global warming, allow for reasonable predictions over long timescales, so it is natural to want the same for AI. But AI researchers and computer scientists tend to reason over much shorter timescales, which are already challenging given the pace of progress.
"The substantial philosophical literature on consciousness did not come up. This is probably due to discomfort with how to approach such issues."
As one of the recipients of a Future of Life Institute grant, I took part in a panel on keeping AI beneficial at the conference in Phoenix. It was moderated by Max Tegmark, one of the founders of the Future of Life Institute and a physics professor at MIT—again, an outsider to the AI community. Besides relatively more accessible questions about autonomous weapons and technological unemployment, Tegmark also asked the panel some philosophical questions. All other things being equal, would you want your artificially intelligent virtual assistant (imagine an enormously improved Siri) to be conscious? Would you want it to be able to feel pain? The first question had no takers; on the second, some in attendance argued that pain could be useful for an AI learning to avoid bad actions.

The substantial philosophical literature on consciousness and qualia did not come up. In philosophy, the word “qualia” refers to subjective experiences, such as pain, and more specifically to what it is like to have the experience. Philosopher Thomas Nagel gave the most famous example: presumably there is something it is like to be a bat, though we, as a species that does not use echolocation, may never know exactly what that is. Is there something it is like to be an AI virtual assistant? A self-driving car? Perhaps the silence was due to unfamiliarity with such concepts, but more likely it reflected discomfort with how to approach these questions.

Even philosophers have difficulty agreeing on the meaning of these terms, and the literature ranges from the more scientifically oriented search for the “neural correlates of consciousness” (roughly: what is going on in the brain when conscious experience takes place) all the way to more esoteric studies of the subjective: how is it that my subjective experiences appear so vividly present, while yours do not? Well, surely your experiences appear somewhere else. Where? In your brain, as opposed to mine? But when we inspect a brain, we do not find any qualia, just neurons. (If all this seems hopelessly obscure to you, you are not alone; if you are intrigued, see, for example, Caspar Hare’s On Myself, and Other, Less Important Subjects or JJ Valberg’s Dream, Death, and the Self.)

The state of our understanding makes it difficult even to agree on what exactly Tegmark’s questions mean—is objectively assessing whether an AI virtual assistant has subjective experiences a contradiction in terms?—let alone give actionable advice to AI practitioners. In his book, Bostrom suggests philosophers should postpone work on “some of the eternal questions” like consciousness for a while, and instead focus on how best to make it through the transition to a world with superintelligent AI. But it is not clear whether and how we can sidestep the eternal questions, even if we accept the premise that such a transition will take place. (Of course, philosophers do not necessarily accept the premise either.)

So, generally, AI researchers prefer to avoid these questions and return to making progress on more tractable problems. Many of us are driven to make the world a better place—by reducing the number of deaths from car accidents, increasing access to education, improving sustainability and healthcare, preventing terrorist attacks and so on—and are frustrated to see every other article on AI in the news accompanied by an image from The Terminator.

Meanwhile, genuine concerns are developing outside AI circles. While the AI conference in Phoenix was underway, a meeting of the American Association for the Advancement of Science heard a call to devote 10 per cent of the AI research budget to studying its societal impact. In a climate where funding is tight, AI researchers may not look kindly on proposals to divert part of it. Yet the AI community should take part in the debate on societal impact: the discussion will happen regardless, and it will be less well informed without them.

Fortunately, members of the community are increasingly taking an interest in short-to-medium-term policy questions, including calling for a ban on autonomous weapons. Unfortunately, we have yet to figure out how to engage with the more nebulous long-term philosophical issues. One area where immediate traction seems possible is the study of how (pre-superintelligence) AI can make ethical decisions—for example, when a self-driving car must choose an action that is likely to kill one person in order to avoid crashing into many. Automated ethical decision-making is the topic of a number of the Future of Life Institute grants, including my grant with Walter Sinnott-Armstrong, a professor of practical ethics and philosophy at Duke University. But at this point it is not clear to AI researchers how to address the notion of superintelligence and the philosophical questions it raises.

At the end of Bostrom’s talk, Moshe Vardi, a computer science professor at Rice University, made an excellent point: imagine if, upon Watson and Crick’s discovery of the structure of DNA, all discussion had focused on the ways in which it could be abused—this, he suggested, is what the debate over AI is like. Progress in AI will unfold in unexpected ways and some of the present concerns will turn out to be unfounded, especially those concerning the far-off future. But this argument cuts both ways; we can be sure that there are risks that are not currently appreciated. It is not clear exactly what course of action is called for, but those who know the most about AI cannot afford to be complacent.


I thank Tom Dietterich and Walter Sinnott-Armstrong for helpful feedback on this article.