Invent!

There's more to AI than meets the eye
May 19, 2002

Some technologies, such as fusion power, are hyped but never manage to get off the ground. Others, such as text messaging, are not hyped at all, but take off on their own. The technology of "artificial intelligence", a term coined by a group of computer scientists in 1956, would seem to belong in the former category rather than the latter. After decades of over-optimism and numerous broken promises, AI research appears to have achieved very little. Intelligent computers, of the kind epitomised by Hal, the fictional supercomputer in "2001: A Space Odyssey", are nowhere to be seen. Yet much of the research initiated under the banner of AI has produced technologies that are surprisingly useful and widespread. The odd thing is, nobody seems to have noticed.

The founders of the AI programme had lofty goals. They tackled a variety of projects, from speech recognition to language translation to chess playing, whose common strand was an attempt to capture or mimic human abilities using computers. These specific and seemingly unrelated problems were attacked in different ways, but the hope was that solving them would point the way to a generalised theory of machine intelligence. In 1967, Marvin Minsky, a leading AI guru, pronounced that "within a generation, the problem of creating 'artificial intelligence' will be substantially solved." Minsky acted as an adviser on "2001", and his vision of what an artificial mind of the early 21st century would look like was neatly encapsulated by Hal. Not only could Hal perform specific tasks, such as playing chess and holding conversations; he also demonstrated the general nature of his intelligence by spontaneously learning to lip-read.

Here we are in 2002, and there is no sign of Hal. The artificially intelligent super-brains of science fiction remain just that. Since the late 1980s, researchers have gradually abandoned the term AI in favour of more specific sub-disciplines, such as neural networks, agent technology and case-based reasoning. The AI project, it seems, has been an abject failure. Yet in some ways it was a victim of its own success. Whenever an apparently mundane problem was solved and made into a useful product, such as a system that can recognise handwriting, it was deemed not to have been AI in the first place. This repeated moving of the conceptual goalposts has been christened "the AI effect". In the words of Rodney Brooks, an AI researcher at MIT, "every time we figure out a piece of it, it stops being magical." Instead, it becomes just boring old computing.

But while a generalised theory of machine intelligence has failed to materialise, computers have got pretty good at some tasks that could previously only be done by humans. Mobile phones, for example, commonly have a "voice dialling" feature, so that you can say someone's name rather than key in their number. Dictation systems that turn speech into text are not perfect, but are used in niches such as medical and legal reporting, and by people who cannot use a keyboard. Handheld computers can recognise handwriting with increasing accuracy. Free web services offer to translate text from one language to another, competently enough to give the gist of a document's meaning. Fire up a computer game, and your computerised opponents' moves are governed by AI systems of increasing complexity, as fans of "The Sims", "Creatures" or "Black & White" will know. Buy books or CDs online, and you will be offered other items that might appeal to you based on your purchase history, often with uncanny accuracy.
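For the technically minded, the heart of such a "customers who bought this also bought" feature can be surprisingly small. The Python sketch below simply counts how often items appear together in past orders and suggests the most frequent companions; the book titles and figures are invented for this example, and a real online shop's system is, of course, far more sophisticated.

    from collections import Counter, defaultdict

    # Toy purchase histories; the titles are invented for this sketch.
    baskets = [
        {"Dune", "Foundation", "Neuromancer"},
        {"Dune", "Foundation"},
        {"Neuromancer", "Snow Crash"},
        {"Dune", "Snow Crash", "Foundation"},
    ]

    # Count how often every pair of items has been bought together.
    bought_with = defaultdict(Counter)
    for basket in baskets:
        for item in basket:
            for other in basket - {item}:
                bought_with[item][other] += 1

    def recommend(item, n=2):
        # Suggest the n items most often bought alongside `item`.
        return [title for title, _ in bought_with[item].most_common(n)]

    print(recommend("Dune"))  # ['Foundation', ...] given the toy data above

This crude co-occurrence counting is the simplest form of what is known as collaborative filtering; the uncanny accuracy of the big online retailers comes from applying the same idea to millions of purchase histories.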

Even if you never go near a PC, a handheld computer or a mobile phone, you still encounter smart software more often than you might think. Genetic algorithms schedule logistics at airports. Computer auto-pilots are used to land aircraft unaided in difficult conditions. (Last year a Global Hawk, an American unmanned reconnaissance aircraft, flew from California to Australia and landed, all without human intervention.) Computer-vision systems read handwritten postcodes and sort mail. Smart software controls fuel-injection and cruise-control systems in cars. Neural networks spot unusual spending patterns to counter fraud at nine of the top ten credit-card companies in America. Fuzzy-logic control systems can be found in washing machines and auto-focus cameras.
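Fuzzy logic, the last of these, is simple enough to illustrate in a few lines. The Python sketch below blends three invented rules for choosing a washing time from a dirt sensor's reading; the membership functions and numbers are made up for the example, not taken from any real machine.

    def triangular(x, a, b, c):
        # Degree (0 to 1) to which x belongs to a triangular fuzzy set peaking at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def wash_minutes(dirt):
        # Three invented rules: light dirt -> 20 min, medium -> 40, heavy -> 60.
        # The sensor reading `dirt` runs from 0 (clean) to 100 (filthy).
        light = triangular(dirt, -1, 0, 50)
        medium = triangular(dirt, 0, 50, 100)
        heavy = triangular(dirt, 50, 100, 101)
        # Defuzzify: weight each rule's output by how strongly its condition holds.
        rules = [(light, 20), (medium, 40), (heavy, 60)]
        return sum(w * m for w, m in rules) / sum(w for w, _ in rules)

    print(wash_minutes(75))  # a 75%-dirty load blends "medium" and "heavy": 50.0

The point of the fuzziness is that the machine's response changes smoothly as the sensor reading changes, rather than jumping between a handful of fixed programmes.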

All of these technologies are in routine, everyday use, and all of them once fell under the umbrella of AI research. Given such an ambitious goal, nothing less than getting to the bottom of human cognition, it is little wonder that the AI programme made so little progress. But lower your expectations, and look at things from a practical rather than a philosophical point of view, and AI has not done badly after all. When it comes to building a Hal-like mind and deriving a general theory of intelligence, AI has been a flop. But redefine AI as getting a machine to do something that once required the intervention of a human mind, and it has been an outstanding, if unnoticed, success.