Scientists at The Allen Institute for Brain Science in Seattle. Photos: ERIK DINNEL/ALLEN INSTITUTE

The AI delusion: why humans trump machines

Artificial intelligence may never match the brain
January 25, 2020

As well as playing a key role in cracking the Enigma code at Bletchley Park during the Second World War, and conceiving of the modern computer, the British mathematician Alan Turing owes his public reputation to the test he devised in 1950. Crudely speaking, it asks whether a human judge can distinguish between a human and an artificial intelligence based only on their responses to conversation or questions. This test, which he called the “imitation game,” was popularised 18 years later in Philip K Dick’s science-fiction novel Do Androids Dream of Electric Sheep? But Turing is also widely remembered as having committed suicide in 1954, quite probably driven to it by the hormone treatment he was instructed to take as an alternative to imprisonment for homosexuality (deemed to make him a security risk), and it is only comparatively recently that his genius has been afforded its full due. In 2009, Gordon Brown apologised on behalf of the British government for his treatment; in 2014, his posthumous star rose further still when Benedict Cumberbatch played him in The Imitation Game; and in 2021, he will be the face on the new £50 note.

He may be famous now but his test is still widely misunderstood. Turing’s imitation game was never meant as a practical means of distinguishing replicant from human. It posed a hypothetical scenario for considering whether a machine can “think.” If nothing we can observe in a machine’s responses lets us tell it apart from a human, what empirical grounds can we adduce for denying it that capacity? Despite the futuristic context, it merely returns us to the old philosophical saw that we can’t rule out the possibility of every other person being a zombie-like automaton devoid of consciousness but very good at simulating it. We’re back to Descartes’ solipsistic axiom cogito ergo sum: in essence, all I can be sure of is myself.

Researchers in Artificial Intelligence (AI) today don’t set much store by the Turing Test. In some circumstances it has arguably been passed already. It’s not an unfamiliar experience today to wonder whether we’re interacting online with a human or an AI system, and even supposed experts have been taken in by bots like “Eugene Goostman,” which, posing as a Ukrainian teenager, fooled a panel of Royal Society judges in 2014 into thinking it was human. Six years on, that sort of stunt is unfashionable in serious AI, regarded as being beside the point.







Rebooting AI: Building Artificial Intelligence We Can Trust by Gary Marcus and Ernest Davis (Ballantine, £24)

The Feeling of Life Itself: Why Consciousness is Widespread but Can’t Be Computed by Christof Koch (MIT Press, £20)


Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell (Pelican, £20)




As Gary Marcus and Ernest Davis explain in Rebooting AI, the reason we might want to make AI more human-like is not to simulate a person but to improve the performance of the machine. Trained as a cognitive scientist, Marcus is one of the most vocal and perceptive critics of AI hype, while Davis is a prominent computer scientist; the duo are perfectly positioned to inject some realism into this hyperbole-prone field.

Most AI systems used today—whether for language translation, playing chess, driving cars, face recognition or medical diagnosis—deploy a technique called machine learning. So-called “convolutional neural networks,” a silicon-chip version of the highly interconnected web of neurons in our brains, are trained to spot patterns in data. During training, the strengths of the interconnections between the nodes in the neural network are adjusted until the system can reliably make the right classifications. It might learn, for example, to spot cats in a digital image, or to generate passable translations from Chinese to English.
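To make that “adjusting the connection strengths” idea concrete, here is a minimal sketch, entirely my own rather than anything from the books under review, and assuming nothing beyond Python and NumPy: a tiny two-layer network nudges its weights by gradient descent until it can classify points on a toy task. Deep learning, discussed below, amounts to stacking many more such layers between input and output.

```python
# A minimal, illustrative sketch of training a tiny neural network (toy data, NumPy only).
import numpy as np

rng = np.random.default_rng(0)

# Toy task: classify 2-D points by whether they fall inside the unit circle.
X = rng.uniform(-2, 2, size=(500, 2))
y = (np.sum(X**2, axis=1) < 1.0).astype(float).reshape(-1, 1)

# The "connection strengths" between nodes: two weight matrices plus biases.
W1 = rng.normal(0, 0.5, size=(2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for step in range(2000):
    # Forward pass: inputs flow through the hidden layer to a single output node.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: nudge every connection strength to reduce the classification error.
    grad_out = (p - y) / len(X)            # gradient of the loss at the output node
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1 - h**2)  # back through the tanh hidden layer
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    W2 -= learning_rate * grad_W2; b2 -= learning_rate * grad_b2
    W1 -= learning_rate * grad_W1; b1 -= learning_rate * grad_b1

# After training, check how often the adjusted weights classify the points correctly.
h = np.tanh(X @ W1 + b1)
p = sigmoid(h @ W2 + b2)
print(f"training accuracy after 2000 updates: {np.mean((p > 0.5) == y):.2f}")
```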

Although the ideas behind neural networks and machine learning go back decades, this type of AI really took off in the 2010s with the introduction of “deep learning”: in essence, adding more layers of nodes between the input and output. That’s why DeepMind’s program AlphaGo is able to defeat expert human players in the very complex board game Go, and Google Translate is now so much better than in its comically clumsy youth (although it’s still not perfect, for reasons I’ll come back to).

In Artificial Intelligence, Melanie Mitchell delivers an authoritative stroll through the development and state of play of this field. A computer scientist who began her career by persuading cognitive-science guru Douglas Hofstadter to be her doctoral supervisor, she explains how the breathless expectations of the late 1950s were left unfulfilled until deep learning came along. She also explains why AI’s impressive feats to date are now hitting the buffers because of the gap between narrow specialisation and human-like general intelligence.


The problem is that deep learning has no way of checking its deductions against “common sense,” and so can make ridiculous errors. It is, say Marcus and Davis, “a kind of idiot savant, with miraculous perceptual abilities, but very little overall comprehension.” In image classification, not only can this shortcoming lead to absurd results but the system can also be fooled by carefully constructed “adversarial” examples. Pixels can be rejigged in ways that look to us indistinguishable from the original but which AI confidently garbles, so that a van or a puppy is declared an ostrich. By the same token, images can be constructed from what looks to the human eye like random pixels but which AI will identify as an armadillo or a peacock.
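To see how such fooling works in principle, here is a toy illustration of the adversarial trick, again my own sketch with made-up weights rather than anything from the books: on a pretend linear image classifier, every pixel is nudged by an imperceptibly small amount in the direction that most undermines the prediction, and a confident verdict flips.

```python
# A toy "adversarial perturbation" on a pretend linear image classifier (NumPy only).
import numpy as np

rng = np.random.default_rng(1)

# Pretend we already have a trained linear classifier over 28x28 "images": one weight per pixel.
w = rng.normal(size=784)
b = 0.0

def prob_positive(x):
    """Model's probability that image x belongs to the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.uniform(0, 1, size=784)          # an ordinary image, pixel values in [0, 1]
p = prob_positive(x)
predicted_positive = p > 0.5

# Fast-gradient-sign-style attack: for a linear model the gradient of the score with
# respect to the pixels is just w, so push every pixel a tiny step against the
# current prediction.
epsilon = 0.15                            # largest change allowed per pixel
direction = np.sign(w) if predicted_positive else -np.sign(w)
x_adv = np.clip(x - epsilon * direction, 0, 1)
p_adv = prob_positive(x_adv)

print(f"original prediction:  positive={bool(predicted_positive)}, p={p:.3f}")
print(f"perturbed prediction: positive={bool(p_adv > 0.5)}, p={p_adv:.3f}")
print(f"largest pixel change: {np.max(np.abs(x_adv - x)):.3f}")
```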

These blind spots become particularly troubling when AI slavishly recreates human biases—for example, when camera image-processors insist that someone with East Asian eyes must have “blinked.” Mitchell, like Marcus and Davis, warns that the dangers of AI are not about Skynet-style robot takeovers but about unthinking applications of inadequate systems. Even if an AI system performs well 99 per cent of the time, the occasional failure could be catastrophic, especially if it is being used to drive a car or make a medical diagnosis.

The trouble is, though, it’s not obvious how to do better. These authors argue—and it’s a view widely held among AI researchers—that we need to make systems that think more like humans. But what does that mean?

For one thing, we don’t learn in the same way. Small children don’t need to see 10,000 cats or chairs before they can reliably identify them. And even today linguists disagree about how children deduce, from a tiny training set, the deep grammatical and syntactical rules of language. One thing is clear: we don’t robotically map patterns in the input on to categories for the output. Rather, we develop expectations and make predictions based on an intuitive sense of how the physical world works.

We infer parts of objects blocked from view; we anticipate trajectories; we know that a glass dropped on to tiles will shatter, but on a carpet it won’t. What’s more, we can often distinguish causation from correlation. We know that rain doesn’t “cause” people to put up umbrellas, whereas their desire to stay dry does. This touches on another crucial component of cognition: we develop an intuitive psychology of others, often called Theory of Mind. We believe we know what “algorithms” are guiding their choices and actions. We know to expect the unexpected when driving towards a road crossing where a harassed mother is trying to shepherd three young children while talking on her phone.

It is because of such expectations that we can unravel linguistic ambiguities—we can work out who “she” is in the sentence, “Mary was worried about her grandmother because she had been ill,” and we know not to take sarcasm (“Oh great!”) literally. It is because of such implicit knowledge that we’d never mistake a photo of a puppy for an ostrich. Translation, says Mitchell (whether linguistic or metaphorical), “involves having a mental model of the world being discussed.”



Some AI researchers have tried to build in such capacities by feeding into their system long lists of facts about the world. But that’s a doomed enterprise (as the performance of such systems attests), because there is always more information out there. Others think that rather than off-the-shelf robots, the answer is to build AI systems that need to be taught like children, in the sense that we give them the capacity to learn from “experience,” starting from an initially naive state. Even Turing suggested that the best route to a thinking machine might be via child-like cognitive development, and some of the most fertile research in AI today involves collaboration with developmental psychologists trying to deduce the rules of thumb that we infer and use to navigate the world.

Mitchell’s mainstream overview reveals that, while Marcus and Davis’s suggestions about how to “reboot” AI are all very sound, there is nothing especially iconoclastic about them; indeed, many of their recommendations are already being explored by companies such as IBM. The big question is how far they can take us on their own. Will an AI system ever deliver a translation of a literary text, say, that is not only accurate but also sensitive to meaning, unless it has a genuine understanding of what the story is about?

But what would such understanding amount to? AI researchers like to talk about “human-level” intelligence, which Mitchell admits is “really, really far away.” Yet we don’t even know what that means unless the system is conscious of itself; certainly it won’t be attained simply by making systems excel at the imitation game. As one physicist working on machine learning recently said to me, this would be like imagining that if we can make an aeroplane fly faster and faster, eventually it will lay an egg.

All the same, a narrative has become entrenched that as the complexity and cognitive capabilities of AI increase, eventually a humanlike consciousness will emerge. According to Christof Koch, a neuroscientist at the Allen Institute for Brain Science in Seattle, this is the dominant myth of our age: that in AI, “consciousness is just a smart hack away.” In this view, he says, “we are Turing machines made flesh, robots unaware of our own programming.”


Koch’s The Feeling of Life Itself challenges that idea. A one-time collaborator with DNA and neuroscience pioneer Francis Crick, he advocates a theory of consciousness called integrated information theory (IIT) developed originally by neuroscientist Giulio Tononi. Koch believes the theory establishes that machines built along the lines of our current silicon-chip technology can never become conscious, no matter what awesome degree of processing power they possess. He argues we must decouple assumptions about intelligence from those about consciousness: we can, for example, imagine systems that are conscious without much intelligence and vice versa. Even if we build machines to mimic a real brain, “it’ll be like a golem stumbling about,” he writes: adept at the Turing Test perhaps, but a zombie.

To make such a claim, Koch needs to have a picture of what consciousness really is. He describes it as how experiences feel. Every conscious experience, he says, has five distinct and inalienable properties: for example, it is unified, unique and has definite content (“this table is blue, not red”). In trying to infer what kind of physical mechanisms are necessary to support these features, IIT boils it down to the causal power of a system to “make a difference” to itself. In Koch’s words, consciousness is “a system’s ability to be acted upon by its own state in the past and to influence its own future.”

Consciousness is, then, an intrinsic property, not the output of some computation. Whether or not a network has this property of influencing itself depends on its architecture. If information is merely “fed forward” to convert inputs to outputs, as in digital computers, then IIT insists it can only generate zombie intelligence. Consciousness demands feedback in the circuit—something not evident in all of the brain, but present in a region towards the back of the cortex that Koch suspects is the physical seat of consciousness.
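The structural distinction can be made concrete with a small sketch, my own toy illustration rather than anything in Koch’s book: in the first function below, information flows straight through from input to output; in the second, the network’s previous state feeds back into its next one, so the system is, in Koch’s phrase, “acted upon by its own state in the past.”

```python
# A toy contrast between a feed-forward pass and a circuit with feedback (NumPy only).
import numpy as np

rng = np.random.default_rng(2)
W_in = rng.normal(size=(4, 4))        # connections from the input
W_rec = rng.normal(size=(4, 4)) * 0.5 # feedback connections among the nodes themselves

def feed_forward(x):
    # Input flows straight through to the output; nothing the network did a
    # moment ago has any bearing on what it does now.
    return np.tanh(W_in @ x)

def recurrent_step(x, state):
    # The previous state is part of the cause of the next one: the network's
    # own past helps determine its future.
    return np.tanh(W_in @ x + W_rec @ state)

x = rng.normal(size=4)
print("feed-forward output:", feed_forward(x))

state = np.zeros(4)
for t in range(3):
    state = recurrent_step(x, state)
    print(f"recurrent state after step {t + 1}:", state)
```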

One of the striking features of IIT is that it makes consciousness a matter of degree. Any system with the required network architecture may have some of it. Koch believes “that consciousness is a fundamental, elementary property of living matter.” This view has been derided as panpsychism, but Koch doesn’t mean consciousness is spread equally everywhere. In IIT, a significant amount of it can exist only in particular kinds of things: not only in human brains but also in those of other animals. Not, however, in our current silicon-based AI.

This view of consciousness is just one among many, and there is no sign yet of any resolution of the issue among neuroscientists or philosophers of mind. This is more than an obscure point of theoretical detail, and not only because of the implications for “thinking machines.” Diagnosing consciousness in patients who are comatose, brain-damaged or in vegetative states is vital for making decisions about their care. Tononi’s work on IIT has produced a method of testing for consciousness by sending magnetic pulses resonating bell-like around the cortex. The unconscious brain, says Koch, “acts like a stunted or cracked bell.”



In Koch’s picture, then, the Turing Test is irrelevant to diagnosing inner life. What’s more, it implies that the transhumanist dream of downloading one’s mind into an (immortal) computer circuit is a fantasy. At best, such circuits would simulate the inputs and outputs of a brain while having absolutely no experience at all. It will be “nothing but clever programming… fake consciousness—pretending by imitating people at the biophysical level.” For that, he thinks, is all AI can be. These systems might beat us at chess and Go, and deceive us into thinking they are alive. But those will always be hollow victories, for the machine will never enjoy them.