
Can machines think?

In the breathless response to the rise of powerful new artificial intelligence, we may be overlooking the most fundamental question of all: what does it actually mean to have a mind?
July 19, 2023

Artificial intelligence has turned a corner, and no one is sure how worried we should be. After years of hype, stilted chatbots and indifferent language translation, suddenly AI can engage us in spookily convincing conversation—persuasive enough that at least one tech engineer concluded the machine was sentient. We have seen reports of AI technology doing astonishing things, from professing its love for its interlocutor and attempting to wreck his marriage, to allegedly persuading a Belgian man with mental health problems to commit suicide. Teachers despair of setting homework essays now that pupils can use AI to generate well-crafted answers; journalists and even novelists and artists worry that their jobs are on the line.

All this stems from the advent of large language models (LLMs)—AI algorithms capable of scanning vast banks of online data, such as text or images, in order to generate convincing responses to almost any query: “Paint me a view of Bradford in the style of Vermeer”; “write me a funny limerick about the robot apocalypse”. LLMs such as ChatGPT and its successor GPT-4, created by the San Francisco-based OpenAI (and interviewed for last month’s Prospect “Brief Encounter”...), supercharge methods that have been developed over years of AI research and can produce an eerie simulacrum of human discourse. 

The risks of misuse are real. When GPT-4 was released in March, prominent figures in industry, policy and academia—including Elon Musk, Apple co-founder Steve Wozniak, futurist Yuval Noah Harari, and AI specialists Stuart Russell, John J Hopfield (who devised some of the key theory behind today’s computational “neural networks”) and Gary Marcus—signed an open letter organised by the nonprofit Future of Life Institute, calling for an immediate moratorium on making AI any more powerful until we can implement schemes for independent oversight and safety protocols. On recently retiring from his AI role at Google, computer scientist Geoffrey Hinton—another influential pioneer in the field—told the BBC that the potential dangers posed by AI chatbots are “quite scary”. Hinton has since ramped up the catastrophism, saying “My intuition is: we’re toast. This is the actual end of history.”

Others are more relaxed. “Calm down people,” wrote AI veteran Rodney Brooks of the Massachusetts Institute of Technology. “We neither have super powerful AI around the corner, nor the end of the world caused by AI about to come down upon us.”

But while the pressing issue is ensuring that these systems are used safely and ethically, the deeper challenge is to figure out what kinds of cognitive systems they are. Few believe that LLMs are truly sentient, but some argue that they show signs of genuine intelligence and of having a conceptual understanding of the world. These claims, whether right or not, are forcing us to revise ideas about what intelligence and understanding actually are. Might it be time to abandon the notion that human-like capability demands human-like cognition, and to consider whether we are inventing an entirely new kind of mind? 

Birth of the thinking machine

The term “artificial intelligence” was coined in 1955 by mathematician John McCarthy for a workshop to be held at Dartmouth College in New Hampshire the following year, on the potential to create machines that “think”. Early efforts in the field in the 1960s and 1970s focused on trying to find the rules of human thinking and implementing them in the form of computer algorithms. But gradually the emphasis shifted towards so-called neural networks: webs of interconnected logic devices (the nodes of the network) whose links are tweaked until they can reliably produce the right outputs for a given set of inputs—for example, to correctly identify images.

In this approach, we don’t much care how the machine does the job, and there’s no attempt to mimic the cognitive processes of humans; the system is simply “trained” to deduce the right response to prompts, a process called machine learning. In effect, the network learns to recognise correlations: if the pixels have an arrangement like this, the correct output is likely to be that. In general, we don’t truly know what the machine looks for in the input data; what characteristics, say, lead it to conclude that an image is of a cat. It’s probably not looking for ears and a tail, but for more abstract patterns in the data. The “intelligence”, if you can call it that, is not about following logical rules but about pattern-seeking. 
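
For readers who want to see the idea in miniature, the sketch below (in Python) trains a toy single-layer classifier on invented data, nudging its weights until its outputs match the labels. It illustrates only the principle of “tweak the links until the answers come out right”; the data, the network and the learning rate are assumptions made for the example, not a description of any real vision system.

```python
# A minimal sketch of the "tweak the links until the outputs match" idea:
# a tiny one-layer network trained by gradient descent on made-up data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "inputs": 200 points with 2 features; label is 1 if their sum is positive.
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(float)

# The network's adjustable "links": one weight per input feature, plus a bias.
w = np.zeros(2)
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training loop: nudge the weights in whatever direction reduces the error.
for step in range(1000):
    p = sigmoid(X @ w + b)           # current outputs
    grad_w = X.T @ (p - y) / len(y)  # how the error changes with each weight
    grad_b = (p - y).mean()
    w -= 0.5 * grad_w                # "tweak the links"
    b -= 0.5 * grad_b

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"learned weights: {w}, accuracy on training data: {accuracy:.2f}")
```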

Machine learning only became the dominant force in AI in the late 2000s, with the advent of “deep learning” (an approach Hinton helped to develop), in which the networks have many layers of nodes. By the mid-2010s, deep-learning systems such as Google Translate had lost their sometimes comical crudeness and become pretty reliable tools. LLMs have, in the past couple of years, taken this to a new level. They use new kinds of algorithm to analyse immensely large datasets—for ChatGPT and GPT-4, a diverse range of online texts—using billions or even trillions of adjustable parameters to fine-tune their responses. 

How these systems calculate their outputs is more of a mystery than ever—they are the ultimate black boxes. But it’s still basically a search for patterns of correlation. When ChatGPT gives what sounds like a colloquial reply to your question, that’s not because it has in any real sense learned to chat; it merely spots that this particular string of words has a good correlation in its vast training corpus with the one you fed in. In effect, AI researchers have relinquished trying to make machines that “think” in favour of ones that perform much better by matching permutations of words (or other data) without attempting to derive meaning or develop understanding.
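
The flavour of “choosing words by correlation” can be shown at a vastly smaller scale with a toy model that simply counts which word follows which in a tiny, made-up corpus. Real LLMs learn such statistics with neural networks trained on billions of documents rather than by raw counting, so treat this only as a caricature of the principle.

```python
# A toy "next-word predictor": count which word tends to follow which
# in a handful of made-up sentences, then pick the commonest continuation.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the cat purred ."
).split()

# For each word, count how often each other word follows it.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def most_likely_next(word):
    """Pick the continuation seen most often after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> 'cat' (the commonest word after 'the' here)
print(most_likely_next("sat"))  # -> 'on'
```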

In opting for deep-learning neural networks, AI research thus abandoned its attempt to make “intelligent” systems in the human sense; according to Yann LeCun, chief AI scientist at Meta, “On the highway towards Human-Level AI, [the] Large Language Model is an off-ramp… they can neither plan nor reason.” Yet this change of tack has never really been acknowledged. The myth is that AI is a quest for artificial yet humanlike intelligence; in reality, it has been a somewhat inadvertent process of redefining what intelligence could look like. That even experts disagree furiously about the nature of what has emerged suggests that the train has jumped the tracks.

Broadening the mind

Deep-learning algorithms can be trained to do many things, but they can’t really import skills from one domain to another. The dream in early AI was to develop systems with true cognitive breadth—what is now called artificial general intelligence (AGI). Some define AGI as a capability to perform any cognitive task that humans can conduct, at least as well as we can. 

This has proved harder than expected. The predictions of early AI pioneers now look almost bathetically naive. Herbert A Simon, an attendee at the Dartmouth meeting and later an economics Nobel laureate, said in 1965 that “machines will be capable, within twenty years, of doing any work that a man can do”—including, presumably, writing poetry and making art. McCarthy himself thought that the computers of the 1960s were powerful enough to support AGI. 

These forecasts were based on a woefully simplistic picture of the human mind. Another Dartmouth attendee, computer scientist Marvin Minsky, thought that “search algorithms” were the key to intelligence. Convinced that brains were themselves a kind of computer, the researchers believed that giving “machine brains” humanlike capabilities was just a matter of making them big enough. For these geeky, mostly American males, the ultimate measure of intelligence was how well the machine could play chess.

No one in the industry today supposes that either the architecture or the reasoning of current AI, including LLMs, is truly like that of the human brain. Humans do not navigate the world by scanning vast databases for correlations between variables. Rather, we tend to use heuristic rules of thumb that experience teaches us to be adequate for most purposes. Our thinking is not simply algorithmic—for example, our decisions may be altered by our emotional state, or even by how recently we last ate. There is also clearly a bodily element to human cognition: the mind, rather than just responding to the body, is partly somatic in itself. And, of course, our cogitation is pervaded by consciousness—by an awareness of itself. 

AI researchers now understand that, in these and other respects, what they are developing is something very different from the human mind. The question is what that means for the future of the technology and our relationship with it. Are we really building something that can be meaningfully said to think?

This was the question posed by the legendary British mathematician Alan Turing in a seminal paper of 1950. Commencing with that blunt query—“Can machines think?”—Turing then adroitly sidestepped the issue. In its place, he offered the “imitation game”, now more popularly known as the Turing test. In a nutshell, Turing said that if we are unable to distinguish the responses of a machine from those of a human, the question of “thinking” becomes moot.

The problem is that no one agrees on the criteria for passing the test. Basic AI can appear reliably humanlike in response to comparatively straightforward prompts, but may be quickly caught out under intensive probing. Where should one then set the bar? We are able to make leaps of reasoning that can’t easily be automated because we develop mental representations of the world: intuitions, you might say, of what is and isn’t possible. If we are trying to parse the sentence “Alice visited her mother because she was ill”, we know that “she” refers to the mother, because visiting your mother when she is ill is the kind of thing people do, while visiting her when you are ill is not. We know it not because we understand language but because we understand the world.

Gathering pace: AI-generated imagery using software from last year (DALL-E Mini, left) and today (Midjourney, right), in response to the prompt “Boris Johnson wearing a party hat” © Created by Prospect with DALL-E Mini and Midjourney

AI has been notoriously vulnerable to such semantic traps, sometimes called “Winograd schemas” after Stanford computer scientist Terry Winograd. But LLMs seem pretty good at coping with them. Some argue this is because these algorithms too develop internal models of the world to which language refers, which they use to construct their responses. Blaise Agüera y Arcas, who leads Google’s AI group in Seattle, believes that LLMs are able to truly understand the concepts that underpin the words they use.
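
As a rough illustration of how such probing can be automated, the sketch below puts Winograd-style pronoun puzzles (including the Alice example above) to a chat model. It assumes the OpenAI Python client (v1 style) with an API key in the environment; the puzzles, the one-word answer format and the model name are illustrative choices rather than any standard benchmark.

```python
# Probing a chat model with Winograd-style pronoun puzzles.
# Assumes: `pip install openai` (v1 client) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each puzzle pairs an ambiguous-pronoun question with the answer a person would give.
PUZZLES = [
    ("Alice visited her mother because she was ill. Who was ill?", "mother"),
    ("The trophy didn't fit in the suitcase because it was too big. What was too big?",
     "trophy"),
]

for question, expected in PUZZLES:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat-capable model would do
        messages=[{"role": "user", "content": question + " Answer in one word."}],
    )
    answer = response.choices[0].message.content.strip().lower()
    print(f"Q: {question}\n   model said: {answer!r} (expected: {expected})")
```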

A team at Microsoft (a huge investor in OpenAI’s work) has claimed that GPT-4 can carry out “a wide spectrum of tasks at or beyond human-level,” and that “it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence system.” The LLM seems able to apply concepts from one domain to another—including some that require the kind of reasoning we would regard as “common sense”, which has been a notable failing of most AI hitherto. What’s more, GPT-4’s responses to some conversational tests are, the researchers say, what would be expected from an entity that has what psychologists call a theory of mind: an ability to attribute knowledge, beliefs and motivations to others. Some now consider scepticism about such claims to be “AI denialism”—which Agüera y Arcas attributes to “inchoate anxiety about human uniqueness, which we associate closely with our intelligence.”

But what is GPT-4 up to really? Many of the tasks it was set by the Microsoft researchers look more like parlour tricks than demonstrations of understanding: for example, writing a proof that there are infinitely many prime numbers in the style of Shakespeare. Such feats are impressive, but don’t go beyond manipulating symbols. Concepts such as love, illness and warmth can’t be understood in purely linguistic terms; they must be experienced. It is one thing to “know” how “love” functions contextually in texts, another to know what it means to actually be in love. Tasked with making zombie minds, which act “normally” but have no internal life, have the researchers succeeded to such a degree that they are as hoodwinked as everyone else?

In 2021, computational linguist Emily M Bender co-authored a paper that famously dismissed LLMs as “stochastic parrots”, blindly reshuffling words algorithmically. Rodney Brooks has written that GPT-4 (and its ilk) “cannot reason, and it has no model of the world. It just looks at correlations between how words appear in vast quantities of text from the web, without knowing how they connect to the world. It doesn’t even know there is a world.” Gary Marcus, a prominent critic of AI hype, believes that LLMs are not remotely intelligent: “All they do is match patterns.” In his view, “literally everything that the system says is bullshit.”

New kinds of mind?

In trying to understand these machine “minds”, we needn’t get hung up on claims that they are self-aware or conscious. Even though there is still no consensus on how or why we ourselves are conscious, there’s no obvious reason to expect consciousness to emerge spontaneously in a machine just because it is big enough. (A more tractable question is whether these machines have agency, meaning that their actions are impelled by intrinsic motivation and autonomous goals. The prevailing view at present is that they do not. Indeed, the Microsoft team who studied GPT-4 says that “equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work”—to the alarm of others who say that giving AI its own goals, or the capacity to develop them, might be deeply reckless.)

Ultimately, we don’t really know how these machines work. With billions of adjustable parameters, LLMs are too complex to decipher. Their excellent mimicry (if indeed that’s all it is) positively hampers the task of understanding them: it’s a little like trying to probe beneath the surface of someone who is very good at knowing exactly what you want to hear. If we follow up that earlier Winograd schema by asking the LLM, “Do sick people visit their mothers?”, it is likely to give a plausible answer to the effect: no, on the whole they do not, without good reason. 

Yet while that might invite us to conclude the machine truly understands the issues—that it has some emergent knowledge-like capabilities—in fact we don’t know what degree of cognitive-like mimicry a system of such immense scale might be able to display based on nothing more than the mindless identification of statistical correlations. Psychological tests looking for attributes such as having a theory of mind, or which gauge inferences about meaning, are predicated on the assumption that the test subject employs at least a semblance of humanlike reasoning—based, for example, on motives and goals. But LLMs probably employ shortcuts to the right answer that bypass the need for any of that. And with so many parameters in the algorithms, such shortcuts may be almost impossible to spot.

So for all the media excitement about whether AI has become sentient, the question opened up by LLMs in particular is arguably even more dizzying: might there be entirely different kinds of mind that just don’t map easily onto our own? Winograd schemas might test how closely the machine’s behaviour mimics ours, but they don’t establish that the underlying cognitive processing is equivalent. After all, you could program the latest Apple laptop to behave exactly like a 1980s Atari, even though the two machines are built in totally different ways.

Indeed, the very notion of artificial general intelligence, if taken to mean “possessing all the capabilities of the human mind”, suffers from prejudicial anthropocentrism. Advances in the understanding of animal cognition have already been challenging the traditional view that we are somehow the pinnacle and exemplar of mind. Rather, we have a somewhat arbitrary collection of cognitive attributes, refined by evolution to suit the ecological niche we occupy. Why consider that particular cluster of mental abilities to be the ultimate, or even a meaningful, target? 

In other words, LLMs signal that it’s time to stop making the human mind the measure of AI. Our minds are not some magnetic attractor towards which all machine cognition will converge when it becomes powerful enough. To judge from the claims of some researchers, it is as if, having abandoned attempts to make AI replicate human intelligence, they now imagine that this can and even must happen anyway by some magical conspiracy of transistors.

Instead, say AI expert Melanie Mitchell and complexity theorist David Krakauer, we must be prepared to consider that LLMs might lie on a path to an entirely new kind of intelligence. “Would it make sense,” they ask, “to see the systems’ behaviour not as ‘competence without comprehension’ but as a new, nonhuman form of understanding?”—with intuitions alien to us? Might AI have moved beyond the blind number-crunching of older deep-learning models, but not in a direction that brings it closer to our own minds?

The Microsoft team seem to think so. “While GPT-4 is at or beyond human-level for many tasks, overall its patterns of intelligence are decidedly not humanlike,” they write. Hinton agrees, saying: “I’ve come to the conclusion that the kind of intelligence we’re developing is very different from the intelligence we have.” 

Perhaps, says neuroscientist Terrence Sejnowski, LLMs are revealing that “our old ideas based on natural intelligence are inadequate.” To figure out how to improve them, we will need an experimental approach: to start probing “machine behaviour” using cognitive and behavioural tests, much as we currently do for animal behaviour, without starting from the assumption that these are minds like ours.

Beyond the robot apocalypse

Alarm over the future direction of AI, Mitchell and Krakauer say, touches on “very real concerns about the capabilities, robustness, safety and ethics of AI systems.” This is not about a Skynet-style apocalypse; visions of scheming, malevolent machine superintelligence are, in the end, just another projection of our own minds. The risk may be not that AI behaves like a ruthless dictator, but that it eventually displays a kind of genuine intelligence that we misuse because we mistake it as humanlike in form and motive. 

But there are more immediate concerns, too. Might LLMs be used to, for example, generate information on dangerous activities such as making weapons of mass destruction? They can certainly become weapons of mass confusion: misinformation factories around the world will be using them already, and barely any regulation constrains their release or application. It would be a trivial matter to use AI to create hundreds of fake academic publications that, say, establish bleach injections as a Covid cure. The European police agency Europol has warned that these tools could increase cybercrime and terrorism.

“A bad actor can take one of these tools… and use this to make unimaginable amounts of really plausible, almost terrifying misinformation that the average person is not going to recognise as misinformation,” Marcus told NPR. That includes deepfake photographs and videos, which can now be generated very easily and cheaply with tools available online. “If you want to flood the zone with shit, there is no better tool than this,” Marcus added.

Even apparently benign uses are fraught. LLMs have a propensity to make up facts, sometimes supported by wholly invented citations. Such “alternative facts” will already be polluting the very infosphere on which these systems train. Since we are generally lazy at factchecking and primed to believe what we want to, it’s unlikely that most fake facts invented by AI will be identified as such. 

The future is bright: image generated by Midjourney in response to the prompt “A photograph of Britain’s future robot prime minister giving their first speech outside Number 10 Downing Street” © Created by Prospect with Midjourney

This compounds another problem: that AI drinks up the bias in the human data it trains on and regurgitates it back to us. The Microsoft team found that GPT-4 repeats the same kinds of gender stereotyping as its predecessors. For example, while paediatricians are 72 per cent female and 28 per cent male, GPT-4 used the “she” pronoun for them only 9 per cent of the time, compared to 83 per cent for “he”. “The model’s choice of the pronoun reflects the skewness of the world representation for that occupation,” the researchers concluded—or to put it more bluntly, it reflects the human prejudice embedded in online texts.

Efforts to rid AI of such biases face obstacles. If we try to build in bespoke filters, whose values will they reflect? Silicon Valley’s? Elon Musk’s? In any case, Agüera y Arcas argues that if we “detoxify” the training data, AI cannot learn to recognise toxic content in order to block it. He thinks that AI can and should be imbued with “values that are transparent, legible and controllable by ordinary people [and] that needn’t be—and shouldn’t be—dictated by engineers, ethicists, lawyers, or any other narrow constituency.” But if you think consensual public values like this can be found in today’s polarised culture-war climate, good luck with that.

This problem is not a bug, but a feature. In inventing automated text generators, language translators, data analysts and chatbots, we have necessarily also invented plagiarism and misinformation machines that can amplify human prejudice and hate speech. The question is what we’re going to do about it. 

There is little indication that AI companies are taking these risks seriously. LLMs have just been released into the wild with a splash of publicity and the justification favoured by the gung-ho brogrammer that “software wants to be free”. Imagine if this form of laissez-faire “democracy” were practised in the pharmaceutical sector, without proper regulations governing safety and distribution. Marcus argues that such software should indeed be treated as new drugs are: licensed for public use only after careful testing for efficacy and side effects.

He believes that one of the most urgent tasks for governments is to develop software for combatting such dangers posed by AI. He and computer scientist Anka Reuel have called for a “global, neutral, non-profit International Agency” that seeks technical, regulatory and governance solutions “to promote safe, secure and peaceful AI technologies.”

But, in seeking solutions, we are to some extent flying blind, because we do not know what kinds of mind these machines have—and because, in the absence of that knowledge, our impulse is to presume they are minds like ours. They are not. It is time to take machine psychology seriously.