Brave new world: City 17, the setting for ‘Half-Life: Alyx’ (2020), reaches ever upwards. Image: Valve

How video games wrote the future

Games aren’t just the most significant and popular artform of our time—they’re the driving force behind the AI revolution
June 11, 2025

Artificial intelligence used to mean something different. Or at least it did to me, back in the time of blocky, beige home computers and bold prognostications about the end of history—the 1990s. Video gaming was in its adolescence then but, unlike most teenagers, there was barely a trace of awkwardness or unease about it. The medium had moved out of computer labs and amusement arcades and into our homes, with consoles and PCs delivering increasingly sophisticated experiences. Yet it was still young enough for every new development—and there were a lot of those—to be thrilling. From the bounce and colour of Super Mario World (1990) to the horrible majesty of System Shock 2 (1999), it was a glorious era.

It was probably 1998’s Half-Life that excited me most—and properly introduced “AI” into my lexicon. Here was a game of soaring ambition, taking you—that is, the bespectacled scientist Gordon Freeman—from the site of an experiment gone horribly wrong to a harsh alien planet across a dozen, practically unbroken, hours. Its visuals were so intense that I had to upgrade my wheezing computer with a dedicated graphics card. Its long-fingered monsters would invade my dreams at night. But it was the special forces soldiers sent in to clear out Gordon’s research compound—with extreme prejudice—that really stood out. They didn’t move like the baddies in other games. They were agile, responsive. They took cover when needed and retreated if things got even worse. They threw grenades to flush me out and—I’m pretty sure—tried flanking manoeuvres. At their best, they even seemed…intelligent?

I wasn’t the only one who noticed. All the gaming magazines—and there were a lot of those then, too—lauded the AI of Half-Life’s soldiers. “It’s surprising how entertaining well-implemented artificial intelligence can be,” observed the critic Jason Bates at the new(ish)fangled website IGN, “and it’s probably worth it to play Half-Life just to fight its infantry.” Suddenly, AI was an acronym on my and my friends’ lips, as we judged other games by Half-Life’s standards. It had become a part of playground chat. 

This wasn’t AI as we know it today, of course—nor as real computer scientists would have known it at the time. It was far more rudimentary. Half-Life’s soldiers were what coders call “finite state machines”: they had a state (patrolling, say) and then they had conditions (Gordon getting too close) that would make them change state (retreat!). Every aspect of their behaviour was simply a preprogrammed shift from one state to another. They weren’t operating on the fly, much less learning. They merely looked smart. 
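
What that looks like under the bonnet is easier to show than to describe. Here is a minimal sketch in Python of the same idea; the states, distances and health thresholds are invented for illustration and are not taken from Valve’s code:

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    ATTACK = auto()
    RETREAT = auto()

class Soldier:
    def __init__(self):
        self.state = State.PATROL

    def update(self, distance_to_player, health):
        # Each rule is a preprogrammed shift from one state to another,
        # triggered by a simple condition. Nothing is learnt.
        if self.state == State.PATROL and distance_to_player < 15:
            self.state = State.ATTACK    # player spotted: engage
        elif self.state == State.ATTACK and health < 30:
            self.state = State.RETREAT   # things got worse: fall back
        elif self.state == State.RETREAT and health >= 30:
            self.state = State.PATROL    # recovered: resume patrol
        return self.state

grunt = Soldier()
print(grunt.update(distance_to_player=10, health=100))  # State.ATTACK
print(grunt.update(distance_to_player=10, health=20))   # State.RETREAT
```

Every apparently cunning tactic is just one of these hand-written rules firing at the right moment.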

And yet: out of that decade of Half-Life and Quake, of Mario and Lara Croft, real advancements in real AI emerged. If you stared long and hard enough at games, you might even have seen the future. You can still see the future in new games today.

The career of Demis Hassabis, one of the greatest practitioners of AI today, began in the video-game industry of the 1990s. Born in London in 1976 to a Chinese-Singaporean mother and a Greek-Cypriot father, Hassabis had a preternatural aptitude for board games, especially chess. He was beating adults from the age of four and achieved the rank of master at 13. Thanks to winnings from a chess tournament, he could buy something most kids would have bugged their parents for around Christmas time—a computer; specifically, the lovely, rubber-keyed ZX Spectrum, released in 1982. The ZX played games but also allowed users to play around with code. Hassabis did just that. He learnt to program on that machine.

And he learnt to program well. In 1992, aged 15, he entered a competition in Amiga Power magazine to win a job at Bullfrog Productions, then the hottest spot in the UK games industry, by designing a spin on the classic Space Invaders (1978). Legend has it he won that competition, but the legend is slightly wrong: the August 1992 issue of Amiga Power shows that Hassabis’s (misspelt as “Hassapis”) entry, Chess Invaders, finished third, having been scored “8 for originality, 5 for implementation and 6 for sound” by Bullfrog’s co-founder Peter Molyneux. This was still enough, however. After the University of Cambridge told Hassabis, now 16, to take a gap year before starting his degree in computer science, he started work at Bullfrog.

The result was one of the most successful and influential games of the time: Theme Park (1994). Players had to construct and manage, yes, their very own theme park. Put good rides in good places, staff them properly, ensure the food stands are adequately supplied, and the punters will come. But you’ve got to keep those punters happy, which is not, it’s fair to say, their natural state. They might start beating up your employees; neglected rides will break down; it can all go to seed extraordinarily quickly. Even today, having been surpassed by descendants such as this year’s Two Point Museum, Theme Park is still a winningly fraught and funny experience.

For which Hassabis deserves much credit. His is the only name printed alongside Molyneux’s in the game’s manual, under “Creators and Lead Programmers”—and, presumably, that programming was not straightforward. Theme Park contains dozens of variables, from HR to hotdogs, all overlapping to create small but quite complex worlds. There was some Half-Life-style AI in its coding too: Hassabis was telling little digital people what to do in case of fairground emergency.

It’s no surprise to say that Hassabis went on to win the Nobel Prize in Chemistry; he was awarded it last October, with his colleague at Google DeepMind John Jumper and the University of Washington’s David Baker, for research into the protein molecules that are the component parts of all life. Hassabis and Jumper developed an AI model called AlphaFold that could predict the structures of these proteins. The implications are enormous. As Hassabis said in an interview with the US news show 60 Minutes in April, models such as AlphaFold are helping to bring the end of disease—all disease—“within reach, maybe within a decade or so”. Against such ambition, it seems trivial to say that Hassabis also received a knighthood last year, for services to AI.

There is a direct link between games, AlphaFold and Demis Hassabis’s Nobel prize

What might surprise, however, is how directly the games led to AlphaFold and the Nobel. After a series of other games and a quick PhD in neuroscience at University College London (UCL), Hassabis founded DeepMind, a true AI research organisation, in 2010, with a UCL colleague, Shane Legg, and the entrepreneurial Mustafa Suleyman. One of Hassabis’s buddies from Cambridge, David Silver, was co-opted as a consultant. Together, they decided on their new company’s priorities—and protein structures were not top of the list. DeepMind would initially focus on games.

There are many reasons for this, mostly summed up by a line in a lecture by Hassabis at Cambridge’s Computer Lab in March: “Games are the perfect proving ground for AI systems.” Games have (more or less) clear objectives; they have (more or less) clear rules for achieving those objectives; and they also have human players, of varying standards, as a point of comparison. This makes them a good, straightforward test for any form of intelligence; certainly more straightforward than, say, setting an essay on Russian literature, which is open to a broader range of interpretations. In games, to paraphrase the philosopher Cersei Lannister, you win or you die. There is no middle ground. 

These associations between games and digital brains have existed for decades. Most people over the age of 40 will remember Deep Blue, the IBM supercomputer that took on chess world champion Garry Kasparov in two six-game matches in 1996 and 1997. It was defeated in the first match, although it did win one game—an impressive enough feat. But then, after an upgrade, it beat Kasparov overall in the second match, winning two games, drawing three and losing one. It was a shuddering moment. Only a few days before, the cover of Newsweek had called the rematch “The Brain’s Last Stand”—and now the brain had lost.

But DeepMind wanted to go far beyond Deep Blue. After all, IBM’s supercomputer was just that: a very super computer, an amazing feat of engineering, rather than an intelligent agent. It hadn’t learnt chess. It wasn’t innovating in any meaningful way (in fact, one of the IBM scientists behind the machine, Murray Campbell, said its boldest, smartest move may have been the result of a software bug). Deep Blue had been loaded with millions of chess positions and could search through them for winning moves in an instant—all through brute computing power. Although he may not have been left entirely unjaded by the experience, Kasparov did have a point when he wrote, in his 2017 book Deep Thinking, that “Deep Blue was intelligent the way your programmable alarm clock is intelligent.” Perhaps the brain was still standing, after all.

Man vs machine: Go champion Lee Sedol faces a new kind of opponent. Image: Associated Press / Alamy Stock Photo

DeepMind’s plan was to build systems that could learn games—plural—by themselves. Its most notable early breakthrough, described in a 2013 paper, was a neural network that could process visual data and had been trained, thanks to what is known as “reinforcement learning”, to recognise success and failure. When this network was presented with seven classic games from the Atari 2600 console—including Pong, Breakout and, in a beautiful moment of circularity for Hassabis, Space Invaders—it taught itself to play them, much as a human would, by watching, trying, failing, trying again and, finally, mastering. As the research team put it in the paper, “We find that it outperforms all previous [computerised] approaches on six of the games and surpasses a human expert on three of them.” Within months, Google had bought DeepMind for around £400m, in large part because of that impressive gameplay. 
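
The real system paired that trial-and-error signal with a deep neural network reading raw pixels, which is far more than a few lines of code. But the reinforcement-learning rule it rests on is compact enough to sketch. What follows is not DeepMind’s method, just a toy version of textbook “Q-learning” applied to a made-up two-state game; the environment, rewards and parameters are all invented for illustration:

```python
import random

# A made-up miniature "game": states 0 and 1, actions 0 and 1.
# Playing action 1 in state 1 "wins" (+1); everything else scores nothing.
def step(state, action):
    if state == 1 and action == 1:
        return 0, 1.0               # back to the start, with a reward
    return (state + 1) % 2, 0.0     # shuffle along, no reward

# Q-table: the agent's running estimate of how good each action is in each state.
Q = [[0.0, 0.0], [0.0, 0.0]]
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration

state = 0
for _ in range(5000):
    # Sometimes explore at random; otherwise exploit the best-known action.
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # The learning step: nudge the estimate towards
    # "reward now, plus the best we think we can do next".
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)  # the agent has worked out that action 1 in state 1 pays off
```

Run for a few thousand steps, the table settles on the obvious strategy, learnt from nothing but rewards; swap the table for a deep network and the toy game for Atari pixels and you are, in spirit, at that 2013 paper.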

Another breakthrough came in 2016 with a system for playing the ancient Chinese board game Go. This might sound like a regression to Deep Blue—a machine playing one thing very well—though it was anything but. DeepMind’s system, AlphaGo, effectively taught itself a game many orders of magnitude more complicated than chess. Consider that chess has 20 possible opening moves, while Go has 361 and spirals out from there. For many, Go is an almost spiritual pursuit, not least because, even for its best players, its dimensions are unknowable.

And yet, between 9th and 15th March 2016, AlphaGo defeated one of the world’s greatest players, Lee Sedol, 4–1 in a five-game series—and was awarded the highest Go ranking, an “honorary 9 dan”, as a result. Elon Musk, a pre-Google investor in DeepMind, tweeted (naturally): “Many experts in the field thought AI was 10 years away from achieving this.” As for Sedol, he admitted to the New York Times last year that “I could no longer enjoy the game. So I retired.”

DeepMind has since positioned its systems in front of StarCraft II (2010) and Quake III (1999), both hideously competitive games that demand a thousand micro-decisions a second from their best players (and the AIs turned out to be better than most of them, of course). OpenAI, the research organisation behind ChatGPT, released a program that defeated world champions at Dota 2 (2013). The point of training on games, however, is not to win. It is to open AIs to new experiences and ways of learning. “Although we started with games… as a convenient testing ground,” Hassabis once said, “the ultimate aim for DeepMind… was to build general-purpose algorithms.” 

In other words: Quake III and shooting opponents in the head one day; AlphaFold and ending disease the next.

At which point, I’m afraid I need to reintroduce another character to the narrative—myself. As well as my work as books and culture editor at Prospect, I am also the Daily Mail’s games critic. This means that I play a lot of video games. It also means that I am sometimes subjected to mildly unkind comments underneath my reviews online. “Get a life!” crops up frequently. “Grow up!”, too. But it’s the polite, in-person amusement of friends and colleagues that irritates me more. It’s the half-smile that plays across their faces when asking questions about my hobby-turned-job. “Ooh, do you have a funny gamer name?” (Yes—it’s a combination of my dream job and my favourite novelist, if you really want to know.) “Do you sit in one of those weird chairs?” (Yes—although it’s plain black, rather than various shades of fluorescent marker.)

That said, part of me gets it. Historically, gaming has been marketed at children and taken up enthusiastically by man-boys. It is, on occasion, not the most edifying of pastimes. But I am also tired of the defensive crouch adopted by many gamers—myself included—when pressed about the subject; the mumbled lines about gaming being bigger, I think you’ll find, than the film and music industries combined. Enough. It’s time to spill some blood. 

Because anyone who has taken an interest in games—and that’s not necessarily the same as playing them—has had a front-row seat on the whole culture, not just Demis Hassabis and developments in AI. 

You would have noticed the major political conflagration of 2014 and 2015 known as Gamergate. This started small, after the former boyfriend of a game designer accused her of engaging in a sexual relationship with a journalist in exchange for favourable coverage of her own low-budget, text-based release Depression Quest (2013). These were preposterous claims, the ravings of a spurned man—but they lit a fire. Soon, men were rising from the danker corners of the internet to attack women in gaming and women everywhere, often in the most violent and threatening terms. And who was watching the sick professionalism of this campaign as it unfolded? Steve Bannon, who had already studied the dynamics in the community around the hugely popular online game World of Warcraft (2004). “You can activate that army,” he is quoted as saying in Joshua Green’s account of the 2016 US presidential election, Devil’s Bargain. “They come in through Gamergate or whatever and then get turned on to politics and Trump.”

The soldiers in ‘Half-Life’ (1998) were clever but not invulnerable. Image: Valve

You would have noticed Elon Musk’s recent attempts to restoke this culture war with the help of X, his $44bn social media and AI operation. The would-be Martian programmed a game inspired by Space Invaders (again!) when he was 12 and has remained an incorrigible gamer ever since. On The Joe Rogan Experience last year, Musk suggested he was in the top 20 players globally of the hyper-competitive fantasy slash-a-thon Diablo IV (2023), although many cast doubt on this claim. And now, like the Gamergaters, he is incensed by the supposed intrusion of “woke” values into gaming. “Make video games great again!” he tweeted in February, one of his many exclamations on the theme. His response is a new games studio within X.

You would have noticed characters such as Palmer Luckey, a fan of militaristic first-person shooters such as Call of Duty, who started making virtual reality headsets in his parents’ garage while still in his teens—so as to be more immersed in the action. An early version of his work, the Oculus Rift headset, raised $2.4m on the crowdfunding site Kickstarter in 2012; two years later, he and it were acquired by Facebook for about $2bn; three years after that he left Facebook after secretly donating to a pro-Trump campaign group. His main project since has been the defence-technology startup Anduril Industries, which develops equipment including rockets and drones—and has secured big contracts with the Pentagon, the US Army and Britain’s Royal Marines. Anduril also recently announced a partnership with OpenAI. ChatGPT with an AK-47? Just great.

Get a life? Ha! Get a clue

And that’s before you even consider the games themselves and their effects on players. Gaming is a pathway to many things, good and bad, from careers in coding to misadventures in cybercrime. Drama inspired by games has infiltrated cinemas and television schedules. Games occupy millions of people every day. There is even a growing body of neuroscience that suggests that people’s engagement with the real world is shaped by their engagement with these virtual worlds. And yet, somehow, latent forms of snobbery have put all this beyond most respectable people’s interest and ken. Get a life? Ha! Get a clue. 

Video games have had more impact on the world around them—from politics to fashion, from technology to ethics—than any other modern artform. A history of the 21st century that doesn’t mention gaming would be much like a history of the 1960s that doesn’t mention rock ’n’ roll—possible, but beside the point.

Jensen Huang should certainly feature, alongside Hassabis, in the history books. The perpetually leather-jacketed 62-year-old, born in Taiwan but raised in the US, co-founded and runs what is probably the biggest, most important company in the world that is not yet a household name—Nvidia. Given the nature of its business—designing computer chips mostly manufactured in Taiwan—Nvidia is especially susceptible to Trump’s trade wars, but (at time of writing) its market capitalisation is over $3 trillion, a total that has only ever been surpassed by Apple, Microsoft and Nvidia itself. Huang, its CEO, is worth around $120bn.

The reason for this is AI. Nvidia provides the complicated circuitry that underpins most of the world’s AI systems; its market share in this area is said to be somewhere between 70 and 95 per cent. Its chips—arrayed on cards stacked one on top of the other—fill vast, energy-hungry data centres that perform quintillions of computing operations every second. And it’s not just AI research organisations such as DeepMind tapping into that power; big businesses and governments do too, or they would like to. In May, Nvidia announced a deal with Saudi Arabia to provide 18,000 of its most sophisticated chipsets for the country’s AI infrastructure. Any nation that wants to participate in the 21st century’s greatest, most terrifying revolution needs to have Huang on speed dial.

But here’s the twist: Nvidia’s chip arrangements are known as GPUs, which stands for graphics processing units. Remember the graphics card I had to buy (or have bought for me by understanding parents) so Half-Life would run on my family PC in 1998? That was a GPU, designed to supplement a computer’s brain or central processing unit (CPU) so that it could better handle more intensive tasks such as 3D gaming. Mine was not an Nvidia model, but it was similar to the GPUs that were Nvidia’s stock-in-trade at the time. Once again, the future was lurking in the gaming rigs of the 1990s.

Huang, an electrical engineer by education, established Nvidia with Chris Malachowsky and Curtis Priem in 1993 to make GPUs for gaming. But, as Stephen Witt details in his brilliant new biography of the Nvidia boss, The Thinking Machine, it was a decision that Huang forced through in 1998 that really set the company up for its AI future—and that decision was itself inspired by one of the great geniuses of games programming, John Carmack.

Carmack was working on Quake III—one of the games that DeepMind would later train its AIs on—and wanted to be able to squeeze faster speeds and better graphics from the hardware available to him. Huang, knowing the popularity of Quake and sensing a business opportunity, decided to expand the hardware available to Carmack by making him his own special graphics card. That card, which would eventually be released in consumer versions, was special not just because of its exclusivity to Carmack but also because of its internal architecture. Rather than adding more and more chips to its surface—the traditional way of boosting power—Huang reckoned that the paradigms of “parallel computing” would best suit Carmack’s needs. This effectively meant splitting up Quake III’s visual demands into smaller packets of data and having the card process them all at once. It required some extremely clever circuitry. 

And it was a risk. Parallel computing was then mostly regarded as the Betamax to sequential computing’s VHS—an obsolete fancy. Old-fashioned sequential computing, the stuff of companies such as IBM and Intel, may not have been able to handle lots of instructions at the same time, but that didn’t matter if it simply got quicker and quicker at doing one thing after another. And, for the longest time, that is what had happened. CPUs and early GPUs kept up with everything thrown at them. Even at the time of Quake III, and in spite of Carmack’s tremendous enthusiasm for his new graphics card, Nvidia’s experiments in parallel computing were considered just that—experiments.
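
To get a feel for the two styles, here is a deliberately crude sketch in Python. It uses NumPy on an ordinary CPU as a stand-in for a GPU, and an invented “frame” of pixel values, so it illustrates the programming idea rather than measuring real GPU speed: the first version brightens pixels one after another, the second expresses the same work as a single batch that parallel hardware can carve into thousands of simultaneous pieces.

```python
import time
import numpy as np

# An invented "frame": a million pixel brightness values between 0 and 1.
frame = np.random.rand(1_000_000)

# Sequential: one pixel after another, one instruction at a time.
start = time.perf_counter()
brightened_seq = np.empty_like(frame)
for i in range(frame.size):
    brightened_seq[i] = min(frame[i] * 1.2, 1.0)
sequential = time.perf_counter() - start

# Data-parallel: the same work written as one operation over the whole
# array, the kind of batch a GPU would split across thousands of cores.
start = time.perf_counter()
brightened_par = np.minimum(frame * 1.2, 1.0)
parallel = time.perf_counter() - start

print(f"sequential: {sequential:.3f}s, batched: {parallel:.4f}s")
```

The batched version wins here simply because the work is handed over in one go; a genuine GPU takes that same habit of thought much further, splitting the batch across thousands of tiny processors working at once.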

But what if, wondered Huang, sequential computing stopped being quick enough? By this point, he was so besotted by parallel computing that he saw potential that very few others did. He committed. In 2007, he introduced a software platform—the snappily named Compute Unified Device Architecture (Cuda)—that enabled its users to play around with the parallel computing powers of their GPUs and thereby dedicate them to operations beyond gaming. 

Including, as it happened, the development of AI. In any competition for the most important scientific research project of the 21st century, consideration should be given to one set in 2012 by the British-Canadian computer scientist Geoffrey Hinton (another Nobel winner, for his innovations in AI since the 1970s) for two University of Toronto students, Alex Krizhevsky and Ilya Sutskever. Hinton had fiddled around with Cuda and Nvidia cards and was struck by their suitability for the neural networks that help AIs to learn, so he asked Krizhevsky and Sutskever to use them to achieve one of the early holy grails of AI research—image recognition. The system that they produced, officially called “SuperVision” but commonly referred to as “AlexNet” (after Krizhevsky), was powered by just two $500 Nvidia GTX 580 graphics cards—the sort being sold to PC gamers at the time—and far surpassed all CPU-powered attempts at image recognition to that point. Almost overnight, this became the way to do machine learning. Nvidia became the hardware. And Huang became the man.

The baddies in ‘Half-Life: Alyx’ (2020) are smarter and better-looking. Image: Valve

Which more or less brings us to the present day. The GPUs have become bigger, faster and more complicated, but the principles remain much the same. Parallel computing, the Cuda interface, it’s all still there. The thing that has changed the most is Nvidia’s business model—in the fourth quarter of last year it was 90.4 per cent AI and just 6.5 per cent gaming. In an echo of Hassabis’s approach to training AIs, Huang told Fortune in 2017 that “video games were simultaneously one of the most computationally challenging problems and would have incredibly high sales volume. Those two conditions don’t happen very often. Video games was our killer app—a flywheel to reach large markets funding huge R&D to solve massive computational problems”.

But AI has not moved on—and will not move on—from games entirely. Now that they have become generative, now that they can make things, AI systems will be increasingly involved in games design, and games will remain involved in the act of AI design. Or perhaps those two things are the same? As one former Nvidia employee tells me, “It’s getting hard to see where the games end and the AI begins, and vice versa.” 

A case in point: one of Huang’s more recent preoccupations, the “Omniverse”. This is a grand merger of Nvidia hardware and software to make a virtual world in which anyone can (for a fee) design and make practically anything. Users can build a model factory, test out its production lines and then refine them for the real world. They can then walk through that factory with overseas collaborators and even, thanks to AI, communicate using different languages. One of Nvidia’s most impressive demonstrations of the Omniverse’s power was uploaded onto YouTube three years ago: a near-perfect digital recreation of an actual person that was able to speak in English, German, Spanish, French and Mandarin—with the speaker’s lips synced to the sounds of each language in real time.

‘The bigger goal is building a world-model’

There was something of the Omniverse, too, in a demonstration that Hassabis gave when speaking with 60 Minutes in April. It was of an AI model called Genie 2. One of Hassabis’s DeepMind colleagues showed Genie 2 a photograph of a Californian mountainscape—and Genie 2 then turned that photograph into an “interactive world” that anyone could walk around at the push of a key. “Of course, there’s lots of implications for entertainment and generating games and videos,” chipped in Hassabis, “but actually the bigger goal is building a world-model… You can imagine a future version creating an almost infinite variety of different simulated environments which the AIs can learn from and interact in and then translate that to the real world.” 

Which is to say, they’re building new worlds for both us and AIs to occupy. The titans of this technological revolution are all games designers now, and the rest of us players. There are dangers to this, of course; there always are. In her book Playing with Reality—a Prospect book of the year in 2024, not least because it is to AI and gaming what Shoshana Zuboff’s The Age of Surveillance Capitalism was to big data and social media—the neuroscientist Kelly Clancy emphasises that games, while beneficial to the human condition, are only ever simulated versions of reality. A model is just that. Which makes it inherently risky to break out from the parameters of a game into the world itself. 

But there are also joys. You can experience them yourself if you have a half-decent home computer and the inclination to download InZOI, which was released into “early access”—a state of ongoing, live development—in March. InZOI is a life simulation game in the spirit of the classic Sims titles—meaning that you make a digital human and then drop them into a world of other digital humans—except that it pushes much closer towards photorealism. It is also one of the first releases to showcase Nvidia’s “ACE” technology, which brings various AI upgrades to gameplay; specifically, in this case, the option to populate the virtual environment with “Smart Zois”, digital humans who have deeper personalities than their less smart compatriots. These Smart Zois lead more organic lives within the game, less beholden to preprogrammed code than they are to their inner thoughts, and their daily routines are far less routine as a result. Who knows what they’ll do?

The tech in InZOI is a slimmed-down, consumer version of the marvels inside Nvidia’s and DeepMind’s headquarters. It is a grain of sand in the edifice that Jensen Huang, Demis Hassabis and others are trying to construct. But it still helps you to see the whole. Nowadays, the AI in games is real AI. And real AI is the AI in games. Perhaps, in the near future, I’ll be able to talk to Half-Life’s soldiers to ask them how they feel, and let them know how I feel too. I’d tell them that I’m sorry for killing them in the past. That I’m scared but also quite excited about what’s to come. Welcome to the playground.