What we should tell our grandchildren about AI

They will see the promise—it is incumbent on us to alert them to the threat, or humanity will perish

November 14, 2023
Robert Skidelsky
Robert Skidelsky hopes the liberatory possibility of new technology will outweigh its destructive power. Image: Nick Moore/Alamy Shutterstock, edit by Prospect

My new book, The Machine Age, is an ambitious—possibly overambitious—attempt to understand the human condition at this moment in history, through the prism of our relationship with machinery. 

The book is structured around three stories: the relationship of machines to jobs, to freedom and to survival. Of course, when I talk about the relationship between humans and machines I am using a figure of speech. It’s not the machines which promise heaven or threaten hell. It’s those who turn on the switches. The danger is that sooner rather than later they will lose control of what they have created, like Frankenstein and his Monster. 

The job displacement story is rarely out of the headlines, and with it comes the threat of human redundancy as machines force more and more people into uselessness. “Nearly half of voters fear AI will take their jobs” is a typical headline.

My second story is about how our dependence on technology places immense powers of surveillance and control in the hands of the state, its agencies and the giant tech platforms. 

The third story is the extinction story: about how the acceleration of technological power threatens the physical liquidation of our species. 

Each story has a vision of heaven and hell. Job displacement promises leisure but threatens uselessness. Computer technology promises freedom but threatens despotism. Artificial intelligence promises to prolong life but threatens to extinguish it.

How to make sense of all this? In 1930 John Maynard Keynes wrote an essay called Economic Possibilities for our Grandchildren, which has been a great source of inspiration for me. What sort of story should we tell our grandchildren? Let’s start with the most familiar story—what will happen to jobs?

Here is what Elon Musk told Prime Minister Rishi Sunak at Bletchley Park in early November: “There will come a point where no job is needed. You can have a job if you want to have a job for personal satisfaction. But the AI will be able to do everything.” Jobs would become a hobby, a pastime for an era when consumers had reached the point of bliss—no longer needing to work for a living.  

However, Musk was a signatory to a letter by the Future of Life Institute calling for a “pause” in AI development so as to make “today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” 

The demand expressed in this letter is that the deployment of machines should take into account our interests as humans. Is it really such a good thing that no one will need to work? The fear that machines will destroy not just our livelihoods but the meaning of our lives goes back to the Luddites in the early 19th century. It accelerated with the victory of IBM’s Deep Blue over the world chess champion Garry Kasparov in May 1997, and has crescendoed with the achievements of generative AI in the form of ChatGPT. 

The fear of what David Ricardo in 1817 called the “redundancy of people” has never been far from the surface. Is the machine our friend or our enemy? Of course it can be both: a friend for some, an enemy for others. It can improve our welfare, measured by GNP, and reduce our sense of wellbeing as humans, much of which is unmeasurable. And the question takes us well outside the workplace. We are talking not about factory robots, but about networks of computers to which we are wired up for all our important activities. 

The frame of my book’s second discussion is provided by Jeremy Bentham’s Panopticon of 1786, a sketch of an ideal prison system, in which the prison governor would shine a light on the surrounding prison cells from a central watchtower while himself remaining unseen. This would reduce the need for actual prison guards. Bentham’s ambition for his invention stretched beyond the prison walls to schools, hospitals, workplaces. His was a vision of society as an ideal prison. It may have inspired the two-way telescreens in George Orwell’s Nineteen Eighty-Four, through which Big Brother is continually watching you. Technology’s role is not to create spying systems, but to perfect them. So my second story can be thought of as the Orwellian creep story.

My own experience of the modern surveillance system prompted me to write in my book: “Bentham’s world is coming to pass. Today’s digital control systems operate not through watchtowers but through computers with electronic tracking devices, and voice and facial recognition systems. We enter Bentham’s prison voluntarily, oblivious to its snares. But once inside, it is increasingly difficult to escape.” 

It is often argued that there is a trade-off between privacy and security, and in a world of increasing menace, it is privacy which has to yield. But Bentham’s vision of the ideal prison took shape before military intelligence picked it up and robots made it possible. It is part—a sinister part—of the quest for the “perfect society” which goes all the way back to Plato, and really took off in the 18th century. 

So, we must surely alert our grandchildren to the potential malignity of the technology they will otherwise take for granted to satisfy their tastes, habits and desires.

My book’s third thread is about physical extinction. If you use the Google Books Ngram Viewer you will see the increasing use of phrases like “existential risk” since the early 2000s, denoting a growing perception of looming catastrophe.
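For readers who want to check this for themselves, the query can be built programmatically. A minimal sketch follows; the base URL and parameter names (`content`, `year_start`, `year_end`, `corpus`, `smoothing`) mirror the viewer’s public web interface as I understand it, not a documented API, so verify them against the live site before relying on them.

```python
from urllib.parse import urlencode

def ngram_url(phrase, year_start=1900, year_end=2019,
              corpus="en-2019", smoothing=3):
    """Build a Google Books Ngram Viewer query URL for one phrase.

    The parameter names below are assumptions drawn from the viewer's
    web interface and may change; this is an illustrative sketch only.
    """
    params = {
        "content": phrase,        # the phrase to chart
        "year_start": year_start, # start of the date range
        "year_end": year_end,     # end of the date range
        "corpus": corpus,         # e.g. the English corpus
        "smoothing": smoothing,   # moving-average window
    }
    return "https://books.google.com/ngrams/graph?" + urlencode(params)

# Chart the rise of the phrase discussed in the text
print(ngram_url("existential risk"))
```

Opening the printed URL in a browser should show the frequency curve for “existential risk” over the chosen date range.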

In the pre-modern period, the existential challenges which humans had to face were mainly caused by natural catastrophes. These were usually attributed to disobedience to God. The Bible prophesies an apocalypse as God’s punishment for mankind’s sinning: “The land shall be utterly emptied, and utterly spoiled... and they that dwell therein... are burned.” Such natural disasters still happen. But the ones we now worry about most are anthropogenic disasters, caused by our own feckless behaviour. The journalist and historian Misha Glenny has talked of the “four horsemen of the modern apocalypse”: weapons of mass destruction, global warming, pandemics and network dependency. 

“We must alert our grandchildren to the potential malignity of technology they will otherwise take for granted”

The supreme paradox at the heart of current responses is that while awareness of the extinctive possibilities of technology grows daily, almost no one is prepared to give up on its redemptive promise. For example, Matt Clifford, head of the UK government’s Advanced Research and Invention Agency, claims that within the next few years AI could be capable of killing “many humans”, but it also has immense potential for good: “you can imagine AI curing diseases, making the economy more productive, helping us to get to a carbon neutral economy.”

What drives the relentless quest for more and more advanced computers irrespective of their destructive power?

Two things, I suggest. The first is the Daedalus complex of scientists and technicians. From the 18th century onwards, scientists started thinking of themselves as social engineers. Economists have been at the forefront of the quest for social perfection through the ideal equilibrium of market prices. Friedrich Hayek hit the nail on the head when he warned against uncritically transferring the habits of thought of the natural scientist and engineer to the problems of society. These habits of thought continue to shape the mainstream view of technology. Medicine is a good example of why it is so difficult to turn off the switch: AI may kill millions, but it may save even more millions—and billions yet to come. 

The second reason to doubt that an off switch—or even a pause—will ever be activated is that AI research is now thoroughly weaponised. Technology has made war possible in the air and under the sea. Now it will enable wars in space. 

Pick up the issue “Digital Nato” published by our own Parliament’s Scientific All-Party Parliamentary Group and you will read—in the usual barbarous language of such publications—that “the growing prevalence of hybrid threats is mandating the need for NATO to ensure its warfare development agenda (WDA) is digitally enabled, thus delivering an integrated and interoperable multidomain operations defence capability.”

“Digital Innovation”, it continues, is the “golden thread” that cuts across all aspects of Nato’s Warfare Development Agenda. “Our commitment to harnessing cutting-edge technologies ensures that our Alliance... is always a step ahead.” A step ahead of whom? Well, China of course. And what does Beijing say? Well, it needs to keep a step ahead too. 

Weaponisation of AI development prevents any international agreement to control it. Remember how close the world came to nuclear war in 1962? We’re back there with hybrid forms of warfare breaking out, and no rules of the game such as were agreed after the Cuban missile crisis. 

So what do we finally tell our grandchildren? It would be wonderful to say: disaster is not inevitable; we can and must think our way out of it. I wouldn’t deny for a moment that ideas help shape events. But to be effective, ideas—both good and bad—also need the support of events. We cannot change our fate by thought alone. We will learn through experiences that may, at times, be painful. This has been the historical method of progress. If we are to reconcile our belief in progress with the evidence of continuous human wickedness, we have to believe in something like the redemptive power of evil. 

But this entails a religious approach to life and fate—not an abandonment of science, but an understanding of its limits. In the words of Albert Einstein, “science without religion is lame, religion without science is blind.” 

How do we make sense of the cycle of world history except in a religious frame? As William Cowper tells us, “God moves in a mysterious way, His wonders to perform; He plants his footsteps in the sea, And rides upon the storm.”

Joseph Schumpeter made a notable attempt to secularise this mystery in his notion of economic progress through “creative destruction”. Social scientist Albert Hirschman transformed the idea of the biblical storm into that of the “optimal crisis”—a crisis deep enough to provoke a change of awareness, but not so deep that it wipes us out. And we can translate it back into religious language: it is through bringing about extreme events that the Devil does God’s work.

So, I end my three stories with a qualified optimism. The last two sentences of my book read: “In Christian theodicy, Apocalypse means ‘revelation’, and is a prelude to the Second Coming. ‘For such things must come to pass, but the end shall not be yet’.” 

This is an edited version of a lecture given by Robert Skidelsky on 6th November 2023 to promote his new book, The Machine Age.