
An AI-enabled world isn’t inevitable

How much change are we willing to accept in return for an AI future?
July 14, 2025

I spent one of the first weeks of my summer doing something I haven’t done in years: writing computer code. As a professor, when I want to generate data, I usually have the luxury of telling my students what I need and helping them learn to write the code to retrieve it. But it’s summer, my students have better things to do than help me, and I really wanted to explore a new project. So I tried working with Gemini, Google’s free AI assistant, which I selected because I hoped it would be especially good at retrieving data from the Google Maps database.

What I was doing is now popularly known as “vibe coding”: describing the code you want in human language and letting a generative AI translate those instructions into executable code. The process was anything but automatic—the AI produced lots of bad and dysfunctional solutions, at one point quitting entirely and suggesting I pay $29 to call Google tech support. But I steered it away from tools that didn’t work and towards tools that did, much as I do when working with my students. Ultimately I produced the results in only a few hours of interaction with the AI, about a tenth of the time it would have taken me to code unassisted, assuming I’d had the time and patience to complete the task at all.
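To give a flavour of what those sessions converged on, here is a minimal sketch of the sort of script that emerged. The details are assumptions on my part rather than anything specified above: Python, the googlemaps client library, a placeholder API key, and a made-up errand (listing cafés near a street address).

```python
# A sketch of the kind of script a vibe-coding session might produce.
# Assumptions: Python, the `googlemaps` client library, and a made-up
# task. The API key below is a placeholder, not a real credential.
import googlemaps

API_KEY = "YOUR_API_KEY"  # hypothetical placeholder

gmaps = googlemaps.Client(key=API_KEY)

# Turn a human-readable address into coordinates.
geocode = gmaps.geocode("77 Massachusetts Ave, Cambridge, MA")
location = geocode[0]["geometry"]["location"]  # {"lat": ..., "lng": ...}

# Ask the Places API for cafés within a kilometre of that point.
nearby = gmaps.places_nearby(
    location=(location["lat"], location["lng"]),
    radius=1000,
    type="cafe",
)

for place in nearby.get("results", []):
    print(place["name"], "-", place.get("vicinity", ""))
```

Even in a toy like this, the failure modes described above show up: a wrong parameter name or a misread response structure yields code that runs but returns nothing useful, and catching that is the human’s job.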

Using an AI to complete a task I otherwise would not have taken on helped me understand my friends who insist that AI is a transformative technology. It hasn’t been for me thus far, because of the nature of my work. But for friends who write code, AIs are already changing the world, and it’s hard not to marvel at what other changes are coming.

The experience led me to this train of thought: assume for the moment that AI really is transformational—it’s going to change how we work and play, how we discover new technologies and solve global problems. What should we be willing to spend to bring about these transformations?

Sam Altman, the CEO of OpenAI, claims that Meta has been trying to poach its top employees, offering signing bonuses of $100m to employees willing to switch sides and help Meta build “superintelligence”. The $300bn valuation of OpenAI—a company having so much trouble getting people to pay to use its tools that it is considering selling advertising—suggests that investors believe these technologies are worth trillions of dollars. 

But the question of what we might pay for an AI transformation can’t be answered purely in fiscal terms. In 1994, British sustainability advocate John Elkington introduced the idea of a “triple bottom line”, insisting that corporations think not only about profits, but about people and the planet as well.

A thorough analysis of the tradeoffs of an AI transformation would consider the environmental impacts of these energy-hungry systems and the possible impacts on human wellbeing due to job loss and theft of creative works used to train AI models. For those imagining AIs that cure cancer and discover endless sources of clean energy, any cost would be bearable.

The work of actually calculating these tradeoffs is shockingly difficult. Despite countless headlines warning that AI systems use more electricity and water than previous technologies, such as search engines, determining how much energy a particular use of a particular tool requires is nigh on impossible. The most popular AI tools are closed systems: the details of how many data centres and processors they use are closely guarded trade secrets. Researchers have had to extrapolate from studies of less powerful open-source AI models, accepting that their findings map imperfectly onto tools like Google Gemini, which helped me with my coding problems. A synthesis of these studies published in the MIT Technology Review suggests that a series of routine AI-enabled tasks—researching a set of questions, producing an image and a five-second video—could consume as much energy as running a microwave oven for 3.5 hours.

Because ChatGPT is now the fifth most visited website in the world, we have to multiply those kilowatt-hours by tens of millions of users a day. We can understand these effects better in aggregate than in terms of individual decisions. There is so much demand for electricity to power new AI data centres that there’s a five-to-seven-year wait to order a new gas turbine. Three Mile Island, the site of the US’s most serious nuclear power accident in 1979, is being brought back online to power Microsoft data centres.
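The order of magnitude is easy to check with back-of-envelope arithmetic. The figures below are illustrative assumptions, not reported data: a roughly one-kilowatt microwave, the 3.5-hour equivalence from the study above, and “tens of millions” of users read as twenty million.

```python
# Back-of-envelope scale check. Every input here is an assumption:
# a ~1 kW microwave, the 3.5-hour equivalence reported above, and
# "tens of millions" of daily users pegged at twenty million.
MICROWAVE_KW = 1.0          # assumed draw of a typical microwave, kilowatts
HOURS_EQUIVALENT = 3.5      # one bundle of routine AI tasks, per the study
USERS_PER_DAY = 20_000_000  # illustrative reading of "tens of millions"

kwh_per_bundle = MICROWAVE_KW * HOURS_EQUIVALENT    # ~3.5 kWh per user
gwh_per_day = kwh_per_bundle * USERS_PER_DAY / 1e6  # kWh -> GWh

print(f"~{kwh_per_bundle:.1f} kWh per user per day")
print(f"~{gwh_per_day:.0f} GWh per day across {USERS_PER_DAY:,} users")
```

Seventy gigawatt-hours a day works out to roughly three gigawatts of continuous generation, the output of several large power stations. Even if each assumption is off by a factor of two, the total stays at power-plant scale, which is why turbine backlogs and reactor restarts follow.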

It takes a while to restart a power plant—Three Mile Island won’t be back online until 2028, and companies need power now. That is leading some companies to cut corners in the race for energy. Elon Musk’s xAI is powering its data centre with methane-fuelled generators, which locals worry are harming their health.

While companies like Google and Microsoft have been pioneers in creating a market for green electricity, the rise of AI is reversing progress towards a carbon-neutral future. A pre-print study from researchers at Harvard’s TH Chan School of Public Health finds that energy used by US data centres is 48 per cent more carbon-intensive than electricity generation in the US as a whole, with 56 per cent of data centre power coming from fossil fuels. And while data centre power use represents 4 per cent of total US electricity consumption now, those figures precede industry changes, such as Google generating AI summaries for virtually every search query. That percentage seems destined to rise, with the Trump administration announcing a private-sector partnership to invest $500bn in creating new data centres to power the AI transformation.

How much energy should the world invest in developing the future of AI? Somewhere between the cynic’s answer—none at all—and the enthusiast’s—whatever it takes—is the space for a profound and society-wide debate. Yet with very little debate and incomplete data, we are making decisions that prioritise possible progress over the certainty of worsening climate change from rising carbon emissions.

At least we’re beginning a societal conversation about AI and the environment. The human impacts of AI’s transformation of work and creativity may prove at least as disruptive as its changes to the planet. If AI is likely to eliminate at least some jobs, and to concentrate power in the hands of those who build and control these new infrastructures, we need to ask a parallel question: how much change to the world of work are we willing to accept in return for an AI future? To the world of art, music, film and literature?

Investment in AI is following a classic—but deeply problematic—pattern of technological inevitability. Because a technological future of AI-enabled everything is possible, we jump to the conclusion that it is inevitable, failing to question whether or not it is desirable.

Financial markets are investing in AI companies and data centres without knowing whether earnings will ever provide a sufficient return on those investments. It is harder still to know what return on investment would look like in a triple-bottom-line analysis that includes environmental and social impacts. We don’t have the data, and right now we’re not even trying to answer these questions.