
Two warring visions of AI

The power struggle between “doomers” and “accelerationists” will define the way this world-changing technology evolves
January 16, 2024

Late last year, the technology world was captivated by stories of a fast-moving coup at OpenAI, the organisation that created the program that has provoked so much conversation about the future of artificial intelligence, ChatGPT. At the centre of this boardroom drama—which saw Sam Altman briefly ousted as CEO, only for him to return days later—appeared to be a debate between two rival schools of thought regarding the dangers of AI. It’s worth understanding the terms of this debate, if only to know what questions are dominating discussions within companies developing such transformative technology.

On the board of the nonprofit that owns OpenAI were a number of thinkers who believe AI could lead to the destruction of humanity. Such thinkers are known as “doomers”, and their concerns focus on the risk that advanced AIs could decide to eliminate humanity, either to gain more power or to prevent further environmental degradation. Opposing the doomers are the accelerationists, who believe the AI-enabled future is one where rapid scientific achievement will help conquer the great problems facing mankind. Believing that AI will save lives, accelerationists argue that we owe it to future generations to speed up the progress of AI research.

Given that the stakes of this debate concern the future of humanity, it’s not surprising the rhetoric can get overheated. Marc Andreessen, an influential venture capitalist who achieved fame in the 1990s with his work on the Mosaic and Netscape internet browsers, recently released a “Techno-Optimist Manifesto” declaring that those who fear the pace of AI development—including those concerned about such things as “‘social responsibility’, ‘stakeholder capitalism’, [the] ‘Precautionary Principle’, ‘trust and safety’, ‘tech ethics’, ‘risk management’”—should be condemned for embracing “the nihilistic wish, so trendy among our elites, for fewer people, less energy, and more suffering and death.” In essence, Andreessen argues, anyone who would slow the pace of AI development has the blood of future generations on their hands.


You won’t be surprised to hear that the combative billionaire identifies as an accelerationist, but you might not have heard of a term he has also embraced: “TESCREAList”. Scholars Timnit Gebru and Émile P Torres coined the acronym TESCREAL to refer to a bundle of ideologies that are widely discussed in the AI community. The acronym serves as a compact intellectual history, listing schools of thought in chronological order: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism and longtermism. 

Torres says we can consider four of these ideas (TESC) as facets of transhumanism: the idea that humans can transcend their current limitations of intelligence and emotionality and become “better” versions of the species, no longer constrained by our biological form or need to live on Earth. Rationalism, the belief we can become better decision-makers by maximising our capacity for reason, has heavily informed effective altruism and longtermism, movements that the notorious Sam Bankman-Fried brought to visibility while he defrauded the customers of cryptocurrency exchange FTX.

Effective altruism applies a utilitarian framework to moral decision-making, encouraging adherents to think of their actions in terms of saving or improving the most lives. Bankman-Fried and others believed they were justified in making as much money as possible in order to donate it to effective charities, rather than doing work that had a direct, positive impact on the world. As celebrated crypto-critic Molly White points out, it’s worth being suspicious of any philosophy that encourages rich people to become richer.

Oxford philosopher William MacAskill has combined this utilitarian calculus with the idea that we have responsibilities not just to existing, living humans but to generations of humanity in the distant future. We may be the ancestors of countless humans who will occupy the planet (and perhaps the whole universe) for millennia to come. MacAskill asks us to consider the welfare of these theoretical humans. For him, if there’s even a small percentage chance that AI will destroy them, protecting us from that doom scenario should be as high a priority as combatting climate change. 
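The force of this argument comes from simple expected-value arithmetic. As a toy illustration (the figures here are invented for the purpose, not MacAskill’s own):

\[ \underbrace{10^{16}}_{\text{potential future lives}} \times \underbrace{0.001}_{\text{assumed risk of AI-caused extinction}} = 10^{13} \ \text{expected lives lost} \]

That is vastly more than the roughly eight billion people alive today, which is why, on this logic, even a tiny probability of catastrophe can come to dominate the moral ledger.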

Explaining these philosophies is not the same as endorsing them—Gebru and Torres coined their term to criticise a complex of ideas they see as deeply problematic, and I use the term in the same spirit. But understanding TESCREAL is essential to understanding a debate that otherwise seems esoteric, if not entirely insane. Both the AI doomers and the accelerationists sincerely believe they are proposing courses of action that could affect the future of countless humans.

One response is to suggest a little humility. Humans aren’t always great at problem-solving, and the farther problems are from our own experience, the more difficulty we have. A litany of failures in the field of international development suggests the dangers of importing solutions from one context into another. Perhaps the reason we are inclined to help friends and family more often than people across the world—or in the distant future—is that we are better positioned to understand how to do so. MacAskill cites approvingly the idea from the constitution of the Native American Iroquois Confederacy that “In every deliberation, we must consider the impact on the seventh generation”. But we might also read this as an instruction to acknowledge our limits in proposing solutions for wildly distant futures.

Recoiling from the more far-fetched doomer or accelerationist scenarios needn’t preclude thinking long term. Indeed, we should be thinking about long-term harms to present and future humans from the systems we are already unleashing today. In previous columns we’ve discussed the dangers of AI that could make decisions about criminal justice or social benefit programmes based on biased information. And you don’t have to be a doomer or accelerationist to appreciate that the systems we are building now are likely to shape a great deal of knowledge and discovery. Whoever’s worldview is embedded in ChatGPT and other AIs could have a lasting impact, for good or ill.

In April 2023, the Washington Post worked with the Allen Institute for AI to analyse what texts were used to train AIs such as Meta’s LLaMA and Google’s T5. The authors examined something called the “Colossal Clean Crawled Corpus”—a resource that collects data from 15m websites into a huge set of human-generated text. The Post analysed how AIs learn from this corpus and identified the sites that matter most in teaching a model how language works.

The leading sources include Google’s patent database, Wikipedia and English-language newspapers including the New York Times and the Guardian. On one hand, it’s good news that the corpus carefully excludes some of the most toxic corners of the internet—sites like 4chan have been reduced in influence so they represent only a tiny fraction of the source text. But it’s undeniable that there’s a strong “developed world” bias—particularly an American and UK bias—to the training sources. We know that newspapers tend to report more about wealthy nations than poor ones, and that Wikipedia has struggled to address the problem of women being less likely to be considered “notable” than men with comparable biographies. Our AI systems are incorporating biases from the corpora they are learning from, and these will likely shape what these systems “know” and what they can help us to discover.
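For a concrete sense of what this kind of audit involves, here is a minimal sketch, in Python, of one way to ask a similar question of the corpus yourself. It streams a sample of the publicly released allenai/c4 dataset from Hugging Face and counts documents per source domain; this is only an illustrative approximation, since the Post’s analysis weighted each site by its share of tokens rather than by document count.

# A rough sketch, not the Post/Allen Institute methodology: tally which web
# domains contribute the most documents to C4, using the publicly released
# "allenai/c4" dataset on Hugging Face (requires the `datasets` library).
from collections import Counter
from urllib.parse import urlparse

from datasets import load_dataset

# Stream the English portion of C4 rather than downloading the full corpus.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

SAMPLE_SIZE = 100_000  # a small sample, for illustration only
domain_counts = Counter()

for i, record in enumerate(c4):
    if i >= SAMPLE_SIZE:
        break
    # Each C4 record includes the source URL alongside the cleaned text.
    domain_counts[urlparse(record["url"]).netloc] += 1

# Print the 20 domains supplying the most documents in this sample.
for domain, count in domain_counts.most_common(20):
    print(f"{domain}\t{count}")

Counting documents per domain rather than tokens is a simplification, but it keeps the exercise small enough to run on a laptop while still showing where a corpus of this kind draws its text from.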

If the principles of justice, equity and representation were governing the conversation, we would be paying more attention to whom AI systems include and exclude. Working to collect knowledge and perspectives from marginalised populations of the global south, and ensuring these were integrated into the knowledge production systems of the future, would be a way to look at challenges much more immediate than the distant danger of killer AIs. Integrating such perspectives would help us build truly human knowledge tools, far removed from the TESCREAL complex of ideas. We should challenge both what’s in our technology and the values of those who are building it.