
How to save humanity from extinction

The human race faces a number of existential threats. These are the people working to counter them 

September 26, 2023
Stanislav Petrov is credited with averting a nuclear war in 1983. Image: Xavier Rockatansky / Alamy Stock Photo

The thing about employers is that they generally expect their instructions to be obeyed. It is a principle that holds up across the world of work, but it is particularly true of the Soviet Air Defence Forces.

This left Stanislav Petrov, a lieutenant colonel, in an unenviable situation. It was just after midnight on 26th September 1983, and Petrov was the duty officer at the command centre for Oko, the Soviet system designed to give early warning of approaching ballistic missiles. 

It was an intensely dangerous period in Soviet-American relations, with both sides on high alert for nuclear attack. American bombers, probing Soviet radar vulnerabilities, would soar towards the edge of Soviet airspace before turning away at the last minute. At the beginning of that month, the USSR had shot down a South Korean passenger jet that had strayed into its airspace.

Petrov, sitting in the bunker just south of Moscow, would have been well aware of these tensions. Imagine, then, the horror he would have felt on hearing the bunker’s alarm go off. Lights were flashing. Over the intercom, an officer shouted for Petrov to stay calm and do his job. The computers were reporting that an American intercontinental ballistic missile was hurtling towards the Soviet Union, and Soviet military doctrine dictated an immediate counter-attack. “LAUNCH”, said the computer display.

Petrov didn’t know whether the report was accurate, but he told his commanders it was false. “I thought the chances were 50-50 that the warnings were real,” he later recalled, “but I didn’t want to be the one responsible for starting a third world war.”

Even when the computers said four more missiles were on their way, the lieutenant colonel held his nerve. The satellite system’s reliability had been questioned, and the Soviet view was that a genuine American attack would involve hundreds of missiles, not a handful. Even so, we can count ourselves lucky that Petrov stayed his hand. He was soon vindicated: first by his continued survival, then by the finding that the system had been triggered by sunlight reflecting off clouds.

Petrov died in 2017. Two years earlier, in an interview with Time magazine, he spoke about the continued risk that nuclear false alarms posed to humanity. “The slightest false move can lead to colossal consequences,” he said. “That hasn’t changed.” 

Since then, humanity’s situation has become more precarious. Nuclear tensions remain high, and the natural risks that infrequently menace life on Earth—asteroids, supervolcanoes—have been joined by threats, such as AI and human-created pandemics, that we have brought into the world of our own volition. This summer, a group of experts was polled on the chances of humanity being wiped out by the year 2100; the median estimate was 6 per cent. (To put that in context, that would make our imminent extinction about twice as likely as your next train being cancelled.)
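
For a sense of the arithmetic behind that comparison, here is a sketch in which the cancellation rate is an assumed illustrative figure, not one taken from the polling:

```python
# Back-of-envelope comparison. The cancellation rate is an assumed
# illustrative figure of about 3 per cent, not a cited statistic.
p_extinction_by_2100 = 0.06  # median expert estimate quoted above
p_train_cancelled = 0.03     # assumed chance your next train is cancelled

print(p_extinction_by_2100 / p_train_cancelled)  # -> 2.0, "about twice"
```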

For the 40th anniversary of Petrov’s quick thinking—a date known as Petrov Day within the community of researchers who study threats to humanity—I asked a set of experts: What are the main threats? Is it possible to neutralise these extreme risks (also known, depending on their severity, as catastrophic or existential risks)? And if they can be neutralised—how? Beginning with the lesser threats and moving on to the gravest, this is the testimony of people who have made it their life’s work to understand and mitigate them.

Asteroids

Were it not for the Chicxulub asteroid, mammals might not have supplanted the dinosaurs and humanity would not be here to ponder its own destruction. 

The vast, vast majority of the solar system is empty space, but there is enough flotsam and debris hurtling around the Sun that, historically, a globally risky asteroid has hit Earth once every few million years.
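
That rate implies a vanishingly small chance in any given century. A back-of-envelope Poisson sketch, assuming, purely for illustration, one globally risky impact every three million years on average:

```python
import math

# Assumed illustrative rate: one globally risky impact per 3 million years.
rate_per_year = 1 / 3_000_000
years = 100  # one century

# Poisson model: probability of at least one such impact in the window.
p_at_least_one = 1 - math.exp(-rate_per_year * years)
print(f"{p_at_least_one:.6%}")  # roughly 0.0033% per century
```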

About once a year, says Nasa, a car-sized asteroid burns up in Earth’s atmosphere. “There is an endless amount of smaller boulders and grains of sand,” says Anders Sandberg, a natural risks expert at the University of Oxford’s Future of Humanity Institute. “But most of them are super tiny, and the probability of getting hit by something oversized that would wipe out human civilisation is relatively low. There is some slight worry about long-period comets”—comets that take 200 years or more to orbit the sun—“that we’re really unlucky with, that we don’t see coming until it’s too late. But that risk is still relatively understood and managed.”

And our planetary defences are improving: on 26th September last year—Petrov Day, funnily enough—Nasa successfully executed a mission in which a spacecraft deflected an asteroid by hitting it head-on.
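
The physics of such a kinetic impactor is simple in outline: the spacecraft transfers its momentum to the asteroid, amplified by the debris the collision throws off. A rough sketch using approximate public figures for the DART mission; the impactor mass, closing speed, target mass and momentum-enhancement factor below are all approximations, not official values:

```python
# Rough momentum-transfer estimate for a kinetic impactor.
# All numbers are approximate public figures for DART and Dimorphos.
m_impactor = 570.0   # kg, spacecraft mass at collision (approx.)
v_impact = 6_100.0   # m/s, closing speed (approx.)
m_asteroid = 4.3e9   # kg, estimated mass of Dimorphos
beta = 3.6           # momentum enhancement from ejecta (estimated)

# Velocity change imparted to the asteroid: dv = beta * m * v / M.
dv = beta * m_impactor * v_impact / m_asteroid
print(f"dv ≈ {dv * 1000:.1f} mm/s")  # a few millimetres per second
```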

Space weather

On 1st September 1859, the Northern Lights danced in skies worldwide. Miners in the Rocky Mountains, thinking the sun had risen, got up at 1am. Compasses spun wildly and fires broke out in telegraph stations.

The cause was the most intense geomagnetic storm—a disturbance of Earth’s magnetic field driven by an eruption of material from the Sun—in recorded history. Such an event today would hardly wipe out our species, but it would cause chaos by overwhelming our power grids and other electronic equipment. That scenario, says Sandberg, is “a really horrifying thought.

“We still don’t have a really good idea what would happen if a lot of power grids were wiped out by a solar flare, but we have good reason to believe that there’s absolutely nothing good there.”

The UK government is quietly taking this risk seriously. The Met Office Space Weather Operations Centre is attempting to forecast these events, and the electricity sector has built spare transformer capacity and is replacing old models with more resilient versions. Other sectors have been called on to make plans. 

“We just need to make sure that our power grids can handle it the next time the sun burps in our direction,” says Sandberg. 

Supervolcanoes

Scholars of supervolcanoes have been known to grumble about the paucity of funding their field receives compared to what’s lavished on the more Hollywood-ready field of asteroid deflection. We live on a planet with molten innards, one result of which is that supervolcano eruptions—that is, volcanic eruptions large enough to cause global ill-effects—come round many times more often than large asteroid collisions. As recently as 1815, the eruption of the Tambora volcano in Indonesia lofted enough ash and sulphur into the stratosphere to cause famine in Europe in what came to be known as the “year without a summer”.

What can we do? In the short term, better monitoring would help us manage the havoc that currently active volcanoes can wreak—forcing trade routes to divert, for instance. Yet as things stand, we know too little even about eruptions that are already happening. “There is an embarrassing number of volcanic eruptions we discover only later when somebody goes through the satellite data,” says Sandberg.

In the very long term, it might be possible to geoengineer dangerous supervolcanoes, syphoning off enough heat to avert a destructive eruption.

And in the medium term, some researchers have argued, we should prepare ourselves for “abrupt sunlight reduction scenarios”—ASRS—such as the one caused by Tambora. ASRS, which can also be caused by asteroid impacts and, it is believed, nuclear war, impair photosynthesis and bring down global temperatures, causing a ripple effect that travels all the way to the top of the food chain. (In the dinosaurs’ case, this effect seems to have done more global damage than the Chicxulub impact itself.)

Mike Hinge is a senior economist at Allfed—the Alliance to Feed the Earth in Disasters. He and his colleagues advise governments on how best to feed their populaces in the event of an ASRS. A moderate shock, says Hinge, might see countries using polytunnels to extract maximal efficiency from their crops; a more severe shock might force us to repurpose paper mills, feeding tree matter through them to produce edible sugars from plant cellulose.

“I can understand that some people find it very heavy and wouldn’t want to work on this,” says Hinge. He feels, though, that we are not helpless. “I find it very uplifting that we can make progress on this.”

Climate change

Unlike many of the other threats to humanity, climate change is tangible to us. We feel it in our hotter summers; we see it in footage of extreme weather and the destruction of species. Climate change, therefore, has received orders of magnitude more attention and funding than any other threat.

We still haven’t fixed it, and we are therefore on a path to a world roughly 2.5°C warmer than pre-industrial levels, with a broad range of potential outcomes around that figure. This means higher sea levels, more extreme weather, more frequent famine in the developing world and the further loss of animal species.

Such are the effects of the warming scenarios considered most likely by the Intergovernmental Panel on Climate Change. These scenarios do not entail human annihilation, but they make for a more dangerous world. “Climate makes other risks more salient and stronger,” says Johannes Ackva, a climate solutions expert, “mostly through political destabilisation. Food insecurity can lead to political instability, leading to political risk and to civil war.”

Ackva grew up a climate activist and now works for a non-profit called Founders Pledge, where his research guides the philanthropy of entrepreneurs who want to use their wealth to address the climate crisis. His view is that society should be making “big bets” on technological innovations that might make clean energy cheaper, or reduce the amount of carbon in the atmosphere. 

We’ve done this with solar and wind, he says, crediting Germany with bringing down the cost of renewables. Renewable energy, though, is imperfect. For instance, it requires battery storage, and our batteries are not yet very good at storing large amounts of energy for the months-long periods that seasonally varying energy sources necessitate.

This means we should hedge our bets by looking at other energy sources. The British government, Ackva points out, is examining the potential of small modular reactors—miniature nuclear power plants—while several American companies are making eye-catching progress in harnessing geothermal energy.  

Ackva sees potential in carbon capture (which captures carbon released by industrial processes before it is expelled into the atmosphere) and carbon removal (in which carbon is extracted from the air).

In the interest of the climate, Ackva is vegetarian and does not drive. But he says that political action, rather than individual lifestyle change, has to be the primary solution for systemic problems. One way to do this with regard to climate, he says, is to donate to high-impact nonprofits, such as the Clean Air Task Force and TerraPraxis, which support energy innovation. (Both are supported by the climate change fund that Ackva oversees and to which individuals can donate).

“Donating is a fairly effective and underappreciated opportunity,” says Ackva, who counts himself as more optimistic about climate change than he used to be. The damage we are causing to the planet is set to continue, but society has woken up to the problem, and a massive overhaul of our energy production seems much more plausible than it used to. Losing sleep, he says, won’t help him address the problem. “There’s still risk”—the risk that climate change has a catastrophic impact on humanity, rather than just a bad one—“but we can reduce that risk.”

Nuclear war

In Petrov’s day, there were about 60,000 nuclear warheads in the world. Today there are something like 12,500.

But there are more routes to conflict, explains Carl Robichaud of Longview Philanthropy. In the 1980s, the vast majority of the warheads were owned by the US or the USSR, a state of affairs that permitted bilateral arms reduction over the decades that followed. Today, China is tripling or quadrupling its arsenal.

“How do you achieve risk mitigation and arms reductions when you have three parties that are negotiating?” asks Robichaud, whose job is to oversee grantmaking to organisations that work to reduce the threat of nuclear war. “That’s a new problem. You also have countries like North Korea.” 

Robichaud outlines several scenarios that could lead to the first hostile use of nuclear weapons since 1945. China invades Taiwan. Skirmishes between India and Pakistan turn into war. Vladimir Putin, on the retreat in Ukraine, makes good on his threats. Or a miscalculation, like the one Petrov could have made, sparks an escalation.

The consequences would be horrifying. “If you have a nuclear war, either by accident, miscalculation, or deliberate use of nuclear weapons, you could have as many deaths and injuries in the first few days as you had in all of World War One or World War Two. And it would only get worse from there.”

City centres would be flattened. Survivors in the surrounding areas would suffer radiation sickness. Infrastructure such as health systems would be totally overrun. From here, we can’t be sure what happens next—we are relying on modelling, thankfully, rather than history—but a sufficiently large nuclear exchange, according to widely accepted models, would kick up enough soot to cause an ASRS (a scenario known, in this context, as nuclear winter).

As the world rebuilds, says Robichaud, “you could have a new wave of nuclear proliferation that leads to a world that’s armed to the teeth with nuclear weapons. And that’s a really scary scenario too.”

Nuclear war itself won’t kill everyone on the planet, but its second-order effects—the things that happen after the exchange—would put humanity in a fragile position. How can we make sure this horrific scenario doesn’t play out?

One answer might be the use of missile defence systems, whereby governments can shoot approaching warheads out of the sky. Robichaud sees this measure as potentially counterproductive insofar as it encourages states to produce, and potentially deploy, more warheads than they would have otherwise. “Missile defence might be useful in defeating or deterring a small nuclear state like North Korea that can’t afford to spend a ton of money on their nuclear weapons delivery systems. But against a large capable state like Russia or China, it’s likely to be counterproductive. Essentially, it sets a floor on the minimum number of nuclear weapons that those countries will see as necessary.”

Preparedness plays a role; we should be ready for eventualities such as nuclear winter, which is what Allfed is working towards. Robichaud, though, thinks it is difficult to make accurate forecasts of conditions after a nuclear exchange, and is more confident in interventions that improve resilience and redundancy in command and control, communications and early-warning systems. Fundamentally, it is diplomacy that stands the best chance of reducing the global total of warheads. It has happened before and it needs to happen again. That precedent, though, offers only limited solace to those who work to make the world safer from nuclear war.

“I deal well with ambient levels of stress,” says Robichaud. “And I have a cheery disposition, which makes me well suited to this work. I think if I were a really anxious person, I would have a hard time working for as long as I have on this issue.”

Pandemics

In the past 150 years, four pandemics have caused a million or more deaths. Covid-19 was the most recent of these, directly causing the loss, according to WHO figures, of seven million lives.

Yet biologists fear that we may face far worse pandemics. One possibility is that a pandemic begins with the accidental release of a dangerous pathogen. This risk is exacerbated by existing work on such pathogens, including work that, in the interest of science, makes them more transmissible. So-called “gain-of-function” research is intended to enhance our understanding of pandemics, and it would be a clear benefit to humanity if it could simply be kept reliably sealed inside the lab.

Alas. Putatively high-security labs have been known to let slip both smallpox—repeatedly—and anthrax, as well as the plague, which killed an American biologist in 2009. These labs are multiplying in number. It is inevitable that more pathogens will leak—as Covid-19 perhaps did.

Worse still is the prospect of maliciously created pathogens. Scientific advances are making it quicker, cheaper and easier to synthesise viruses using DNA that can be ordered online. Terrorists have already tried to cause mass casualties using existing pathogens; the risk will only grow as the technology is democratised. According to the biosecurity specialist and MIT professor Kevin Esvelt, at least 30,000 people today could, if they wanted to, assemble an influenza virus from scratch.

Rather than rely every day on the goodwill, and watertight security, of every single professional and amateur virologist, we should take several measures to shore up our biosecurity. For instance, diplomats must ensure that screening systems for DNA orders—which are currently being constructed, and which would prevent dangerous sequences from being sold—are made mandatory worldwide. Similarly, diplomacy can enforce accountability for the release of pathogens—a preventative measure, but an important one.
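
As a cartoon of what such screening involves, consider a synthesis company checking each incoming order against a list of sequences of concern before fulfilling it. Real systems rely on curated hazard databases and tolerate mutations and fragmentation; the hazard list and order below are invented for illustration:

```python
# Toy sketch of DNA order screening via exact substring matching.
# Real screening uses curated databases and fuzzier matching;
# these sequences are invented for illustration.
HAZARD_LIST = ["ATGGCCTTTAGGCAT", "GGGTTTAAACCCGGG"]

def screen_order(sequence: str) -> bool:
    """Return True if the ordered sequence contains no flagged subsequence."""
    sequence = sequence.upper()
    return not any(hazard in sequence for hazard in HAZARD_LIST)

order = "CCCATGGCCTTTAGGCATTTT"  # contains the first flagged sequence
print("fulfil" if screen_order(order) else "flag for human review")
```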

Dangerous labs should be shut down. Lab facilities that can produce new mRNA vaccines at speed should be upgraded. Improved PPE, reliable enough to allow essential workers to go about their business in a worst-case outbreak, should be manufactured at scale. Wastewater should be monitored for new pathogens, particularly at transport hubs such as airports. (The US and the UK have both updated their biosecurity strategies over the past few months, addressing many of these ideas. Biosecurity experts I spoke to were cautiously pleased with the upgraded policy packages.)

One new tool appears to be of particular promise. Recent research suggests that short-wavelength ultraviolet light, between 200 and 230 nanometres (so-called far-UVC), kills viruses and bacteria without harming multicellular organisms like us. If we can make indoor air as safe as the outdoors, using specially designed bulbs, we will have another layer of protection. And the better our protection, the less attractive pandemics will be to those who wish to cause widespread harm.

AI

The danger posed by an artificially created pathogen is currently capped by the intelligence of the humans who create it. But that cap will soon be removed. Earlier this year, Esvelt, the MIT biosecurity expert, asked students without scientific training to use large language model (LLM) chatbots to help them devise a pandemic.

In one hour, his report states, “the chatbots suggested four potential pandemic pathogens, explained how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, identified detailed protocols and how to troubleshoot them, and recommended that anyone lacking the skills to perform reverse genetics engage a core facility or contract research organization.”

Chatbots, of course, are limited in their ingenuity. But the capability of AI has accelerated at an extraordinary rate. So far, calls for a pause have been unsuccessful. We therefore face the prospect of super-human ingenuity being applied to any problem a human wants solved. 

And the creation of a pandemic is just one of those potential endeavours. As Mustafa Suleyman, the co-founder of the leading AI lab DeepMind, writes in his new book, The Coming Wave: “Ask it”—an advanced AI system—“to suggest ways of knocking out the freshwater supply, or crashing the stock market, or triggering a nuclear war, or designing the ultimate virus, and it will. Soon.”

Both experts on AI and experts on forecasting view AI as the most likely cause of human extinction this century. What can be done? 

We need strict controls on who gets to build and use powerful AI systems. At the same time, we must ensure those systems themselves are controllable. Lee Sharkey, co-founder of a new AI safety lab, Apollo Research, says that the long-term scenario that most concerns him, “where long term might actually be a reasonably short length of time, is that we would just lose control of the systems because we don’t know how to control them properly”.

One obstacle to our controlling them is that we don’t understand them. AIs are, to use a term coined by the DeepMind researcher Neel Nanda, “inscrutable black boxes”. If we are to solve the “alignment problem”—the problem of ensuring that the actions of AIs accord with our values and interests, even though we cannot always clearly define them—then we will probably need to understand the reasoning going on within those black boxes.

That is the essence of the field of interpretability, in which Nanda and Sharkey work. Sharkey began his undergraduate studies in medicine, before picking up computational neuroscience and, eventually, moving into AI safety. To an extent, he says, his work in interpretability “is basically a form of artificial neuroscience, where you take a neural network, you break it up into bits that you can individually understand, and you try to tell the story of how these bits interact, and how this overall story determines what the neural network’s output is.”
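
For a flavour of what this “artificial neuroscience” looks like at its simplest, here is a minimal sketch, using the PyTorch library, of one early step: instrumenting a toy network so that each layer’s activations can be read out and studied, rather than treated as part of a single black box. The network and input are stand-ins; real interpretability research goes far beyond this:

```python
import torch
import torch.nn as nn

# A toy network standing in for the models interpretability researchers study.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 2),
)

# Record every layer's output so the network's intermediate "bits"
# can be inspected individually.
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, layer in model.named_children():
    layer.register_forward_hook(make_hook(name))

_ = model(torch.randn(1, 4))  # one forward pass on random input

for name, act in activations.items():
    print(name, tuple(act.shape))  # layer name and activation shape
```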

Sharkey, knowing that existing AIs have proven difficult to control and adept at manipulating humans, has had moments of gloom. “It’s pretty troubling. But getting neurotic about it is not going to help.”

He and his colleagues aim to persevere. If we understand complex AI systems, we stand a better chance of controlling them. And if we can control them, we might be able to use them to enhance our likelihood of survival rather than diminish it. Our world could be a safer place than Petrov’s, rather than a more dangerous one.