A Christmas message from Martin Rees

Welcome to the post-human world

December 24, 2015

We live in a world that is increasingly dependent on elaborate networks: electricity grids, air traffic control, international finance, just-in-time delivery, globally dispersed manufacturing, and so forth. Unless these networks are highly resilient, their benefits could be outweighed by catastrophic (albeit rare) breakdowns—real-world analogues of what happened to the financial system in 2008. Our cities would be paralysed without electricity. Supermarket shelves would be empty within days if supply chains were disrupted. Air travel can spread a pandemic worldwide within days. And social media can spread panic and rumour at the speed of light.

It is imperative to guard against the downsides of such an interconnected world. Plainly this requires international collaboration. For instance, whether or not a pandemic becomes global may hinge on how quickly a Vietnamese poultry farmer can report a strange sickness among his animals. And the magnitude of the societal breakdown from pandemics would be far greater than in earlier centuries. English villages in the 14th century continued to function even when the Black Death almost halved their populations. In contrast, our social framework would be vulnerable to breakdown as soon as hospitals overflowed and health services were overwhelmed—which could occur when the fatality rate was still a fraction of 1 per cent. And the human cost would be worst in the megacities of the developing world.

Advances in microbiology—diagnostics, vaccines and antibiotics—offer the prospect of containing pandemics. But the same research has controversial downsides. For example, in 2012 researchers showed that it was surprisingly easy to make a virus both more virulent and more transmissible. In October 2014, the US federal government decided to cease funding these so-called “gain of function” experiments. Also, malign or foolhardy individuals have far more leverage than in the past. It is hard to make a clandestine H-bomb. In contrast, biotech involves small-scale, dual-use equipment. Millions will one day have the capability to misuse it, just as they can misuse cybertech today. Indeed, biohacking is burgeoning even as a hobby and competitive game.




In the early days of DNA research, a group of biologists met at Asilomar, California, and agreed guidelines on what experiments should and shouldn’t be done. This seemed an encouraging precedent, and there have been calls for similar regulation of the new bio-techniques. But today, 40 years later, the research community is far more international, and more influenced by commercial pressures. Whatever regulations are imposed, on prudential or ethical grounds, can’t be enforced worldwide any more than the drug laws can—or the tax laws. Whatever can be done will be done by someone, somewhere.

We know all too well that technical expertise doesn’t guarantee balanced rationality. The global village will have its village idiots, and they will have global range. The rising empowerment of tech-savvy groups, or even individuals, by biotech as well as by cybertechnology will pose an intractable challenge to governments and aggravate the tension between freedom, privacy and security. Monitoring will also advance—utilising technology such as wearables or microwave and neutron beams. These might not be as vexatious as current security checks but, of course, that doesn’t make us relaxed about their intrusiveness. The order in which futuristic technologies develop can be crucial: monitoring techniques, vaccines and so forth should be prioritised above the technologies that render them necessary.

These concerns are relatively near-term—within 10 or 15 years. But what about 2050 and beyond? The smartphone, the internet and their ancillaries would have seemed magic even 20 years ago. So, looking several decades ahead, we must keep our minds open, or at least ajar, to transformative advances that may now seem like science fiction.

The great physicist Freeman Dyson conjectures a time when children will be able to design and create new organisms just as routinely as his generation played with chemistry sets. Were even part of this scenario to come about, our ecology (and even our species) surely would not long survive unscathed. And what about another transformative technology: robotics and artificial intelligence (AI)? It is nearly 20 years since IBM’s “Deep Blue” beat Garry Kasparov, the world chess champion. More recently, another IBM computer won a TV gameshow—not the mindless kind featuring bubble-headed celebs, but one called “Jeopardy” that required wide knowledge and posed crossword-clue-style questions.

Computers use “brute force” methods. They learn to identify dogs, cats and human faces by “crunching” through millions of images—which is not the way babies learn. They learn to translate by reading millions of pages of, for example, multilingual EU documents (they never get bored!). There have been exciting advances in what is called generalised machine learning—DeepMind (a small London company that Google recently bought) created a machine that can figure out the rules of old Atari games without being told, and then play them better than humans.
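
To make “crunching” through millions of examples concrete, here is a minimal sketch of that kind of brute-force learning—a simple classifier improving itself by repeated passes over labelled data. The data, labels and numbers are invented for illustration; this is generic logistic regression, not DeepMind’s or IBM’s actual systems.

```python
# A minimal sketch of "brute force" statistical learning, assuming nothing
# beyond numpy. The 2,000 synthetic vectors below stand in for labelled
# photographs; this illustrates the general idea, not any lab's real method.
import numpy as np

rng = np.random.default_rng(0)

# Fake "images": two overlapping clusters of 20-dimensional feature vectors.
n, d = 2000, 20
X = np.vstack([rng.normal(-0.5, 1.0, (n // 2, d)),   # label 0: "cat"
               rng.normal(+0.5, 1.0, (n // 2, d))])  # label 1: "dog"
y = np.array([0] * (n // 2) + [1] * (n // 2))

w, b, lr = np.zeros(d), 0.0, 0.1
for _ in range(200):                          # repeated passes over the data
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # predicted probability of "dog"
    w -= lr * (X.T @ (p - y)) / n             # gradient step on the log-loss
    b -= lr * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print(f"training accuracy: {np.mean((p > 0.5) == y):.2f}")
```

The classifier knows nothing about cats or dogs; its accuracy comes entirely from the volume of labelled examples it grinds through—which is the author’s point about how this differs from the way babies learn.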

Advances in sensors and motor-skills have been slower. Robots are still clumsy in moving pieces on a real chessboard. They can’t tie your shoelaces or cut your toenails. But sensor technology, speech recognition, information searches and so forth are advancing apace. Robots won’t just take over manual work (indeed, plumbing and gardening will be among the hardest jobs to automate), but also routine legal work (conveyancing and so on), medical diagnostics and operations.




Can robots cope with emergencies? For instance, if an obstruction suddenly appears on a crowded highway, can Google’s driverless car discriminate whether it’s a paper bag, a dog or a child? The likely answer is that its judgement will never be perfect, but will be better than that of the average driver. But when accidents occur, they will create a legal minefield. Who should be held responsible—the “driver,” the owner, or the designer?

The big social and economic question is this: will robotics be like earlier disruptive technologies—the car, for instance—and create as many jobs as it destroys? Or is it different this time? These innovations could generate huge wealth for an elite. It’s not just lefties but people like Martin Wolf of the Financial Times who argue the need for massive redistribution to ensure that everyone has at least a “living wage.” He also argues that we need to create and upgrade public-service jobs where the human element is crucial and is now undervalued—carers for young and old, custodians, gardeners in public parks and so on.

But, looking further ahead, if robots could observe and interpret their environment as adeptly as we can, they would truly be perceived as intelligent beings, to which (or to whom) we can relate, at least in some respects, as we do to other people. Such machines pervade popular culture—in movies like Her, Transcendence and Ex Machina. In his scary and scholarly book Superintelligence, Nick Bostrom, the Oxford philosopher, speculates about what could happen if a machine developed a mind of its own. Would it stay docile, or “go rogue”? If it could infiltrate the internet, it could manipulate the rest of the world. It might have goals utterly orthogonal to human wishes—or even treat humans as an encumbrance.

Some of the serious AI pundits think the field already needs guidelines—just as biotech does. But others regard these concerns as premature—and worry less about artificial intelligence than about natural stupidity. Be that as it may, it’s likely that during this century our society and its economy will be transformed by autonomous robots, even though the jury’s out on whether they’ll be “idiot savants” or display superhuman capabilities.

There’s disagreement about the route towards human-level intelligence. Some think we should emulate nature, and reverse-engineer the human brain. Others say that this is as misguided as designing a flying machine by copying how birds flap their wings. Philosophers debate whether “consciousness” is special to the wet, organic brains of humans, apes and dogs—if so, robots, even if their intellects seem superhuman, will still lack self-awareness or inner life.

Computer scientist and futurist Ray Kurzweil, now working at Google, argues that once machines have surpassed human capabilities, they could themselves design and assemble a new generation of even more powerful ones—an intelligence explosion. He thinks that humans could transcend biology by merging with computers, maybe losing their individuality and evolving into a common consciousness. In old-style spiritualist parlance, they would “go over to the other side.”

Kurzweil is the most prominent proponent of this so-called “singularity.” But he is worried that it may not happen in his lifetime, so he wants his body frozen until this nirvana is reached. I was once interviewed by a group of “cryonic” enthusiasts from California called the Society for the Abolition of Involuntary Death. They will freeze your body, so that when immortality is on offer you can be resurrected or your brain downloaded. If you can’t afford the full whack, there’s a cut-price option of having just your head frozen. I told them I’d rather end my days in an English churchyard than a Californian refrigerator. They derided me as a “deathist.”

Cryonics is still a fringe idea, but research on ageing is being prioritised. Will the benefits be incremental? Or is ageing a “disease” that can be cured? Dramatic life-extension would plainly be a real wildcard in population projections, with huge social ramifications. But it may happen, along with human enhancement in other forms.

Technology brings with it great hopes, but also great fears. Which scenarios are pure science fiction, and which could conceivably become real? How can we enhance our resilience against the more credible threats, and warn against developments that could run out of control? At the same time, we mustn’t forget an important maxim: the unfamiliar is not the same as the improbable.

My special interest is space—and this is where robots surely have a future. During this century the whole solar system will be explored by flotillas of miniaturised probes—far more advanced than Rosetta, which landed a probe on a comet, or New Horizons, which surveyed Pluto, both of which were designed and built 15 years ago. Giant robotic fabricators may build vast lightweight structures floating in space (solar energy collectors or gossamer-thin radio reflectors, for instance), mining raw materials from the Moon or asteroids.

Robotic advances will erode the practical case for human spaceflight. Nonetheless, I hope people will follow the robots, though it will be as risk-seeking adventurers rather than for practical goals. The most promising developments are spearheaded by private companies. For instance, SpaceX, led by Elon Musk (who also makes Tesla electric cars), has launched unmanned payloads and docked with the space station. Musk hopes soon to offer orbital flights to paying customers. Wealthy adventurers are already signing up for a week-long trip round the far side of the Moon—voyaging further from Earth than anyone has been before (but avoiding the greater challenge of a Moon landing and blast-off). I’m told they have sold a ticket for the second flight, but not for the first. These private-enterprise efforts can tolerate higher risks than a western government could impose on publicly-funded civilians, and can thereby cut costs compared with Nasa or the European Space Agency. Some people argue that they should be promoted as adventures or extreme sports rather than as “space tourism,” a term which lulls people into an unrealistic sense of confidence.

By 2100, courageous pioneers in the mould of Ranulph Fiennes or Felix Baumgartner, who broke the sound barrier in freefall from a high-altitude balloon, may have established “bases” independent from the Earth—on Mars, or maybe on asteroids. Musk himself (aged 44) says he wants to die on Mars—but not on impact.

Whatever ethical constraints we impose here on the ground, we should surely wish these adventurers good luck in using all the resources of genetic and cyborg technology to adapt themselves and their progeny to alien environments. This might be the first step towards divergence into a new species: the beginning of the post-human era. And it would also ensure that advanced life would survive, even if the worst conceivable catastrophe befell our planet.

But don’t ever expect mass emigration from Earth. Nowhere in our solar system offers an environment even as clement as the Antarctic or the top of Everest. It’s a dangerous delusion to think that space offers an escape from Earth’s problems.

And here on Earth I’ve argued that we may indeed have a bumpy ride through this century. Environmental degradation, extreme climate change, or unintended consequences of advanced technology could trigger serious, even catastrophic, setbacks to our society. But they wouldn’t wipe us all out. They’re extreme, but strictly speaking not “existential.”

So are there conceivable events that could snuff out all life? Physicists were (in my view rightly) pressured to address the speculative “existential risks” that could be triggered by Cern’s Large Hadron Collider particle accelerator, which generated unprecedented concentrations of energy. Could it convert the entire Earth into particles called “strangelets”—or, even worse, trigger a “phase transition” that would shatter the fabric of space itself? Fortunately, reassurance could be offered: indeed, I was one of those who calculated that cosmic rays of much higher energies collide frequently in the galaxy, but haven’t ripped space apart.
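
The logic of that reassurance can be compressed into a schematic bound (my gloss, with illustrative symbols rather than the published numbers): if nature has already staged n comparable collisions without catastrophe, the risk per collision is at most of order 1/n, and a collider adding m further collisions contributes almost nothing when n vastly exceeds m.

```latex
% Schematic cosmic-ray bound (illustrative symbols, not the published analysis):
% n natural collisions with no catastrophe bound the per-collision risk p,
% so m collider collisions add a total risk of at most roughly m/n.
\[
  p \;\lesssim\; \frac{1}{n}
  \qquad\Longrightarrow\qquad
  P(\text{catastrophe from } m \text{ collisions}) \;\lesssim\; \frac{m}{n}
  \;\ll\; 1 \quad\text{when } n \gg m .
\]
```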

But physicists should surely be circumspect about carrying out experiments that generate conditions with no precedent even in the cosmos—just as biologists should avoid the release of potentially devastating pathogens. So how risk-averse should we be? If there were a threat to the entire Earth, the public might properly demand assurance that the probability is below one in a billion—or even one in a trillion—before sanctioning such an experiment.

But can we meaningfully give such assurances? We may offer these odds against the Sun not rising tomorrow, or against a fair die rolling 100 sixes in a row; that’s because we are confident that we understand these things. But if our understanding is shaky—as it plainly is at the frontiers of physics—we can’t really assign a probability, nor confidently assert that something is stupendously unlikely. If a US Congressional committee asked me: “Are you really claiming that there’s less than a one in a billion chance that you’re wrong?” I’d feel uneasy saying yes.
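
The dice figure, at least, can be stated with confidence, precisely because the model is fully understood:

```latex
% Probability that a fair die shows a six 100 times in a row.
\[
  \left(\tfrac{1}{6}\right)^{100} \;\approx\; 1.5 \times 10^{-78} ,
\]
```

vastly below even the one-in-a-trillion ($10^{-12}$) threshold. It is at the frontiers of physics, where the model itself is in doubt, that no such number can honestly be quoted.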

But on the other hand, if a congressman went on to ask: “Could such an experiment disclose a transformative discovery that—for instance—provided an unlimited and unenvisioned source of energy?” I’d again offer fairly monstrous odds against it. The issue is then the relative likelihood of these two unlikely events—one hugely beneficial, the other catastrophic. Innovation is often hazardous, but undiluted application of the “precautionary principle” has a manifest downside. There is “the hidden cost of saying no.”
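
The trade-off being weighed can be written as a crude expected-value test (schematic notation of my own, not a formula from the lecture):

```latex
% Sanction the experiment only if the expected gain outweighs the expected loss:
% p_benefit and p_catastrophe are both tiny; V is the value of the discovery,
% L the loss from the disaster.
\[
  p_{\mathrm{benefit}} \cdot V \;>\; p_{\mathrm{catastrophe}} \cdot L .
\]
```

Because both probabilities are monstrously small and both stakes enormous, neither side of the inequality can simply be rounded to zero—which is why “the hidden cost of saying no” belongs in the calculation.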

And the priority that we should assign to avoiding truly existential disasters depends on an ethical question posed by the Oxford philosopher Derek Parfit, among others. Consider two scenarios: scenario A wipes out 90 per cent of humanity; scenario B wipes out 100 per cent. How much worse is B than A? Some would say “10 per cent worse” as the body count is 10 per cent higher. But others would say B was incomparably worse, because human extinction forecloses the existence of trillions of future people—and indeed an open-ended post-human future.
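
The two valuations can be tallied explicitly (schematic symbols: N for the present population, F for the potential future population foreclosed by extinction):

```latex
% Two ways of scoring scenario B (extinction) against scenario A (90 per cent loss).
\[
  \text{cost}(A) = 0.9\,N, \qquad
  \text{cost}(B) =
  \begin{cases}
    N     & \text{counting present lives only: roughly 10 per cent worse,} \\
    N + F & \text{counting future lives too: incomparably worse, since } F \gg N .
  \end{cases}
\]
```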

This immense future, incidentally, is something that astronomers are especially aware of. The stupendous timespans of the evolutionary past are now part of common culture (outside fundamentalist religious circles, at any rate). But most people still tend to regard humans as the culmination of the evolutionary tree. That hardly seems credible to an astronomer. Our Sun formed 4.5 billion years ago, but it has 6 billion years remaining before its fuel runs out. And the expanding universe will continue—perhaps for ever. To quote Woody Allen, eternity is very long, especially towards the end.

The timescale for human-level AI may be decades, or it may be centuries. Be that as it may, it is but an instant compared to the future horizons, and far shorter than the timescales of the Darwinian selection that led to humanity’s emergence.

I think it’s likely that machines will gain dominance on Earth. This is because there are chemical and metabolic limits to the size and processing power of “wet” organic brains. Maybe we’re close to these limits already. But no such limits constrain silicon-based computers (still less, perhaps, quantum computers): for these, the potential for further development over the next billion years could be as dramatic as the evolution from pre-Cambrian organisms to humans. So, by any definition of “thinking,” the amount and intensity that is done by organic human-type brains will be utterly swamped by the future cerebrations of AI.

Moreover, the Earth’s biosphere isn’t the optimal environment for advanced AI—interplanetary and interstellar space may be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological “brains” may develop powers that humans can’t even imagine. But we humans shouldn’t feel too humbled. We could be of special cosmic significance for jump-starting the transition to silicon-based (and potentially immortal) entities, spreading their influence far beyond the Earth, and far transcending our limitations.

So, even on this “concertinaed” timeline—extending billions of years into the future, as well as into the past—this century may be a defining moment at which humans could jeopardise life’s immense potential. That’s why the avoidance of complete extinction has special resonance for an astronomer.

Back in the here and now, I’d argue that there’s no scientific impediment to achieving a sustainable world, where all enjoy a lifestyle better than those in the west do today. We live under the shadow of new risks—but these can be minimised by a culture of responsible innovation, especially in fields like biotech, advanced AI and geoengineering. The thrust of the world’s technological effort needs redirection, yes, but we can be technological optimists.

Yet the intractable politics and sociology—the gap between potentialities and what actually happens—engenders pessimism. There are many problems. The emergent threat from globally-empowered mavericks is growing. The pre-eminent concern, however, is the institutional failure to plan long-term, and to plan globally. For politicians, the local trumps the global, the immediate trumps the long term. But almost all the issues set out here have to be tackled, monitored and regulated internationally. This can happen via agreements of the kind being sought in Paris on climate—or via beefed-up global agencies resembling the International Atomic Energy Agency or the World Health Organisation. But scientists who have served as government advisors have often had frustratingly little influence. Experts by themselves can’t generate political will.

Politicians are, however, influenced by their inbox, and by the press. Academics can sometimes achieve more as “outsiders” and activists—promoting their message via widely-read books, via campaigning groups, via blogging and journalism, or through political activity. If their voices are echoed and amplified by a wide public, and by the media, long-term global causes will rise on the political agenda.

And sometimes, the great religions can be our allies. The Pope’s recent Encyclical on the environment and climate was hugely welcome. The Catholic Church transcends normal political divides—there is no gainsaying its global reach, nor its durability and long-term vision, nor its focus on the world’s poor. This Pope’s message resonates in Latin America, Africa and East Asia—even, perhaps, in the US Republican party. Universities span national and political divides, too. Younger people, who may live into the twenty-second century, have wider horizons. And they care: about the environment, and about the poor. Student involvement in, for instance, “effective altruism” campaigns is burgeoning.

It would surely be shameful if we persisted in short-term policies that denied future generations a fair inheritance and left them with a more depleted and more hazardous world. Wise choices will require the effective efforts of natural scientists, environmentalists, social scientists and humanists, all guided by the knowledge of what 21st-century science can offer, and inspired by values that science alone can’t provide.

“Spaceship Earth” is hurtling through the void. Its passengers are anxious and fractious. Their life-support system is vulnerable to disruption and breakdowns. But there is too little planning, too little horizon-scanning, too little awareness of long-term risk. In the words of futurologist James Martin: “If we understand this century, our future will be magnificent. If we get it wrong, we may be at the start of a new dark age.”

The above is extracted from a lecture given in the Sheldonian Theatre to mark the tenth anniversary of the Oxford Martin School.