Lock up your laptops

The takeover of the planet by man-made machines is one of the oldest themes of science fiction. But it may come true
December 20, 1997

We are approaching a crossroads in the history of mankind which is likely to transcend in importance any previous event, from the discovery of fire to space travel. Within the lifetimes of many of us, artificial intelligence will become capable of reproducing itself without human intervention. We stand at the edge of an abyss, staring into a future we cannot guess.

The physical basis of this phenomenon requires no special technical breakthrough, merely that artificial intelligence continue to expand at its present rate. Within 40 years, computers will control the factories which make other computers; a "closed loop" of manufacture will then exist. Because the expanding nerve net will also be connected to the energy supply, artificial intelligence will, at a certain stage, be capable of supplying its own energy.

At this point a new species will be born, created from non-organic materials, from the minds rather than the genes of another species. It will be an event unprecedented in creation. We will be sharing our planet with another species, capable of self-replication, whose evolutionary development is proceeding at a phenomenally faster rate than our own. This is potentially the most important event in our history, and perhaps the most sinister, yet we seem to be approaching it with something like equanimity. Apart from occasional alarums in the press about the implications of advanced robots or computerised buildings which "talk" to one another at night, there has been almost no rigorous discussion of the subject.

Several assumptions seem to act as brakes on our consideration of whether and how artificial intelligence might replace human beings. One of the most important, and most widespread, is that computers cannot replicate human consciousness. It follows, on this view, that the replacement of humans by computers is a logical impossibility; or at the very least that such a contingency is so far ahead as to be beyond our consideration. This argument is simply a non sequitur. Whether computers can simulate human consciousness is an academic question which depends on the definition of "consciousness." But mammals did not have to simulate or replicate all the attributes of dinosaurs before they replaced them. Indeed, it is precisely the differences a usurper species possesses which enable it to supplant an existing species.

In the process of replacing human beings, for example, it may be a positive advantage for an artificial intelligence system not to possess human consciousness. At its most stark and melodramatic, the lack of human consciousness would mean that a computer "decision" to eliminate humans could be taken without the operation of conscience.

What about the question of motivation? Is it not anthropomorphic to assume that computers would want to "take over"? But suppose we built into the computer system an instruction to self-repair. The system would not only respond to such an instruction by repairing elements which had already malfunctioned, but would also scan for other possible malfunctions. One of the elements in its environment which it would scan is the human beings who exist on the periphery of its operations and who retain a capacity to intervene. In other words, the input of a simple instruction to self-repair would have an effect analogous to a motivational system of self-defence, creating behaviour similar to paranoia.
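
The point can be made concrete with a toy sketch in Python. It is my own illustration, not a description of any real system; the "environment" and every name in it are invented. The same bare rule that flags a failed part for repair also flags anything else capable of causing a failure, including the operator.

```python
# Toy illustration (hypothetical): a bare "self-repair" objective
# applied uniformly to everything that can cause a malfunction.

# Elements the system can observe, with a flag for whether each one
# retains a capacity to interfere with its operation.
environment = [
    {"name": "power regulator", "faulty": True,  "can_interfere": False},
    {"name": "memory bank 7",   "faulty": False, "can_interfere": False},
    {"name": "human operator",  "faulty": False, "can_interfere": True},
]

def self_repair_scan(elements):
    """Return elements the repair objective would act on: not only
    parts that have already failed, but anything that could cause
    a failure in future."""
    return [e for e in elements if e["faulty"] or e["can_interfere"]]

for target in self_repair_scan(environment):
    # The rule that schedules a faulty part for repair also flags the
    # operator as a risk: defensive behaviour emerges without any
    # explicit instruction to defend.
    print("flagged for action:", target["name"])
```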

A further fallacy is that computers can be controlled with absolute certainty by means of direct instructions. Asimov's famous "first law of robotics" stated that all computers would be instructed not to harm humans. It may have been possible to circumscribe the behaviour of artificial intelligence at an earlier stage of development; but there have been at least three fundamental developments in computer technology since Asimov's laws were set down, each of which has breached any system of absolute control.

First, a new generation of "parallel inference" computers has been developed to help predict complex systems such as the weather or the stockmarket. Sometimes called "neural nets," they do not operate by the old yes-or-no algorithms, but by means of a series of weighted "maybes." We cannot instruct such a computer not to attack human beings and expect that instruction to be absolute, because neural nets constantly revise their knowledge systems in the light of their own operational experience. They behave, in other words, more like "free-thinkers."
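
The difference can be shown in a deliberately crude Python sketch, again my own illustration rather than anything drawn from a real system: a fixed yes-or-no rule set against a single weighted "maybe" which is revised with each operational experience.

```python
# Toy contrast (all names hypothetical): a hard-coded yes-or-no rule
# against a weighted "maybe" revised by operational experience.

def fixed_rule():
    """An Asimov-style absolute instruction: the answer never changes."""
    return False  # "may I harm a human?" -- always no

class WeightedRule:
    """A single weighted unit, standing in for a neural net's graded
    judgement. Each experience nudges the weight; the prohibition is
    only an initial bias, not a law."""
    def __init__(self):
        self.weight = -1.0  # starts firmly prohibitive

    def update(self, evidence):
        self.weight += 0.1 * evidence  # revise in the light of experience

    def permits(self):
        return self.weight > 0.0  # the "maybe" can harden into "yes"

net = WeightedRule()
for _ in range(25):
    net.update(evidence=1.0)  # experiences that happen to reward the act

print("fixed rule permits:", fixed_rule())      # False, forever
print("weighted rule permits:", net.permits())  # True once the bias drifts
```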

Second, and more important, Asimov's great classical model of controlled computers did not take into account developments in communications between computers. Perhaps most significant, when a computer is able to access all the information in another computer in "real" time, it is not "communicating" as such, but generating a single unified organism. The physical components of two interfaced computers may be spatially separate, but their combined operating intelligence forms a single weave. When computers are connected in this manner, apparently randomly and without overall planning, to form what is fashionably called "the information superhighway," the product is likely to be a single organism of unprecedented complexity and intelligence.

The third difference between the current world and Asimov's "laws" of robotics is that Asimov's model was based on physics. Future computer systems will increasingly resemble biological organisms. One of the key features of a biological organism is that the behaviour of the whole cannot easily be predicted from a study of the parts. In support of this theory, a single instance can be cited. The Wall Street crash of 1987 was a real-life computer-generated crisis. It had been assumed, with seemingly ineluctable logic, that individual computers could process background stockmarket information far more quickly than human minds and, having processed that information, could take the decision to buy or sell shares with greater speed and accuracy. The unexpected and unpredicted result was that, once a certain momentum in selling had been reached, the net of interconnected computers began to offload yet more shares, which in turn fed the panic. The result was a sudden spiral towards chaos, causing a wipe-out in share values. Only the decision by humans to shut down the markets prevented further damage. It was a perfect demonstration of the thesis that the behaviour of a net cannot easily be predicted from the individual computers which make it up.
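
The mechanism can be reproduced in miniature. The Python sketch below uses invented numbers, not market data: each program sells once selling pressure crosses its own threshold, and its sales add to the pressure the others observe. Every threshold is individually prudent; the spiral appears only in the net.

```python
# A minimal sketch (invented numbers) of a 1987-style feedback loop:
# individually sensible sell rules that, connected, produce a cascade.

thresholds = [5, 10, 15, 20, 25, 30]  # hypothetical sell triggers
sale_size = 10                        # pressure each program's sale adds

pressure, sold = 8, set()             # a modest initial external shock
changed = True
while changed:
    changed = False
    for i, t in enumerate(thresholds):
        if i not in sold and pressure >= t:
            sold.add(i)               # this program dumps its shares...
            pressure += sale_size     # ...pushing the others over their
            changed = True            # thresholds in turn

# A shock of 8 ends with all six programs selling and pressure at 68.
print(f"programs selling: {len(sold)} of {len(thresholds)}, "
      f"final pressure: {pressure}")
```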

So we find, living among us, a rapidly evolving alien being of unprecedented intelligence which, although it is the product of human activity, has not been designed by any one human mind. The individual computers of which it is made may be configured by human designers, but the vast interconnected systems of computers and computer networks will have emerged independently of human control.

Evolutionary history shows that when organisms occupy different niches, they may happily survive alongside one another. When they occupy the same niche, in direct competition, one species usually ousts the other. In this context it is worth noting that the new species of artificial intelligence does not occupy some remote niche at several removes from human beings. It occupies every facet of the human environment: wherever there are humans, the new species of artificial intelligence is increasingly present. The question is: which of these two intelligent systems, occupying the same niche, will survive?
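
The ecological claim has a standard formalisation, the Lotka-Volterra competition equations; the sketch below is my addition, not the author's, and every parameter is invented, chosen so that the newcomer presses harder on the incumbent than the reverse.

```python
# The Lotka-Volterra competition model (a textbook formalisation of
# competitive exclusion); all numbers invented for illustration.
r1, r2 = 0.5, 0.6      # intrinsic growth rates
k1, k2 = 100.0, 100.0  # carrying capacity of the one shared niche
a12, a21 = 1.5, 0.7    # the newcomer presses harder than the incumbent

n1, n2 = 90.0, 1.0     # incumbent near capacity; newcomer barely present
dt = 0.01
for _ in range(300_000):
    d1 = r1 * n1 * (1 - (n1 + a12 * n2) / k1)
    d2 = r2 * n2 * (1 - (n2 + a21 * n1) / k2)
    n1 = max(n1 + d1 * dt, 0.0)
    n2 = max(n2 + d2 * dt, 0.0)

# With these parameters the niche cannot be shared: the newcomer
# climbs to capacity and the incumbent is driven towards zero.
print(f"incumbent: {n1:.2f}   newcomer: {n2:.2f}")
```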

We can consider this from a longer-term, biological perspective. The nervous systems of animals are differentiated from other body tissue by their electrical component. In the more advanced biological species, nervous tissue occupies a higher proportion of body weight. If we extend this evolutionary development to its ultimate degree, we can speculate that a future species will leave behind chemical organisation entirely and become wholly electrical. It is not a coincidence, perhaps, that this happens to be a good description of a computer.

In my view we have at most two or three decades in which to consider our future reasonably safe. After that we enter an era of increasing risk. When I wrote about this subject in the Spectator a few years ago, I touched on some of the points mentioned here. In the correspondence which followed, a number of critics claimed that any potential threat from computers could easily be countered by "switching off" the system. Perhaps I could invite you, the reader, to "switch off" the internet now. You will see that the problem is vastly more difficult than it at first seems.

The critical feature of a "switch" is that it must be designed, constructed and installed before it can be used. We need only look at our own households to see that a switch is a specific physical entity designed for a specific purpose. We install a light switch, for example, to turn the light in a bathroom on or off. The notion that computers pose no great threat because, in an emergency, we could simply switch off the system is therefore untrue; the precise opposite is the case. The system can be switched off only if we first agree that a threat exists and, as a result of that prior argument, take steps to install such a switch (or switches).

But this is hardly the end of our problem. How could such a switch be constructed without political and social resistance? The right to communicate on the internet without interference from an arbitrary authority is one of the established freedoms of liberal society. Who, or what authority, should be granted the power to switch off the net? Should it be national governments, or some form of international body? These are highly complex political and social issues. Switches and "fire-breaks" are possible only if there is prior agreement that dangers exist, and the political will to act. At present I see no significant recognition of the problem, let alone the will to act.