Are we masters of technology or has it mastered us, asks Will Self
by Will Self / September 15, 2016
Written by Norbert Wiener, an MIT mathematician, and published in 1948, Cybernetics: or Control and Communication in the Animal and the Machine is the book which first brought the term “cybernetics” to public attention. Synthesised from the Ancient Greek kubernan (meaning to steer, navigate or govern) the coinage has resonated ever since, giving rise to all sorts of odd, cyber-prefixed neologisms—my personal favourite being the chain of American-style confectioners dubbed Cybercandy. Wiener, a famously eccentric character, had been driven to develop an overarching theory of the machine by two vital problems that had arisen during the Second World War.
The first was the need for an automated system that would allow British anti-aircraft gunners to hit German bombers—and by extension make it possible for any gunner to hit a fast and erratically moving target; and the second was the dropping of the nuclear bomb Little Boy on Hiroshima in 1945. Wiener, like many scientists of his generation, responded to the split-second incineration of 125,000 Japanese civilians with horror: he had an epiphany in which he saw a future of deadly conflict dominated—and perhaps even initiated—by sophisticated machines. But again, in common with so many scientists of the era, Wiener had already tried to bring about just such a future, by creating a machine that would massively enhance our ability to locate, aim and unerringly deliver military ordnance.
This Janus-faced—or perhaps, more properly, Manichean—inspiration was thereby encrypted into the cybernetics blueprint from the outset: on the one hand this was intended to be a general theory of how all possible—not just actual—machines might work, with a view to assisting those intent on building them. On the other hand, it was a minatory account of how interaction between humans and human-like machines might lead to the latter becoming firmly ensconced in the driving seat. Given that 2016 has already seen the first fatal accident involving a self-driving car, now might seem like the ideal time to take stock and calmly examine the last 70 years of human-machine interaction—possibly with the ulterior motive of discovering whether it's a "who" or a "what" in control.
“Like many technological advances, the ones which typified cybernetics were born out of man’s compelling desire to kill his fellow man”
On the face of it, Thomas Rid's Rise of the Machines is less ambitious: each of its seven sections takes up a particular development of the core ideas of cybernetics (automation or anarchy, for example); the main narrative vehicle travels forward, while each section loops back so as to place asynchronous developments within an overall timeline. Rid's structure, of course, owes something to one of the core concepts of cybernetics: the feedback loop.
The kinds of machines which inspired Wiener were ones that could self-correct or otherwise modify their own behaviour in response to external stimuli; prototypical forms of this were proximity fuses that used radio waves to detect the targets they were homing in on, and so detonate the shells which housed them. Equally influential was the Sperry ball turret, which placed the human gunner in the middle of a continuous feedback loop of incoming data and outgoing gunfire: with its hydraulic servomotors and machine guns, the cyclopean-looking turret was a primitive sort of cyborg, or cybernetic organism. So, like many technological advances, the ones which typified cybernetics were born out of man’s compelling desire to kill his fellow man—preferably at a distance.
Rid locates Wiener and cybernetics on the intellectual map of the mid-20th century—positioning them somewhere between the architectural modernism of Le Corbusier, who saw the house as "a machine for living," and the behaviourism of Ivan Pavlov and BF Skinner—psychologists who understood the human psyche in mechanical terms, as subject to a feedback loop which could be hijacked to reinforce certain sorts of behaviour. In fact, cybernetics, taken as a theory of the congruent behaviours of humans and sophisticated machines, seems entirely behaviouristic to me, while the enlargement of the idea to encompass man-machine interaction introduces troubling philosophic questions, such as whether any meaningful distinction can be drawn between inorganic and organic entities operating in the same way.
Rid himself writes: "By 1970, cybernetics had already peaked as a serious scholarly undertaking, and it soon began to fade. Its scientific legacy is hard to evaluate." He acknowledges that "cybernetic ideas and terms were spectacularly successful and shaped other fields: control engineering, artificial intelligence, even game theory." But he ruefully concedes that "cybernetics as a science entered a creeping demise, with therapists and sociologists increasingly filling the rolls at the American Society for Cybernetics." This might seem about as damning a judgment as is imaginable on a field of intellectual endeavour—if it were the correct way of understanding cybernetics, which I don't believe it is.
Since the inception of wireless broadband in the early 2000s there’s been an increasingly febrile climate surrounding our use and understanding of a suite of technologies I like to refer to as Bi-Directional Digital Media (BDDM). The proleptic insights of thinkers as diverse as Marshall McLuhan, Jean Baudrillard and Guy Debord into the ontological and epistemic impacts of mass mediatisation are now felt experientially by those masses: our bodies may still patrol the streets, but our minds, increasingly, are smeared across a glassy empyrean—and we feel this deep and existential queasiness, as our emotions are pulled hither and thither by the ebb and flow of massive online feedback loops: an acid reflux of imagery and data to which we’re subject 24 hours a day, 365 days a year.
Which is presumably why there has been a rash of books, of varying quality, which attempt to explain what the hell's going on—although for once, the devil really isn't in the detail, since nobody imagined signing a mobile phone contract was tantamount to becoming a cyborg. James Gleick's searching and thoughtful The Information, published in 2011, limned the origins of the current age of data; Nicholas Carr's The Shallows (2010) and The Glass Cage (2014) looked respectively at the cognitive impacts of the internet and automation. Last year saw the publication of Laurence Scott's The Four Dimensional Human, which hymns the emergent phenomenology of the BDDM realm; and this year came Greg Milner's Pinpoint, a history of the United States military's Global Positioning System, arguably the most foundational technology in the cosmic cat's cradle humanity has woven together out of the virtual and the actual. Even film director David Cronenberg got in on the act with Consumed (2014), his first novel—an exploration of contemporary anthropophagy, which seems to suggest that the mediatised mind is indeed auto-cannibalistic. This is just a small selection of the books on the broad subject; however, what struck me as I read Rid's contribution was how few references there are in these other works to cybernetics or any of the other cyber-prefixes. Indeed, it is quite possible to conceive of writing about the BDDM's impact—in the widest sense—without referring to the subject (or pseudo-subject) at all.
Rid’s book reflects this redundancy. He begins in high theoretical style, and returns to these sunny uplands at the beginning and end of his chapters, but in between he gets bogged down in all sorts of practical details; notably the development of virtual reality technologies, cyborgs and computer encryption. All of these are interesting subjects, but they’ve been covered better elsewhere—Rid wants us to understand cybernetics as the fons et origo of all the shiny, happy things we see about us and hold in our hands, but to do so we must cleave to a very crude—and frankly implausible—view of human inventiveness. The story of computer encryption is indeed fascinating and bizarre: discovered by a technical officer at GCHQ in the early 1970s, the method of public key encryption depends on the practical impossibility of factoring very large numbers—it is, if you like, the modern era’s equivalent of the Golden Mean or Pi: mathematical constants which mysteriously seem to underpin the very materiality of our world. The secret transmission of information is a vital aspect of waging war, and all the way from Bletchley Park to the US National Security Agency’s massive data centres—vast server-farms, laid out on the high plains of the American west—computing has evolved in lock-step with superpower conflict.
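The factoring principle behind public key encryption can be sketched in a few lines of toy Python (an illustration only, not the scheme as practised: the primes here are comically small, where real keys rest on primes hundreds of digits long):

```python
# Toy RSA-style sketch: the public pair (e, n) lets anyone encrypt,
# but decrypting requires d, and deriving d from (e, n) means
# factoring n back into its secret primes p and q.

p, q = 61, 53            # secret primes (toy-sized for illustration)
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # Euler's totient of n: 3120
e = 17                   # public exponent, chosen coprime to phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (Python 3.8+)

def encrypt(m: int) -> int:
    """Anyone holding the public pair (e, n) can do this."""
    return pow(m, e, n)

def decrypt(c: int) -> int:
    """Only the holder of the private exponent d can do this."""
    return pow(c, d, n)

message = 42
assert decrypt(encrypt(message)) == message
```

The point the essay gestures at is visible in the arithmetic: computing `phi`, and hence `d`, requires knowing `p` and `q`, so an eavesdropper who cannot factor `n` cannot read the traffic.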
“The devil really isn’t in the detail—nobody imagines signing a mobile phone contract was tantamount to becoming a cyborg”
Rid comes at all of this somewhat counter-intuitively through the Whole Earth Catalog, a biannual almanac of hippy survivalism published by Stewart Brand in the late 1960s and early 1970s. Rid proposes the Catalog, which was updated in response to its readers’ input, as a proto-form of the sort of computer-augmented feedback loops which have come to typify social media—and it’s true that counter-cultural types were early adopters of the internet, and formed the ranks of cyberpunks, and then so-called “cypherpunks” who battled to wrest public key encryption from the spooks, so making the web safe for… well, safe for commerce, since without it you couldn’t safely order Rid’s book on Amazon. It is, as I say, a fascinating story—but one told rather more comprehensively by Gordon Corera, the BBC’s Security Correspondent, in Intercept: the Secret History of Computers and Spies, which came out last year. Corera, unlike Rid, brings the story right up to date, covering the impact of Edward Snowden and Wikileaks, and is far more authoritative on the nature and extent of current cyber-warfare. Rid, by contrast, remains hobbled by cybernetics itself—a snare that drags him back, again and again, to the early 1970s, when the anthropologist Gregory Bateson saw in cybernetics a heuristic with which to interpret the phenomenon of consciousness itself.
Bateson’s fundamental insight did indeed build directly on Wiener’s work: for Bateson, not only were the man and the tool best considered as an integrated system, but the environment within which the man plied the tool should also be in the loop—a loop which, by extension, lassoes just about every sentient thing in the universe. “What some call God,” is how Bateson termed this collective consciousness, although you don’t have to be a devotee of psychotropic drugs in order to see how uncannily his insight prefigures our web-based psycho-social smearing; after all, while we may hypothesise that consciousness originally arose as a by-product of language-acquisition at some point in humans’ evolutionary past, in the future that’s emerging it seems increasingly to be a by-product of media. I only wish Rid had pushed these rather more philosophic speculations further. His foundational observation is that cybernetics—in keeping with Wiener’s own Manichean view—always generated imagined futures, whether utopic or dystopic.
This gives him licence to investigate the imaginings of writers such as William Gibson, as they descried a world in which—to quote, as Rid does, the great hippy novelist, Richard Brautigan—we are either “all watched over by machines of loving grace”; or alternatively one that follows Randall Jarrell’s monitory vision, in his poem “The Death of the Ball Turret Gunner.” In five spare lines Jarrell evokes a darker version of Bateson’s cosmic feedback loop, one populated by “black flak and the nightmare fighters.” Jarrell suggests that in this automated realm of death the human subject will, soon enough, be surplus to requirements; the final line reads, “When I died they washed me out of the turret with a hose.”
Believers in the “singularity”—that supersession of biological by machine-intelligence which was first named by another cybernetics-influenced sci-fi writer, Vernor Vinge—can also be split into utopian and dystopian camps. Indeed, so marked is this response to the meld of man and machine, it leads me to believe that this is fundamentally what cybernetics was all about. Far from being some sort of “top down” theory, which carved out the conceptual space within which technological innovation might take place, it was rather a sort of cybernetic system itself, one which generated these visions, sent them out into the world, and then auto-corrected them in response to our feedback.
Rid understands this perfectly well, and he does his best to keep his narrative up there in the speculative clouds—his problem is that just like the rest of us, he can’t help being distracted by all that shiny, ergonomically-designed hardware. Distracted by that—and bamboozled also by the universal human tendency to scale up from our existential and very real apprehension of individual free will, to a hazy virtualisation within which all humanity is similarly endowed. It’s this that makes most writers on technology plump for one or the other, and decide whether the rise of the machines is a good or a bad thing—as if we really had any choice in the matter.
Will Self’s latest novel is “Shark” (Bloomsbury)