The surveillance capitalism of today's tech giants has radically reshaped what it means to be human, but that doesn't mean it's invincible

by Joanna Kavenna / June 7, 2019
One morning, in a possible future. You are woken by your clothes vibrating. You ask them politely to stop. Some mornings you want to swear at your clothes, but you know that everything you say is logged by interconnected appliances and swearing may impair your verified user status. This, in turn, could affect your job prospects, credit rating and your ability to support your family.
Suppressing your emotions, you go into the kitchen, where your fridge says a hearty good morning and tells you that you are nearly out of milk. Yesterday, for some reason, it ordered a tonne of alfalfa and quinoa. A band around your wrist suggests your blood sugar is low. Also, you drank too much wine last night, the fridge adds in an admonitory tone. Protestations that it was your brother’s birthday are of no interest to this immaculate device.
Despite the concerns of your fridge, you skip breakfast and hurry to your car. Unfortunately, your car tells you that it won’t start because you have failed to top up the oil, despite several warnings. You try to start the engine anyway but nothing happens. Realising that you are extremely late—which will also be logged and, oh yes, applied to your verified user status—you swear loudly. Your verified user status is adjusted down. You find yourself apologising to the car.
It doesn’t reply.
You wonder if it is offended.
This scenario sounds fictional and yet, according to Shoshana Zuboff, it is almost upon us. Zuboff is a professor emerita at Harvard Business School and the author of a previous study of the automation of workplaces, In the Age of the Smart Machine, written as far back as 1988. Her latest book, The Age of Surveillance Capitalism, is a brilliantly written, passionate and humane work that stands alongside the work of tech-savvy sceptics such as Peter Pomerantsev, Evgeny Morozov and Francesca Bria.
Her book is concerned with “the darkening of the digital dream and its rapid mutation [into] surveillance capitalism.” This version of capitalism “unilaterally claims human experience as free raw material for translation into behavioural data.” These data are fed back into the system to improve our “user experience” or sold to advertisers to ensure that sites like Google, Facebook and Tinder remain free for users. However, Zuboff explains, the data are also “fabricated into prediction products that anticipate what you will do now, soon, and later.” Prediction products become a lot more attractive if their predictions are accurate—so surveillance capitalists “nudge, coax, tune, and herd behaviour towards profitable outcomes.” This is a shift of enormous significance.
Zuboff describes surveillance capitalism, a term she coined in 2014, as a “new breed of economic power.” It was only in 2004 that Facebook was founded; in the same year Google’s Gmail was launched, which scanned supposedly private correspondence to generate advertising. Google co-founders Larry Page and Sergey Brin realised that such targeted advertising required deep knowledge of the people who used their products. Indeed, it required the transformation of such users from customers who bought products into the product that tech companies could sell to advertisers—“the sources of raw-material supply,” as Zuboff puts it.
In that utopian year thrilling phrases abounded: “the open internet,” “connectivity,” “the cyber-commons.” It was hoped that the open web would be a force for positive social transformation, allowing widespread access to a global store of knowledge. While we surfed avidly, though, we were also becoming increasingly knowable to those who cared to know us. It’s taken a long time for this to be widely understood. How many of us were aware, in 2004, that our emails were not private? Zuboff suggests that surveillance capitalists, including Google and Facebook, have been secretive about these practices and—when necessary—have lobbied against proposed regulation. As Edward Snowden has revealed, the security services formed murky alliances with tech corporations, further eroding our privacy.
Page has suggested that the aim of his company is to become “almost automagical because we understand what you want and can deliver it instantly.” For Zuboff this process is like a “one-way mirror,” in which we know nothing about those who know everything about us, and are watched and controlled by “an exclusive data priesthood among whom Google is the übermensch.” The political effects are already being felt. The 2008 US presidential election was one of the earliest in which a political consultant could confidently announce: “We knew who… people were going to vote for before they decided.” The use of targeted political advertising on Facebook helped to tip the 2016 election in Donald Trump’s favour.
More is to come. The ultimate aim of this “digital omniscience” is to convert our lives into “a pervasive everywhere augmented reality environment… that can be intuitively browsed,” says Professor Joseph Paradiso of the MIT Media Lab. As information streams directly into our ears and eyes, “the boundaries of the individual will be very blurry.” In the so-called Internet of Things (IoT), according to an anonymous senior systems architect quoted by Zuboff, everything will be connected: “It could be your liver. That’s your IoT. The next step is what we do with the data. We’ll visualise it, make sense of it, and monetise it. That’s our IoT.”
These galvanising prophecies fail, however, to contend with questions of ownership. Who owns your liver, or my liver, and who owns the data that is “monetised” from our livers? Furthermore, who owns our most intimate conversations? A recent article in Bloomberg explains that Amazon employs thousands of people to transcribe and annotate voice recordings captured in Echo owners’ homes and offices. Amazon explains: “We use your requests to Alexa to train our speech recognition and natural language understanding systems.” As well as requests to Alexa, however, they also pick up people singing in the shower, arguments, the screams of children—even sounds that indicate sexual assaults. Who owns this “data”? What else will be done with it?
In 2015, a start-up called Realeyes won a €3.6m grant from the European Commission for a project code-named “SEWA: Automatic Sentiment Analysis in the Wild.” This seeks to develop automated technology that discerns how people are feeling as they view content—scouring our faces, voices, gestures and bodies to discern our non-conscious responses and emotions. Zuboff writes: “There can be no shadow, no darkness. The unknown is intolerable. The solitary is forbidden.” Yet: “What if I don’t want my life streaming through your senses? Who knows? Who decides? Who decides who decides?”
As Zuboff contends, all of this radically disrupts the idea of individual autonomy. In such a reality, where do you reside? Who are you anyway? Are you the master or the slave? The lord or subject? Is your house really your home when you are constantly being observed and nudged? For corporations such as Google, Facebook and Apple any uncertainty is profit-damaging chaos. Free humans are the unpredictable elements in what for them should be a perfect feedback system. To iron that out, human behaviour must be significantly modified. “Friction is to be reduced,” writes Zuboff; the world must run smoothly. Telemetry, or automated communications, can convert any home into a “safe, AI-driven place.” “Safe,” adds Zuboff, means “anomaly free.” And yet individuals are by definition anomalies. Furthermore, can we accurately call a technology “predictive” when it requires individual choice to be limited to the paths it has itself ordained?
Free will and determinism are venerable philosophical, scientific and theological notions: how might we be free if God or biology compel us to do something? How might we know our desires when we can only think in languages we have learned? Surveillance capitalism and its project of behavioural modification add major questions to this debate. Citing Sartre’s dictum that “freedom is nothing but the existence of our will,” Zuboff argues that “experience is not what is given to me but rather what I make of it… No matter how much is taken from me, this inward freedom to create meaning remains my ultimate sanctuary.” Sartre also wrote that it was not enough “to will”; one must “will to will”—to project choice into the world.
The question is not merely whether we have the right to choose but whether some hidden system chooses what’s in front of us, and therefore what we perceive. If we are only permitted to perceive certain aspects of reality—curated in line with suppositions about our personalities and desires—then we move within a falsified environment. Other choices may be expunged for the greatest good or for our own good, or for the good of someone who is paying for our choices to be restricted in their favour. Or our choices may be restricted in line with social norms, or subjective terms that are mistaken for objective classifications. The danger is we will be stuck in a feedback loop, closing off the possibilities of change.
Microsoft recently applied for a patent for a device that monitors user behaviour in order to preemptively detect “any deviation from normal or acceptable behaviour that is likely to affect the user’s mental state.” Once again, though, who decides what is “normal or acceptable behaviour”? Who is the judge? Our choices in the world have always been influenced by social conventions, financial necessity, fear of injury, family ties, advertising, political propaganda, social ordinance and so on. But Zuboff makes the point, persuasively, that never has there been so much information available about us, and never has it been so possible to direct our enterprises in the world.
Although China is developing a tech-totalitarianism that combines mass surveillance with disappearances, random arrests and murders, Zuboff confines her focus, mainly, to western surveillance capitalism. In her view, Brin, Page and Mark Zuckerberg of Facebook are sui generis utopians. Surveillance capitalism “is not a coup d’état in the classic sense but a coup des gens”—not the state being overthrown but the very idea of what it means to be a person. It is an ideological project, and not an inevitable consequence of scientific progress.
Zuboff quotes MIT scientist Alex Pentland—“the godfather of wearables” such as Google Glass—dismissing the question of free will as a regressive notion from the 18th century: “I don’t think we are individuals… We are a social species.” Zuckerberg has previously explained that people with integrity only have one self; there is no need to distinguish between the public and private self anymore, no need for privacy at all. Yet for Zuboff this is precisely what is needed: “inward experience from which we form the will to will,” homes that are sanctuaries, and—most of all—uncertainty as “the necessary habitat of the present tense.”
Zuboff manages somehow to be unnerving and inspiring at the same time. We do not know the future. We are anomalous because no human is quite like any other. We do not entirely know ourselves, and this means we are also, to a degree, unknowable to others. This might well mean that the behavioural modification/surveillance capitalist project is ultimately doomed—as with so many enterprises of prescriptive utopianism. The most sophisticated data-gathering devices can’t entirely understand our innermost thoughts, suppressed desires and inchoate dreams, nor fully gauge the ambiguous depths of the human personality. Therefore the self as a concept, and the destiny of each individual self, remains fascinatingly uncertain.