The growing influence of technology in the workplace is making many professional jobs redundant

By John McDermott / March 27, 2014
Baxter the robot: experts have suggested that “as machines become smarter, the less opportunity there is for humans to complement them” ©Matt Rourke/AP/PA Images
Soon it might be robots that write articles about the future. If the boldest predictions made about technology turn out to be true, humans like you would still read stories but humans like me would not research, analyse, interview, transcribe, scribble, plan, draft, edit, rewrite, sub-edit and dispatch. Artificial intelligence will have rendered the inadequacy of my intelligence all too real.
Narrative Science wants to make this happen as quickly as possible. The Chicago-based company’s algorithms are already used to preview companies’ quarterly earnings and report on sports games. It is a huge leap from writing pithy accounts of data-rich occurrences to penning more complex articles. But Narrative Science is thinking big; it is one of many businesses aiming to make obsolete even the more cognitive of professional jobs.
The idea that smart machines could displace much of the human workforce sounds like a prospect from science fiction. And yet science fiction is not fantasy. Visiting the World’s Fair in Queens, New York, in 1964, the author Isaac Asimov speculated on what the world would look like half a century hence.
Reflecting on the jamboree, Asimov wrote in the New York Times that in 2014, “much effort will be put into the designing of vehicles with ‘Robot-brains’”; “communications will become sight-sound”; and that “mankind will therefore have become largely a race of machine tenders.” Today we have driverless cars, Skype and the modern office replete with personal computers.
In The Second Machine Age, a hotly debated new book, Erik Brynjolfsson and Andrew McAfee, professors at the Massachusetts Institute of Technology, begin almost where Asimov’s predictions end. The doyen of sci-fi forecast that for all the innovations of the second half of the 20th century, “Robots will neither be common nor very good in 2014, but they will be in existence.”
If Asimov had been writing about 2010, Brynjolfsson and McAfee would have said he was prescient. A few years ago, robotics, artificial intelligence (AI) and data analysis were “laughably bad.” Then they “suddenly got very good.” The main argument of The Second Machine Age is that technology is at an “inflection point.” We are about to see its profound consequences, the MIT pair says.
Brynjolfsson and McAfee’s World’s Fair consists of the testing centres, laboratories and servers of America’s technology companies. The two authors ride in Google’s driverless car, which has clocked more than 100,000 miles on real roads without a crash. They meet Watson, the IBM supercomputer that defeated all before it on Jeopardy!, the American quiz show. They marvel at progress in the use of “big data” and how rapidly improving speech recognition augurs the maturation of artificial intelligence.
Then they meet Baxter, a manufacturing robot developed by a company based in Massachusetts. In robotics and AI, “Moravec’s paradox”, devised by the scientist Hans Moravec, states that higher-level human activities are easier to mimic than basic functions. It is easier, for example, to create an algorithm that performs thousands of advanced mathematical calculations in a few seconds than one that can make up a simple story. (This is the challenge faced by Narrative Science.) Robots build cars and tablets but they cannot make the bed or tend the garden better than humans. But Baxter shows how we are “making inroads” against Moravec’s paradox, according to Brynjolfsson and McAfee.
He, or rather “it,” is a humanoid: the manufacturer, Rethink Robotics, has given Baxter an iPad for a “face” and it has mechanical arms which humans can manipulate. Crucially, Baxter then “remembers” those movements and can repeat them without taking a coffee break or complaining about pay. Brynjolfsson and McAfee say that Baxter is another sign that technology is “on the cusp of exploding.”
Baxter is cool. It looks friendly enough, but is hardly Asimovian. It is therefore understandable that sceptical economists and technologists have questioned the breathless techno-optimism of Brynjolfsson and McAfee.
Robert Gordon is one of the pair’s most trenchant critics (see p34). The Northwestern University professor says that: “A central problem in assessing the optimistic view is that there is rarely any examination of the past.” His problem starts with the identity of the “second machine age.”
Brynjolfsson and McAfee compare today’s technology to the steam engine, the symbol of the first machine age. For Gordon, this amounts to a sleight of a robotic hand. He is right that there is no straight line of inheritance from James Watt to Sergey Brin. But Gordon also argues that if the second machine age is to live up to its hype, it will need to match the epochal changes brought about by what economic historians typically call the “second industrial revolution.”
Three “general purpose technologies”—rare innovations that transform not only one industry but the entire economy—were developed within a few months of each other in 1879. Thomas Edison invented the first properly working light bulb, Karl Benz built the first reliable internal combustion engine and, two decades before Marconi, David Edward Hughes sent a wireless signal. Along with huge improvements in public health, these inventions meant that by the time of the Great Depression, the rich world had been transformed.
“Everything happened at once,” Gordon writes in a paper published in the US in February by the National Bureau of Economic Research. The second industrial revolution was “multidimensional.” The internal combustion engine meant cars and thus motorways, which led to wholesale distribution networks. Electricity meant light, air-conditioned offices and the service economy. US productivity grew by an average of 2.36 per cent a year from 1891 to 1972.
In contrast, Gordon says, the computer revolution of the past 40 years has been “one-dimensional,” despite barcodes, cash machines, personal computers, the internet and mobile phones. There was a burst of productivity in the 1990s, as the price of processors and memory plummeted and there was a surge in IT investment, but this was “unprecedented and never repeated.” Average annual US productivity growth from 1972 to 2012 was relatively weak at 1.59 per cent.
In effect, Gordon looks at the second industrial revolution and the more recent history of information technology and says to the techno-optimists: “beat that.” He accuses Brynjolfsson and McAfee of a “haze of incantations” about gadgets. Driverless cars are “so minor compared to the invention of the car itself.” Big data is used by companies in a “zero-sum game” for customers. And robots? “They can think but can’t walk, or they can walk but can’t think.”
Gordon has support. Peter Thiel, the PayPal founder and billionaire investor, says that innovation is slowing. He argues that technology has become a narrow field, meeting consumer needs rather than developing general purpose technologies, citing “the relative stagnation of energy, transportation, space, materials, agriculture and medicine.” A statement at the entrance to Thiel’s fund reads: “we wanted flying cars, instead we got 140 characters.”
Techno-curmudgeons such as Gordon and Thiel are right to ask tough questions of Brynjolfsson and McAfee. Ironically, the MIT professors’ fêted treatise on the future is a rather old-fashioned economics book. Although it includes statistics and charts, the argument is built through anecdote, theory and extrapolation. No one should mistake it for a crystal ball.
Nevertheless, as Asimov showed, we can make accurate guesses based on the present. Even if Baxter and its friends cannot outdo the internal combustion engine—and the burden of proof remains upon the likes of Brynjolfsson and McAfee—there are good reasons to listen carefully to the techno-optimists.
In part, this is because there is more than one relevant conclusion to draw from the second industrial revolution. There was a lag of about three decades between the three great inventions of 1879 and an acceleration in productivity growth, as academics such as Chad Syverson and Paul David have shown. The commercial application of inventions takes time; companies need to change processes; younger tech-savvy workers must replace older employees.
The productivity data might also be underestimating the impact of the computer revolution. The digitisation of goods has meant that their cost of reproduction is essentially zero. There is a large amount of consumer surplus that probably goes unmeasured by traditional economic metrics. Put another way, we now have lots of free stuff that we would pay for. This abundance does not always show up in gross domestic product.
Gordon’s account, like many on this subject, is also squarely focused on the US. He gives insufficient attention to the potential for global collaboration in science, innovation and technology in the rest of the 21st century. The recombining of existing ideas and the widespread availability of data are major sources of invention—and we have more minds working together than ever.
Most importantly, the sceptical view does not engage with the main argument of the techno-optimists: the quality of very recent advances in machine intelligence, data analysis and robotics driven by the exponential growth of computing power. This is without mentioning fields such as genetics, pharmacology, clean energy and nanotechnology, which Gordon neglects.
Ideas that were failures only a few years ago now work. Of course, we cannot know for certain that Gordon is too pessimistic, but this progress is why Paul Krugman says that, “my gut feeling is that Bob, while making a persuasive case, is probably wrong.” In 2004, dozens of driverless cars crashed in the desert while competing for a prize from the Defense Advanced Research Projects Agency. Eight years later, the state of Nevada gave the self-driving Google car a licence. Algorithms are doing the work of traders, journalists, lawyers and other professionals. Even Siri, the much-maligned iPhone assistant, is getting better.
Jerry Kaplan, an entrepreneur and professor of AI at Stanford University, put it bluntly: “People don’t understand it, they don’t get what it’s going to mean,” he told the Financial Times. “I feel like one of the early guys warning about global warming.”
At a HSBC call centre in Leeds ©Chris Ratcliffe/Bloomberg via Getty Images
Likening technological change to climate change is an imperfect comparison. The former is transforming the world for the better, the latter quite the opposite. Nevertheless, they share an important implication: even if we cannot predict what will happen when, we should still prepare for extreme events. For even if the future promised by the techno-optimists does not emerge, or takes longer than they expect, the underlying dynamic should remain the same.
It must be stressed that the benefits of the machine age could be vast, bringing millions out of poverty across the world. More people will be able to access digital goods for free. Things that were once scarce could soon be in abundance. Statistical analysis will improve disease diagnosis and encourage education tailored to individual pupils. We will feel overloaded with information but there are opportunities for better decisions and greater freedom.
And if techno-optimists such as Tyler Cowen are right, “it is likely that new technologies emerging will lead us out of… the great stagnation,” the term the economist uses to describe the past 40 years of stagnant American real wages.
However, machine intelligence is likely to increase inequalities of income, wealth and probably opportunity, too. Productivity improvements, even if they don’t match the extraordinary gains of the second industrial revolution, would bring “bounty,” as Brynjolfsson and McAfee call it, but also an unequal “spread.”
The clearest way this will manifest itself is via the labour market. Cowen predicts that “average is over”: employment will be divided into two categories, a minority of very high-paying jobs and a majority of low-paying jobs. Cowen invites you to ask yourself questions such as “Are you good at working with intelligent machines or not?” and “Are your skills a complement to the skills of the computer, or is the computer doing better without you?”
David Autor, an economist at MIT, uses a two-by-two diagram to depict the interaction between jobs and technology. Imagine a square window with four panes. The middle vertical line runs from “cognitive” at the top to “manual” at the bottom. The middle horizontal line runs from “routine” on the left to “non-routine” on the right. A routine manual job such as on an assembly line is in the bottom left pane and a non-routine cognitive job such as a mechanical engineer is in the top right pane—and so on.

Over the past four decades the darkest panes have been the ones on the left, the routine jobs. When economists think of what is valuable they think about what is scarce. In the past few decades, there has been no shortage of unskilled routine labour; manufacturing has shrunk and machinery can do jobs that previously took dozens of workers; software has replaced service sector work done by typists and tax preparers.
The new machine age is set to darken the remaining panes in Autor’s diagram. That, at least, is the conclusion of “The Future of Employment,” a paper published in September by Carl Frey and Michael Osborne. The displacement process will soon encompass non-routine jobs, according to the Oxford academics.
Frey and Osborne say that 47 per cent of jobs in the US are at “high risk” of computerisation in the next two decades. Jobs classified as “high risk” include: credit analysts, cooks, geological technicians, crane operators, chauffeurs, cartographers, real estate agents, baggage porters and, ironically, semiconductor processors. Writers should be OK, apparently, having (rather appropriately) the same odds of replacement as veterinarians.
Is your job safe? If it is a mostly manual job, there is a lower chance of it being replaced by a computer in the next 20 years if it requires “perception and manipulation” skills. Plumbers visit hundreds of houses in a single year, each requiring unique fiddly work. Baxter would struggle. Yet wholesale workers, for example, are in trouble. Companies such as Amazon design their warehouses to be large, wide and predictable, well suited to robots a bit like Baxter.
More cognitive workers will thrive to the extent that their work requires what Frey and Osborne call “creative” and “social” intelligence. If your job involves the development of novel ideas, or if it depends on a lot of human interaction and empathy, then there is less chance of it being displaced, they say. Jobs at low risk include: psychologists, curators, personal trainers, archaeologists, marketers, public relations workers, most engineers, surgeons and fashion designers.
Unlike the previous wave of technological change, this one could erode professions. Lawyers, for example, are threatened by cheap notary software and by algorithms that can read thousands of pages much quicker than a clerk. Thousands of the jobs in the City of London are susceptible to technology that makes irrelevant their middlemen functions or shows up their maths skills.
The Frey-Osborne model is crude. Humans and technology are not engaged in a zero-sum game. Cowen uses the metaphor of freestyle chess, games in which human teams use computers to compete against each other. Ever since IBM’s chess-playing computer Deep Blue, software has been able to outsmart grandmasters, but the most successful freestyle teams are those combining human creativity and raw machine intelligence. The same goes for the broader future of work.
Nevertheless, some soothsayers predict that this time is different. Technologists such as Martin Ford and economists such as Robin Hanson argue that as machines become smarter, the less opportunity there is for humans to complement them. They contend that once the upfront cost of a robot is met, it works for free; there is no wage low enough that a human could offer the boss.
In “Economic Possibilities for Our Grandchildren” (1930), Keynes made a similar argument for “technological unemployment,” or “the discovery of means of economising the use of labour outrunning the pace at which we can find new uses for labour.” Keynes worried about the short-term effects of this dynamic but ultimately thought it meant progress towards a richer, more leisurely society. Ford disagrees. He argues that without work there is no income; when there are no jobs, who will buy what the robots are making?
Ford’s dystopian future may prove correct in the very long term but it seems improbable in the next few decades. The recessions in the US and the UK accelerated the decline of routine jobs but employment growth has proved resilient. This suggests that flexible labour markets are, at least for the moment, defying Ford’s thesis. Ford also underestimates the power of professional bodies to resist change in their work.
More importantly, the dystopian thesis represents a form of “lump of labour” fallacy: the idea that there is a set number of jobs in the economy. Machines, like skilled immigrant labour, should lead to what Keynes called “a temporary phase of maladjustment” but ultimately native workers should benefit. Work is not static. There are still opportunities for labour to complement machine intelligence and new jobs that we cannot yet imagine will be created.
Then there is the question of how the returns to capital created by machine intelligence will be spent. This consequence is less cheery. When I once asked Cowen whether it is better to learn code or Mandarin, he said neither—marketing would be a superior option. In his book Average is Over, he adds: “Making high earners feel better in just about every part of their lives will be a major source of job growth in the future.”
Like Brynjolfsson and McAfee, Cowen is optimistic about the potential of technology. But, rightly, he is far less sanguine about inequality and what will happen in a world of machine intelligence: “This is not a world where everyone is going to feel comfortable… The world will look much more unfair and much less equal,” he writes in Average is Over, “and indeed it will be.”
Cowen’s account strikes me as more realistic about the downsides of the new machine age than Brynjolfsson and McAfee’s. Nevertheless, both techno-optimist accounts tend to overplay the role of technology in driving inequality, and to underplay the response required. “There will be no harm in making mild preparations for our destiny,” Keynes reflected in his 1930 essay.
The debate about technology cannot take place in isolation from the other trends that are shaping our economies. Although science fiction might suggest otherwise, robots and other innovations are not singular forces that appear out of nowhere. Their impact will be transmitted through the institutions built by humans. We should not be fatalistic; we can have the good and reduce the bad. We only need to look at the past decade to see that technology is not destiny. Robots do not regulate or deregulate finance, set tax rates, directly determine the power of unions or calibrate the responses to recessions. They do not depress demand, reduce public investment or build corporate cash piles.
US economists such as Cowen argue that since the recession the labour market has been indicating trends that are likely to grow—participation is falling and median wages continue to stagnate. In the UK, it is harder to make that case: median earnings have not risen in real terms since 2004/5 but participation remains high. Demand has been depressed and there has been capital shallowing due to insufficient investment; if anything, there has been too little sign of the robots. And in a forthcoming paper, Frey and Osborne find that 36 per cent of UK jobs are at “high risk” of computerisation, a lower share than in the US. Behind the vogueish discussion about technology is perhaps a more important trend: the declining share of income going to labour rather than capital.
And without a political response, the type of technological change discussed by Brynjolfsson and McAfee would only further such a divergence. In a world where there are huge returns to the owners of machine intelligence and global winner-takes-all markets for digital products, this trend would be encouraged. Capital-biased technological change could therefore be every bit as important as skill-biased technological change to inequality in the “second machine age.” In Britain, where the property market is creating a rentier society and growing inequalities of wealth across regions and generations, this matters greatly; it risks becoming harder and harder to work one’s way towards the top of society.
The techno-optimist response to the prospect of rising inequality is to emphasise the importance of education. “Teach the children well,” is the first piece of advice Brynjolfsson and McAfee offer in advance of the machine age. This is perfectly sensible. So, too, is supporting start-ups and “real-time databases” to match employers and employees, two other ideas plucked from the zeitgeist. But given the epic scale of the change they predict, their response is tiny and naive, as if greater equality were only an app download away.
Cowen’s call for a “new social contract” for the digital age is more like it. But the deal offered by the professor of economics is far from enticing for the majority of people. “We will move from a society based on the pretense that everyone is given an OK standard of living to one in which people are expected to fend for themselves much more than they do now.” A “hyper-meritocracy” would, however, apparently emerge in this new world.
I think we can do better. A hyper-meritocracy seems unlikely in a society with such concentrations of income and wealth. Even if a more quantified economy allows genuine talent to be revealed more easily, this talent does not suddenly appear, waiting to be spotted by an algorithm. One presumes the top 15 per cent will have plenty of resources to ensure their children are best equipped for the new machine age.
The future can be prosperous and thrilling. Nevertheless, some “mild preparations” are essential to consider. First, turning classrooms into nodes of a “Massive Open Online Course”, or MOOC, is an efficient vision but schools are only one part of education. If the goal is to ensure that young people are cognitively equipped to complement machine intelligence then early years provision must be improved. What happens after school is also important; we can do better than aimlessly send students to redundant courses.
Second, beyond education, there is a need to resist the neo-Luddism of those who obsess over data privacy, while also making sure consumers can understand and extract the value of their data. Since we all benefit from the analysis of data in aggregate, we have a collective interest in making sure it is shared, protected and rewarded. Projects run by the Open Data Institute are already finding novel ways this can be done in the energy and banking sectors.
Third, if the new machine age continues to reward capital at the expense of labour, there will be a need to think deeply about the distribution of what is scarce. Land is one example—and the arguments for a prospective land value tax are compelling. But the ownership of capital more broadly will also matter. Are our analogue intellectual property rights too restrictive for the digital age? Is there a role for a national mutual fund to distribute the ownership of capital more widely? It is a question even Brynjolfsson and McAfee consider.
Fourth, our domestic and global tax systems will need to do more to reward work and progressively tax wealth. The idea of a basic guaranteed income has gained popularity among some wonks: when there is no income to be earned because the robots are taking it all, the state will need to step in. But we are not there yet and we should not give up on work too easily. Better would be to gradually move towards a tax system that narrowed the labour to capital divergence, incentivising employment in the process.
Finally, we must arrive at a less hysterical approach to the robots business. Technological change is not new. It can, as Ernest Hemingway said of bankruptcy, appear gradually, then suddenly. But this is no excuse for neo-Luddism or for fatalism. Our culture should be one that embraces technological change and remembers that it is an exclusively human ability to make moral decisions.
In his 1964 article on the World’s Fair, Asimov worried that “mankind will suffer badly from the disease of boredom, a disease spreading more widely each year and growing in intensity.” This was a rare wrong guess: the world carries more potential for more people than at any point in human history. But another of the writer’s prophecies should serve as a warning. Asimov predicted that in 2014, “the lucky few who can be involved in creative work of any sort will be the true elite of mankind, for they alone will do more than serve a machine.” In enjoying the bounty of progress from our new overlords, we should remember who is really the boss.