Far from leading to better results, cost-benefit analysis too often provides a bogus rationale for bad decisions

by John Kay / March 8, 2019
Published in the April 2019 issue of Prospect Magazine
John Kay on the uses and abuses of cost-benefit analysis. Photo: Prospect composite

Is cost-benefit analysis a useful tool of policy that has transformed the quality of public decision-making? Or is it a pseudo-scientific cover for irrational decisions? The Trump moment is an unfortunate time for Cass Sunstein, the distinguished American constitutional lawyer, to publish a treatise arguing in favour of the first proposition. In the last 50 years, he explains, the United States has undergone a revolution. “No gun was fired. No lives were lost… Nonetheless, it happened.” As a result, he asserts, “in terms of saving money and saving lives, the cost-benefit revolution has produced immeasurable improvements.”

If this was a revolution, it was one most of those living through it missed. Throughout history people have totted up the pluses and minuses of public and private decisions. Benjamin Franklin advocated “prudential algebra.” Charles Darwin famously listed the pros and cons of marriage. What makes cost-benefit analysis distinctive is the presumption that everything that might be pertinent to the decision can be quantified, by drawing on ideas such as “opportunity cost” (what the resources or time might otherwise be used for), externalities (advantages and disadvantages that are not captured in the market price) and discounting (the extent to which upfront advantages are given more value than those further down the road).

According to Sunstein, this “revolution” began, improbably enough, with Ronald Reagan, and continues up to the present incumbent of the White House. For him it is an exclusively American revolution. In one of his few references to the world outside the US, Sunstein explains that he had the privilege of speaking to high-level policy advisers in three European countries, who were drawn to the idea of cost-benefit analysis but were “puzzled and a bit sceptical.” They worried about the difficulties of applying these insights outside the US.
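The discounting idea mentioned above can be made concrete with a short sketch. All the figures here are invented for illustration, not taken from any actual appraisal: discounting at rate r values a benefit received t years from now at 1/(1+r)^t of its face value, and a project's net present value is its discounted benefits minus its discounted costs.

```python
# Illustrative net-present-value calculation of the kind used in
# cost-benefit appraisal. All figures are hypothetical.

def present_value(amount, years_ahead, discount_rate):
    """Value today of `amount` received `years_ahead` years from now."""
    return amount / (1 + discount_rate) ** years_ahead

def net_present_value(costs_by_year, benefits_by_year, discount_rate):
    """Discounted benefits minus discounted costs over the project's life."""
    return sum(
        present_value(benefit - cost, year, discount_rate)
        for year, (cost, benefit)
        in enumerate(zip(costs_by_year, benefits_by_year))
    )

# A hypothetical project costing 100 up front and yielding 30 a year
# for five years, discounted at an illustrative 3.5 per cent:
costs = [100, 0, 0, 0, 0, 0]
benefits = [0, 30, 30, 30, 30, 30]

npv = net_present_value(costs, benefits, discount_rate=0.035)
print(round(npv, 1))  # positive, so the appraisal favours the project
```

The undiscounted surplus is 50; discounting shrinks it to roughly 35, which is why the choice of discount rate can decide whether a long-lived project appears worthwhile at all.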
These policy advisers were presumably unaware of the EU requirement that all proposals with significant economic, social or environmental effects be accompanied by an “impact assessment.” These assessments are typically in cost-benefit form, which perhaps tells us a lot about their practical insignificance.

But my perception of the whole field is rather different. There was a cost-benefit revolution—and it began in Britain in the 1960s. Cost-benefit analysis around the world still follows a template first set out in a 1962 study by Michael Beesley and Christopher Foster of the construction of the Victoria Line in London. They demonstrated that although the line would generate little additional revenue, the value created in time savings and reduced congestion, above and below ground, far exceeded the costs. Fifty years later, there can be no doubt that they were right. The all-in cost of less than £100m, equivalent to a little over £1bn today, seems a very low price for what is now an indispensable part of the capital’s transport infrastructure.

The most thorough cost-benefit analysis ever undertaken was commissioned by the 1968 Roskill Commission. Headed by a distinguished judge, the Commission supervised a dispassionate inquiry into London’s airport needs. The analysis recommended in 1971 that London should have a new airport at Cublington, about 40 miles northwest of the metropolis. The recommendation was not adopted. Politicians procrastinated—and have continued to procrastinate for the last 50 years, with results familiar to any user of Heathrow. The Commission came up with the right answer. But, like the abortive revolution of 1968, the cost-benefit revolution came and went. There has since been a decline, not just in Britain but around the world, in the integrity of official data and analysis.
When I worked at the Institute for Fiscal Studies (IFS) in the early 1980s, it was difficult to persuade politicians or the press that our data was of a quality equivalent to that produced by government. Now no one would take government data seriously if the independent IFS had different figures. The IFS has enhanced its reputation over 30 years, but the larger cause is the decline in the impartiality of official information.

In Britain the rot began under Thatcher with the redefinition of the unemployment figures to present a misleadingly favourable impression, but it has continued under all subsequent administrations. I recall my genuine shock the day I heard a civil servant present encouraging trends in the number of rough sleepers in London, and realised that I could no longer accept that—or any—official claim without checking it myself. Recognising this lack of credibility, Gordon Brown established a Statistics Authority in 2008 and defined a category of “national statistics,” prepared to professional standards; for other government data the requirements would be less demanding. The need for the distinction is itself illuminating.

Nor does the US experience look much better. Reagan could hardly be said to have been committed to hard data and rigorous analysis. Sometimes it was hard to tell where the movie ended and real life began. Still, he demonstrated an integrity that was not reproduced in Karl Rove’s description of George W Bush’s tenure: “we create our own reality.” And today the US has a president who openly disdains the importance of factual accuracy. This combines with an extreme partisanship in which truth is determined not by what is said, but by who says it. If you ask an American for his or her view on climate change, abortion, gun control or the Affordable Care Act, you can usually predict the answers to all from the response to any one. These responses are the product of tribal identification rather than evidence.
Far from being the beneficiary of a cost-benefit revolution, Trump’s America is clearly not a place where policy is the product of rational consideration.

The situation in Britain is similar. The abiding image of the 2016 Brexit referendum is the bus decorated with the lie that leaving the EU would release £350m a week for the NHS. But the Remain side was hardly better. George Osborne posed in front of a poster proclaiming that a Leave vote would cost each British household £4,300 per year—a figure discredited by its ludicrous precision. A claim to knowledge that the speaker could not possibly have is perhaps less reprehensible than outright falsehood, but not by much. Evidence-based policy is the mantra, but the reality is policy-based evidence.

Take as an example HS2, the projected rail link between London and Birmingham and—perhaps eventually—the north. The legacy of Beesley, Foster and Roskill survives in debased form in WebTAG, a cost-benefit model which, in conjunction with the Treasury’s Green Book on project appraisal, the Department for Transport requires to be used for all major UK transport projects. The proponents of HS2 commissioned a study that demonstrated the benefits of the project; consultants for the opponents used the same methodology to reach the opposite conclusion.

In the world of WebTAG, an individual’s time is given a monetary value depending on the mode of transport by which he or she travels. There are 13 such modes, each of them separately but peculiarly priced. The time of a taxi passenger is worth £13.57 per hour, whereas the taxi driver’s time is considered less valuable, at £9.94 per hour. Hedge fund managers walking to work and journalists cycling to their offices share a time value of £7.69 per hour, but the Deliveroo courier on her motorbike shares a time value of £13.57 with the taxi passenger.
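The mechanics being described—monetising each traveller's saved minutes at a rate fixed by mode of travel—can be sketched in a few lines. The per-hour rates are the figures quoted above; everything else (the journey counts, the minutes saved) is invented purely for illustration and is not drawn from WebTAG itself.

```python
# Sketch of the value-of-time arithmetic behind appraisal models such as
# WebTAG: time saved by a scheme is converted into money at a per-hour
# rate that depends on the traveller's mode of transport. The rates are
# those quoted in the text; the trip figures below are hypothetical.

VALUE_OF_TIME_PER_HOUR = {          # pounds per hour
    "taxi_passenger": 13.57,
    "taxi_driver": 9.94,
    "walker": 7.69,
    "cyclist": 7.69,
}

def annual_time_benefit(mode, trips_per_year, minutes_saved_per_trip):
    """Monetised value of the time a scheme saves one group of travellers."""
    hours_saved = trips_per_year * minutes_saved_per_trip / 60
    return hours_saved * VALUE_OF_TIME_PER_HOUR[mode]

# 200,000 hypothetical taxi-passenger trips, each five minutes quicker:
benefit = annual_time_benefit("taxi_passenger", 200_000, 5)
print(f"£{benefit:,.0f}")
```

Note what the sketch makes plain: the headline “benefit” is simply minutes multiplied by a tariff, so the entire result turns on who is assumed to be travelling and what their hour is declared to be worth.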
Absurd as it might seem to put an arbitrary value on time in the present, the model, even more implausibly, projects this level of precision into the future. Growth projections yield predictions of how valuable the time of each group will be in 2052, to the penny. If you would like to know how many people will be travelling in a car on a weekday evening in 2036, the WebTAG spreadsheet will provide an answer. By making up all the numbers.

***

The central claim of cost-benefit analysis is that the value of everything can be expressed in monetary terms, enabling every project to be assigned a cash measure of benefit or disbenefit. The Roskill Commission acknowledged that the Norman church at Stewkley might have to be demolished to make way for Cublington’s runways; absurdly, it estimated the loss principally by reference to its insurance value of £50,000.

Sunstein illustrates his approach with the example of removing toxins from the water supply. Individuals might be willing to pay $90 to remove a 1 in 100,000 chance of death from water poisoning, consistent with a value of $9m for a human life. Remarkably, this is what Sunstein regards as an “easy” case: we can just ask people how much they are willing to have added to their water bill. But almost everyone who is not an economist is understandably reluctant to put a value on human life—either their own or other people’s.

Ford’s Pinto car had a fuel tank that was liable to explode in a collision. The company was excoriated for a calculation showing that the cost of making the car safer was greater than the value of the lives thereby saved. As it later emerged, though, the purpose of the company’s calculation was not to determine the design but to demonstrate the absurdity of formulating the problem this way. Discouraging smoking imposes substantial net costs on the public purse, because the additional state pensions paid out as a result of increased life expectancy far exceed the costs of NHS treatment of smoking-related diseases.
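Sunstein's “easy” case rests on a single piece of arithmetic: dividing willingness to pay by the size of the risk removed gives the implied value of a statistical life. A minimal sketch of that calculation, using the figures from his example:

```python
# The implied value-of-a-statistical-life calculation behind Sunstein's
# water-supply example: willingness to pay divided by the risk removed.

def implied_value_of_life(willingness_to_pay, risk_reduction):
    """E.g. paying $90 to remove a 1-in-100,000 chance of death."""
    return willingness_to_pay / risk_reduction

print(f"${implied_value_of_life(90, 1 / 100_000):,.0f}")  # $9,000,000
```

The arithmetic is trivial; the article's objection is to everything packed into its inputs—the assumption that a survey answer about $90 on a water bill is a meaningful measure of anything at all.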
But does anyone really think this sort of calculation is appropriate? “How much would you pay to eliminate a 1 in 100,000 risk of death?” is not a question that relates to how ordinary people think, and it is foolish to attach any weight to the answers. If the value of a Norman church can’t be measured simply by looking at its insurance value, it is also hopeless to ask how much every individual would pay for the church to be spared from demolition, and irrelevant to compute what it would cost to provide a replacement place of worship.

But in Sunstein’s world cost-benefit analysis should be applied to all regulation. How much would you pay to have Muslim immigrants excluded from the US? How much would you, a Muslim, pay to be allowed to enter the US? Amartya Sen once constructed the paradox of the Paretian liberal. If Mr Prude would rather read a pornographic book himself than allow Mr Lewd to read it, and Mr Lewd is sufficiently tickled by the idea of Mr Prude being forced to read it to prefer that outcome to the opportunity to read it himself, then cost-benefit analysis tells us that only Mr Prude should read the book. But does anyone think that this outcome is the right answer?

What is wrong with these exercises is not just that they make economists seem deserving targets for Oscar Wilde’s characterisation of the cynic as a “man who knows the price of everything and the value of nothing.” It is not even that they appear to have no real impact on policy decisions—even the exemplary Victoria Line study was undertaken only once the line was already under construction. The more serious problem is that, by claiming falsely that all projects can be judged against some standard template, the endless invention of bogus numbers gets in the way of informed judgments that identify the key economic factors—factors that will necessarily differ substantially according to the nature of the decision.
For example, the value to passengers of faster journeys is key to an assessment of the economic benefit of a high-speed line to Birmingham. Since WebTAG depends crucially on the value of time, this discussion has descended to the ridiculous question of whether business people with laptops will be using their time productively on trains. Here it might help to look instead at places like west Holland or Kent, or at the existing competition between train companies running lines between Birmingham and London—where travellers have the options of regular trains, faster intercity trains and high-speed trains, at successively higher fares. But much more important is the question of whether rapid links between the capital and provincial cities help regional development, or contribute further to centralisation. There is precious little data to draw on here that will not require careful—and debatable—judgments to interpret.

We have never been in more need of a properly evidence-based policy process. But Sunstein’s “cost-benefit revolution” has made matters worse, not better. Prudential algebra was always a cover for decisions made on other grounds: Franklin wrote, “so convenient a thing it is to be a reasonable creature, since it enables one to find or make a reason for everything one has a mind to do.” And cost-benefit analysis today offers a bogus rationale for bad decisions—like HS2—made by reference to soundbites, prejudices and gut feelings. What is required is policy advisers who are quantitatively trained and properly sceptical, but also skilled in identifying the relevant economic issues and presenting them to the public. The Brexit debate revolves around the differences between a single market and a customs union, a free trade area and the rules of the World Trade Organisation, Norway minus and Canada plus—as well as the different relationships each crystallises.
But insofar as we heard any economic argument during the referendum, it consisted of the exchange of unfounded numerical assertions. It was only after the result that any of the substantive choices entered public discourse. It would be encouraging, if perhaps optimistic, to think that a rerun of the debate might be different. Economics can be immensely helpful in framing arguments. But it is all too often employed instead in the unproductive exchange of spurious statistics.