The fathers of behavioural economics: Amos Tversky, left, who died in 1996, and Daniel Kahneman, who won a Nobel Prize in 2002

Behavioural economics: did Kahneman and Tversky change the world?

Not as much as Michael Lewis thinks
January 16, 2017

Michael Lewis is an outstanding writer. All his successful books have a characteristic template: the elucidation of complex subjects through the story of larger-than-life individuals. Lewis’s first book, Liar’s Poker (1989), gave an early insight into the growing influence of financial services, but is most memorable for its portrayal of the grotesque figures who stalked the trading floor of Wall Street investment bank Salomon Brothers. In Moneyball (2003), he described how Oakland Athletics general manager Billy Beane transformed baseball by elevating statistical analysis over conventional sporting wisdom. The Big Short (2010) explained a key element of the 2008 financial crisis—the meltdown in the US “sub-prime” housing market—through the actions of a few misfits and nerds who saw through the orthodox analysis.

His new book, The Undoing Project, follows the same model, but with much less success. Its subtitle is “a friendship that changed the world.” The friends are two Israeli-American academics, Daniel Kahneman and Amos Tversky, and their achievement was to create the subject of behavioural economics. The problem, however, is that Kahneman and Tversky are not sufficient as characters to carry the story, and the claim that their work “changed the world” is a gross exaggeration. Indeed, the continuing failure of economics to grapple with the way that real human beings behave dogged the discipline right up to 2008—and beyond. In January, the Bank of England’s chief economist, Andy Haldane, warned that economics’ continuing failure as a predictive science invited the sort of derision once heaped on weatherman Michael Fish for failing to foresee 1987’s great storm.




The Undoing Project: A Friendship that Changed the World by Michael Lewis (Penguin, £25)




It is no criticism of the heroic pair to observe that they are not as interesting as the guys you loved to hate—like John Gutfreund and John Meriwether in Liar’s Poker—or struggled to love—like The Big Short’s Steve Eisman and Michael Burry. Kahneman and Tversky have their own stories. Kahneman was a boy in wartime France: his family fled from Paris to the south and survived by remaining unobtrusive, emigrating to Palestine shortly before the State of Israel came into being in 1948. Tversky was the son of Russian Jews who left the USSR in the 1920s. Both were too young to take part in Israel’s war of independence, but their subsequent careers were punctuated by military service and their part in the conflicts of 1956, 1967 and 1973.

Kahneman comes across here just as he does in real life—a normal, well-adjusted human being with a fairly conventional academic career. He would doubtless have been as uncomfortable as anyone else in the company of hedge fund managers planning to short the US mortgage market. Tversky is characterised here as the elusive genius of the pair, but his story was cut tragically short: he died suddenly in 1996 and hence failed to share the Nobel Prize for Economics, which was awarded to Kahneman in 2002. Nine years later, Kahneman published his unlikely bestseller, Thinking, Fast and Slow, on the subject of behavioural economics.

So why did an experimental psychologist win this most prestigious of prizes in economics? Since Paul Samuelson’s Foundations of Economic Analysis, published in 1947, mainstream economics has focused on an axiomatic approach to rational behaviour. The overriding requirement is for consistency of choice: if A is chosen when B is available, B will never be selected when A is available. If choices are consistent in this sense, their outcomes can be described as the result of optimisation in the light of a well-defined preference ordering.
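For readers who want that consistency requirement stated formally, it is in essence what the literature calls the weak axiom of revealed preference; the notation below is mine rather than Samuelson’s or Lewis’s.

```latex
% Consistency of choice (the weak axiom of revealed preference), as described in the text:
% if A is ever chosen from a menu that also contains B, then B is never chosen from any menu containing A.
\[
A = c(S),\ B \in S
\quad\Longrightarrow\quad
c(T) \neq B \ \text{ for every menu } T \text{ with } A \in T,
\]
% where c(S) denotes the option selected from the menu S. Choices that are consistent in this
% sense can be represented as maximisation of a single, well-defined preference ordering.
```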

In an impressive feat of marketing, economists appropriated the term “rationality” to describe conformity with these axioms. Such consistency is not, however, the everyday meaning of rationality; it is not rational, though it is consistent, to maintain the belief that there are fairies at the bottom of the garden in spite of all evidence to the contrary. And consistency is hard to assess in an uncertain and constantly changing world, in which there is no objective basis on which to decide whether two situations are the same, or different. I am consistent, but you are stubborn; I am flexible, but you are inconsistent.

The axioms of rational choice were applied not just to consumer decisions, but to judgments of risk. Almost immediately, the French economist Maurice Allais provided illustrations of plausible behaviours which violated the axioms of “rational choice.” But the reaction of most economists at the time was to note the anomaly and carry on—as has been the reaction of most economists ever since. In the 1970s, however, Kahneman and Tversky began research that documented extensive inconsistency with those rational choice axioms.
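Lewis does not reproduce Allais’s example, but the standard textbook version of the 1953 “Allais paradox” shows what a violation looks like; the payoffs below are the conventional illustration rather than figures taken from the book, and a line of algebra shows why the common pattern of answers cannot be squared with the axioms.

```latex
% The standard textbook version of the Allais gambles (not reproduced in Lewis's book):
%   Choice 1:  A = 1m for certain              vs  B = 5m w.p. 0.10, 1m w.p. 0.89, 0 w.p. 0.01
%   Choice 2:  C = 1m w.p. 0.11, 0 w.p. 0.89   vs  D = 5m w.p. 0.10, 0 w.p. 0.90
% Most respondents choose A over B and D over C. Under expected utility maximisation,
\[
A \succ B \;\Rightarrow\; 0.11\,u(1) > 0.10\,u(5) + 0.01\,u(0),
\qquad
D \succ C \;\Rightarrow\; 0.10\,u(5) + 0.01\,u(0) > 0.11\,u(1),
\]
% so no utility function u can rationalise both choices: the common pattern of answers
% violates the axioms of "rational choice."
```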

Most people who are not economists, and perhaps even some who are, would find it surprising that so much attention should have been paid to the derivation of propositions about economic behaviour from axioms, yet so little to examining what economic agents actually did. Ronald Coase, another winner of the Nobel Prize, liked to cite the observation (attributed to the Welsh economist Ely Devons) that “suppose an economist wanted to study the horse, what would he do? He would go to his study and ask himself, ‘What would I do if I were a horse?’”

So Kahneman and Tversky went down to the stables. Well, not exactly. What they did, as is common practice in experimental psychology, was to set puzzles to small groups of students. The students often came up with what the economics of rational choice would describe as the “wrong” answer. These failures of the predictions of the theory clearly demand an explanation. But Lewis—like many others who have written about behavioural economics—does not progress far beyond compiling a list of these so-called “irrationalities.”

This taxonomic approach fails to address crucial issues. Is rational choice theory intended to be positive—a description of how people do in fact behave—or normative—a recommendation as to how they should behave? Since few people would wish to be labelled irrational, the appropriation of the term “rationality” conflates these perspectives from the outset. Do the observations of allegedly persistent irrationality represent a wide-ranging attack on the quality of human decision-making—or a critique of the economist’s concept of rationality? The normal assumption of economists is the former; the failure of observation to correspond with theory identifies a problem in the world, not a problem in the model. Kahneman and Tversky broadly subscribe to that position; their claim is that people—persistently—make stupid mistakes.

Take, for example, the famous “Linda Problem.” As Kahneman frames it: “Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Which of the following is more likely? ‘Linda is a bank teller,’ ‘Linda is a bank teller and is active in the feminist movement.’”

The common answer is the second alternative—that Linda is more likely to be a feminist bank teller than simply a bank teller. To Kahneman and Tversky this answer is plainly wrong, because the rules of probability state that the probability of a conjunction of two events cannot exceed the probability of either event on its own. But to the horror of Kahneman and his colleagues, many people continue to assert that the second description is the more likely even after their “error” is pointed out.
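The rule being appealed to is the conjunction rule of elementary probability, which follows in one line from the fact that a joint event is contained in each of its components:

```latex
% The conjunction rule: a joint event cannot be more probable than either of its components,
% because the joint event is contained in each of them.
\[
(A \cap B) \subseteq A
\quad\Longrightarrow\quad
\Pr(A \cap B) \;\le\; \Pr(A),
\]
% so "bank teller and active in the feminist movement" can never be more probable than
% "bank teller" alone, whatever one believes about Linda.
```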

But it does not require knowledge of the philosopher Paul Grice’s maxims of conversation—although perhaps it helps—to understand what is going on here. The meaning of discourse depends not just on the words and phrases used, but on their context. The description that begins with Linda’s biography and ends with “Linda is a bank teller” is not, without more information, a satisfactory account. Faced with such a narrative in real life, one would seek further explanation to resolve the apparent incongruity and, absent such explanation, be reluctant to believe, far less act on, the information presented.

Kahneman and Tversky recognised that we prefer to tell stories rather than to think in terms of probability. But this should not be assumed to represent a cognitive failure. Storytelling is how we make sense of a complex world of which we often know, and understand, little. We deal with ill-defined questions which may have equally ill-defined outcomes—the “mysteries” which Gregory Treverton, the departing Chair of the US National Intelligence Council, distinguishes from “puzzles,” which are determinate problems with right and wrong answers. And the subjects of the Kahneman-Tversky experiments were confronted with puzzles—and not allowed to say that they knew too little about the context to solve them. Because there wasn’t any context to the problems posed in the experiments, or none that made sense.

Paradoxically, the appeal of a racy narrative is the essence of Michael Lewis’s own success as an author. He understands well his readers’ predilections for villains and heroes. Of course our search for persuasive stories leads us to make mistakes, because our information is imperfect. Often we are too ready to construct a narrative from insignificant information, interpret what we learn in the light of our earlier assessments, and adhere stubbornly to narratives even when further information becomes available. These issues—premature judgment, confirmation bias and halo effects, and above all the assumption that WYSIATI (what you see is all there is)—are not faults from which Lewis is wholly immune.

The essence of the Linda Problem is that what you see really is all there is, and what there is seems incoherent. In the light of that incoherence, the term “likely” is simply not applicable. The meaning of the word is, without justification, equated to a particular concept of probability embraced by—and seemingly only by—people with a particular statistical training and worldview.

So we should be wary in our interpretation of the findings of behavioural economists. The environment in which these experiments are conducted is highly artificial. A well-defined problem with an identifiable “right” answer is framed in a manner specifically designed to elucidate the “irrationality” of behaviour that the experimenter triumphantly identifies. This is a very different exercise from one which demonstrates that people make persistently bad decisions in real-world situations, where the issues are typically imperfectly defined and where it is often not clear even after the event what the best course of action would have been.

Lewis’s uncritical adulation of Kahneman and Tversky gives no credit to either of the main strands of criticism of their work. Many mainstream economists would acknowledge that people do sometimes behave irrationally, but contend that even if such irrationalities are common in the basements of psychology labs, they are sufficiently unimportant in practice to matter for the purposes of economic analysis. At worst, a few tweaks to the standard theory can restore its validity.

From another perspective, it may be argued that persistent irrationalities are perhaps not irrational at all. We cope with an uncertain world, not by attempting to describe it with models whose parameters and relevance we do not know, but by employing practical rules and procedures which seem to work well enough most of the time. The most effective writer in this camp has been the German evolutionary psychologist Gerd Gigerenzer, and the title of one of his books, Simple Heuristics That Make Us Smart, conveys the flavour of his argument. The discovery that these practical rules fail in some stylised experiments tells us little, if anything, about the overall utility of Gigerenzer’s “fast and frugal” rules of behaviour.
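Gigerenzer’s argument is easier to see with a concrete heuristic in front of us. The sketch below illustrates one of the “fast and frugal” rules described in Simple Heuristics That Make Us Smart, “take-the-best”: compare two options on cues ranked by validity and decide on the first cue that discriminates, ignoring everything else. The cues, data and function names here are invented purely for illustration; they are not drawn from Gigerenzer’s book, Kahneman’s, or Lewis’s.

```python
# A minimal sketch of a Gigerenzer-style "take-the-best" heuristic: decide between two
# options using cues ranked from most to least valid, stopping at the first cue that
# discriminates. Cues, options and the example task below are invented for illustration.

def take_the_best(option_a, option_b, cues):
    """Return "A" or "B" according to the first discriminating cue, or None if tied.

    option_a, option_b: dicts mapping cue name -> 1 (cue present) or 0 (absent).
    cues: cue names ordered from most to least valid.
    """
    for cue in cues:
        a, b = option_a.get(cue, 0), option_b.get(cue, 0)
        if a != b:                      # first cue that discriminates decides
            return "A" if a > b else "B"
    return None                         # no cue discriminates: guess or defer


if __name__ == "__main__":
    # Toy task: which of two cities is larger? Judge from a few binary cues.
    city_a = {"has_major_airport": 1, "is_capital": 0, "has_top_league_team": 1}
    city_b = {"has_major_airport": 1, "is_capital": 1, "has_top_league_team": 0}
    cue_order = ["has_major_airport", "is_capital", "has_top_league_team"]

    print(take_the_best(city_a, city_b, cue_order))  # -> "B" (decided by "is_capital")
```

Gigerenzer’s claim, roughly, is that when the cue ordering is even approximately right, ignoring most of the available information costs little in accuracy and saves a great deal of effort; that is why, on his account, such rules make us smart rather than stupid.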

In Kahneman’s Thinking, Fast and Slow, references to Gigerenzer are confined to two footnotes. Lewis gives considerably more space to him in The Undoing Project, but in unflattering vein. We hear that Tversky “couldn’t mention Gigerenzer’s name without using the word ‘sleazeball.’” Apparently, Kahneman and Tversky took the view that Gigerenzer’s critique—or, in my view, extension—of their work “ignored the usual rules of intellectual warfare.”

I have not found material in Gigerenzer’s writing that could justify such a reaction. And the short final chapter of Thinking, Fast and Slow edges towards conclusions similar to those suggested by Gigerenzer’s perspective from evolutionary psychology—we persist in allegedly “irrational” behaviour because, mostly, it works well for us.

For example, is it really irrational to be hopeful and cheerful? Optimism is a characteristic that sometimes leads us to make bad decisions or take foolish risks, but it helps us get through life. And makes the world a more innovative, as well as more agreeable, place. Kahneman seems to acknowledge the social utility of optimism when he titles the relevant chapter of his book “The Engine of Capitalism.” And, of course, a positive attitude to life makes us more attractive to potential mates, the property that evolution selects for.

Perhaps it is significant that I have heard some mainstream economists dismiss the work of Kahneman in terms not very different from those in which Kahneman reportedly dismisses the work of Gigerenzer. An economic mainstream has come into being in which rational choice modelling has become an ideology rather than an empirical claim about the best ways of explaining the world, and those who dissent are considered not just wrong, but ignorant or malign. An outcome in which people shout at each other from inside their own self-referential communities is not conducive to constructive discourse.

It is important that economists observe how people really make decisions about consumption and risk, in business and in finance, and do not simply assert that their behaviour must conform to certain pre-defined axioms. Observing the puzzle-solving capabilities of Princeton and Stanford students in controlled experiments is one approach, but not the only one, nor necessarily the best. Insight might be gained by observing what shoppers actually do when they push their trolleys round the supermarket, or by investigating how chief executives make business decisions, or studying what goes on in the trading rooms of investment banks.

So Lewis’s claim—or perhaps his publisher’s—to have described “a friendship that changed the world” is overblown. (The US publisher tones this subtitle down to the intriguingly ambiguous “a friendship that changed our minds”—changed our minds, or changed our minds about how we describe our minds?) Behavioural economics has not changed the world. It has barely begun to change the way we think about the world. But it should. Economists, and perhaps social scientists more generally, should devote more effort to inductive reasoning from observation, and less to deductive reasoning from axioms. “It takes a model to beat a model” is a mantra among economists. But the slogan is false. It takes an observation to beat a model. And Lewis does at least provide a list of observations that cast substantial doubt on conventional models, even as he fails to point us towards alternatives.
