Digest

Peter Kellner examines the reasons why the pollsters so badly underestimated the Tory vote in the 1992 election and asks whether it could happen again.
May 19, 1996

British Journalism Review

Spring 1996

Every single poll on election morning, 9th April 1992, predicted a hung parliament. Using very similar methods, they offered very similar figures, ranging from a Labour lead of 3 points (NOP) to a Conservative lead of 0.5 points (Gallup). Never before had the polls been so far from the true result: a Tory victory by 8 points.

According to a subsequent inquiry by the Market Research Society, the polls made a series of small errors which had the effect of inflating Labour's rating and reducing that of the Tories. The inquiry claimed that none of these errors were individually large; what was deadly was their cumulative impact.

Since 1992, pollsters have agreed about one specific failing at the last election, and disagreed about almost everything else. They agree that their samples contained too few middle-class ABC1s and too many working-class C2DEs. All the polling organisations have increased their proportion of ABC1s from 41-43 per cent to 47-48 per cent, and ICM, NOP and MORI have also adjusted both their samples and their final figures. But some problems remain.

Apart from its monthly quota polls for the Sunday Times, NOP conducts three random polls each month, in which interviewers are given the names and addresses of specific electors drawn at random, and are required to track down as many of them as possible over a seven-day period. Between September and December, these polls found that the Conservatives enjoyed a 5-point lead over Labour among AB voters (professional and managerial workers and their spouses), who comprise around 20 per cent of the electorate. Yet NOP, MORI and Gallup quota polls all showed Labour ahead among these voters. In NOP and MORI polls the leads were generally small; but Gallup showed Labour ahead among ABs by 10-15 points.

My own hunch is that when interviewers conduct quota polls, they tend to target the "easier" AB voters; they conduct street interviews with people who commute by train, and in-home interviews in reasonably well-to-do streets. They have less enthusiasm for remote villages and for houses which are widely spaced and separated from the road by gravel drives. So they find too few of the richest, mainly Tory AB voters, and too many of those who work in the relatively less Tory worlds of education, health, media, law, the arts and public administration. That problem does not apply to random polls, whose interviewers must descend on distant hamlets and trudge up the right proportion of gravel drives, whether or not they want to.

Possible solutions are to give interviewers more precise instructions about where to find respondents, and to carry out more checks on demographic accuracy. The fact that Gallup usually produces the largest Labour lead overall may owe something to the fact that it carries out the fewest demographic checks. Thirty per cent of adults live in households with two or more cars, according to the census; but while ICM, MORI and NOP now make sure that their samples reflect this, Gallup simply collects the information and does not weight its raw data by car ownership. Its combined November polls contained only 23 per cent of respondents in households with two or more cars, and as many as 32 per cent in no-car households (from which other polls draw only 24 per cent). Gallup's samples are significantly less opulent. No wonder they normally produce large Labour leads.
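The arithmetic of weighting by car ownership can be illustrated with a short sketch. The 23 and 32 per cent sample shares and the 30 per cent census figure come from the paragraph above; everything else — the one-car shares and the within-group Labour leads — is invented for illustration, not Gallup's or any pollster's real data.

```python
# Sketch of post-stratification weighting by car ownership.
# Sample shares: Gallup's combined November polls (from the text).
# Target shares: the census gives 30% for two-plus-car households;
# the no-car and one-car targets here are illustrative assumptions.

sample_share = {"no_car": 0.32, "one_car": 0.45, "two_plus": 0.23}
target_share = {"no_car": 0.24, "one_car": 0.46, "two_plus": 0.30}

# Every respondent in a category receives weight = target / sample,
# so the weighted sample matches the target profile.
weights = {k: target_share[k] / sample_share[k] for k in sample_share}

# Invented Labour leads (in points) within each category: car-less
# households lean Labour; two-car households lean Tory.
lead = {"no_car": 20.0, "one_car": 5.0, "two_plus": -10.0}

unweighted_lead = sum(sample_share[k] * lead[k] for k in lead)
weighted_lead = sum(sample_share[k] * weights[k] * lead[k] for k in lead)
# (weighted_lead is algebraically just sum of target_share[k] * lead[k].)
```

Because the raw sample over-represents no-car households, weighting to the census profile trims Labour's lead by about two points with these invented figures — the direction of effect described above.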

At the other end of the spectrum sits ICM. It recently switched from face-to-face to telephone polls, overcoming the distant-village and gravel-drive problem, though at the price of missing non-telephone owners, who are generally poorer and more Labour-inclined than telephone owners. Even so, the switch has had little impact on ICM's adjusted figures.

The distinctive feature of ICM's headline figures is that they are adjusted for what ICM's Nick Sparrow believes was the pollsters' biggest defect in 1992: their failure to take account of the "spiral of silence." This holds that some voters are wary of admitting their party loyalty to pollsters, especially when their favoured party is subject to intense criticism and thought likely to lose. So a small, but crucial, group of Tory supporters in 1992 either refused to take part in election polls, or replied "don't know." Sparrow believes that there is one compelling piece of evidence that his rivals' figures are distorted by this "spiral of silence": when people in quota polls are asked how they voted in 1992, the results show that Neil Kinnock, not John Major, "won" the last general election. More people recall voting Labour than recall voting Tory. Sparrow argues that people recall accurately how they voted four years ago; therefore any poll that finds more Labour recalls than Tory recalls has a distorted sample.

But an unpublished exercise by MORI in 1994 casts doubt on a vital element in Sparrow's argument: his belief that people recall accurately how they voted last time. On 6th-7th May 1994 MORI re-interviewed 368 electors it had first contacted just a fortnight before. On both occasions MORI asked people how they had voted in the 1992 general election. No fewer than 53 respondents (15 per cent of the total) gave different answers. Nine people switched from Tory-recall; 11 switched from Labour. Two people who insisted in the first poll that they had voted Labour decided in the second poll that they had not voted at all. Five people who said they did not vote in the first poll were firmly in the Labour-recall group by the second poll.

There is an alternative explanation for the recall figures. Some people take politics seriously and have firm loyalties. But for others, voting is a peripheral part of their lives; they have no firm party attachments. We should not be at all surprised that a sizeable minority have no stable memory of what they did. If Labour is riding high, the Tories riding low and the Lib-Dems out of the news, it becomes a racing certainty that Labour's recall figures in any properly representative survey will be too high, while those of its rivals will be too low. That does not mean the "spiral of silence" theory is entirely wrong. All pollsters find that people who say they "don't know" how they would vote are disproportionately people who recall voting Tory in 1992.
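One way a pollster can act on that finding is to reallocate some "don't knows" to the party they recall voting for in 1992. A minimal sketch, in which every figure is invented — the headline shares, the recall split among don't-knows, and the 0.6 reallocation fraction are my assumptions, not any polling company's published method:

```python
# Hypothetical sketch of a "spiral of silence" adjustment: move a
# fraction of "don't know" respondents back to the party they recall
# voting for in 1992. All figures below are invented.

# Headline shares among those naming a party (as fractions of the
# whole sample), plus 15% "don't knows".
named = {"con": 0.30, "lab": 0.40, "lib": 0.15}
dont_know = 0.15

# 1992 recall among the "don't knows": disproportionately Tory,
# as the text says all pollsters find.
dk_recall = {"con": 0.55, "lab": 0.25, "lib": 0.20}

# Assume this fraction of don't-knows will, in the end, vote as they
# did in 1992 (0.6 is an invented knob, not a published figure).
REALLOCATE = 0.6

adjusted = {
    p: named[p] + REALLOCATE * dont_know * dk_recall[p] for p in named
}
total = sum(adjusted.values())
adjusted = {p: v / total for p, v in adjusted.items()}  # renormalise

raw_lead = (named["lab"] - named["con"]) / sum(named.values()) * 100
adj_lead = (adjusted["lab"] - adjusted["con"]) * 100
```

With these invented numbers the reallocation trims Labour's lead by roughly four points — the same direction of effect as the published adjustments, though the size depends entirely on the assumed figures.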

Is there an answer? Nick Moon of NOP has tried to ensure that its surveys take account of the latest demographic data. NOP also weights its raw data to bring its recall-of-vote figures into line with its random polls. Finally, it makes a small "spiral of silence" adjustment to allow for the continuing evidence that its "don't knows" tend to have a Tory bias. The combined effect of NOP's adjustments is usually to reduce Labour's lead, as published in the Sunday Times, by between 4 and 6 points. Has NOP got it right? Nobody can be sure. Research and experimentation will continue, by NOP and the other pollsters. Gallup is considering switching from quota to random polls for its regular surveys. If it does, I expect the changeover to produce a sharp reduction in the Telegraph's headline figure for Labour's lead; if it sticks with quota polls, I expect it to review its demographic weights. If all these things are done, then the current diversity (Gallup and ICM have differed by as much as 23 points on Labour's lead) will subside. But if the disputes and differences continue, then journalists and politicians alike must prepare themselves for a bumpy and bewildering ride at the next election.