Spare a thought for the pollsters

No prizes are awarded to the opinion polling company that gets closest to the eventual election result, but that doesn't mean the stakes aren't high
May 20, 2005
There's no prize for the opinion poll that gets closest to the election result, but from inside a polling company the stakes seem very high. The commercial implications are nothing special – political polling makes up around 1 per cent of Mori's business these days – but there is a whole load of professional pride at stake. And from a wider perspective the accuracy of polls matters. In particular, turnout is going to be one of the deciding factors in this election – and a key influence on turnout is how close people perceive the race to be.

But while many will be obsessing over the smallest differences between polling companies, the two facts we should be most surprised about are how consistent different polls are, given the very varied ways in which they are done; and how close they've managed to get to the result in the past, given the practical difficulties of the task.

In this campaign, just about all non-internet polls (around 30 of them) have remained within the range: Conservative 32 per cent ±3 percentage points, Labour 39 per cent ±3 percentage points and Lib Dem 21 per cent ±3 percentage points. The internet polls are equally consistent, but around slightly different averages – Conservative 34 per cent, Labour 36 and Lib Dem 22. These variations – and the difference between the polls and the actual results in all recent elections except 1992 – are well within margins of error.
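As a back-of-envelope illustration of where that ±3 figure comes from: for a simple random sample of around 1,000 people (which a real quota or weighted sample is not, so treat this as a sketch only), the 95 per cent margin of error on a party's share works out at about three points:

```python
import math

def margin_of_error(share, n, z=1.96):
    """95% margin of error for one party's share in a simple random sample."""
    return z * math.sqrt(share * (1 - share) / n)

# Labour on 39 per cent in a sample of 1,000:
moe = margin_of_error(0.39, 1000)  # roughly 0.030, i.e. about three points
```

Real polls deviate from this ideal in both directions – design effects from weighting widen the true interval, while quota controls can narrow it – but three points is the conventional rule of thumb for samples of this size.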

Different companies use the internet, telephone and face-to-face interview techniques. Each has different strengths and weaknesses, and there are good reasons to think they should come up with quite different results. But data collection is only the start of the differences. Three big decisions we pollsters need to make when we get our interviews back are:

- What factors do we weight the data by?
- Do we try to account for how likely people are actually to vote?
- What do we do with those people who say they don't know, or refuse to tell us, how they will vote?

No two polling companies take exactly the same approach to all of these. And they are not just technical niceties – the decisions made here make a considerable difference to the results.

Every company applies some sort of weighting to the data – where the profile of the interview sample is made to match the profile of the population at large. There is little argument over weighting on things like sex, age and work status.

But there are heated discussions among pollsters on whether and how to weight to take account of how people have voted in the past. It seems sensible to try to do this – the one political fact we know with certainty is the proportion of the vote that each party received at the last election. But a lot of people either don't remember who they voted for, or deliberately change their voting history to show themselves in a better light. (A classic example is that of JFK: his victory in the 1960 presidential election was tiny, but the proportion who subsequently claimed to have voted for him increased throughout his term and jumped to two thirds after his assassination.)

We haven't seen such a dramatic leap in claimed Labour voters, but nevertheless the 42 per cent of voters who actually voted Labour in 2001 has now jumped to around 55 per cent. The problem is that the degree of "over-claiming" changes over time, and there is therefore no simple consistent way of applying weighting for this factor. This is why some argue against this approach.
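A minimal sketch of what past-vote weighting amounts to in practice, using the 42 versus 55 per cent Labour figures above; the Conservative and Lib Dem splits here are hypothetical, for illustration only:

```python
def past_vote_weights(sample_shares, actual_shares):
    """Cell weight for each recalled-vote group: target share / sample share."""
    return {party: actual_shares[party] / sample_shares[party]
            for party in sample_shares}

# 55 per cent of the sample recall voting Labour in 2001, against an actual
# 2001 share of 42 per cent (the Con and LD splits here are made up).
sample_recall = {"Lab": 0.55, "Con": 0.28, "LD": 0.17}
actual_2001 = {"Lab": 0.42, "Con": 0.33, "LD": 0.25}
weights = past_vote_weights(sample_recall, actual_2001)
# Labour recallers get down-weighted by a factor of 0.42 / 0.55, about 0.76
```

The argument against the approach follows directly from this sketch: the right-hand side of the ratio is known with certainty, but the left-hand side drifts as over-claiming rises and falls, so the "correction" moves around even when the underlying sample does not.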

There is similar scope for argument when we try to account for the fact that only 55-60 per cent of people will actually vote. Some companies include only those who say they are certain to vote in their published results, others give individuals a weight based on their stated likelihood, and others do not weight at all. Typically, the more stringent the criteria, the better for the Conservatives and the worse for Labour – the Conservative vote tends to be the "strongest."
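The effect of the different turnout rules can be sketched like this, with an entirely hypothetical sample in which the Conservative vote is "stronger" (more of it sits at 10-out-of-10 certainty):

```python
def vote_shares(respondents, turnout_rule):
    """Party shares, weighting each respondent by a rule applied to their
    stated likelihood of voting (0-10 scale)."""
    totals = {}
    for party, likelihood in respondents:
        totals[party] = totals.get(party, 0.0) + turnout_rule(likelihood)
    grand = sum(totals.values())
    return {party: t / grand for party, t in totals.items()}

# Hypothetical respondents as (party, certainty-to-vote on a 0-10 scale):
sample = ([("Con", 10)] * 30 + [("Con", 6)] * 5
          + [("Lab", 10)] * 25 + [("Lab", 6)] * 20)

certain_only = vote_shares(sample, lambda l: 1.0 if l == 10 else 0.0)
proportional = vote_shares(sample, lambda l: l / 10)
# Stricter filter helps the "stronger" vote:
# certain_only gives Con about 55 per cent, proportional about 47 per cent
```

The same raw interviews, run through two defensible rules, produce answers eight points apart – which is the whole argument in miniature.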

Some companies also reassign a proportion of those who say they don't know how they'll vote to the party they say they voted for last time. This adjustment has a particularly significant effect in the final polls as people become more reticent – around one in 12 of those who say they'll vote say they don't know who for or refuse to tell us. This approach was designed to deal with "shy Tories," but now in fact boosts Labour's share.
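A sketch of that reallocation, with hypothetical counts; the one-in-twelve figure above gives roughly 80 don't-knows per 1,000 voters, and the 50 per cent reallocation fraction here is an assumption for illustration, not any company's actual rule:

```python
def reallocate_dont_knows(counts, dk_by_past_vote, fraction=0.5):
    """Reassign a fraction of don't-knows to the party they recall voting for."""
    out = dict(counts)
    for party, n in dk_by_past_vote.items():
        out[party] = out.get(party, 0) + fraction * n
    total = sum(out.values())
    return {party: c / total for party, c in out.items()}

# Hypothetical counts from 1,000 voters: 920 state a preference, 80 don't know.
stated = {"Lab": 370, "Con": 310, "LD": 200, "Oth": 40}
dk_past = {"Lab": 50, "Con": 20, "LD": 10}  # don't-knows by recalled past vote
adjusted = reallocate_dont_knows(stated, dk_past, fraction=0.5)
# Labour moves from 370/920 (about 40.2%) to 395/960 (about 41.1%)
```

With more of today's don't-knows recalling a Labour past vote than a Conservative one, the old "shy Tory" fix now nudges the headline figure the other way.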

There is no demonstrably correct choice for any of these – largely because the chance to test which gets us closest only comes around once every four or five years and the drivers of voter behaviour change all the time.

But in any case, in the end, there are two key limitations that none of this tweaking can get around. Firstly, the votes cast by most of us don't matter – the election will be decided in a small number of marginal seats. All discussions of the polls assume a "uniform swing" – consistent changes in voting patterns across all constituencies – which never happens. We are looking more closely at the marginals, but no survey is large enough to provide solid predictions.

And voters' minds are changeable, some more so than others. In particular, a third of Labour voters say they may still change their mind. These are the highest figures seen in recent election history. That's why we'll be interviewing up to 9pm this (Wednesday) evening in our final prediction poll – and then again on election day as people exit the polling stations.

So spare a thought for all us pollsters this Friday – there'll be no prizes given out but at least one of us is guaranteed to lose.

Update 12.05.05

Despite all my excuses and warnings, these turned out to be the most accurate polls ever. All the final polls were well within their roughly three-percentage-point margins of error, and as the table below shows, the average error on party share was the lowest we've seen, at only 0.5 per cent.

The errors that remain still tend to favour Labour, but they are hardly worth worrying about, and in fact the adjustment that at least one polling company used to deal with any "spiral of silence" actually moved their final prediction further away from the result.

And the exit poll by Mori/NOP for the BBC and ITV got it just about spot on, predicting a majority of 66, compared with the current count of 67 (the by-election in Staffordshire South is very likely to make it exactly right).

Exit polls are not predictions in the same way as final polls, as they measure how people say they have actually voted. They are also based on much larger samples (around 18,000 interviews in this case) and mainly focused on marginal constituencies. But this election was the first where we had to attempt to factor in a significant amount of postal voting – methodologically tricky. A lot of credit should go to the academic teams (led by John Curtice, David Firth and Colin Rallings) who came up with final figures after crunching the data through their model.

So everyone's a winner – except perhaps the American pollsters, whose performance in recent presidential elections now looks even less impressive.
