If there is to be another election, those in politics, the media and wider society need to become more careful and sceptical consumers of polls
by Will Jennings / September 6, 2017
The performance of the opinion polls at the 2017 general election was, much like in 2015, 'sub-optimal', as they say.
This was a highly unusual election in terms of polling, though. It was unusual firstly because of the size of the surge in Labour support during the campaign—which exceeded the change in voters’ intentions in all other elections since 1945. It was also unusual because—seemingly unnoticed by most—the polls actually got the Conservative vote share right for once!
The historical tendency had been for the polls to under-estimate Conservative support and over-state Labour's. For once this did not happen: instead, Labour's vote was under-estimated, by 5 percentage points on average. In fixing an old problem, the pollsters discovered a new one: cranking the methodological levers a little too hard and depressing the Labour vote share. By contrast, the UKIP vote was over-estimated quite substantially—something which has to date received little attention, despite its potential importance for the final result.
It was the combined polling error on the Conservative-Labour lead that caused the surprise when the exit poll was released at 10pm on Election Night. In historical perspective that error was not out of the ordinary—it had been larger at the general elections of 1951, 1970, October 1974, 1992, 1997 and 2015. What mattered was that it critically tilted the balance of seats in parliament: Theresa May's Conservatives failed to win a majority, undermining the entire reason for calling the election.
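The arithmetic of an error on the lead is worth spelling out: because the lead is the difference between two vote shares, errors that run in opposite directions for the two parties compound. A minimal sketch, using the article's own figures (Conservative share roughly right, Labour under-estimated by about five points):

```python
def lead_error(con_error: float, lab_error: float) -> float:
    """Error on the Con-Lab lead, where each input is (polled share - actual share)."""
    return con_error - lab_error

# The article's figures: Conservatives about right, Labour ~5 points too low.
print(lead_error(0.0, -5.0))  # -> 5.0: the polls over-stated the Con-Lab lead by 5 points
```

Even with one party's share exactly right, the miss on the other party flows straight through to the headline lead.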
Putting the exit poll in context
Compared to other countries, the historical record of polling in Britain is in fact relatively good—especially given that majoritarian, two-party systems tend to experience higher polling errors. This is largely down to the 'margin of error' being greater for parties receiving a larger share of the vote (it is widest at 50 per cent)—which tends to favour polling in countries with more fragmented electoral competition. The poll errors at the 2015 and 2017 UK elections are towards the upper end of recent international elections, but still need to be put in context.
For instance, while the average error in the first round of the 2016 French presidential election was much lower, the four leading candidates each received in the region of 20 per cent of the vote, meaning that in pure statistical terms the error should have been lower. Notably, the French polls were substantially off in the run-off, exceeding the poll error at the 2017 UK election—contrary to the conventional wisdom.
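The statistical point can be made concrete with the textbook margin-of-error formula, 1.96 × √(p(1−p)/n). This is a minimal sketch assuming a simple random sample of 1,000 respondents (real polls use quotas and weighting, so their true uncertainty is larger than this idealised figure):

```python
from math import sqrt

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a vote share p from a simple random sample of n."""
    return z * sqrt(p * (1 - p) / n)

n = 1000  # a typical poll sample size
for share in (0.20, 0.45):  # a French first-round candidate vs a UK major party
    print(f"share {share:.0%}: +/- {margin_of_error(share, n) * 100:.1f} points")
```

A party on 45 per cent carries a margin of roughly ±3.1 points against ±2.5 for a candidate on 20 per cent, so a four-way race at ~20 per cent each should, all else equal, produce smaller errors than a two-party contest.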
Recent polling 'misses' have prompted some to suggest that polling globally is in crisis—with more volatile electorates making it more difficult to get the result right. These claims have been fuelled by surprise results in the US and the UK (twice), which upset the conventional wisdom. But is this true?
Same as it ever was
Based on a study of over a thousand polls for over two hundred elections in more than thirty countries since 1942, there is no evidence that polling errors have increased over time—or have suffered a sudden and existential crisis in the face of the populist turn of electorates in some countries. In many ways, this is remarkable given that ‘polling’ has evolved through many methodological forms in this period, from face-to-face random samples, to telephone polling, to online panels using a wide range of weighting and adjustment techniques.
While the pollsters may have good and bad days at the office, there is nothing to suggest there has been a fundamental change in the ability of polls to measure voting intentions across the world. Each election throws up new and distinct challenges, but the basic principles of political polling remain the same.
The press gang
Arguably the biggest challenge that polling faces in the 21st century is how it is presented in the media and understood in wider society. Polls feed our desire to know the future, so invariably they are attention-grabbing. Yet their limits and uncertainties—although hardly a secret—are quickly forgotten in the rush to report them.
A good example is the popular belief that the 2016 US presidential election was a major polling “miss.” While the national polls did slightly over-state Hillary Clinton’s share of the vote, the size of the error was fairly typical by historical standards. The shock and surprise at Donald Trump’s victory resulted in large part from polls in key states that suggested he faced major obstacles to winning the Electoral College. On top of this, the probabilities being assigned to a Trump victory by forecasters—not pollsters—suggested it was a one in ten chance.
While few of us would cross the road if there were a one-in-ten chance of being hit by a bus, this probability was widely interpreted as making a Trump victory a near-impossible event. And yet Trump did win.
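One way to see why 'one in ten' is far from impossible is to simulate many elections in which the trailing candidate has a 10 per cent chance of winning. The 10 per cent figure is the forecasters' number quoted above; the simulation itself is just coin-flipping:

```python
import random

random.seed(2017)  # fixed seed so the illustration is reproducible
trials = 10_000
upsets = sum(random.random() < 0.10 for _ in range(trials))
print(f"Upset rate: {upsets / trials:.1%}")  # roughly one election in ten
```

An event at those odds happens about once in every ten elections—uncommon, but nothing like "near impossible".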
Telling the correct story
In the same way, the 2017 UK polls told essentially the correct story: that the Conservatives would win a greater share of the vote (they did) and that Labour hugely increased its support as the campaign went on (they did).
While the closeness of the election came as a surprise, it hardly should have done after the experience of 2015. Anyone digesting the polls with just a little caution would, therefore, have recognised that a Labour over-performance was at least possible.
Instead, the conventional wisdom was that Corbyn was an electoral disaster waiting to happen and that Labour could not do better than the polls suggested.
The media plays a critical role in perpetuating the conventional wisdom about the state of public opinion, or the extent of polling misses at past elections. Yet the quality of reporting of polls varies wildly—with TV broadcasters tending to be cautious in their use of polls, and most broadsheets being fairly judicious. Certain tabloids have extremely poor track records in how polls are reported. Recent examples include publishing stories based on unrepresentative straw-polls and using questionable 'polls' based on methods that are mysterious at best, as was the case ahead of the Stoke-on-Trent Central by-election in 2017.
For the polling industry, the challenge in future will be to learn from recent elections without assuming that what worked (or didn’t work) in 2017 will necessarily work next time. Electorates change and so do survey respondents, and as such pollsters must hit a moving target with tools that are rarely tested.
More broadly, those in politics, the media and wider society need to become more careful and sceptical consumers of polls, at the same time as recognising they can provide important insights into the state of public opinion and support for political parties.