The reputation of the UK polling industry has been restored following its performance in the December 2019 election.
The snap election held in the UK on 12 December 2019 was an important test for the country’s opinion pollsters. They had widely been regarded as having got it ‘wrong’ in the two previous elections in 2015 and 2017, while the record of the polls in the 2016 referendum on Britain’s membership of the European Union had also been heavily criticised. The industry could not afford another battering to its reputation – but it appears to have avoided that fate.
Ten companies conducted polls whose fieldwork continued through to at least 10 December, that is, just two days before polling day, and which thus can be regarded as ‘forecasts’ of the likely outcome. The results of these polls are shown in the table.
Table: Results of final polls of vote intention in the 2019 UK Election
| Company | Fieldwork Dates | Conservative (%) | Labour (%) | Liberal Democrat (%) | Brexit Party (%) | Greens (%) | Others (%) |
|---|---|---|---|---|---|---|---|
| BMG | 6-11.12 | 41 | 32 | 14 | 3 | 4 | 5 |
| Deltapoll | 9-11.12 | 45 | 35 | 10 | 3 | 3 | 4 |
| Ipsos MORI | 9-11.12 | 44 | 33 | 12 | 2 | 3 | 6 |
| Kantar | 9-11.12 | 44 | 32 | 13 | 3 | 3 | 5 |
| Number Cruncher Politics | 8-10.12 | 43 | 33 | 12 | 4 | 3 | 6 |
| Opinium | 10-11.12 | 45 | 33 | 12 | 2 | 2 | 6 |
| Panelbase | 10-11.12 | 43 | 34 | 11 | 4 | 3 | 5 |
| Savanta ComRes | 9-10.12 | 41 | 36 | 12 | 3 | 2 | 6 |
| Survation | 10-11.12 | 45 | 34 | 9 | 3 | 3 | 6 |
| YouGov | 4-10.12 | 43 | 34 | 12 | 3 | 3 | 5 |
| Average | | 43 | 34 | 12 | 3 | 3 | 5 |
| Result | | 45 | 33 | 12 | 2 | 3 | 6 |
The polls by Ipsos MORI and Survation were conducted by telephone. All other polls were undertaken online. Sample sizes ranged between 1,009 and 3,174, except for YouGov, whose poll of 105,612 respondents was analysed using multilevel regression and post-stratification (MRP). All figures are for Great Britain (excluding Northern Ireland).
Conservative lead
All of the final polls put the Conservatives well ahead of their principal rivals, the Labour Party, and so, by implication at least, suggested that – given the country’s use of a single-member plurality electoral system that tends to advantage whichever party comes first – the Conservatives would win a parliamentary majority. As a result, given that what most voters and commentators wish to ascertain from the polls is who will ‘win’, the polls are widely regarded as having made the right ‘call’. It was certainly a much better performance than in 2015, when on average the polls put the Conservatives and Labour neck and neck yet the Conservatives ended up with a seven-point lead, and than in 2017, when the final polls’ average estimate of an eight-point Conservative lead compared with what proved to be a three-point one. Indeed, one company, Opinium, was spot on not only in its estimates of the shares of the vote won by the Conservatives and Labour, but also in its figures for most of the smaller parties.
That said, none of the polls overestimated the Conservative share of the vote and, at nine points, the average Conservative lead over Labour was three points below what actually transpired. The ‘errors’ in the polls were thus not simply distributed at random on either side of the true value of the Conservatives’ performance, but rather demonstrated a slight tendency to underestimate the Conservatives’ strength relative to that of Labour. This replicates a pattern that has been in evidence at most recent elections (with 2017 being a notable exception).
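For anyone who wants to check the arithmetic behind those figures, the short Python sketch below (purely illustrative) averages the ten final polls’ Conservative and Labour shares from the table, rounds them to whole points as the table’s Average row does, and compares the implied lead with the twelve-point lead in the actual result.

```python
# Shares (Conservative, Labour) from the ten final polls in the table above.
final_polls = {
    "BMG": (41, 32), "Deltapoll": (45, 35), "Ipsos MORI": (44, 33),
    "Kantar": (44, 32), "Number Cruncher Politics": (43, 33),
    "Opinium": (45, 33), "Panelbase": (43, 34), "Savanta ComRes": (41, 36),
    "Survation": (45, 34), "YouGov": (43, 34),
}

con_avg = sum(con for con, _ in final_polls.values()) / len(final_polls)  # 43.4
lab_avg = sum(lab for _, lab in final_polls.values()) / len(final_polls)  # 33.6

# Rounded to whole points, as in the table's Average row: 43 vs 34, a nine-point lead.
poll_lead = round(con_avg) - round(lab_avg)

# Actual Great Britain result: Conservatives 45, Labour 33, a twelve-point lead.
actual_lead = 45 - 33

print(f"Average poll lead: {poll_lead} points")                           # 9
print(f"Shortfall against the result: {actual_lead - poll_lead} points")  # 3
```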
Room for improvement?
So, what lessons did the polls learn from 2015 and 2017 – and why might there still be room for improvement?
The substantial overestimate of the Conservatives’ strength in 2015 occasioned an independent inquiry (sponsored by the British Polling Council and the Market Research Society) into what went wrong. Its principal conclusion was that the polls had too many Labour voters in their samples. There has long been evidence that polls tend to be more successful at contacting Labour supporters than their Conservative counterparts within the limited fieldwork periods they employ, a tendency they attempt to counteract through stratification and/or weighting. However, in 2015 these efforts came to nought – the underestimate of the Conservative lead in the published estimates simply replicated an underestimate that was also in evidence in the unweighted data.
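To make the weighting idea concrete, here is a minimal, purely illustrative sketch of post-hoc weighting to population targets. The age bands, target shares and responses are invented for the example; in practice the companies weight on several variables at once (age, sex, region, past vote, education and so on), often iteratively, but the principle of up-weighting under-represented groups and down-weighting over-represented ones is the same.

```python
from collections import Counter

# Hypothetical population targets for a single weighting variable (age band).
population_targets = {"18-34": 0.27, "35-54": 0.34, "55+": 0.39}

# Hypothetical raw sample: (age band, vote intention) for each respondent.
sample = [
    ("18-34", "Lab"), ("18-34", "Lab"), ("18-34", "Con"),
    ("35-54", "Con"), ("35-54", "Lab"), ("35-54", "Con"),
    ("55+", "Con"), ("55+", "Con"), ("55+", "Lab"), ("55+", "Con"),
]

# Each respondent's weight is their group's population share divided by
# that group's share of the sample.
group_counts = Counter(age for age, _ in sample)
sample_shares = {age: n / len(sample) for age, n in group_counts.items()}
weights = [population_targets[age] / sample_shares[age] for age, _ in sample]

# Weighted vote-intention estimate.
total = sum(weights)
for party in ("Con", "Lab"):
    share = sum(w for (_, vote), w in zip(sample, weights) if vote == party) / total
    print(f"{party}: {share:.0%}")
```

The limitation, as the 2015 experience suggests, is that weighting can only correct for differences that the weighting variables themselves capture.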
A key reason for the oversampling of Labour voters appeared to be a failure to estimate correctly the extent to which younger voters were less likely than their older counterparts to vote. This mattered because in 2015 support for Labour increased (and that for the Conservatives fell) among younger voters while the opposite happened among older citizens. Those younger voters whom the polls interviewed did support Labour in large numbers – but they proved atypical in their willingness to vote in the first place.
The industry’s response in 2017 was two-fold. One was to try to improve the quality of their samples, including by securing the participation of more people who do not vote. The other was to weight their data so that their estimates of the age gap in turnout matched the evidence from high-quality academic surveys of what the size of that gap had been in 2015. The trouble was that, in taking the second step, many companies appear to have overcompensated for whatever over-representation of Labour voters existed in their samples. Indeed, at one point, the average Conservative lead in the unweighted data of the final polls was only slightly less than the actual lead of 2.5 points, and much closer to it than the eight-point lead the polls reported after weighting their data.
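As an illustration of the kind of adjustment involved, the sketch below imposes external turnout assumptions by age band on a toy sample. The turnout rates, weights and responses are all invented, and the companies’ actual 2017 models were more elaborate, but the effect is the same: younger respondents count for less in the published figures, in line with a pre-determined turnout gap.

```python
# Hypothetical target turnout rates by age band, standing in for the figures
# the companies took from academic surveys of the 2015 election.
target_turnout = {"18-34": 0.55, "55+": 0.80}

# Hypothetical respondents: (age band, vote intention, demographic weight).
respondents = [
    ("18-34", "Lab", 1.00), ("18-34", "Lab", 1.10), ("18-34", "Con", 0.90),
    ("55+", "Con", 1.00), ("55+", "Con", 0.95), ("55+", "Lab", 1.05),
]

# Multiply each demographic weight by the assumed turnout rate for that age band.
adjusted = [(vote, weight * target_turnout[age]) for age, vote, weight in respondents]

total = sum(weight for _, weight in adjusted)
for party in ("Con", "Lab"):
    share = sum(weight for vote, weight in adjusted if vote == party) / total
    print(f"{party}: {share:.0%}")
```

If the assumed turnout gap is wider than the real one, Labour-leaning younger respondents are down-weighted too far, which is the overcompensation described above.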
This time, most companies dropped their attempts to weight their data to match pre-determined turnout targets, relying instead on respondents’ self-reported probability of voting. That step ensured that they did not repeat the error that occurred in 2017. However, younger voters continue to be much more likely to vote Labour, still leaving the polls potentially vulnerable to the problem that beset them in 2015. That in the event they came much closer to the mark suggests significant progress has indeed been made in overcoming it. At the same time, however, the fact that there was once again some tendency to underestimate the Conservative lead over Labour suggests there is still room for improvement.
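By way of contrast, the approach most companies fell back on this time can be sketched as follows: each respondent’s weight is simply multiplied by their own stated likelihood of voting, here expressed on a 0-10 scale (the scale and figures are illustrative rather than any particular company’s scheme).

```python
# Hypothetical respondents: (vote intention, demographic weight,
# self-reported likelihood of voting on a 0-10 scale).
respondents = [
    ("Con", 1.00, 10),
    ("Lab", 0.95, 8),
    ("Lab", 1.10, 5),
    ("Con", 0.90, 9),
    ("Lab", 1.05, 10),
]

# Scale each weight by the respondent's own stated probability of voting.
turnout_weighted = [(vote, weight * likelihood / 10)
                    for vote, weight, likelihood in respondents]

total = sum(weight for _, weight in turnout_weighted)
for party in ("Con", "Lab"):
    share = sum(weight for vote, weight in turnout_weighted if vote == party) / total
    print(f"{party}: {share:.0%}")
```

The design choice matters: this version imposes no assumption about the age gap in turnout, but it leaves the estimate resting on how representative the respondents, younger ones especially, are of those who actually turn out, which is where the 2015 problem lay.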