Polling & Politics

2019 was a great year for UK polls

After a difficult stretch, 2019 turned out to be a very good year for British pollsters. They had a hard time in 2015-17: over-estimating Labour in 2015, mostly missing the narrow Leave victory in 2016, and failing to anticipate a hung parliament in 2017.

In May 2019 Ipsos MORI called the EU election correctly for the Evening Standard, predicting the Brexit Party's sweeping win and putting every party's vote share within an average of 0.75 points of the actual result. Overall, the polls performed well.

In the December 2019 General Election nearly all the final polls forecast a Conservative majority. The final average of the pre-election polls was 43% for the Conservatives, 33% for Labour and 12% for the Liberal Democrats. These were very close to the actual vote shares, with the exception of one or two companies that showed a much smaller Conservative lead.

For example, Ipsos MORI's final pre-election poll had an average error of only 0.3 points per party and predicted a large Conservative majority. Given that all polls, even when executed perfectly, have natural margins of error, no one should expect them to match "real results" exactly. Each poll will normally have a margin of error of around 4 points; in other words, if it puts the Conservatives on 43%, the range in which one would expect the result to lie, had a census of all voters been taken, would be between 39% and 47%. This makes the under-2-point average error of the 2019 polls reasonably impressive.
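As an illustrative sketch (not how any particular pollster computes its published margins), here is how a roughly 3-4 point margin of error arises for a single vote-share estimate, assuming a simple random sample of around 1,000 respondents; real polls use quotas, weighting and design effects, so their effective margins differ.

```python
# A rough sketch of where a "circa 4 point" margin of error comes from,
# assuming a simple random sample (an assumption for illustration only).
from math import sqrt

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated share p from n respondents."""
    return z * sqrt(p * (1 - p) / n)

share, n = 0.43, 1000
moe = margin_of_error(share, n)
print(f"{share:.0%} +/- {moe:.1%}  ->  range {share - moe:.0%} to {share + moe:.0%}")

# With p = 0.43 and n = 1,000 this gives roughly +/- 3 points; smaller
# effective samples (after weighting) push it toward the ~4 points quoted above.
```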

What changed?

British pollsters have improved the representativeness of their samples and have avoided over-correcting for previous errors. In 2017, for example, Ipsos MORI took respondents' stated likelihood of turning out as normal and then adjusted it using recorded turnout differentials from the British Election Study; this over-corrected for the over-statement of Labour in 2015 and so under-represented Labour in our final poll for the 2017 election. Put simply, in 2019 most pollsters seem to have worked hard to ensure a representative sample, including by education level, which is becoming more and more important, but then simply "trusted" their data, with fewer adjustments. This produced some of the most accurate polling in decades.
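Below is an illustrative sketch, with entirely made-up respondents and turnout rates, of the difference between weighting people by their stated likelihood to vote and additionally adjusting by a demographic group's historical turnout – the kind of extra correction described above. It is not Ipsos MORI's actual weighting scheme.

```python
# Hypothetical respondents: (party, stated likelihood to vote 0-10, age group)
respondents = [
    ("Lab", 9, "18-34"), ("Lab", 10, "18-34"), ("Lab", 8, "35-54"),
    ("Con", 10, "55+"), ("Con", 9, "55+"), ("Con", 10, "35-54"),
]

# Hypothetical historical turnout rates by age group (e.g. from a past
# election study); applying these downweights groups that under-voted before.
historical_turnout = {"18-34": 0.55, "35-54": 0.70, "55+": 0.80}

def vote_shares(weight_fn):
    """Compute party shares under a given respondent-weighting rule."""
    weights = {}
    for party, likelihood, age in respondents:
        weights[party] = weights.get(party, 0.0) + weight_fn(likelihood, age)
    total = sum(weights.values())
    return {party: w / total for party, w in weights.items()}

# Approach 1: trust stated likelihood only.
stated_only = vote_shares(lambda likelihood, age: likelihood / 10)

# Approach 2: additionally scale by the group's past turnout rate.
adjusted = vote_shares(lambda likelihood, age: (likelihood / 10) * historical_turnout[age])

print("Stated likelihood only:  ", {p: f"{s:.0%}" for p, s in stated_only.items()})
print("Adjusted by past turnout:", {p: f"{s:.0%}" for p, s in adjusted.items()})
```

In this toy example the turnout adjustment pulls down the younger, Labour-leaning respondents, shifting the headline figures – which is the mechanism by which such a correction can over-shoot if the underlying sample is already representative.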

One unique challenge of British politics is that, unlike in most proportional representation systems in Europe, there is only a weak link between vote share and seat share. This matters because 35% can give you a big majority (Blair, 2005), while 36% can give you a hung parliament (Cameron, 2010). A one per cent difference in vote share can make a 50-seat difference to what you end up with.

To address this challenge, the 2019 election saw the use of MRP models to try to predict seat share as well as vote share, and there was a lot of excitement about them during the campaign. MRP stands for 'multi-level regression with post-stratification', a statistical technique used to turn opinion poll results into seat predictions. It models how different constituencies may vote, rather than what percentage of the public nationally may vote for a particular party. In 2017 YouGov's MRP model accurately estimated a hung parliament, doing better than its conventional poll. However, 2019 proved the approach is not infallible: back in 2017 Lord Ashcroft's model estimated a Conservative majority of over 60 rather than the hung parliament that occurred, and the final 2019 YouGov model suggested around a 28-seat majority, well below the 80-seat majority achieved by Boris Johnson. While the technique is by no means perfect, it offers a way of estimating seat share where 650 separate constituency surveys would be impractical or too expensive, and which conventional polls are not designed to provide.
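The sketch below illustrates the post-stratification idea behind MRP with entirely hypothetical data: cell-level estimates (here crudely shrunk toward the national mean, standing in for what a proper multilevel regression would do) are combined with census counts to produce a constituency-level estimate. It is a toy illustration of the principle, not any pollster's actual model.

```python
from collections import defaultdict

# Hypothetical poll responses: (constituency, education band, voted for party X)
responses = [
    ("Seat A", "degree", 1), ("Seat A", "degree", 0), ("Seat A", "no_degree", 1),
    ("Seat B", "degree", 0), ("Seat B", "no_degree", 1), ("Seat B", "no_degree", 1),
]

# Hypothetical census counts of eligible voters in each constituency/education cell.
census = {
    ("Seat A", "degree"): 30_000, ("Seat A", "no_degree"): 40_000,
    ("Seat B", "degree"): 20_000, ("Seat B", "no_degree"): 55_000,
}

# Step 1: raw support for party X within each cell.
totals, counts = defaultdict(int), defaultdict(int)
for seat, edu, vote in responses:
    totals[(seat, edu)] += vote
    counts[(seat, edu)] += 1

national_mean = sum(totals.values()) / sum(counts.values())

# Step 2: "model" the cells. A real MRP fits a multilevel regression so that
# sparse cells borrow strength from similar cells; here that partial pooling
# is imitated by shrinking each cell's raw rate toward the national mean.
PRIOR_WEIGHT = 5  # pseudo-observations pulling each cell toward the mean
cell_estimate = {
    cell: (totals[cell] + PRIOR_WEIGHT * national_mean) / (counts[cell] + PRIOR_WEIGHT)
    for cell in counts
}

# Step 3: post-stratify - weight each cell's estimate by how many voters
# that cell actually contains in the constituency.
for seat in sorted({s for s, _ in census}):
    cells = [(cell, n) for cell, n in census.items() if cell[0] == seat]
    electorate = sum(n for _, n in cells)
    share = sum(cell_estimate.get(cell, national_mean) * n for cell, n in cells) / electorate
    print(f"{seat}: estimated vote share for party X = {share:.1%}")
```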

Exit polls

Finally, 2019 again saw the triumph of the exit poll. Ipsos MORI carried out the official exit poll at the 2019 UK General Election on behalf of the three major broadcasters, working jointly with a team of political science academics led by Sir John Curtice.

It works by having teams of three interviewers at each of 144 polling stations for the entire time they are open to voters: 15 hours in total. These stations are carefully chosen from tens of thousands of polling stations to cover varying degrees of marginality, and at each a simple "one in N" sample of those leaving the polling station is approached and interviewed. Despite the added complication this time of winter weather – in the first December election since 1923 – our teams battled the cold, dark and wet to collect more than 23,000 interviews.
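For illustration, a "one in N" selection at a single station might look like the sketch below; the interval of 8 and the random start are assumptions for the example, not the exit poll's actual design.

```python
import random

def one_in_n_sample(voters, interval=8, seed=None):
    """Approach every Nth voter leaving the station, from a random start."""
    rng = random.Random(seed)
    start = rng.randrange(interval)  # random offset so the first sampled voter isn't always #1
    return voters[start::interval]

# Hypothetical stream of voters leaving one polling station during the day.
voters = [f"voter_{i}" for i in range(1, 201)]
sampled = one_in_n_sample(voters, interval=8, seed=42)
print(f"Approached {len(sampled)} of {len(voters)} voters")
```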

For the fifth election in a row, the result of the exit poll, published as soon as polling stations closed at 10pm, predicted the final result with great accuracy. It put the Conservative Party on 368 seats and Labour on 191, with the Scottish National Party and Liberal Democrats winning 55 and 13 seats respectively. The actual totals for each party were 365, 203, 48 and 11.

Hopefully the generally accurate pre-election polling, combined with another excellent exit poll and very accurate polling in the EU elections, means pollsters' reputations are recovering. Professor Will Jennings of Southampton University has studied elections around the world since 1942, including Britain, and concluded that polling is actually no worse – but no better – than in the past. Jennings' estimates suggest an average error of around two points for each party in each election over 70 years: in 2019, British pollsters did rather better than that!
