By Kathy Frankovic
It’s not often that pre-election polls go horribly awry. If anything, pre-election polling has improved over time. In retrospect, the furor over polling in the 2015 UK election and the 2016 US race has dissipated, and many agree there wasn’t all that much wrong in the polling. The analysis and reporting may have been a different story.
But sometimes the methods are a problem. The October 2017 Calgary mayoral election polls were so far off that the Canadian MRIA (Marketing Research and Intelligence Association) commissioned an academic study of both methods and reporting. The study was completed earlier this year, but the MRIA disbanded in August. So the academics (Christopher Adams of the University of Manitoba, Paul Adams of Carleton University, and David Zussman of the University of Victoria) released the study themselves. The study makes for instructive reading – something all researchers should do – and describes how polls conducted in the weeks before an election can set the tone for media coverage of campaigns in the critical final weeks. The authors see multiple “victims” in how the polling was conducted and reported – not only the public, the candidates, and the academic commentators (who themselves were attacked, including by lawsuit, because of their criticism), but also the research company itself.
Briefly, in 2017 the two-term Calgary Mayor, Naheed Nenshi, was running for re-election. His opponents were a relatively unknown businessman, Bill Smith, and city council member Andre Chabot. When Mainstreet Research produced its first public poll on the race at the end of September, the results were, to many, shocking. The Mayor was losing to Smith by nearly nine percentage points. Chabot, the council member, was far back in single digits. In the second Mainstreet poll, released a week later, Smith’s lead had increased to more than 16 points. In the third poll, released just before the election, Smith retained his double-digit polling lead, staying ahead of Nenshi by nearly 12 points.
The election result? Nenshi 51.4%, Smith 43.7%, a nearly eight-point victory, with Chabot trailing. That was a change of nearly 20 points on the margin between the final poll and the election – and a turnaround in the leader.
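The roughly 20-point figure is the swing on the margin between the two candidates, not the error on any one candidate’s share. A minimal sketch of that arithmetic, using the rounded figures above:

```python
# Swing on the margin between the final Mainstreet poll and the election result.
# Positive margins favour Smith; negative margins favour Nenshi.
final_poll_margin = 12.0        # final poll: Smith ahead of Nenshi by ~12 points
election_margin = 43.7 - 51.4   # result: Smith 43.7%, Nenshi 51.4% -> about -7.7

swing = final_poll_margin - election_margin
print(f"Swing on the margin: {swing:.1f} points")  # ~19.7, i.e. nearly 20 points
```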
The polling differences were well beyond any possible margin of error. The polling company called the situation a “perfect storm.” There was a lot about the polls and how they were reported that was far from “perfect.”
The three Mainstreet polls were all conducted using Interactive Voice Response, a technique that has been used successfully in other elections. But there were sampling issues. The telephone sample was drawn from a directory that grossly underrepresented young voters. Those young voters who were in the directory were weighted up, but this group may have been unrepresentative of the majority of young voters (the report notes the shock when the young voters in the first poll supported the challenger Smith, counter to political expectations).
The sampling concern was so severe that Mainstreet changed its approach in later polls – but the end result was the same. Sampling by city ward also resulted in a reduced number of cell phones in the final survey, and another case of too few young people.
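To make the weighting issue concrete, here is a minimal, hypothetical sketch of the kind of adjustment involved; all of the numbers are invented for illustration and are not taken from the Mainstreet polls. When a group is badly underrepresented in the sample, each respondent in that group carries a large weight, so any quirk in that small group is magnified in the final estimate.

```python
# Hypothetical post-stratification weighting sketch; numbers are invented,
# not taken from the Mainstreet polls.
population_share = {"18-34": 0.30, "35+": 0.70}   # assumed share of the electorate
sample_counts    = {"18-34": 60,   "35+": 940}    # directory sample: few young voters

n = sum(sample_counts.values())
weights = {g: population_share[g] / (sample_counts[g] / n) for g in sample_counts}
# Each young respondent counts ~5x; if those 60 people lean toward the challenger
# in a way most young voters do not, that lean is multiplied fivefold.

support_smith = {"18-34": 0.55, "35+": 0.45}       # invented within-group support

weighted_smith = sum(population_share[g] * support_smith[g] for g in support_smith)
print({g: round(w, 2) for g, w in weights.items()})   # {'18-34': 5.0, '35+': 0.74}
print(f"Weighted Smith share: {weighted_smith:.1%}")  # 48.0% on these invented numbers
```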
[There was another factor that may have exaggerated the error: the turnout in 2017 was enormously higher than in the previous mayoral race in 2013 – nearly 50% higher! Any modeling that relied on participation in the previous race would have grossly underestimated the number of voters, misjudged their demographic composition, and perhaps misread the preferences of the electorate as well.]
According to the report, there was an even greater problem: the way the polls were interpreted and defended in public. The papers that reported the polls stood behind them when they were questioned; the organization that conducted the polls did the same, perhaps even more aggressively. Three weeks of charges and counter-charges ate up media time that might have been spent discussing issues. The accuracy or inaccuracy of the Mainstreet poll results, which had the megaphone of two major news organizations in the community, dominated election coverage.
The report notes the lack of media skepticism about the polls – underscoring a lack of training in how polls should be reported. It is not comforting to me that a story about the differences between the other published polls, which showed the incumbent ahead, and the Mainstreet poll, which indicated he would lose, was reported by the weather anchor of a national television and internet news organization (and it is even more troubling if that person was the staff member most qualified to report such a story). It appears from the report that the few other polls (which showed the incumbent leading) never received the same level of coverage as the Mainstreet polls.
There are broader lessons for researchers that can be learned from the report:
- Polls sponsored by (or identified as linked to) specific media publications still deserve scrutiny by the sponsoring media themselves. While the competitive journalistic instinct may be to stand behind a story, that works only when the story is one sourced personally by reporters. Reporters weren’t involved in conducting the polls, and should have asked the right questions about them and applied journalistic skepticism. ESOMAR has created several resources for journalists: FAQs, a Guide for Opinion Polls (co-authored with WAPOR), and a free, international course for journalists. The more journalists can learn about what makes a good poll, the better off we all are.
- Whenever polls differ, there is debate. Pollsters are challenged and usually defend their work. But apparently, questions in this case were frequently met with hostility, and even the threat of lawsuits. The polling company then found itself in an even worse political and business position after the election. The report found internal concerns about the findings (and the change in methods could be viewed as a result of that concern), but the company’s external presentations betrayed none of that worry.
- The MRIA (before its demise this year) had strong rules about transparency in poll reporting, stronger even than the ESOMAR requirements. No public poll met those requirements. This could be due to a general lack of transparency among research organizations, or it may suggest the value of reviewing those requirements, which some firms may regard as too onerous in a rapidly moving election season. But the report’s authors suggest that the MRIA (and now, whatever Canadian organization replaces it) produce a yearly report on publicly released polls. Even more important, they suggest that the organization react to polling controversies in real time. While that would require a more active stance from a large organization (sometimes difficult), intervening sooner could limit the intensity of the public reaction. In other words, it could limit or even stop the snowballing of controversy that made this case so damaging to the research industry.
In almost every catastrophe, there are many reasons for the failure. This case, like so many others, was a “perfect storm” of problems: methodological issues, journalistic misunderstanding, and the hubris of those who could not accept that there could be problems. Only after the election was the full magnitude of the error revealed. The MRIA report, though its release was delayed by the association’s demise, is the perfect case study for how election polling can go horribly wrong.
Kathy Frankovic is a member of the ESOMAR Professional Standards Committee and an international polling expert.