Polling & Politics

Reflections on polling in the US elections

By Kathy Frankovic

It’s fair to say that the American election polls in 2018 were quite successful.  Newspaper headlines and stories after the midterm elections for the most part praised them.  However, their tone was sometimes one of surprise and shock, reflecting the long-term impact of the criticism that followed the 2016 presidential election.

That year, the U.S. pre-election polls averaged a three-point lead for Hillary Clinton in the national popular vote, and she did win the popular vote by just over two percentage points.  While that would elsewhere have been seen as a success, the American electoral system selects a winner through the Electoral College, where votes are allocated based on the number of Senators and Representatives each state has in Congress.  In most states, the popular vote winner takes all of that state’s electoral votes.  By winning Michigan, Wisconsin and Pennsylvania by a combined total of 79,000 votes, Donald Trump won a majority in the Electoral College.  This state-by-state counting received less attention in 2016 than it should have (after all, as recently as 2000 the candidate who won the national popular vote also lost in the Electoral College).
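
The winner-take-all arithmetic is easy to make concrete.  Here is a minimal Python sketch of how most states allocate their electors; the popular-vote totals are hypothetical placeholders, and only the 2016 electoral-vote counts for the three states are real.

```python
# A minimal sketch of winner-take-all electoral-vote allocation, the rule
# used by most states.  The popular-vote totals below are HYPOTHETICAL
# placeholders; only the electoral-vote counts (Michigan 16, Wisconsin 10,
# Pennsylvania 20 in 2016) are real.
from collections import Counter

electoral_votes = {"Michigan": 16, "Wisconsin": 10, "Pennsylvania": 20}

# Hypothetical popular-vote counts per state, per candidate.
popular_votes = {
    "Michigan":     {"Trump": 2_280_000, "Clinton": 2_269_000},
    "Wisconsin":    {"Trump": 1_405_000, "Clinton": 1_382_000},
    "Pennsylvania": {"Trump": 2_971_000, "Clinton": 2_926_000},
}

tally = Counter()
for state, votes in popular_votes.items():
    winner = max(votes, key=votes.get)       # plurality winner takes the state...
    tally[winner] += electoral_votes[state]  # ...and all of its electoral votes

print(dict(tally))  # -> {'Trump': 46} under these hypothetical totals
```

Narrow state margins can thus coexist with lopsided electoral-vote outcomes: a few thousand votes in the right states move whole blocks of electors.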

The national popular vote for the House of Representatives means little, as seats are allocated district by district.  But “generic ballot” national polls are common, and this year they indicated that Democrats would have a clear lead in votes cast nationally.  They did, winning the national House vote count by eight percentage points.  But as in presidential elections, capturing a majority of the vote overall doesn’t mean that you will win enough seats (just as it may or may not give you a victory in the Electoral College).  In 2012, Democrats won 1.4 million more House votes nationally than Republicans, but still wound up with 33 fewer seats.

The media reporting on the accuracy of pre-election polls after the election came with caveats.  With so many races to poll, there were bound to be errors.  New York Magazine noted in its headline that “The Polls Were Fine, If Not Perfect.”  The Washington Post asked if it was “Another Bad Night for Political Polls?”  And then answered the question: “Not Really.”  State polls still leaned towards Democrats, but only by 0.4 percentage points, much better than in 2016.

Midterm elections must be seen as a collection of many races – the 35 Senate elections and the 435 House races – so national polls are not enough.  This year, however, there was an even greater focus on individual House races, particularly those that were likely to be close or were viewed as having the potential to change hands.  There were several creative attempts to deal with the large number of such races, using a combination of new and older methodologies.  The old ways of doing things are definitely under challenge.

The New York Times’ Upshot partnered with Siena College to conduct polls in dozens of competitive House districts.  Instead of making telephone calls using random digit dialing, it sampled from voter lists provided by a vendor.  It sampled within Congressional districts, making adjustments for the availability of telephone numbers among subgroups and relying on outside information for data not available on voter lists, like education.  It then created turnout estimates.  This is difficult in the U.S., as voting is not compulsory and turnout can be very low in non-presidential years.  This year, the usual low turnout was expected to rise from the 36.7% of the vote-eligible population that voted in the 2014 midterm election.  The turnout rate in fact jumped twelve percentage points, as nearly half the eligible population turned out in 2018, the highest midterm turnout in more than 50 years.  The methodology is reported here and here.
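
The article describes this weighting only in general terms; to illustrate the underlying idea – adjusting a sample so its demographic mix matches an outside benchmark – here is a toy Python sketch of simple cell weighting.  It is not the Upshot/Siena procedure, and the education benchmark below is a hypothetical assumption.

```python
# A toy sketch of post-stratification (cell) weighting: respondents in groups
# that are underrepresented relative to an outside benchmark get weights above
# 1, overrepresented groups get weights below 1.  NOT the actual Upshot/Siena
# procedure; the benchmark shares and respondents are hypothetical.

population_share = {"college": 0.35, "no_college": 0.65}  # assumed benchmark

# Hypothetical respondents: an education flag and a vote intention.
sample = [
    {"edu": "college", "vote": "D"}, {"edu": "college", "vote": "D"},
    {"edu": "college", "vote": "R"}, {"edu": "no_college", "vote": "R"},
    {"edu": "no_college", "vote": "D"}, {"edu": "no_college", "vote": "R"},
]

n = len(sample)
sample_share = {g: sum(r["edu"] == g for r in sample) / n for g in population_share}

# Weight each respondent by (population share / sample share) of their group.
for r in sample:
    r["weight"] = population_share[r["edu"]] / sample_share[r["edu"]]

total = sum(r["weight"] for r in sample)
dem = sum(r["weight"] for r in sample if r["vote"] == "D")
print(f"Weighted Democratic share: {dem / total:.1%}")  # -> 45.0%
```

Here the unweighted sample splits 50/50 on the vote, but because college graduates are overrepresented relative to the benchmark, the weighted Democratic share comes out lower.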

Since The Times and Siena College polled in what were expected to be competitive districts, the polls unsurprisingly showed races that were very close – within the sampling error – in nearly all of them.  The Times decided to show results in real time as each interview was completed.  This is an example.  While a poll was being conducted, red and blue dots appeared on the page, marking each respondent’s location, about an hour after the interview was completed.  As is the case with most telephone polls, the vast majority of calls do not result in an interview, so watching the polling “live” could be a slow and lengthy process, not necessarily an exciting one.  This was viewed as a way of making election polling more transparent to the public.

CBS News partnered with YouGov for its Battleground Tracker, using YouGov’s online panel, with oversamples in contested districts.  By also using information about voters throughout the country, CBS News and YouGov were able to improve the estimates in the contested districts and make better estimates of the final House outcome.  The questionnaires were somewhat longer than those used by the Upshot, and the Battleground Tracker conducted its polling online, not through telephone calls.  Both of these approaches required a very large number of interviews.
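
The article does not spell out how national data improved the district estimates, but the general idea of stabilizing a sparse district sample with a model built on broader data can be illustrated with a generic partial-pooling formula.  The function, the prior strength, and the numbers below are all hypothetical, not CBS News/YouGov’s actual model.

```python
# A toy illustration of "borrowing strength": shrink a noisy district-level
# estimate toward a prediction built from national data.  This generic
# partial-pooling formula is NOT CBS News/YouGov's actual model.

def pooled_estimate(district_mean: float, district_n: int,
                    model_pred: float, prior_strength: float = 200) -> float:
    """Blend a district sample mean with a national-model prediction,
    trusting the sample more as its size grows."""
    w = district_n / (district_n + prior_strength)
    return w * district_mean + (1 - w) * model_pred

# Hypothetical: 150 district interviews show 54% Democratic support, while
# a model fit on national data predicts 51% for a district of this type.
print(f"{pooled_estimate(0.54, 150, 0.51):.3f}")  # -> 0.523
```

The point of the design choice is that a few dozen interviews in one district are noisy on their own, but become far more informative once anchored to patterns estimated from many thousands of interviews nationwide.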

The final Battleground Tracker estimate was 225 seats for Democrats and 210 for Republicans.  With a large margin of error (plus or minus 13 seats on each number), the final outcome – what appears to be 235 Democratic seats (some races are still not officially settled) – fell within that estimate’s range.
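
The interval arithmetic behind that claim is simple enough to check directly; this snippet just restates the figures quoted above.

```python
# Check the Battleground Tracker's final seat estimate against the outcome.
estimate, margin, outcome = 225, 13, 235   # figures quoted above
low, high = estimate - margin, estimate + margin
print(f"interval {low}-{high}; outcome {outcome} inside: {low <= outcome <= high}")
# -> interval 212-238; outcome 235 inside: True
```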

This year, even the traditional exit poll had a challenger.  The exit poll, invented in the 1970s, has changed as Americans have changed how they cast ballots.  With a growing share of the vote cast before election day (through absentee and early voting), Edison Media Research, which has conducted the media exit poll for more than a decade, now supplements traditional exit polling at precincts with pre-election telephone polls and polls at physical early-voting locations.  This year, the Associated Press partnered with the National Opinion Research Center (NORC) and Fox News to expand the reach of election day polling, creating AP VoteCast: 40,000 pre-election interviews using registration-based sampling (which The New York Times Upshot also used), 6,000 interviews with NORC’s probability-based AmeriSpeak online panel, and more than 90,000 interviews with non-probability online panelists.  With approximately 60 different questions, AP VoteCast would not only tell who had won but also provide issue and demographic information.

Overall, the election polls of 2018 did well, but some pollsters appear to have used the concerns of 2016 as a starting point for developing new methods of understanding election behavior.
