Can we rely on political polls anymore?

Published by: Richard Colwell

2016.11.23

At first glance, the recent US election results appear to have been another bad day for polling. Indeed, many media outlets spent much of the day after the results were announced asking RED C for comment on whether polling was finished as an industry.

Before we consign political polling to the dustbin, let's take a step back and look at this through more considered analysis. Most polls conducted across the world use some form of random sampling, whether by phone, face to face or through internet panels. The aim is to replicate a representative sample of the population and so provide guidance on likely outcomes and on how voters are feeling.

Using this approach, all polls conducted by professional organisations come with a clear health warning: even when using random selection and adding quotas to ensure the sample is as representative as possible, there is still a margin of error of at least plus or minus 3% on any one figure. In fact, given that true random sampling is increasingly expensive, and that many polls are analysed based on a sub-sample of likely voters, that margin of error may well be even larger.
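As a rough illustration of where that figure comes from (the standard textbook calculation, not RED C's published methodology), the short Python sketch below computes the 95% margin of error for a simple random sample. A poll of around 1,000 respondents gives roughly the ±3 points quoted above, and a smaller likely-voter sub-sample widens it further.

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for an estimated vote share p from a simple random sample of n people."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of ~1,000 respondents, candidate on 50%:
print(f"{margin_of_error(0.50, 1000):.1%}")  # ~3.1%

# A likely-voter sub-sample of 600 widens the interval:
print(f"{margin_of_error(0.50, 600):.1%}")   # ~4.0%
```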

National polls can also only really measure share of the vote among the electorate, or, in countries with proportional representation, the share of the first-preference vote. Any extrapolation to seats or Electoral College votes is data modelling based on past performance. As such, polls themselves should really only be judged against the result of the popular vote.

In the case of the US election, the average of all the final polls recorded by the Real Clear Politics site was a +3% lead for Clinton. This firstly tells voters that the race is very close; in fact it means the two main candidates are within margin-of-error territory of each other, and it should have meant that pundits were saying the race was really too close to call. But the pressure put on pollsters to name a winner, and the desire of like-minded individuals to read the result they want to see from polls, meant that a +3 average for Clinton in the national vote was taken as a gospel win for Clinton in terms of Electoral College votes.

In the event, the final result saw Hillary Clinton in fact win the popular vote, much as the national polls had predicted. The margin of that victory is expected to end up at around +1% vs. Trump, so well within the error variance applied to national polls. Trump's win in Electoral College votes is down to where he did well, rather than the national vote overall.

So the US national polls correctly informed people that the race was close, and they correctly predicted that Clinton would win the popular vote. Extrapolations based on the polls, however, did not correctly work out Electoral College seats: many of these fell on a knife edge, so very small changes in vote behaviour had a significant impact on the Electoral College seat allocation.

So could the polls have done better?

Absolutely. There are still some question marks about the accuracy of polling, in particular how difficult it is to reach all types of voters in national polls, and why many of the state polls were not as accurate as pollsters would have liked. While Clinton's share of the vote was predicted pretty accurately, Trump's share was not predicted so well. Most of the national polls underestimated his share of the vote by at least the outer limits of the 3% margin, if not more. Three key issues potentially had an impact, and while most pollsters are trying to adjust their methods to account for these, more needs to be done to gain further accuracy.

Shy Voters – We have seen this phenomenon in Brexit polling and in our own polling here in Ireland. It is where voters who support a candidate or idea do not particularly want to tell people their views, as they are somewhat embarrassed to admit to it. They either avoid taking part in the polls altogether, or claim to be undecided. I have seen some question this theory with regard to Trump, on the basis that most Trump supporters are hardly shy and retiring types. But what we are talking about here is not a core Trump supporter. It is the lifelong Democrat who decides that this time they are voting Republican. It is the woman who on one hand despises how Trump treats women, but on the other hand can't bring herself to vote Democrat. Trying to uncover shy voters is a vital part of getting a closer result when polling.

Representative Samples – It is getting increasingly difficult to get a representative sample of the population to take part in polls. Controlling samples by standard demographics such as age, gender and region no longer guarantees a representative sample of how people vote. This is because voting behaviour is increasingly driven by other factors, such as education level. The divide between the more educated and the less educated is a growing influence on political opinion. In the Brexit referendum, whether or not you had a third-level education was the single most discriminating factor in whether you voted to remain or leave. Voters with postgraduate qualifications split 75 to 25 in favour of remain, while among those who left school without any qualifications the vote was almost exactly reversed: 73 to 27 for leave. A similar picture was seen between supporters of Trump and Clinton.

This divide in society is further emphasised by the way we live our lives. How often do those with a third-level education marry someone without one anymore? If you have a third-level degree, how many of your close friends don't? Free and easy movement means like-minded people tend to herd together in society. On top of that, they are exposed to messages that constantly reinforce their view. Choice of mainstream and social media allows us to cocoon ourselves with views that mirror our own, from journalists we agree with to friends on Facebook who share the same view. How many people have you heard since Brexit claiming that no one they knew voted to leave? This divide is very important for polling, because it is actually far easier to persuade those with college degrees to take part in polling. Further control of samples by education is therefore also vital if we are to truly represent all points of view accurately in our polls.
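To make the point concrete, here is a minimal sketch (in Python, with entirely hypothetical figures, not RED C's actual weighting scheme) of how weighting a sample by education can shift a poll estimate when graduates are over-represented among respondents.

```python
# Hypothetical population and achieved-sample profiles by education.
population_share = {"degree": 0.35, "no_degree": 0.65}   # assumed census profile
sample_share     = {"degree": 0.50, "no_degree": 0.50}   # what the raw poll achieved

# Weight each education group up or down to match the population profile.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical candidate support within each education group.
support = {"degree": 0.55, "no_degree": 0.42}

unweighted = sum(sample_share[g] * support[g] for g in support)
weighted   = sum(sample_share[g] * weights[g] * support[g] for g in support)

print(f"unweighted estimate: {unweighted:.1%}")  # ~48.5%
print(f"weighted estimate:   {weighted:.1%}")    # ~46.6%
```

In this toy example, over-sampling graduates flattered the candidate who does better among them; weighting the groups back to their population shares pulls the estimate down by about two points.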

Turnout – When we conduct polls, most pollsters ask people how likely they are to vote, in order to try to take account of turnout. If we were to include everyone, the polls would almost certainly be wrong, given turnout levels that are usually at about 70%. But judging actual turnout is very difficult, and this can influence the overall poll results. It is clear in the US election that many Democratic voters didn't turn out to vote. Trump's total popular vote was almost identical to that achieved by Mitt Romney four years earlier, but Clinton secured 6.6 million fewer votes than Barack Obama. A better understanding of potential turnout in key states might well have helped pollsters to suggest that the result was going to be even closer than the headline numbers implied.
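As an illustration of the mechanics (again a simplified sketch with made-up numbers, not any pollster's actual likely-voter model), the snippet below weights each respondent by their self-reported probability of voting. When one candidate's supporters are less certain to turn out, the likely-voter figure can differ noticeably from the all-respondent figure.

```python
# Each respondent: (candidate preference, self-reported probability of voting).
respondents = [
    ("A", 0.9), ("A", 0.5), ("A", 0.3),
    ("B", 0.95), ("B", 0.9), ("B", 0.8),
]

def vote_share(candidate, data, use_turnout=True):
    """Share for a candidate, optionally weighting respondents by turnout likelihood."""
    weighted = [(c, p if use_turnout else 1.0) for c, p in data]
    total = sum(w for _, w in weighted)
    return sum(w for c, w in weighted if c == candidate) / total

print(f"A among all respondents: {vote_share('A', respondents, use_turnout=False):.0%}")  # 50%
print(f"A among likely voters:   {vote_share('A', respondents):.0%}")                     # ~39%
```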

So should we listen to polls anymore?

Of course, but with their limitations in mind. Political polls help to inform the media and the electorate about changes and trends in voter behaviour. In the US they clearly showed a very tight race that could have gone either way. In the run-up to and during the last General Election in Ireland, polls correctly identified that the government parties would not be returned, and accurately predicted the collapse of the Labour vote and the strength of the Independent vote. Latterly they showed the start of a move to Fianna Fail and signs of a downward move for Fine Gael. Polling on issues during the campaign also helped inform commentators about the disconnect between the “Keep the Recovery Going” message and the extent to which the recovery was actually felt by voters.

But polls should not be the main story of any election and all that people talk about. They help inform, but the results come with a health warning for a reason (that they are accurate only within a 3% margin of error), and those reading and analysing the polls should be clear about what this means. For example, in a close race with one side at 48% and the other at 52%, the poll is really “too close to call”, as the margin of error means the result could be the other way around. Polls should also be read based on what they are actually measuring. A national poll measuring the popular vote share can only accurately measure that share. Once you start modelling this, the chance of further error increases significantly – so any seat analysis on the back of polls should carry a further health warning.
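A quick way to see the “too close to call” point: applying the same ±3-point health warning to a 48% vs 52% poll gives overlapping ranges, so either side could genuinely be ahead (a hand-rolled check, purely for illustration).

```python
moe = 0.03  # the standard ±3-point health warning
a, b = 0.48, 0.52

interval_a = (a - moe, a + moe)  # 45% to 51%
interval_b = (b - moe, b + moe)  # 49% to 55%

# The ranges overlap, so the apparent 4-point gap is not decisive.
print(f"A: {interval_a[0]:.0%} to {interval_a[1]:.0%}")
print(f"B: {interval_b[0]:.0%} to {interval_b[1]:.0%}")
print("overlap:", interval_a[1] >= interval_b[0])  # True
```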

Use polls to help inform and understand.  Don’t make them the main story. Be aware of their limitations and what they are representing.
