10 Questions You Should Ask About Poll Results
Polls are a good source of information about public opinion, and can provide a valuable basis for accurate, informative news stories. However, it is important to be satisfied that poll results are accurate before they are published or taken seriously.
To help you distinguish scientific polls from unscientific ones, we have put together 10 questions to ask before reporting any results. This publication is designed to help you judge which poll results you should take seriously.
A number of the questions here will help you decide whether a poll is a “scientific” one worthy of coverage, or an unscientific survey without value.
1. Who did the poll?
Reputable polling firms will provide you with the information you need to evaluate the survey. Because reputation is important to a quality firm, a professionally conducted poll will avoid many errors. Check whether the company is a member of AIMRO or ESOMAR, and is a legitimate market research organisation that conducts research professionally outside of published polls.
2. Who paid for the poll and why was it done?
It is also important to be aware of who actually paid for the survey, because that tells you – and your audience – who thought these topics were important enough to spend money finding out what people think.
Polls are not conducted for the good of the world. They are conducted for a reason – either to gain helpful information or to advance a particular cause.
It may be that a news organisation wants to develop a good story. It may be that a politician wants to be re-elected. It may be that a corporation is trying to push sales of its new product. Or a special-interest group may be trying to prove that its views are the views of the entire country.
All are legitimate reasons for doing a poll.
What you need to be careful about is whether the motive for doing the poll creates doubts about the validity of the results, to the extent that the numbers should not be published.
3. How were people chosen for the poll?
One of the key reasons that some polls reflect public opinion accurately and other polls are a poor representation of opinion is how people were chosen to be interviewed.
In scientific polls, the pollster uses a specific statistical method for picking respondents.
In unscientific polls, respondents pick themselves to participate, by phoning in or opting in; or people are called or stopped on the street without any statistical method or safeguards in place to ensure that the people picked for the poll represent the population in question.
It’s no good just ringing 500 or 1,000 people from your address book for a poll, as the results will only be representative of your address book, not of everyone in the country.
Another less obvious example is a phone poll that only calls landlines. As over 25% of the population now only have mobile phones, it is important that any phone poll includes mobile numbers; otherwise it only represents those with landlines rather than the population as a whole.
To make sure a poll reaches an accurate sample of people, you need to ensure that everyone you want to represent has an equal chance of being selected. Even then you also need to make sure the final sample spoken to represents your target.
Reputable polling companies use a combination of random sampling and quotas to obtain accurate polls. With random sampling, a polling company either uses a list of randomly-drawn telephone numbers or visits randomly-drawn addresses. The polling company then contacts people on those telephone numbers or at those addresses, and asks them to take part in the survey.
They then set quotas — for example, on age and gender — and seek out different people who, together, match the profile required.
For instance, for a national sample, they will make sure that the proportion of men in their sample matches the proportion of men in the whole population of Ireland. Quotas of this type are set on several key demographics, such as age, gender, region and social class, to make sure the final sample is made up of the same types of people as the population as a whole, and in the same proportions.
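The combination of random contact and quota filling described above can be sketched in a few lines of code. This is a minimal illustration only, with made-up quota targets for gender; a real polling company would set quotas on age, region and social class as well, using census figures.

```python
import random

# Hypothetical quota targets for a sample of 1,000 (illustrative
# figures, not real census proportions).
QUOTAS = {"male": 490, "female": 510}

def quota_sample(respondents, quotas):
    """Accept randomly contacted respondents only while their quota
    cell still has room, so the final sample matches the target
    profile."""
    filled = {cell: 0 for cell in quotas}
    target = sum(quotas.values())
    sample = []
    for person in respondents:
        cell = person["gender"]
        if filled[cell] < quotas[cell]:
            filled[cell] += 1
            sample.append(person)
        if len(sample) == target:
            break
    return sample

# Simulate a stream of randomly drawn contacts.
random.seed(1)
stream = ({"gender": random.choice(["male", "female"])}
          for _ in range(100_000))
sample = quota_sample(stream, QUOTAS)
print(len(sample))  # prints 1000
```

The key point is that respondents are drawn at random, but accepted only while their demographic cell still has room, so the finished sample mirrors the target profile.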
4. How many people were interviewed for the survey?
Because polls give approximate answers, the more people interviewed in a scientific poll, the smaller the error due to the size of the sample, all other things being equal.
However, a common trap to avoid is assuming that more is automatically better. While it is true that the more people interviewed in a scientific survey, the smaller the sampling error, a poorly conducted poll with a very large sample can still give you a very wrong result. In such cases, other factors may be more important than sample size in judging the quality of a survey.
AIMRO advises that for a properly conducted scientific national poll, a minimum sample of 1000 interviews is required and 500 interviews for a published local area poll.
5. How can a poll of 1000 people be representative of all the electorate?
As long as a sample is taken randomly and quotas used to ensure that it is representative of all adults, the size of the universe doesn’t matter; rather the margin of error is based on the size of the sample.
To give an example of this, if you make a cake with lots of different ingredients, as long as it is mixed properly, you do not need to eat the whole cake to see what it tastes like; just a small slice will tell you within a margin of error what the rest of the cake will taste like.
This is also well illustrated by a quote from a famous US market researcher: “If you don’t believe in random sampling, next time you are in for a blood test, ask the doctor to take it all.”
6. What is the possible error for the poll results?
A properly conducted poll of 1,000 interviews provides a sampling error of just plus or minus 3% at the 95% confidence level. Increasing the sample size beyond this reduces the error only slowly; for example, a sample of 2,000 interviews has a margin of error of around plus or minus 2%.
Local or regional polls can be conducted on a slightly smaller sample size of 500 interviews. However, this does increase the possible error to plus or minus 4.5%.
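The error figures quoted above come from the standard margin-of-error formula for a simple random sample, taken at the worst case of a 50/50 split: error = z * sqrt(p(1-p)/n), with z = 1.96 at the 95% confidence level. A quick check (a sketch assuming simple random sampling; real polls that use quotas and weighting can have a somewhat larger effective error):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case (p = 0.5) sampling error for a simple random
    sample of size n, at the 95% confidence level (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000):
    print(n, round(100 * margin_of_error(n), 1))
# 500  -> about 4.4%
# 1000 -> about 3.1%
# 2000 -> about 2.2%
```

These match the figures in the text: roughly plus or minus 4.5% at 500 interviews, 3% at 1,000 and 2% at 2,000, and they show why doubling the sample size only narrows the error modestly.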
Unscientific polls conducted without random selection and proper quotas set on who is interviewed, will have a much higher error, and may produce very misleading findings.
7. How were the interviews conducted?
Firstly, it is important to make sure that the poll was conducted outbound by a polling company, rather than being a phone-in or self-completion poll. In an outbound poll, the polling company decides whether or not a person should be interviewed.
These types of polls are normally conducted face to face or by phone in Ireland. Both methods are good as long as they have followed the scientific approach that meets all the previous points made – such as random sample selection, ensuring everyone being represented by the poll can be selected, and setting quotas to ensure the poll is representative.
Online polls should still be treated with caution in Ireland, as the over-55s are not adequately represented in online panels. Also, even an online survey that accurately sampled all those who have access to the Internet would still fall short of a poll of the population of Ireland, as at least one in three adults do not have Internet access.
8. What area (national or region/constituency) were the people chosen from?
Because polls aim to represent certain audiences, it is also important for a journalist to know who the poll is trying to represent.
For example, you need to know if a sample was drawn from among all adults, or just from those in one constituency or in one city. A survey of people in Dublin can only reflect the opinions of people in Dublin – not of all adults.
9. When was the poll done?
Events have a dramatic impact on poll results. Your interpretation of a poll should depend on when it was conducted relative to key events. Even the freshest poll results can be overtaken by events.
Poll results that are several weeks or months old may be perfectly valid, but events may have erased any newsworthy relationship to current public opinion.
10. What questions were asked?
Reputable polling companies always ensure that questions are phrased so that they do not bias the result one way or the other, and allow the voter to agree or disagree with the topic. Perhaps the best test of any poll question is your own reaction to it.
On the face of it, does the question seem fair and unbiased? Does it present a balanced set of choices? Would most people be able to answer the question? Does the question allow you to give either a positive or negative reaction to it?
Any poll conducted by a reputable AIMRO company will include details on the method used, the sample size, who the poll was conducted for, when the poll was conducted, the target audience, the quotas that were set, any weighting conducted and the margin of error. If any adjustment is made to the results outside of this, details will also be provided.
Any poll results that do not include ALL this information should be treated with caution.