Saturday, May 05, 2007
Polls & Surveys: What AP Says

polls and surveys Stories based on public opinion polls must include basic information for an intelligent evaluation of the results. Carefully word such stories to avoid exaggerating the meaning of poll results.
Every story based on a poll should include answers to these questions:
1. Who did the poll and who paid for it? Start with the polling firm, media outlet or other organization that conducted the poll. Be wary of polls paid for by candidates or interest groups; their release of poll results may be done selectively and is often a campaign tactic or publicity ploy. Any reporting of such polls must highlight the poll's sponsor, so readers can be aware of potential bias from such sponsorship.
2. How many people were interviewed? How were they selected? Only a poll based on a scientific, random sample of a population – in which every member of the population has a known probability of inclusion – can be considered a valid and reliable measure of that population's opinions. Among surveys that do not meet this criterion:
• Samples drawn from panels of people who volunteer for online polls. These cannot be considered representative of larger populations because panel members are self-selected – often including "professional respondents" who sign up for numerous surveys to earn money or win prizes – and exclude people without Internet access. (Online panels recruited randomly from the entire population, with Internet access provided to those who don't already have it, are valid.)
• Balloting via Web sites, cell phone text messaging or calls to 900 numbers. These too are self-selected samples, and results are subject to manipulation via blog and e-mail campaigns and other methods. If such unscientific pseudo-polls are reported for entertainment value, they must never be portrayed as accurately reflecting public opinion and their failings must be highlighted.
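The distinction above turns on whether every member of the population has a known chance of selection. As a minimal sketch (the 10,000-person frame and 1,000-person sample are hypothetical numbers, not polling standards), a simple random sample without replacement gives each member the same known probability of inclusion:

```python
import random

# Hypothetical sampling frame: every member of the population, listed once.
population = list(range(10000))

# Simple random sample of 1,000: each member has a known
# 1000/10000 = 10% probability of inclusion.
sample = random.sample(population, 1000)

# No one is selected twice, unlike self-selected online panels
# where "professional respondents" can pile in repeatedly.
assert len(sample) == len(set(sample))
```

Self-selected samples (online panels, Web ballots, text-message votes) fail this test because the probability of inclusion is unknown and uneven: it depends on who chooses to show up.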
3. Who was interviewed? A valid poll reflects only the opinions of the population that was sampled. A poll conducted only in urban areas of a country cannot be considered nationally representative; people in rural areas often have different opinions from those in cities. Many political polls are based on interviews with registered voters, since registration is usually required for voting. Polls may be based on "likely voters" closer to an election; if so, ask the pollster how that group was identified and what percentage of the voting population it totaled. Are there far more "likely voters" in the poll than turnout in comparable past elections would suggest?
4. How was the poll conducted – by telephone or some other way? Avoid polls in which computers conduct telephone interviews using a recorded voice. Among the problems with these surveys: they do not randomly select respondents within a household, and they cannot exclude children from adult samples.
5. When was the poll taken? Opinion can change quickly, especially in response to events.
6. What are the sampling error margins for the poll and for subgroups mentioned in the story? The polling organization should provide sampling error margins, which are expressed as "plus or minus X percentage points," not "percent." The margin varies inversely with sample size: the fewer people interviewed, the larger the sampling error. If the opinions of a subgroup – women, for example – are important to the story, the sampling error for that subgroup should be noted. (Some pollsters release survey results to the first decimal place, which implies a greater degree of precision than is possible from a sample. Round poll results to whole numbers. However, the sampling error margin – a statistical calculation – may be reported to the first decimal place.)
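The inverse relationship between sample size and sampling error can be sketched numerically. Assuming the common 95 percent confidence level and the conservative worst case of a 50/50 split (assumptions, not AP requirements), the margin for a simple random sample of size n is roughly 1.96 × √(0.25/n), expressed in percentage points:

```python
import math

def sampling_error(n, z=1.96, p=0.5):
    """Approximate sampling error margin in percentage points for a
    simple random sample of size n, at 95% confidence (z=1.96),
    using the worst-case proportion p=0.5."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# The fewer people interviewed, the larger the sampling error:
for n in (100, 500, 1000, 2000):
    print(f"n={n}: plus or minus {sampling_error(n):.1f} percentage points")
```

Note that halving the margin requires roughly quadrupling the sample size, which is why subgroup margins (based on only part of the sample) are always larger than the margin for the full poll.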
Also consider the wording and order of the questions asked in the poll. Small differences in question wording can cause big differences in results, and the results for one question may be affected by preceding questions. The exact question wording need not be in every poll story unless it is crucial or controversial.
When writing and editing poll stories, here are areas for close attention:
–Do not exaggerate poll results. In particular, with pre-election polls, these are the rules for deciding when to write that the poll finds one candidate is leading another:
• If the difference between the candidates is more than twice the sampling error margin, then the poll says one candidate is leading.
• If the difference is less than the sampling error margin, the poll says that the race is close, that the candidates are "about even." (Do not use the term "statistical dead heat," which is inaccurate if there is any difference between the candidates; if the poll finds the candidates are tied, say they're tied.)
• If the difference is at least equal to the sampling error but no more than twice the sampling error, then one candidate can be said to be "apparently leading" or "slightly ahead" in the race.
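The three rules above amount to a small decision procedure. As a sketch (the function name and its percentage-point inputs are illustrative, not AP's), they can be written out like this:

```python
def race_call(lead, margin):
    """Characterize a pre-election poll per the rules above.
    lead: difference between the candidates, in percentage points (>= 0).
    margin: the poll's sampling error margin, in percentage points."""
    if lead == 0:
        return "tied"                 # say they're tied, not "dead heat"
    if lead > 2 * margin:
        return "leading"              # difference is more than twice the margin
    if lead >= margin:
        return "apparently leading"   # between one and two times the margin
    return "about even"               # difference is less than the margin
```

For a poll with a 3-point margin: a 7-point gap is "leading," a 4-point gap is "apparently leading," and a 2-point gap means the race is "about even."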
–Comparisons with other polls are often newsworthy. Earlier poll results can show changes in public opinion. Be careful, however, when comparing polls from different polling organizations, since differing techniques can produce differing results.
–Sampling error is not the only source of error in a poll, but it is one that can be quantified. Question wording and order, interviewer skill and refusal to participate by respondents randomly selected for a sample are among potential sources of error in surveys.
–No matter how good the poll, no matter how wide the margin, the poll does not say one candidate will win an election. Polls can be wrong and the voters can change their minds before they cast their ballots.