As landlines fall into disuse and survey response rates plummet, pollsters are searching for new ways to measure public opinion.
A democracy relies on knowing what people want and addressing their needs. However, the opinion polls long used to gauge the public’s will have been returning wildly conflicting numbers and increasingly unreliable results. Several organizations not only failed to predict the strong Republican showing in the 2014 U.S. midterms but also misjudged recent elections in Britain and Israel.
In an era when Netflix can pinpoint which movies its customers crave, the science of knowing what those same individuals think about political candidates and public policy is getting murkier. “Our old paradigm has broken down, and we haven’t figured out how to replace it,” Cliff Zukin, a past president of the American Association for Public Opinion Research, lamented in a New York Times op-ed last June. But researchers aren’t giving up. Refining new methods that employ texting, online panels of respondents, embedded audio and video, big data, and incentives such as prepaid gift cards, they hope to restore confidence in measuring the nation’s pulse.
Modern public opinion polling began in 1936, when pioneer George Gallup correctly predicted Franklin Roosevelt’s landslide victory over Alf Landon. While often accurate, election polling has experienced major blunders since then – most famously in the 1948 presidential vote, when the Chicago Daily Tribune’s banner headline mistakenly declared “Dewey Defeats Truman.”
No One at Home to Take Your Call
One challenge in modern polling is the precipitous drop in landline ownership – from 98 percent to 60 percent of U.S. households in a decade. By contrast, more than 90 percent of adults now have a mobile phone, according to the Pew Research Center’s Internet & American Life Project. Not only do cellphones lack a white pages-style directory of numbers; the Federal Communications Commission also prohibits automatic dialers from calling them. That stymies pollsters such as Rasmussen Reports and Public Policy Polling, which use only automated dialers, says Stanford University social scientist Jon Krosnick, a past principal investigator of the National Science Foundation’s American National Election Studies (ANES). Considered the gold standard of public opinion research, the ANES still uses the costly technique of face-to-face interviewing, although in 2012 it began gathering supplemental data on the Internet.
Another concern is plummeting survey participation. In 1997, the response rate for a typical Pew Research Center telephone survey was 36 percent; by 2012, it was 9 percent. This raises concerns that results may not represent the general population’s sentiments.
Classic, high-quality polls involving random samples of demographically representative segments of the entire population have shown no decline in accuracy. Even when response rates drop as low as 10 percent, “as long as those conducting the survey tried to get a random sample and tried to get everyone, you could get a representative response,” Krosnick says. Those who do respond, he adds, “may have slightly more free time than others, or are slightly more interested in expressing an opinion, but factors such as politics, lifestyle, and past experiences – the kind of things people are polling for – have minimal correlation to whether they are surveyed or not.”
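Krosnick’s point can be illustrated with a small simulation: if answering the phone is independent of the opinion being measured, even a 9 percent response rate (roughly Pew’s 2012 figure) adds noise but not bias. A minimal sketch, with all numbers invented for illustration:

```python
import random

random.seed(0)

# Hypothetical illustration: nonresponse that is uncorrelated with
# the opinion being measured leaves the estimate unbiased.
POPULATION = 100_000
TRUE_SUPPORT = 0.54      # assumed share of the population holding an opinion
RESPONSE_RATE = 0.09     # roughly the 9 percent rate cited in the article
SAMPLE_DIALED = 20_000

population = [random.random() < TRUE_SUPPORT for _ in range(POPULATION)]
dialed = random.sample(population, SAMPLE_DIALED)  # a true random sample
# Each dialed person answers independently of their opinion.
respondents = [p for p in dialed if random.random() < RESPONSE_RATE]

estimate = sum(respondents) / len(respondents)
print(f"{len(respondents)} respondents, estimated support {estimate:.3f}")
```

Even though roughly 90 percent of those dialed never respond, the estimate stays close to the true value; bias only appears if willingness to respond correlates with the opinion itself.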
But here’s the rub: Growing mobile-phone use and falling response rates make classic polling substantially more expensive. Not only must pollsters pay interviewers to dial potential respondents’ cellphones by hand; poor response rates also mean contacting far more people than previously required. Higher costs have led to a drop in such surveys. For instance, Gallup is not planning to conduct any polls for the 2016 presidential primaries and has not pledged to track the presidential election itself. Higher costs also have resulted in a proliferation of sloppy polling. “There’s a world of people who have money and who just want data, and it doesn’t matter to them if the data are accurate or not,” contends Krosnick.
Ironically, the advent of the Internet has enabled poor-quality polling. “People got the idea that they could do polls way cheaper than they could in the past – for instance, you could use banner ads, pay people a little money to do surveys, throw out a net to try to scoop who you could,” says Krosnick. Companies have been selling the data as representative of the country, he explains, “but these are not random samples, and that’s why inaccuracies have appeared in polls in recent years. They’re claiming scientific results, and they’re not scientific.”
Researchers have begun experimenting with new polling methods to help improve accuracy. One major strategy involves randomly calling and recruiting people into communities, or “panels,” that can be interviewed again and again using online surveys. Zogby Analytics, for example, creates online pools of prescreened respondents and randomly emails them invitations for surveys. “We’re not simply looking to, say, make sure that 12 percent of our panel are African American, but scrutinizing those 12 percent and making sure they’re representative of the country and not mainly from Nebraska,” explains John Zogby, senior analyst at the firm.
In addition, Zogby increasingly does mobile-to-Web polling, sending a text invitation to take a survey with a link that clicks through to a secure website. “We have to take advantage of all the new technologies,” Zogby underscores, adding that “at least 70 percent of our work is now done by Internet – people don’t like talking on the phone as much anymore.”
YouGov, a global market research and data company that Stanford University political scientist Douglas Rivers helped to develop, has an online panel of more than 3 million participants in over 30 countries. Instead of randomly selecting potential respondents, YouGov purposefully recruits people based on whether they are representative of a target population by age, gender, social class, and other demographic variables. Participants, 1.6 million of whom are in the United States, earn points that can be redeemed for prepaid gift cards.
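Quota-based recruitment of this kind can be sketched in a few lines. In this illustrative example (the demographic cells and quotas are invented, not YouGov’s actual targets), volunteers from an opt-in pool are accepted only while their demographic cell is under its quota:

```python
from collections import Counter

# Hypothetical quota targets per 100 panelists, keyed by (age band, gender).
TARGET = {("18-34", "F"): 15, ("18-34", "M"): 15,
          ("35-64", "F"): 26, ("35-64", "M"): 24,
          ("65+",   "F"): 11, ("65+",   "M"): 9}

def build_panel(volunteers, size=100):
    """Accept volunteers (age_band, gender, id) until every cell's quota is met."""
    quotas = {cell: round(share * size / 100) for cell, share in TARGET.items()}
    filled = Counter()
    panel = []
    for person in volunteers:
        cell = (person[0], person[1])
        if cell in quotas and filled[cell] < quotas[cell]:
            panel.append(person)
            filled[cell] += 1
        if len(panel) == size:
            break
    return panel
```

Even if the volunteer pool skews heavily toward one group, the resulting panel matches the target proportions; what quota sampling cannot guarantee is that the volunteers inside each cell are themselves representative, which is the concern pollsters raise about non-random panels.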
One potential problem with online panels is that they can exclude people who lack computers or Internet access, thereby skewing results. Knowledge Networks, a company co-founded by Rivers and his late colleague Norman Nie that was acquired by GfK in 2011, gets around that digital divide by providing recruits with both. “This approach has produced remarkably accurate results,” says Krosnick, noting that it has been adopted in England, France, Sweden, and Norway. “The future of polling is in online surveys,” agrees Michael Traugott, a professor of communication studies at the University of Michigan. He cites such pluses as the speed and cost of data collection, and the ability to embed audio and video – which cannot be done on the telephone.
Still, pollsters often are wary of panels whose members aren’t randomly selected. “There are significant issues with how representative the views” of such respondents are, cautions Traugott. There also is “quite a bit of variation in the industry on how these panels are created,” adds Frauke Kreuter, professor at the University of Maryland’s joint program in survey methodology. “Sometimes people are recruited via a phone survey, other times it’s an advertisement online with a button asking people to be part of a poll, so there may be a lot of uncertainty involved.” Greater transparency of how researchers collect and analyze data might help, suggests Traugott, adding, “It’s difficult to evaluate or compare methods until we have full disclosure of methodology.”
Repeated contact of panelists raises other concerns. “If you develop what are essentially semiprofessional respondents, does that shape what they say? Will it lead to less accurate predictions of what the general population does and thinks?” asks Clifford Lampe, associate professor of information at the University of Michigan.
Researchers have begun exploring options beyond online panels. A joint venture between YouGov and Microsoft, for instance, polled the political views of Xbox Live users for the 2012 presidential election. More than 20,000 unique respondents answered the daily polls, and about 30,000 answered each of three to five questions during the presidential debates. Although participants skewed heavily younger and male, the sheer numbers enabled Xbox/YouGov pollsters to question nearly 1,000 respondents who said they were completely undecided before one debate. They judged Barack Obama the winner by a 51-to-17 percent margin.
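A sample that skews young and male, as the Xbox panel did, is commonly corrected by post-stratification weighting: each respondent is weighted by the ratio of their demographic cell’s population share to that cell’s share of the sample. A toy sketch with invented numbers (not the actual Xbox/YouGov data or method):

```python
# Assumed population shares for four illustrative demographic cells.
population_share = {"young_male": 0.15, "young_female": 0.15,
                    "older_male": 0.33, "older_female": 0.37}

# Invented raw sample: (cell, supports_candidate), heavily young and male.
sample = ([("young_male", True)] * 300 + [("young_male", False)] * 200 +
          [("young_female", True)] * 60 + [("young_female", False)] * 40 +
          [("older_male", True)] * 40 + [("older_male", False)] * 60 +
          [("older_female", True)] * 45 + [("older_female", False)] * 55)

def weighted_support(sample, shares):
    """Weight each respondent by population share / sample share of their cell."""
    cell_n = {}
    for cell, _ in sample:
        cell_n[cell] = cell_n.get(cell, 0) + 1
    n = len(sample)
    total_w = supp_w = 0.0
    for cell, supports in sample:
        w = shares[cell] / (cell_n[cell] / n)
        total_w += w
        if supports:
            supp_w += w
    return supp_w / total_w

raw = sum(s for _, s in sample) / len(sample)
print(f"raw {raw:.3f} vs weighted {weighted_support(sample, population_share):.3f}")
```

In this made-up example the raw sample overstates support because the over-represented young men favor the candidate; reweighting pulls the estimate back toward what the full electorate’s composition implies.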
Google Consumer Surveys infers the gender, age, and geographic location of potential respondents from their browsing history and IP address. Customers then can select a target audience to question, with respondents receiving access to premium online news and entertainment, or credits for books, music, and apps. Google Consumer Surveys predicted Obama winning the 2012 popular vote by 2.3 percentage points, close to his actual margin. “All these new methods are not understood very well yet,” cautions Kreuter, who will be watching to see how well they forecast the 2016 election results.
Query Data, Not People?
Future polling may eschew interviews and instead infer how people think or will behave based on the giant bodies, or “corpora,” of data they generate while shopping online, tweeting, or browsing. For instance, data scientists have scanned Twitter postings to gauge global mood; searched for early warning signs of stock-market changes by analyzing how often people performed Google searches for financially related words; and tracked views of disease-related Wikipedia pages to monitor and forecast the spread of illnesses.
Michigan’s Lampe remains skeptical of claims made by big-data investigators that the information not only is “more readily accessible than interviews, but that it’s more valid, since you’re not asking people if they have an opinion, but you’re seeing what opinions they’re expressing on their own.” He says there’s truth in critiques from the traditional survey community that these data are not representative of the whole population, observing that “only 18 percent of the Internet-using population uses Twitter.”
Big data may end up complementing rather than replacing traditional polling. “It’s apples and oranges,” Lampe argues, calling it a “mistake” to equate Twitter users with respondents and tweets with responses. “You can’t use the data to say ‘14 percent of white males think this.’ You have to analyze the corpus as a whole to understand it.” Meanwhile, the 2016 pack of presidential hopefuls already has surprised political handicappers. No matter the margin of error, future horse races still ride on differences of opinion.
By Charles Q. Choi
Charles Q. Choi is a New York-based freelance writer specializing in science.