October 23, 2014

As People Shun Pollsters, Researchers Put Online Surveys to the Test

For a certain sample of the university population, few words evoke more horror than these: "President Alfred M. Landon."

It is the "Dewey Defeats Truman" of public-opinion polling, a bedrock method of the social sciences. During the 1936 U.S. presidential election, The Literary Digest sent questionnaires to 10 million readers, and, on the basis of the responses, predicted that Landon, the Republican challenger, would handily defeat President Franklin D. Roosevelt. The magazine had called the previous four presidential contests correctly but, that year, could hardly have been more wrong: Roosevelt carried 46 states.

For decades, the Digest debacle has stood as a warning to survey researchers, as academic pollsters call themselves: Watch your methods and, above all, draw conclusions based on truly random selections.

Yet now a host of researchers, spurred by the rising cost of telephone polls and plummeting participation rates, are pushing to use a new generation of online-only surveys for their research. This work, which relies on subjects volunteering to be polled, carries great promise of allowing researchers to expand experiments beyond the usual suspects. But it also carries perils. At worst, it puts the field at risk of another Landon moment.

"Right now we don't know if the methods they employ are, or are not, going to have a catastrophic failure," says Robert Santos, chief methodologist at the Urban Institute, a nonpartisan think tank.

"The vast majority of social scientists would say these volunteer polls have no reason to be right," says Robert M. Groves, provost of Georgetown University and a former director of the U.S. Census Bureau. "They have no theory behind them at all."

Such arguments nag at Andrew Gelman, a professor of statistics and political science at Columbia University. "There's a reason why people aren't sticking with the old stuff," he says. Every survey now requires massaging data to account for low response rates; the ideal poll no longer exists. Researchers need new methods, he says. "Traditional polls are not so wonderful."

This simmering debate popped into public view this month, when the American Association for Public Opinion Research, the discipline's professional body, sent a letter to The New York Times warning against its recent use of online surveys. The letter was broadly written and seemed to indict online polling as a whole.

Members of the association's email list were soon in a furious internal debate, and Mr. Gelman prominently criticized the statement. The association's president, Michael W. Link, regrets some of the language chosen for the letter. It was meant as a caution to the public—especially news outlets—and not as a condemnation of the research, he says. "Maybe the statement could have been a little clearer."

However it was meant, the letter highlighted the curious state in which survey research finds itself. As Mr. Groves has written, if there was ever a war between our guts and our statistics, the quants have won. Data are the currency of business, government, science, even higher education. There has never been more interest in polling; following Nate Silver, the media have rushed toward data analysis. The truth, if it can be found, simply must be hidden among those numbers.

Yet at this moment of demand, polling is in crisis. The costs have spiraled out of control. The public is harder than ever to reach. Landlines are dwindling, and rare is the person who takes an unknown call on her cellphone. Robocalls and junk polls clog the air. We all want to know what the public thinks—but who has the time to talk?

A Changing Landscape

Decades ago, it was different. If Gallup rang with a poll, it made sense to respond: such requests were rare, and answering helped shape an important national question. Maybe 80 percent of the people contacted would respond. Today polls are plentiful and often slanted toward political ends. Responding is no longer rational, and few people do; response rates frequently sit near 10 percent. That makes thorough surveys prohibitively expensive as pollsters chase nonresponders. The Census Bureau spent billions of dollars reaching the final 2 percent of the population for the 2010 count, Mr. Groves says.

Back in 2008, Brian F. Schaffner, now a political scientist at the University of Massachusetts at Amherst, saw that cost inflation as he worked at the National Science Foundation. It was there he first encountered an online survey method pioneered by Douglas Rivers, a professor of political science at Stanford University, that was increasingly appearing in the field's top journals. Those surveys were cheap, requiring little manpower. But were they accurate?

At its heart, polling is driven by the statistical fundamental of probability sampling: You can extrapolate information about a larger body, like the American public, only if the subset surveyed is randomly selected from the entire population. That's relatively easy to do with street addresses or telephone numbers; in an ideal world where everyone responded, that would be it. But when few people reply, the data have to be "reweighted": respondents are divided into clumps based on demographics like age, race, sex, and ethnicity, and each clump is then scaled to match its share of the population.
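A minimal sketch of that reweighting step, using a single demographic variable (real weighting combines many at once); the respondents, age groups, and population shares below are invented for illustration:

```python
from collections import Counter

# Hypothetical respondents to a low-response poll: each record holds an age
# group and a candidate preference. These rows are invented for illustration.
respondents = [
    {"age": "18-34", "prefers": "A"}, {"age": "18-34", "prefers": "B"},
    {"age": "35-64", "prefers": "A"}, {"age": "35-64", "prefers": "A"},
    {"age": "35-64", "prefers": "B"}, {"age": "65+",   "prefers": "B"},
    {"age": "65+",   "prefers": "B"}, {"age": "65+",   "prefers": "B"},
]

# Assumed population share for each demographic "clump" (as if taken from
# census figures).
population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}

# Give each respondent a weight that scales his or her clump up or down until
# the weighted sample matches the population's demographic mix.
sample_counts = Counter(r["age"] for r in respondents)
n = len(respondents)
for r in respondents:
    r["weight"] = population_share[r["age"]] / (sample_counts[r["age"]] / n)

# Compare raw and reweighted estimates of support for candidate A.
total_weight = sum(r["weight"] for r in respondents)
weighted_a = sum(r["weight"] for r in respondents if r["prefers"] == "A") / total_weight
raw_a = sum(r["prefers"] == "A" for r in respondents) / n
print(f"Unweighted support for A: {raw_a:.0%}")    # 38%
print(f"Reweighted support for A: {weighted_a:.0%}")  # 48%
```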

It’s like cooking, Mr. Gelman says. Give chefs diverse, fresh ingredients, and they’ll toss them on the grill and serve them simply. But the ingredients of survey research are rarely so fresh. They need sauce.

Online surveys vary but tend to be united by one fact: Their respondents are self-selected. This gives traditional pollsters heartburn; the Literary Digest respondents, for example, were self-selected. The most prominent online survey is YouGov, which employs Mr. Rivers. Its respondents fill out polls to receive gift cards; millions are signed up. For an individual study, YouGov assembles a sample of panelists chosen to match the demographics of a randomly drawn sample from, say, a Census Bureau survey. For proponents of the method, that is analogous to reweighting after traditional polls.
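A rough sketch of that matched-sample idea, not YouGov's actual algorithm; the target cases, panelist records, and distance measure below are all hypothetical:

```python
# Hypothetical target cases, as if drawn at random from a census-style survey.
target = [
    {"age": 22, "female": 1, "college": 0},
    {"age": 47, "female": 0, "college": 1},
    {"age": 68, "female": 1, "college": 0},
]

# Hypothetical pool of volunteers who signed up to take polls for gift cards.
panel = [
    {"id": "p1", "age": 25, "female": 1, "college": 0},
    {"id": "p2", "age": 44, "female": 0, "college": 1},
    {"id": "p3", "age": 71, "female": 1, "college": 1},
    {"id": "p4", "age": 30, "female": 0, "college": 0},
]

def distance(a, b):
    """Crude demographic distance: the age gap plus penalties for mismatches."""
    return (abs(a["age"] - b["age"])
            + 10 * (a["female"] != b["female"])
            + 10 * (a["college"] != b["college"]))

# Greedily match each target case to the nearest panelist not yet used, so the
# surveyed group mirrors the demographics of the random target sample.
matched, used = [], set()
for person in target:
    best = min((p for p in panel if p["id"] not in used),
               key=lambda p: distance(person, p))
    used.add(best["id"])
    matched.append(best["id"])

print(matched)  # the panelists who would actually receive the survey
```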

"I frankly don’t agree with the notion that it’s an untested theory," Mr. Schaffner says. "No matter how you are adjusting surveys, you’re making assumptions."

The difference is the reliance on volunteers, according to Mr. Groves. People who choose to take many surveys must be different from the general public. "That's a killer problem with volunteer panels," he says.

If that’s a killer problem, it applies broadly, however, Mr. Gelman says: "If the nonresponse is high, any survey is an opt-in survey."

Flocking to YouGov

While researchers debate its methods, YouGov has continued to gain traction for its election predictions, which have been comparable in accuracy to those of traditional polls. Social scientists have flocked to it. Mr. Schaffner reviews dissertation proposals at the science foundation; many plan to use online panels. Some focus on views of elected representatives; some on race and the evaluation of political leaders; others on how behavior is changed by transparency initiatives.

"Online polling has really democratized the study of public opinion," Mr. Schaffner said. "It's opened it up to a lot more innovation."

In his own work, Mr. Schaffner compares how online and telephone surveys perform when they pose identical questions. For a study published this summer in the journal Political Analysis, he found them equally accurate, differing little in their estimates of political indicators and of the correlations among them. When he presents such analyses at conferences now, the audience is large and welcoming; a couple of years ago, he says, that was not the case.

The public-opinion association’s letter to the Times seems like an artifact from that time, Mr. Schaffner says. That may be due in part to the natural conservatism a standard-keeping body should possess. But online polling is also a threat: Academics have long been consumers of phone surveys, and online work could cut into the revenue of survey firms, which employ many members of the association. Indeed, late this summer, researchers at Harvard University began recruiting their own online panel, cutting out the middleman.

For his part, the association's president, Mr. Link, applauds YouGov's work; his concern is more about the lone academic unfamiliar with survey research who might conduct a faulty, nonrandomized poll. If such polls repeatedly make headlines and then are questioned, the reliability of the entire trade could fall into doubt.

In the meantime, research on online surveys should continue. "They need to take it when they think it’s working and try to break it," says Mr. Santos, of the Urban Institute. Online-survey evidence is comparable to the results of a drug studied on 30 people, he says. Promising, but you wouldn’t release it to the public yet. Leave it for the researchers. "They know the risks they are taking, and they can crash and burn."

That burn can be painful. The Literary Digest never recovered from the 1936 poll. Its name sullied, the magazine closed two years later, bought out by Time.
