Five Things You Should Know about Election Polling


By Monika L. McDermott, Ph.D.

 

Monika L. McDermott, Ph.D., is professor of American politics/political behavior. McDermott is also a survey research practitioner who has conducted election surveys at the Los Angeles Times Poll, the CBS News Election and Survey Unit, and the Center for Survey Research and Analysis at the University of Connecticut. She is currently an election and polling analyst for CBS News and The New York Times.
Photo by Chris Taggart

At this stage of an election, public polls—conducted by the media, universities, and private polling companies—provide us with daily doses of information about how Americans feel about the candidates. Most importantly, they tell us who is ahead and who is behind in the vote. Lately there has been some controversy surrounding election polls and their accuracy. To help make sense of that controversy, and of the polls themselves, here are five things Americans should know about election polling.


1. Why pollsters conduct election polls. Pollsters conduct election polls because survey results sell. For media outlets, polls produce content that the public eagerly consumes. For universities and private polling firms, conducting and publicizing election polls builds name recognition and credibility (Quinnipiac who?). As a result, “horserace” numbers—the head-to-head election match-up results—come at a rapid pace at this stage of a campaign. Americans are conflicted over this deluge. In a 2008 Pew poll, a majority of Americans said they would like to see less coverage of horserace polls. At the same time, research shows that the public is fascinated by horserace coverage and even seeks it out. Horserace coverage is a guilty pleasure: like reality TV, no one admits to liking it, but an awful lot of people consume it.

2. Who pollsters are actually polling. The survey population varies depending on the poll’s purpose or, in the case of elections, on when the poll is conducted. There are three relevant populations in election polling: American adults, registered voters, and likely voters. When polling well in advance of an election, many pollsters interview the general adult population because it is too early to know who will register and vote. During an election year, registered voters are a common population because they are a concrete, eligible group (and one more likely to vote than the non-registered). As the election nears, pollsters begin the search for “likely” voters, those who will actually show up to vote.

3. What a “likely voter” is. Likely voters are a slippery bunch to net. Since pollsters want to interview only those who are actually going to vote (to accurately gauge the state of the race), they need to weed out respondents who say they will vote but won’t. There are two common ways to do this: (1) drop from the analysis individuals who are deemed unlikely to vote, or (2) apply a statistical weight to each respondent’s answers based on that person’s probability of voting. Both methods typically consider key factors such as an individual’s past turnout record, level of engagement with the election, and stated likelihood of voting.
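To make the difference between the two approaches concrete, here is a minimal sketch in Python. The respondent records, the scoring formula, and the 0.6 cutoff are all hypothetical and invented purely for illustration; no polling organization’s actual likely voter model is this simple.

```python
# Illustrative sketch only: a toy likely-voter screen and a toy turnout weight.
# All fields, weights, and thresholds are hypothetical, not any pollster's model.

respondents = [
    # voted_2008: past turnout record; interest: engagement with the election (0-10);
    # intent: stated likelihood of voting (0-1); candidate: current vote preference
    {"voted_2008": True,  "interest": 9, "intent": 0.9, "candidate": "A"},
    {"voted_2008": False, "interest": 3, "intent": 0.4, "candidate": "B"},
    {"voted_2008": True,  "interest": 6, "intent": 0.7, "candidate": "B"},
]

def turnout_probability(r):
    """Combine stated intent, engagement, and past turnout into a rough score."""
    return (0.5 * r["intent"]
            + 0.3 * (r["interest"] / 10)
            + 0.2 * (1 if r["voted_2008"] else 0))

# Method 1: screen -- keep only respondents above a cutoff, count them equally.
likely = [r for r in respondents if turnout_probability(r) >= 0.6]
screened_share_A = sum(r["candidate"] == "A" for r in likely) / len(likely)

# Method 2: weight -- keep everyone, but weight each answer by turnout probability.
weights = [turnout_probability(r) for r in respondents]
weighted_share_A = (sum(w for r, w in zip(respondents, weights) if r["candidate"] == "A")
                    / sum(weights))

print(f"Candidate A: {screened_share_A:.0%} (screened) vs. {weighted_share_A:.0%} (weighted)")
```

Even in this toy example the two methods produce slightly different horserace numbers from the same raw interviews, which is one reason different pollsters’ likely voter models can yield different results.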

4. What poll bias is and where it comes from. The nature of a likely voter sample directly impacts the results of a survey. Counter to accusations, however, reputable pollsters do not try to bias their samples. In fact, it is in their best interest to be as accurate as possible—credibility depends on accuracy. At the same time, likely voter models are not always as accurate as a pollster might like. This year the big danger is over-representing Democratic voters. Democratic voters turned out in exceptionally high numbers in 2008—making up 7 percentage points more of the electorate than Republicans did, a larger gap than usual. Few experts expect Democratic turnout in 2012 to match that of 2008, leaving open the possibility that likely voter models based on 2008 turnout will be biased toward Obama. That said, pollsters are unlikely to consider only the 2008 election when constructing their models, and no two models are exactly alike. As a result, it is implausible that these models are systematically skewed in any one direction.

5. Attacking pollsters’ methods is the last refuge of a trailing candidate. As surely as the sun rises every day, the candidate who is trailing in the polls will cast aspersions on a specific polling organization, a polling method, or both. For example, the Romney campaign, trailing in the polls recently, launched a national fight against the likely voter models of the major polls, claiming they are biased toward Obama because they include too many Democrats. But before anyone considers Romney merely a sore loser, we have to remember back to May of this year, when an Obama aide said (on MSNBC) about a CBS News poll that had the President trailing: “We can’t put the methodology of that poll aside. Because the methodology was significantly biased. It is a biased sample.” Campaigns are usually silent about polling methods when their candidate is in the lead.
