
Pliny's Point polling methodology

For Ballotpedia's presidential approval, congressional approval, direction of the country, and generic congressional ballot polling results, we take an average of the most recent polls (from the last 30 days, with some exceptions for major news events) on one or more of these topics conducted by the sources listed in the table below.

Ballotpedia chose to include polls from these organizations in our averages because we regard their methodologies as broadly trustworthy and reliable. If you know of other outlets that conduct polling on these topics, email us. We average the results and show all polling results side by side because we believe that paints a clearer picture of public opinion than any individual poll can provide. The data is updated daily as new polling results from these sources are published.
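As a rough sketch of what such an average involves, the snippet below keeps each source's most recent poll from a 30-day window and takes the mean. The data structure, source names, and figures here are hypothetical placeholders for illustration; Ballotpedia's actual pipeline is not published in this article.

```python
from datetime import date, timedelta

# Hypothetical poll records: (source, end date of fieldwork, approval %).
# All figures are invented for illustration, not real polling results.
polls = [
    ("Gallup",        date(2017, 8, 20), 39.0),
    ("Quinnipiac",    date(2017, 8, 17), 38.0),
    ("Rasmussen",     date(2017, 8, 23), 44.0),
    ("Reuters/Ipsos", date(2017, 8, 22), 37.5),
    ("Gallup",        date(2017, 7, 10), 40.0),  # older than 30 days, dropped
]

def polling_average(polls, as_of, window_days=30):
    """Average the most recent poll per source within the window."""
    cutoff = as_of - timedelta(days=window_days)
    latest = {}
    for source, ended, value in polls:
        if ended >= cutoff:
            # Keep only the newest qualifying poll from each source.
            if source not in latest or ended > latest[source][0]:
                latest[source] = (ended, value)
    values = [value for _, value in latest.values()]
    return sum(values) / len(values) if values else None

print(polling_average(polls, as_of=date(2017, 8, 24)))  # 39.625
```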


Understanding polling

Below we briefly highlight three aspects of public polling that illustrate both the complexity of polling and how polls tend to differ from one another. Understanding these concepts is key to interpreting what polls mean and underscores the value of aggregating polling results.

Methodologies of featured polls

Source | Sample size | Margin of error (+/- percentage points) | Contact method | FiveThirtyEight grade*
CBS News | 1,111 adults | 4 | Live phone | N/A
The Economist / YouGov | 1,327 registered voters | 3.2 | Online | B
Fox News | 1,020 registered voters | 3 | Live phone | B
Gallup | 1,500 adults | 3 | Live phone | B-
Pew Research | 2,504 adults | 2.3 | Live phone | B+
Quinnipiac University | 1,514 registered voters | 3.1 | Live phone | A-
Rasmussen Reports (Pres. Approval) | 1,500 likely voters | 2.5 | Automated phone + online | C+
Rasmussen Reports (Direction of Country) | 2,500 likely voters | 2 | Automated phone |
Reuters / Ipsos | 2,744 adults | 2.1 | Online | A-
USA Today / Suffolk University | 1,000 adults who identified as registered voters | 3 | Live phone | B+
The Wall Street Journal / NBC | 900 adults | 3.27 | Live phone | A-
The Washington Post / ABC | 1,014 adults | 3.5 | Live phone | A+
Politico / Morning Consult | 1,987 registered voters | 2 | Online | N/A
Investor's Business Daily / TechnoMetrica (IBD/TIPP) | 904 adults | 3.3 | Phone | A-
Public Policy Polling | 887 registered voters | 3.3 | Phone + online | B+

Last updated August 24, 2017

*FiveThirtyEight pollster ratings are calculated based on historical accuracy in predicting elections, sample sizes, methodology, etc. Find out more here.

Contact method

Pollsters use a variety of methods to contact potential survey participants. From the 1930s to the 1980s, pollsters generally did their work through direct contact: going door to door, a remarkably expensive and time-consuming method.[1] Nowadays, pollsters rely on telephones and the internet. Neither approach comes without challenges. Fewer Americans today, for example, live in households with landlines than they did 20 or even 10 years ago. On the other hand, not every American (particularly in older generations) has a cell phone. To get around this, many pollsters call a combination of landlines and cell phones for a survey. An additional problem is that, with the rise of caller ID, fewer people pick up the phone to participate in surveys, part of a systemic problem in the modern polling industry: declining response rates. Some pollsters have looked to the internet as a workaround for this issue, but analysts continue to debate the accuracy and dependability of online polls.[2][3]

A study by FiveThirtyEight found that variances in polls about President Trump's favorability stemmed primarily from the collection method. Polls of registered or likely voters tended to be more favorable to Trump than those that polled adults generally. Automated or online polls also produced more favorable ratings than those conducted with live phone calls. The data for these findings came from polls conducted between February 1 and February 19, 2017.[4]

There are also differences among polling firms in who contacts the participants. Some phone-based surveys use live interviewers, while others use automated interactive voice response systems.[3]

The sample and margin of error

Pollsters can't realistically contact every American adult throughout the country and ask their opinion on a given issue. Instead, they try to contact a representative sample, usually between 1,000 and 2,000 individuals, that accurately reflects the country's population as a whole. Pollsters, with the help of statisticians, demographers, and data experts, use a variety of techniques to create a representative sample. This typically involves using probability formulas and algorithms to ensure random sampling and to increase the likelihood of contacting an accurate cross-section of the U.S. adult population. Some pollsters also create panels of respondents that they believe reflect the actual population and poll them repeatedly over a span of time; these polls are usually called tracking polls. Oftentimes, pollsters weight their respondents' answers to account for various demographic measurements. For example, a pollster might weight more heavily the responses from a specific demographic group if that group was underrepresented in the random sample relative to the country's estimated demographic composition. The same might be done if a group appears to be overrepresented.
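A minimal sketch of this kind of weighting, with made-up shares for a single age variable: each respondent's weight is their group's population share divided by that group's share of the sample. Real pollsters typically weight on several variables at once (a procedure known as raking), so this single-variable version is illustrative only.

```python
# Hypothetical sample vs. population shares for one demographic variable
# (age). All figures are invented for illustration.
sample_share     = {"18-29": 0.10, "30-49": 0.35, "50-64": 0.30, "65+": 0.25}
population_share = {"18-29": 0.20, "30-49": 0.33, "50-64": 0.26, "65+": 0.21}

# A respondent's weight is their group's population share divided by
# that group's share of the sample.
weights = {g: population_share[g] / sample_share[g] for g in sample_share}

# Responses from 18-29-year-olds count double (0.20 / 0.10 = 2.0)
# because the sample reached only half as many of them as it should have.
for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
```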

Samples are also where margins of error (MoE) come into play. The MoE describes the potential range of variation between a poll's results and the views of the actual population. For example, if a poll with a margin of error of 3 percentage points showed that 47 percent of respondents approve of candidate X, the pollster estimates, based on the representative sample, that anywhere between 44 and 50 percent of the actual population approves of candidate X. Generally speaking, a larger sample size means a smaller MoE, while a smaller sample size means a larger MoE. Other factors, such as the poll's design, probability formulas, and weighting methods, can also affect the MoE.[5][6]
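For a simple random sample, the worst-case margin of error at 95 percent confidence can be approximated with the standard formula MoE = z * sqrt(p * (1 - p) / n), evaluated at p = 0.5. The sketch below shows that this approximation roughly tracks the figures in the table above; published MoEs differ because real polls also account for design effects and weighting.

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case 95% MoE, in percentage points, for a simple random sample."""
    return 100 * z * sqrt(p * (1 - p) / n)

# Rough comparison with sample sizes from the table above; the published
# figures are somewhat larger because of design effects and weighting.
for n in (1014, 1500, 2504):
    print(f"n={n}: +/- {margin_of_error(n):.1f} points")
# n=1014: +/- 3.1   (The Washington Post / ABC reports 3.5)
# n=1500: +/- 2.5   (Gallup reports 3)
# n=2504: +/- 2.0   (Pew Research reports 2.3)
```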

Question framing

Though all polling firms, in general, are after the same goal—to find out what the public thinks about a given topic or issue—they don’t always ask their questions the same way. Studies have found that differences in how questions are worded—even subtle differences—can lead to a range of results. In 2003, for example, Pew Research found that when they asked respondents if they “favor or oppose taking military action in Iraq to end Saddam Hussein’s rule,” a total of 68 percent responded that they favor military action. But when Pew added to the end of that question, “... even if it meant that U.S. forces might suffer thousands of casualties,” 43 percent responded in favor of military action.[7]

The number of possible answers that pollsters provide to respondents has also been known to produce different results. With questions about presidential approval and disapproval, for instance, some firms only give respondents the options of saying approve or disapprove. Other firms, however, give respondents more flexibility by allowing them to respond with answers such as “strongly approve” or “somewhat disapprove.” Again, these slight differences have historically led to differing results among polling firms.[8]

Trust in sources

Public perception of the various sources cited here varies. Pew Research published a study on this topic in 2014, detailing how members of different ideological groups (conservatives and liberals) trusted or distrusted popular media organizations. The results from this study for the news organizations included in Ballotpedia's polling data are listed below. By providing a variety of polling sources side by side, we hope to mitigate the influence of potential bias. All of the major news sources selected for Ballotpedia's polling index were rated as more trusted than distrusted in the overall results from all respondents.[9]

Trust levels in polling sources by ideology

The following table includes broad summaries of the data found in a survey by Pew Research. Full results are available here.

Source | Trust by ideology
The Economist | Mostly trusted by all groups (except those identified as consistently conservative, where trust and distrust were about equal)
ABC News | Somewhat not trusted by conservatives
CBS News | Somewhat not trusted by conservatives
Fox News | Mostly not trusted by liberals
NBC News | Somewhat not trusted by conservatives
The New York Times (sometimes polls in conjunction with CBS) | Mostly not trusted by conservatives
Politico | Mostly not trusted by conservatives
USA Today | Somewhat not trusted by conservatives
Wall Street Journal | Mostly trusted by all groups
The Washington Post | Mostly not trusted by conservatives


Footnotes