What Makes a Good Poll Good?

These days it is all too easy to take information found on TV or the internet as fact. We look at poll aggregators like RealClearPolitics, FiveThirtyEight, and Pollster.com, but we rarely question the underlying methodologies of the polls being aggregated. We rarely ask what Rasmussen or Selzer actually do to produce the numbers that we so widely share and discuss. We therefore thought it would be a worthwhile exercise to delve into the methodologies of four respected polling agencies in this election to better understand election polling.

Monmouth University

Monmouth University has emerged in recent elections as one of the premier polling institutes in the country. It is one of only six pollsters to have earned an A+ rating in FiveThirtyEight's pollster ratings. So how do they do it? For starters, Monmouth relies on live-caller interviews: if you were polled by Monmouth, an actual person called and asked you the questions. Respondents are selected through a mix of methods. In a September poll of 802 people, for example, 402 respondents were drawn from a list of registered voters (split evenly between landline and cell phone) while the remaining 400 were reached through random digit dialing (again split evenly between landline and cell phone). Notably, pollster.com lists the sample as being drawn entirely from registered voters, a reminder that poll aggregators are not always completely accurate. Both the sample construction and the calling itself were outsourced to outside organisations.
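
To make that mixed sampling frame concrete, here is a minimal Python sketch. Everything in it is hypothetical (the phone frames, the number format, the helper names); it only approximates Monmouth's 402/400 list/RDD split, with an even landline/cell division on each side:

```python
import random

random.seed(0)

def random_digit_dial(n):
    # Illustrative only: generate n random US-style phone numbers.
    return [f"{random.randint(200, 999)}-{random.randint(200, 999)}-"
            f"{random.randint(0, 9999):04d}" for _ in range(n)]

def draw_mixed_sample(list_landline, list_cell, total=800):
    # Half the sample from a registered-voter list, half via random
    # digit dialing, each half split evenly landline/cell.
    quarter = total // 4
    return (random.sample(list_landline, quarter)   # voter list, landline
            + random.sample(list_cell, quarter)     # voter list, cell
            + random_digit_dial(quarter)            # RDD, landline
            + random_digit_dial(quarter))           # RDD, cell

# Hypothetical voter-list frames to draw from.
landline_frame = random_digit_dial(5000)
cell_frame = random_digit_dial(5000)
print(len(draw_mixed_sample(landline_frame, cell_frame)))  # 800 numbers to dial
```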

To make the sample representative, Monmouth weights respondents. For the same September poll, the table below shows the demographics after weighting; notice that respondents end up almost evenly distributed across demographic groups:

 

[Table: weighted demographic distribution of Monmouth's September poll]

Notably, Monmouth cautions that sampling error may be larger within certain subgroups.
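
Monmouth does not spell out its weighting algorithm in the toplines, but demographic weighting of this kind is typically done by raking (iterative proportional fitting): respondent weights are adjusted until the weighted sample matches known population margins. A minimal sketch, with made-up demographics and targets rather than Monmouth's actual figures:

```python
import numpy as np
import pandas as pd

def rake(df, targets, max_iter=50, tol=1e-6):
    # Iterative proportional fitting: scale weights so the weighted
    # sample matches the target share for every demographic category.
    w = np.ones(len(df))
    for _ in range(max_iter):
        max_shift = 0.0
        for col, dist in targets.items():
            for category, share in dist.items():
                mask = (df[col] == category).to_numpy()
                current = w[mask].sum() / w.sum()
                if current > 0:
                    factor = share / current
                    w[mask] *= factor
                    max_shift = max(max_shift, abs(factor - 1))
        if max_shift < tol:
            break
    return w * len(df) / w.sum()   # normalize so weights average to 1

# Toy sample and hypothetical population targets (not Monmouth's figures).
sample = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F", "M", "M"],
    "age":    ["18-34", "35-54", "35-54", "55+", "18-34", "55+", "55+", "35-54"],
})
targets = {
    "gender": {"F": 0.52, "M": 0.48},
    "age":    {"18-34": 0.30, "35-54": 0.35, "55+": 0.35},
}
sample["weight"] = rake(sample, targets)
print(sample)
```

Each pass rescales the weights toward one margin at a time; repeated passes bring all margins into agreement simultaneously.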

SurveyUSA

As the only pollster ranked A or higher by FiveThirtyEight that does not call cell phones, SurveyUSA uses an unusual yet effective methodology to construct a reliable poll. SurveyUSA reduces interviewer bias by not hiring outside interviewers at all: they are the interviewers. On their website, SurveyUSA points out that other highly rated polls still outsource their interviewing, and an unknown human caller inherently creates an opportunity for bias. Because the largest polls outsource to call centers, they remain blind to the quality of the callers those centers hire. If an interviewer mispronounces a name, speaks with a heavy accent, or adds his or her own opinion to a question, whether intentionally or not, that may change a respondent's answer. By conducting the interviews themselves, the pollsters at SurveyUSA believe they have removed this source of bias. They argue that the quality of SurveyUSA's own staff is much higher than that of a call center, as their employees are trained to remain impartial, enunciate clearly, and speak slowly. SurveyUSA is owned and operated by journalists who have honed the language of polling.

Although they do not outsource to call centers, they do purchase random telephone samples from companies that specialize in randomization. This further limits bias in their sample: with a genuinely random sample, SurveyUSA can quantify sampling error and carry out its procedure with confidence. These methods have propelled SurveyUSA to become one of the premier pollsters in American politics.
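
A random sample is what makes sampling error quantifiable in the first place: the 95% margin of error depends only on the sample size. A quick sketch of the standard worst-case calculation (the sample sizes below are hypothetical, not a specific SurveyUSA poll):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of error for a proportion from a simple random sample.
    # p = 0.5 is the worst case, which pollsters conventionally report.
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 800, 1200):
    print(f"n={n}: +/-{margin_of_error(n):.1%}")
# n=500: +/-4.4%   n=800: +/-3.5%   n=1200: +/-2.8%
```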

 

Selzer & Co

FiveThirtyEight called Ann Selzer of Selzer & Co “the best pollster in politics,” but do her polls really deserve that title?

The Bloomberg Politics poll by Selzer & Co takes a straightforward approach to earning its A+ rating. For their September 26th poll they surveyed 1,002 US adults deemed likely voters in this election and weighted them by age, race, and education. The weighting targets came from a random sample of 1,326 US adults reached by randomly dialing landlines and cell phones.
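
In effect, the larger random sample supplies the population targets, and the likely-voter sample is adjusted to match them. A minimal sketch of that two-stage cell weighting, using invented counts and a single demographic (education) for brevity; the real poll weights on age, race, and education:

```python
import pandas as pd

# Hypothetical reference sample of 1,326 adults (Selzer's actual
# targets are not published here).
reference = pd.DataFrame({
    "education": ["college"] * 450 + ["no_college"] * 876,
})
# Hypothetical likely-voter sample of 1,002.
likely_voters = pd.DataFrame({
    "education": ["college"] * 400 + ["no_college"] * 602,
})

# Population shares estimated from the larger random sample...
targets = reference["education"].value_counts(normalize=True)
# ...compared with shares among the likely voters.
observed = likely_voters["education"].value_counts(normalize=True)

# Cell weighting: each respondent gets target share / observed share.
likely_voters["weight"] = likely_voters["education"].map(targets / observed)
print(likely_voters.groupby("education")["weight"].first())
```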

Their sampling method shows little evidence of party bias, which may stem from Selzer's philosophy of "keep your dirty hands off your data" (fivethirtyeight.com). Most pollsters show a mean-reverted bias toward one party or the other, but Selzer & Co received a 0.0 bias rating across the 37 polls analyzed, meaning they leaned toward neither party. That lack of bias is significant: every other pollster with an A/A+ rating was biased one way or the other.
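
FiveThirtyEight's bias rating is, at its core, an average signed error across a pollster's polls compared against election results (their published figure also mean-reverts and adjusts in ways omitted here). A toy version with hypothetical poll-versus-result margins:

```python
def house_bias(polls):
    # Average signed error: (poll margin - actual margin), where
    # margin = Democrat % - Republican %. Zero means no lean either way.
    errors = [poll_margin - actual_margin for poll_margin, actual_margin in polls]
    return sum(errors) / len(errors)

# Hypothetical (poll margin, actual result margin) pairs, in points.
polls = [(+4.0, +2.0), (-3.0, -1.0), (+1.0, +1.0)]
print(f"bias: {house_bias(polls):+.1f} pts")   # +0.0: the errors cancel out
```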

However, the lack of bias may not have worked in their favor. Among pollsters with an A/A+ rating, Selzer & Co ranked fourth from last in the percentage of races called correctly. Before the first debate they also had Trump and Clinton deadlocked at 46% each, or 43% Trump to 46% Clinton once third-party candidates were included. Of course, a poll taken at that point could not account for the first two debates or the release of Trump's video with Billy Bush.

ABC News/Washington Post

ABC News/Washington Post is another organization that, according to FiveThirtyEight, earns an A+ rating for its "historical accuracy and methodology of polls." However, ABC also has the lowest percentage of races called correctly among the A+ pollsters at 78%, a five-point gap below the others' historical accuracy. This suggests that FiveThirtyEight places considerable weight on the methodology ABC uses to conduct its polls.

That methodology can be seen in the poll conducted October 10-13th: a landline telephone questionnaire of 1,152 randomly selected adults, categorized as likely or registered voters, who answered 23 questions intended to capture voter attitudes.

Several aspects of the poll are effective. For example, it opens by asking how closely the individual is following the election, their likelihood of voting, their likelihood of changing their mind, and their enthusiasm about the election, each expressed across a range of responses that allows for non-attitudinal answers as well. It also asks for the individual's opinion not only of Clinton and Trump but also of the third-party candidates, which allows greater accuracy when determining the spread. Several of these questions have also appeared in prior polls, which provides consistency over time.

However, there were also several issues with the poll. In terms of population outreach, only individuals with landlines were able to respond, which skews the demographics of those polled even though selection was random. The questions about party alignment might not have been problematic had there not been three similarly styled questions in a row, which can feel accusatory to a respondent who is not politically engaged.

Finally, the poll reflects the mean-reverted bias that FiveThirtyEight pointed out in its profile of ABC. This shows up in questions that explicitly ask whether Trump's comments during the debate were appropriate, and in the fact that 5 of the 23 questions concern women's rights, Bill Clinton, and Trump without even mentioning Secretary Clinton. The poll also does not ask for party identification until the end of the questionnaire.

Ultimately, the October 13th poll reflected the typical ABC poll, in line with the 3-point average error FiveThirtyEight anticipated (Clinton 47%, Trump 43%).

Conclusion

In sum, analysing the methodologies of these polls has some interesting implications for the way we view and interpret polls. While the polls differ in methodology and line of questioning, they all yield similar results, as is evident from their ratings on FiveThirtyEight. On the flip side, looking into those differences makes one begin to question the accuracy of poll aggregators. Even when results are similar, differences in methodology produce different skews and errors; errors across polls may cancel each other out, but there is also a risk of reinforcing a particular bias if every poll shares it. Suffice it to say that a simple aggregation like those on pollster.com or RCP is a good way to get a rough idea of where the candidates stand, but an aggregator like FiveThirtyEight, which accounts for differing methodologies and historical results, is likely to be more accurate.
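
To make that distinction concrete, here is a toy comparison of a plain average against an aggregation that weights each poll by sample size and pollster rating and subtracts its house lean. The weights, ratings, and leans are invented for illustration and are not FiveThirtyEight's actual model:

```python
def simple_average(polls):
    # RCP-style aggregation: a plain average of recent poll margins.
    return sum(p["clinton"] - p["trump"] for p in polls) / len(polls)

def weighted_average(polls):
    # FiveThirtyEight-style idea (greatly simplified): weight each poll
    # by rating and sample size, and subtract its known house lean.
    rating_weight = {"A+": 1.0, "A": 0.9, "B": 0.6, "C": 0.3}
    num = den = 0.0
    for p in polls:
        w = rating_weight[p["rating"]] * (p["n"] / 1000) ** 0.5
        num += w * (p["clinton"] - p["trump"] - p["house_lean"])
        den += w
    return num / den

# Hypothetical polls (percentages, sample sizes, and leans invented).
polls = [
    {"clinton": 46, "trump": 43, "n": 1002, "rating": "A+", "house_lean": 0.0},
    {"clinton": 47, "trump": 43, "n": 1152, "rating": "A+", "house_lean": +1.5},
    {"clinton": 45, "trump": 44, "n": 800,  "rating": "B",  "house_lean": -1.0},
]
print(f"simple:   Clinton +{simple_average(polls):.1f}")
print(f"adjusted: Clinton +{weighted_average(polls):.1f}")
```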

http://www.bloomberg.com/politics/articles/2016-09-26/national-poll

http://fivethirtyeight.com/features/selzer/

https://www.monmouth.edu/polling-institute/reports/MonmouthPoll_US_092616/

 
