
Polling: Why It Matters and How to Understand It

Published on 10/19/2024

Here we go. Early voting has begun in MoCo, which means registered voters with an ID can now cast a ballot, either absentee or in person, through Monday, November 4. Or they can vote on Election Day, Tuesday, November 5. (Next week we will cover what you’ll experience when you go to vote.)


Meanwhile, pollsters are doing their thing, taking snapshots of public sentiment leading up to the election. Each poll estimates how the vote might go if the election were held that week.


Pollsters have been doing this since the 1930s, as attendees of the October 3 Lunch with the League learned from students of Wabash College political science professor Shamira Gelbman.


Her class presented the history of polling, the modes of gathering poll responses, how sampling works, what to understand about reporting on polls, and how to tell the difference between politicians’ “leading” questions and pollsters’ questions that genuinely seek to measure public opinion.


You might wonder: why all the focus on polling? What does it really tell us, especially after pollsters predicted the wrong winner of the presidency in 2016 and overstated the margin of victory in 2020? So why do news organizations sponsor polling and report on it throughout the election year?


Polls are like windows into the context surrounding voting. Just as clean windows allow for clear sight, accurate polling gives a clear view of public sentiment. One of the most reputable polling agencies, the Pew Research Center, notes that polling “gathers and publishes information about the well-being of the public and about citizens’ views on major issues. And it provides an important counterweight to people in power, or those seeking power, when they make claims about ‘what the people want.’”

Polling both helps representatives understand their constituents and is a check-and-balance against any outlandish claims about “the will of the people.”


As Gelbman’s Wabash students presented, polling began face-to-face in the 1930s. Over the decades it moved to telephone landlines, and as people gave up landlines for the internet, pollsters moved online, often incentivizing people to participate. Each mode of polling has limitations that affect the results. Face-to-face, humans tend to adjust their responses to avoid giving perceived “unwanted answers.” This is called social desirability bias.


Explanatory note here: As social creatures, humans communicate largely through body language; tone of voice is the second most powerful channel, and words make up only part of how we communicate. Moving polling to the phone eliminated body language, reducing signals that could sway respondents. But when polling moved online, where human contact was eliminated altogether, people tended to answer in less authentic ways, scrolling quickly through surveys to get to the completion incentive.


How do pollsters know this? When polling moved to phones, pollsters saw closer alignment between reported answers and voting outcomes. But as more Americans ditched their landlines, the problems of getting accurate answers and finding good population samples cropped up again.


Samples matter in polling. Reputable pollsters concern themselves with understanding the demographics of the American population, and they attempt to gather data from a sample group that reflects the whole of us. When analyzing poll results, they have to consider the age, education level, race/ethnicity, gender, and socioeconomic status of their sample population and weight the answers to account for any discrepancies. Prior to 2016, they failed to account for education levels, not realizing how important education had become in gauging how people vote.


Samples that accurately reflect the population are difficult to obtain, so polling organizations weight responses when their samples don’t match the population’s proportions. Weighting is where the math comes in. For instance, Black Americans make up roughly 14% of the US population. If pollsters reach 5,000 people but fewer than 14% of them are Black Americans, they’ll weight the responses so each demographic counts in proportion to its share of the population, then base their estimates on those weighted calculations.
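For the mathematically curious, here is a minimal sketch of how that weighting arithmetic works, in Python. All of the numbers below (sample makeup, candidate support) are hypothetical, chosen only to illustrate the calculation, not taken from any actual poll:

```python
# A minimal sketch of demographic weighting. All figures are
# hypothetical: a two-group population where one group is 14% of
# the population but only 10% of the sample.
population_share  = {"black": 0.14, "non_black": 0.86}
sample_counts     = {"black": 500,  "non_black": 4500}   # 5,000 respondents
candidate_support = {"black": 0.70, "non_black": 0.48}   # invented numbers

total = sum(sample_counts.values())

# Each group's weight = its population share / its sample share,
# so underrepresented groups count proportionally more.
weights = {g: population_share[g] / (sample_counts[g] / total)
           for g in sample_counts}

# Unweighted estimate: every respondent counts equally.
unweighted = sum(sample_counts[g] * candidate_support[g]
                 for g in sample_counts) / total

# Weighted estimate: responses scaled to match the population.
weighted = sum(sample_counts[g] * weights[g] * candidate_support[g]
               for g in sample_counts) / total

print(f"Unweighted support: {unweighted:.1%}")  # 50.2%
print(f"Weighted support:   {weighted:.1%}")    # 51.1%
```

Because the underrepresented group in this invented example favors the candidate more strongly, weighting nudges the estimate up by about a point, which is exactly the kind of correction pollsters missed in 2016 when they failed to weight by education.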


Polling organizations with integrity also have to craft neutral questions, that is, questions that don’t use strongly emotional words that lead people to give biased responses instead of their true opinions. As the survey platform Survicate explains, “Leading questions subtly guide or influence respondents to answer in a particular way, introducing bias into survey responses. This bias can impact the reliability of data collected, affecting the overall validity of survey results.”

Nine types of leading questions are frequently used by non-neutral groups, such as political parties and candidates’ campaigns. These questions are designed to direct thought and behavior, not measure it:

  • Assumptive questions prompt respondents to agree. 
  • Choice-based questions limit answer options to exclude what might disagree with the politician or party (these are frequently on local representatives’ surveys). 
  • Suggestive questions direct you in how to answer. 
  • Yes/No questions nudge the respondent to agree with the politician or party. 
  • Tag questions add a “wasn’t it” or “don’t you” to elicit your implied agreement. 
  • Confirmation questions are phrased with something that the questioner assumes to be true. 
  • Presupposition questions presuppose unconfirmed information. 
  • Negative questions frame the question negatively, leading the respondent towards a specific kind of answer. 
  • Finally, some questions have embedded commands. 

You can quiz yourself: What type of leading question is this example from Jeffrey L. Bernstein, a political scientist at Eastern Michigan University? “Who do you trust more to protect America from foreign and domestic threats?” with answer choices of (a) President Trump or (b) a corrupt Democrat. And what type of leading question is the following: “What do you find most disturbing about the Trump presidency?”


Question verbiage is carefully evaluated by pollsters trying to conduct a scientific survey. For them, even verbs matter, as Bernstein notes in his 2021 article for the education publisher Cengage. If a question asks “How much of a priority is ‘protecting’ the environment to you?” how would your answer change if the verb were “saving”? Consider which of the following is more neutral: “Given the large number of Americans without health care...” or “Given the number of Americans without health care....” Small differences in question wording can affect poll results, and the race is neck-and-neck right now.


Gelbman’s class explained how to understand the terminology of polls, like the “margin of error,” which accounts for the imprecision that comes from surveying a sample rather than the whole population. They explained what a margin of error of 2-4 percentage points means when one candidate is polling at 46% and the other at 44%: the leader could really be ahead by several points, or the apparent underdog could actually win by 1-2%.
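For readers who want to see where those ranges come from, here is a minimal sketch of the standard margin-of-error formula for a proportion at 95% confidence, MOE = 1.96 × √(p(1−p)/n), in Python. The sample size and percentages below are illustrative, not from any actual poll:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    measured from a random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000  # a typical national poll sample size (illustrative)
for p in (0.46, 0.44):
    moe = margin_of_error(p, n)
    print(f"{p:.0%} +/- {moe:.1%}  ->  {p - moe:.1%} to {p + moe:.1%}")

# Output:
# 46% +/- 3.1%  ->  42.9% to 49.1%
# 44% +/- 3.1%  ->  40.9% to 47.1%
```

Note how the two candidates’ ranges overlap. That overlap is why a 46% to 44% “lead” inside the margin of error is effectively a statistical tie: the apparent leader may really be ahead by several points, or the trailing candidate may actually be in front.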


As the students reported, some polls are funded by news organizations, but conducted by outside organizations. Others are both funded and conducted without support from news organizations. Either way, the spin comes when various media organizations interpret the numbers in their reporting. (Because numbers are just numbers to pollsters.)


As you enter the solitude of the ballot box, having seen the election predicted by pollsters, it may help to realize that polls are mere snapshots, not meant to influence voters to give up on a candidate who doesn’t seem likely to win. But a close poll can show us all that our ballot matters.