Some battleground polls missed 2016. Are they better for 2020?

Supporters attend President Donald Trump's Keep America Great Rally at the Rupp Arena in Lexington, Kentucky, Nov. 4, 2019.

Yuri Gripas/Reuters

November 5, 2019

Is President Donald Trump going to win reelection? Numerous national polls show him losing to all three of the leading Democratic candidates: Elizabeth Warren, Joe Biden, and Bernie Sanders. But, as many voters learned the hard way in 2016, just because your candidate is leading in the polls doesn’t mean you should stock up on confetti.

For one thing, national polls – which were largely accurate in 2016 – only indicate the likely popular vote. Predicting the Electoral College results requires a state-by-state analysis, which can paint a very different picture.

Last time around, an overwhelming majority of polls showed Hillary Clinton leading in key swing states, but Mr. Trump swept to victory on the back of incredibly narrow margins in Wisconsin, Michigan, and Pennsylvania. This week a new round of surveys shows that President Trump remains close or ahead in those state battlegrounds that won him the Oval Office three years ago. 

Why We Wrote This

A number of swing state pollsters misjudged candidate Donald Trump’s rise three years ago, in part due to flaws in their methods. Here’s how they’ve tried to improve for President Trump’s reelection race.

The miss on state polls in 2016 was due to numerous factors, but pollsters bore the brunt of the criticism. So, have they adjusted their methods to be more accurate? In some cases, yes. At the same time, fundamental changes in both the polling and media industries present an ongoing problem. And no one is claiming complete confidence – especially when Mr. Trump is involved.

“The things that we can fix, we’ve fixed,” says Patrick Murray, director of Monmouth University Polling Institute in New Jersey. “But then there are things that are never fixable because each election presents its own sets of challenges.”


A snapshot of the race

It’s still a long way to Election Day 2020, but one year out, polls can give a sense of where the race stands at this moment.

A new Washington Post/ABC News survey shows President Trump behind Mr. Biden by 17 points, behind Senator Warren by 15 points, and behind Senator Sanders by 14 points. Poll averages show the Democratic contenders with smaller but still substantial margins.

But the national popular vote doesn’t elect the president. The Electoral College does, on a state-by-state basis. At the moment that’s presenting a somewhat different picture.

A just-released New York Times Upshot/Siena College series of state polls shows President Trump either ahead or within the margin of error of his three main Democratic rivals, among likely voters, in six key battlegrounds, from Michigan to North Carolina.

Which measure matters most? That might not be settled until the election is over.


“... the national poll versus state poll discussion over the next 12 months is going to be painful,” tweeted Dante Chinni, a data journalist and director of the American Communities Project, on Nov. 5.

Polling, explained

Poll results sound definitive, like the outcome of a footrace. But polling, and the statistical modeling underlying it, involves a set of assumptions that are not usually made explicit, says Michael Traugott, a research professor emeritus at the University of Michigan’s Institute for Social Research. Two of the trickiest have to do with figuring out how representative survey respondents are of the general population, and of the people who will actually turn out to vote.

If a pollster were to survey 500 students on a 2,000-student campus about what their favorite flavor of ice cream is, she could evaluate how closely her sample of 500 students matches the total population of 2,000 students. She might look at gender, year in school, and other factors, such as whether she got a disproportionate sample of pistachio lovers because she interviewed people outside an exotic nut shop. Then she could weight the results accordingly so that she wouldn’t overemphasize the pistachio lovers’ input.
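In code, that weighting step is simple. Here is a minimal sketch in Python, with shares invented for the hypothetical campus survey; real pollsters weight on many variables at once:

```python
# A minimal sketch of survey weighting, using invented shares for the
# hypothetical 2,000-student campus survey. If seniors make up 25% of
# campus but 40% of respondents, each senior's answer counts for less.

population_share = {"freshman": 0.25, "sophomore": 0.25,
                    "junior": 0.25, "senior": 0.25}
sample_share = {"freshman": 0.15, "sophomore": 0.20,
                "junior": 0.25, "senior": 0.40}

# Each group's weight is its population share divided by its sample share.
weights = {year: population_share[year] / sample_share[year]
           for year in population_share}

for year, w in weights.items():
    print(f"{year}: weight {w:.2f}")
# freshman 1.67, sophomore 1.25, junior 1.00, senior 0.62
```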

But if she were trying to capture preferences ahead of an election to decide which flavor of ice cream would be served on campus, she would also have to guess which of the 2,000 students would turn up to vote, and how similar they would be to her sample. Will all the Ben & Jerry’s Phish Food lovers be sacked out on their couches from overindulgence?
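Layering the turnout guess on top might look like this; the turnout probabilities below are invented for illustration:

```python
# Continuing the sketch: a likely-voter adjustment multiplies each
# respondent's demographic weight by a guessed probability of voting.
# All numbers here are invented.

weights = {"freshman": 1.67, "junior": 1.00, "senior": 0.62}

respondents = [
    # (flavor preference, class year, guessed turnout probability)
    ("phish_food", "senior", 0.2),   # sacked out on the couch?
    ("pistachio", "freshman", 0.9),
    ("vanilla", "junior", 0.6),
    ("phish_food", "junior", 0.6),
]

totals = {}
for flavor, year, turnout in respondents:
    totals[flavor] = totals.get(flavor, 0.0) + weights[year] * turnout

total = sum(totals.values())
shares = {flavor: t / total for flavor, t in totals.items()}
print(shares)  # phish_food's share shrinks if its fans stay home
```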

The U.S. electorate in 2016 was significantly different from 2012’s, which made it difficult for pollsters to accurately predict who would be voting. In addition, voters’ preferences in 2016 were much more strongly correlated with education level; in particular, white men without college degrees voted disproportionately for Mr. Trump. But many pollsters didn’t factor in education level, since it hadn’t mattered much in the past.

After the election, Andrew Smith, director of the University of New Hampshire Survey Center in Durham, re-weighted his data by education level and found his results were then precisely in line with the actual election outcome.

But the lack of education weighting didn’t account for all the discrepancies in state polls.

“I have weighted to education since I was a wee babe in diapers,” says Charles Franklin, director of the Marquette Law School Poll in Wisconsin. “Unfortunately that didn’t keep me from being wrong – I still had Clinton up by 6 [points].”

The main reason for that, he says, was the fact that undecided voters disproportionately cast their ballots for Mr. Trump.

A famous polling miss

In the 1948 presidential election, the Chicago Tribune had to go to press before all the results were in. So, relying on polling, the paper infamously declared, “Dewey Defeats Truman.”

It was wrong. Postmortems pinned the miss on the use of quota sampling, a technique in which interviewers decided whom to contact within certain parameters, such as nine white men under age 50 in a rural area. Ever since, pollsters have relied on random sampling to ensure more accurate results. That is best done by live telephone interviews, selecting randomly from all current phone numbers. But in the age of caller ID, about 95% of the people called don’t answer or refuse to talk.
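To see why so few answered calls can still yield a usable sample, here is a toy simulation; the phone frame is hypothetical, and only the roughly 5% completion rate comes from the figure above:

```python
# A toy simulation of simple random sampling: every number in the
# frame has an equal chance of being dialed. The ~5% completion rate
# mirrors the ~95% nonresponse cited above; everything else is invented.

import random

random.seed(0)
frame = [f"555-{n:04d}" for n in range(10_000)]  # hypothetical phone frame
dialed = random.sample(frame, 2_000)             # equal-probability draw
completed = [num for num in dialed if random.random() < 0.05]

print(f"{len(completed)} interviews from {len(dialed)} dials")
# roughly 100 completed interviews per 2,000 dials
```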

At the state level, response rates are somewhat higher, but still plummeting. Dr. Smith in New Hampshire says that just four years ago, his team was able to complete about 0.9 interviews per hour. Now it’s down to about half that, doubling the cost of a survey from about $25,000 to $50,000 or more.

That is driving many survey outlets to explore alternatives to random sampling, which Dr. Smith says has been the gold standard for decades.

“You’re seeing a movement away from that, which to me is a fundamental change in the industry,” he says.

Misinterpretation by journalists

Likewise, fundamental shifts in journalism have compounded the problem.

As newspaper budgets shrink, many don’t have the funds to conduct polls at the state level – or retain veteran reporters to interpret them.

“We have a lot of journalists who weren’t well-trained to begin with in terms of statistics, and besides that they’re getting younger and they’re less experienced,” says Dr. Traugott, who got his start as a research assistant for George Gallup in the 1960s.

For example, if a poll were to show Ms. Warren 2 points ahead of Mr. Biden with a +/- 3 point margin of error, journalists should report that as a statistical tie. But instead, they often present such results as showing an unequivocal lead.
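The arithmetic behind a “statistical tie” fits in a few lines; the sample size and topline numbers below are invented for illustration:

```python
# Back-of-the-envelope margin of error at 95% confidence, showing why
# a 2-point lead inside a +/-3-point margin is a statistical tie.
# The sample size and topline numbers are invented for illustration.

import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000
moe = margin_of_error(0.5, n)  # p=0.5 gives the widest (reported) margin
print(f"+/- {moe:.1%}")        # about +/- 3.1 points

# With Warren at 32% and Biden at 30%, the intervals [28.9, 35.1] and
# [26.9, 33.1] overlap, so the 2-point gap can't be distinguished from zero.
```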

Moreover, if journalists can’t or don’t evaluate the relative credibility of polls, then they drive down the market for higher-quality telephone polls, which at the national level cost $100,000 compared with as little as $5,000 for an online poll, according to a report on 2016 election polls by the American Association for Public Opinion Research (AAPOR).

The more care and effort you put into collecting the data, the easier it is to analyze the data, says Andrew Mercer, a senior research methodologist at Pew Research Center.

“The flip side of that is the easier and quicker your data collection is ... in order to be accurate with those, you really have to have a good understanding of what are the important variables that predict how somebody is going to behave and make sure you’re able to adjust those variables in your analysis,” he says.

How to tell if a pollster is good, or not

So, what is a voter to do? No. 1, look at how transparent the organization is about its methodology. One indication of that is whether it is a member of AAPOR’s Transparency Initiative. While transparency doesn’t guarantee accuracy, it tends to signal credibility.

For a professional estimate of accuracy, FiveThirtyEight.com assigns letter grades to each election polling organization in its list of latest polls, but users have to dig deep to discover the methodology behind each poll, including whether it was done by phone or online.

Some experts recommend that voters look at polling averages, with the hope that bad polls will cancel each other out. But Dr. Smith urges caution about that approach.

“If you took a glass of pure Poland Spring water, and mixed it together with a glass of water from a mud puddle, would you want to drink it?” he asks.
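His metaphor translates directly into numbers. In the invented example below, one low-quality poll drags a simple average well away from what several consistent, high-quality surveys show:

```python
# An invented example of Dr. Smith's mud-puddle point: one low-quality
# outlier pulls a simple polling average away from what the consistent,
# high-quality polls show. Quality weights here are made up.

polls = [
    # (candidate's lead in points, quality weight)
    (5.0, 1.0),
    (4.0, 1.0),
    (6.0, 1.0),
    (-8.0, 0.2),  # the mud puddle
]

simple_avg = sum(lead for lead, _ in polls) / len(polls)
weighted_avg = sum(lead * w for lead, w in polls) / sum(w for _, w in polls)

print(f"simple average: {simple_avg:+.1f} points")      # +1.8
print(f"quality-weighted: {weighted_avg:+.1f} points")  # +4.2
```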