In Iowa and beyond, don't be surprised if polls aren't accurate
One recent poll found that 44 percent of Democrats would support accepting refugees from a fictional country. Another poll found that 30 percent of GOP primary voters were in favor of bombing it.
A few weeks ago in this space, I urged skepticism about election forecasts that rest on historical data. The reason is simple: History does not give us enough cases to work with. So what about polls? Surely public opinion surveys will tell us what’s going to happen, right?
Not necessarily. Even a perfectly executed poll carries sampling error: we can be confident that a given result is accurate only within a range of values, say, plus or minus three percentage points. So if a race is very close, a poll cannot tell us who is really ahead.
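Where does that familiar "plus or minus three" come from? As a rough illustration, here is a minimal sketch of the textbook margin-of-error calculation, assuming simple random sampling, a 95 percent confidence level, and a typical national sample of about 1,000 respondents. Real polls use weighting and more elaborate designs, so treat this as a back-of-the-envelope check, not a description of how any particular pollster works.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of a confidence interval for a proportion.

    n: sample size
    p: assumed proportion (0.5 is the worst case, giving the widest interval)
    z: critical value (1.96 corresponds to 95 percent confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of about 1,000 respondents:
print(f"{margin_of_error(1000):.1%}")   # ~3.1% -- the familiar "plus or minus three"

# Quadrupling the sample only halves the error:
print(f"{margin_of_error(4000):.1%}")   # ~1.5%
```

Note the square-root relationship: quadrupling the sample size only halves the margin of error, which is part of why the extra precision pollsters would need to call a tight race is so expensive to buy.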
Mind you, that kind of uncertainty crops up even under the best of circumstances – and the circumstances of 2016 are far from the best. Low response rates can skew surveys, because people who refuse to answer may differ in views and characteristics from those who patiently take all the questions. The problem is getting worse: more and more Americans simply hang up on pollsters, and the easy availability of voicemail and caller ID enables them to screen out unfamiliar callers.
Cellular phones pose further difficulties. Pollsters can save money by using automatic dialers, but federal law forbids using such devices for unsolicited calls to cellphones. About half of Americans now live in cellular-only households, and excluding them from a survey practically guarantees that the results will be wrong. Including them makes for potentially more representative samples, but it also raises costs, which may force pollsters to cut sample sizes and thereby widen the margin of error.
Internet surveys can be cheaper, but they are still in their infancy. Researchers can use random-digit dialing to get a decent sample of telephone numbers, but there is no equivalent for e-mail addresses. And 15 percent of Americans – including a large fraction of those over 65 – do not use the Internet at all.
Suppose that a pollster could somehow surmount all these obstacles and get an excellent random sample. Other daunting challenges would remain. One is the phenomenon of “nonattitudes.” Not wanting to appear foolish or ill-informed, survey respondents will often give answers when they have no real knowledge or opinion concerning the question. One recent poll found that 44 percent of Democrats would support accepting refugees from a fictional country. Another poll found that 30 percent of GOP primary voters were in favor of bombing it. If you are trying to explain voting intentions by looking at people’s issue positions, you need to consider the strong possibility that you are looking at a mirage.
False responses are another problem. To put it bluntly, people often lie when pollsters ask whether they have engaged in socially approved activities such as voting. Election pollsters have long noted that many nonvoters claim to have cast ballots: compared with actual tallies, some polls have overstated turnout by as much as a quarter. So if you are asking about the last election in order to gauge respondents' likelihood of turning out in the next one, you could be working with bad data.
And then there is the chance that honest answers will not predict actual behavior. Respondents may fully intend to vote in a caucus or primary for candidate X, but then stuff happens. They might end up staying home because the weather is foul or the kids need help with their homework. And they might change their minds at the last minute, which is especially likely if they lack a strong commitment to begin with.
So in the weeks ahead, the results of primaries and caucuses might be quite different from what the polls say. Don’t be surprised if you’re surprised.
Jack Pitney writes his Looking for Trouble blog exclusively for the Monitor.