Opinion | It’s impossible to trust polls — or ignore them. Here’s how to stay sane.

It seems impossible to trust polls – and just as impossible to ignore them.

The polling industry faces familiar problems: only 6 to 7 percent of people will take a call from a pollster; polls failed to find segments of Donald Trump’s base in 2020; and predicting exactly who will vote is mathematically tricky.

But surveys are essential. They are the only data source that directly asks people how they will vote and credibly tries to represent the entire electorate.

This situation makes political junkies miserable. They constantly check FiveThirtyEight, RealClearPolitics, and other poll aggregators for fresh data, but can’t shake nagging doubts about every number on screen.

As a professional political junkie, I also feel this pain. So I’ve collected thousands of Senate polls from past elections, crunched the numbers, and developed some rules of thumb that can help the most wired, frenetic poll-users stay sane throughout 2022.

Rule 1: Recent poll averages get most races right – but they don’t tell us much in close races.

Some good news about polls: Historically, they’ve done a pretty good job of predicting* outcomes. I used my dataset to calculate poll aggregates — that is, averages of the data designed to dampen the impact of outliers and get an accurate reading of what the polls said each day — for nearly 400 past Senate elections.

I found that about nine to ten weeks before Election Day, a simple poll average correctly predicted the winner in 83 percent of the races. By Election Day, that percentage had risen to 93 percent.

Some of the races in my dataset were easy choices—pollsters don’t usually have trouble correctly measuring races in deep red Nebraska or dark blue Massachusetts. But 83 percent isn’t too shabby. In numerous races that puzzled pundits — like the 2018 Senate races in Texas and North Dakota — the early poll leader won.

Unfortunately, if we only look at close races, the early polls are less meaningful. In my data set, candidates with a one-to-three-point lead on Labor Day won only 63 percent of the time. That’s an advantage, but a thin one.

Simply put, if you’re trying to understand where a Senate race stands, a poll average is a great place to start. But if your preferred candidate leads by just a few points, you should hold off on buying champagne.

Rule 2: The true margin of error is huge. It gets smaller as Election Day approaches — but it never goes away.

When pollsters publish their data, they dutifully report the “margin of error” — a statistic that attempts to estimate how much the randomness associated with polls could skew their results. This number is typically in the range of three to four points.
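For context, that published figure reflects only sampling error, which depends almost entirely on sample size. A back-of-the-envelope sketch using the standard 95 percent confidence formula (the sample sizes below are illustrative, not taken from any particular poll) shows why typical samples of roughly 600 to 1,000 respondents land in the three-to-four-point range:

```python
import math

def sampling_margin_of_error(n: int, z: float = 1.96) -> float:
    """95% margin of error, in percentage points, for a proportion near 50%."""
    # Worst-case standard error for a proportion occurs at p = 0.5.
    return 100 * z * math.sqrt(0.5 * 0.5 / n)

# Illustrative sample sizes typical of public polls.
for n in (600, 800, 1000):
    print(f"n = {n}: +/- {sampling_margin_of_error(n):.1f} points")
# n = 600:  +/- 4.0 points
# n = 800:  +/- 3.5 points
# n = 1000: +/- 3.1 points
```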

Unfortunately, that number is a wild underestimate. Randomness isn’t the only source of error in polls: estimates of the likely electorate can be far off target; races can see late shifts in voter preferences; and non-response bias (when some voters refuse to talk to pollsters while others happily pick up the phone) is always a risk. All of these factors can cause a survey to miss the mark. At this point in the campaign, polls routinely miss the final margin by six points.

As Election Day approaches, the polls become more accurate. Swing voters make up their minds, partisans come home, and polls pick up on these changes. If a poll average gives a candidate a mid- to high-single-digit lead, they’ll be a strong favorite heading into Election Day.

But the error never disappears. It’s perfectly normal for the poll average to be off by a few points on Election Day. And in tight elections, that can make a big difference.

Rule 3: No state or party is safe from a polling error on Election Day.

A polling miss of two to four points on Election Day might not sound like a big deal. But neither party is immune, and small errors can produce surprises and upsets.

The 2020 election is a perfect example. Heading into Election Day 2020, Democrats looked competitive in many key races — including red states like Iowa, North Carolina, South Carolina and even Kansas. But polling errors hit them hardest.

In race after race, polls underestimated GOP candidates. Democrats had hoped for a rout, complete with victories in red states. Instead, they got a narrow 50-50 majority.

We don’t know whether the polls will be wrong this year. And if they are, we don’t know which party they might be underestimating. Those who confidently predict polling failures often end up wrong themselves.

But the possibility of a polling error on Election Day is high, and that should keep political junkies a little uncomfortable, no matter how much polling data they consume.

*Note that early polls are not specifically designed to predict outcomes – they represent a snapshot of current public opinion. Regardless, analysts and news consumers routinely use them as a predictive tool.

Polls were collected from FiveThirtyEight, RealClearPolitics, Argo Journal, various pollster archives and news reports. On a given day, the poll aggregate is the average of the three most recent polls taken within the previous 30 days; if only one or two polls were available, those were averaged instead. Partisan polls are excluded, as are polls conducted more than 90 days before the election. For surveys reporting multiple voter populations (e.g., registered and likely voters), the likely-voter numbers were used; surveys of all adults were excluded. The data covers elections from 1992 to 2020, but not every race was polled, and the number of races with usable polling increases as Election Day approaches. Other aggregation methods would produce different win rates and error estimates.
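For readers who want to experiment with the aggregation rule described above, here is a minimal sketch in Python. It assumes a simple list of (date, margin) pairs and implements only the averaging step; the filters for partisan polls, the 90-day cutoff and the likely-voter preference would be applied before calling it, and the data and names below are purely hypothetical.

```python
from datetime import date, timedelta
from typing import List, Optional, Tuple

def poll_aggregate(polls: List[Tuple[date, float]], as_of: date) -> Optional[float]:
    """Average the margins of the three most recent polls from the prior 30 days.

    Each entry in `polls` is a (poll_date, margin) pair, where `margin` is the
    candidate's lead in percentage points. Returns None if no poll qualifies.
    """
    window_start = as_of - timedelta(days=30)
    recent = [(d, m) for d, m in polls if window_start <= d <= as_of]
    recent.sort(key=lambda pair: pair[0], reverse=True)  # newest first
    top = recent[:3]  # fall back to one or two polls if that is all there is
    if not top:
        return None
    return sum(m for _, m in top) / len(top)

# Illustrative numbers, not drawn from the column's actual dataset.
polls = [
    (date(2022, 7, 20), 8.0),  # older than 30 days; ignored
    (date(2022, 8, 10), 2.0),
    (date(2022, 8, 20), 4.0),
    (date(2022, 9, 1), 3.0),
]
print(poll_aggregate(polls, as_of=date(2022, 9, 5)))  # -> 3.0
```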
