The risks of polling and why we will keep doing it

U.S. House Speaker Paul Ryan, from left, Steve Handel and Karen Handel participate in a ceremonial swearing-in June 26 on Capitol Hill. Karen Handel, a Republican, defeated Democrat Jon Ossoff in Georgia’s 6th Congressional District runoff on June 20. (Photo by Drew Angerer/Getty Images)

Our poll of the 6th Congressional District race, conducted two weeks before the June 20 runoff, missed the mark. There’s just no getting around it. It showed Democrat Jon Ossoff leading by 7 points among likely voters, with a margin of error of 4 percentage points. Republican Karen Handel ended up winning by 4 points.
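For readers curious where a figure like that 4-point margin of error comes from: the textbook 95 percent formula for a simple random sample, applied to the poll’s 745 likely voters (a sample size detailed below), lands close to the stated number. This is a back-of-the-envelope sketch, not the pollster’s exact calculation, which would also account for the effects of weighting.

```python
import math

# Back-of-the-envelope margin of error, using the standard 95 percent
# formula for a simple random sample. Real pollsters also account for
# design effects from weighting, so this is only an approximation.
n = 745   # likely voters in the AJC/Abt sample
p = 0.5   # a 50/50 split is the worst case and maximizes the error

moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/- {moe:.1%}")   # about +/- 3.6 points, reported as 4
```

One caution worth keeping in mind: that margin applies to each candidate’s individual share. The gap between two candidates carries roughly twice that uncertainty, which is part of why a 7-point lead is less solid than it looks.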

Most of the polls conducted just before the election suggested a slight lead for the Democrat, but ours overestimated his standing the most.

We asked Abt Associates, the pollster The Atlanta Journal-Constitution has used for five years, what went wrong. The company, which conducts polling for several national news organizations, has a good track record and reputation in the polling world.

But no polling company gets it right all the time. Respected pollsters, using sound statistical methods, can produce results that don’t hold up well once the election is done.

Abt recently completed a review of its methods and the poll’s results. It determined that a combination of factors led to our 6th District poll’s miss. Polling is complicated, but the main problem was traced to the way Abt adjusted the group of respondents to better reflect the district.

In an effort to make a sample match the larger population, pollsters routinely give extra weight to groups that are underrepresented among their respondents. In this case, the group of 745 likely 6th District voters included too few young, female and minority respondents. As a result, the pollster gave those respondents’ answers more weight when calculating the results.

Typically, such adjustments don’t have a big impact on the final results, Abt said. But in this case, the demographic groups adjusted for tended to vote disproportionately for the Democrat.
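The arithmetic behind that kind of adjustment is simple to sketch. The example below uses invented numbers — the group shares and support levels are hypothetical, not Abt’s data, and the method is a generic post-stratification sketch rather than Abt’s actual procedure — to show how upweighting underrepresented groups moves the top-line figure.

```python
# A toy post-stratification example. All numbers are invented for
# illustration; this is not Abt's data or its actual weighting procedure.

# Hypothetical share of each group in the district's likely electorate
# (the population targets) versus in the raw poll sample.
population_share = {"young_women": 0.20, "minority": 0.30, "other": 0.50}
sample_share = {"young_women": 0.12, "minority": 0.22, "other": 0.66}

# Each group's weight scales its answers so the weighted sample
# matches the population targets.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Hypothetical support for the Democrat within each group of respondents.
dem_support = {"young_women": 0.68, "minority": 0.72, "other": 0.41}

# Unweighted estimate: the raw sample's average.
unweighted = sum(sample_share[g] * dem_support[g] for g in dem_support)

# Weighted estimate: the same answers, re-balanced to the population.
weighted = sum(sample_share[g] * weights[g] * dem_support[g] for g in dem_support)

print(f"unweighted: {unweighted:.1%}  weighted: {weighted:.1%}")
# unweighted: 51.1%  weighted: 55.7%
```

In this toy example, weighting lifts the Democrat from about 51 percent to about 56 percent. When the upweighted groups lean heavily toward one candidate, any error in the population targets, or in who actually turns out, moves the headline number with it — which is the kind of effect Abt described.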

Every poll comes with a measure of uncertainty, and that uncertainty translates into risk for the news organizations that commission and publish them. We accept that risk — and the heartburn that comes when they miss — because polls remain the best available tool for measuring public opinion. And in spite of some well-publicized failures, that hasn’t changed.

Horse race polls provide a snapshot of a race at the time they are conducted. They aren’t reliable predictors of an outcome — particularly those conducted a week or more before Election Day. Too much can change in the final days of a campaign.

The information they provide, however, offers clues about the state of the race and what voters are thinking. And polls that go beyond the “who will you vote for” question provide insight into what voters care about. Our last poll asked about issues and voter experiences. It told us, for example, that voters were most concerned with health care costs and government spending and that the House Republican health care bill was deeply unpopular (by margins large enough to overcome any weighting errors). And it told us that voters overwhelmingly felt that the race was important.

Election polls aren’t the only surveys we do. Each year we commission a poll before the start of the Legislature to find out what issues matter most to Georgians — and we frequently find that Georgians’ top priorities don’t match what lawmakers spend their time on.

We don’t rely solely on the numbers. For every poll we conduct, we also call dozens of respondents directly. This allows us to get a better understanding of their views and what is driving them.

We know many readers have questions about polling, so we always publish detailed explanations of how our polls are conducted, as well as the data we collect, so readers can evaluate the results themselves. We can’t eliminate the uncertainty, but we can be transparent about how our polls are conducted.

Much has been written about the accuracy of polling, particularly after the 2016 presidential election. The American Association for Public Opinion Research spent months conducting a thorough analysis of 2016 polling and concluded that the national polls were “generally correct and accurate by historical standards” in reflecting the popular vote, but polling in some key states significantly underestimated Donald Trump’s support. The report explains its conclusions in great detail and is worth a read.

The AJC’s last poll of the presidential race in Georgia, conducted by Abt, showed Trump ahead two weeks before the election. He won by 5 points, a larger margin than our poll showed but within its margin of error.

Gallup, one of the nation’s best-known survey research organizations, announced in 2015 that it would no longer conduct election polls. The decision came after its review of problems with its 2012 presidential election polling. Gallup Editor-in-Chief Frank Newport said at the time that the organization would instead focus on understanding the issues of the day.

“This isn’t based on a lack of faith in the process or the value of horse race polling in general, but rather a focus on how our particular firm’s contribution to the process can be most effective in keeping the voice of the people injected into the democratic process,” he wrote at the time.

In a column titled “Polling is getting harder, but it is a vital check on power,” statistician Nate Silver, the editor of FiveThirtyEight, urged organizations not to give up on election polling.

“Left to their own devices, politicians are not particularly good at estimating prevailing public opinion. Neither, for the most part, are journalists. One reason that news organizations like The New York Times and (FiveThirtyEight partner) ABC News continue to conduct polls — at great expense and at a time when their newsrooms are under budgetary pressure — is as a corrective to inaccurate or anecdotal representations of public opinion made by reporters based mostly in New York and Washington. Polling isn’t a contrast to ‘traditional’ reporting. When done properly, it’s among the most rigorous types of reporting, consisting of hundreds or thousands of interviews with statistically representative members of a particular community.”

We aren’t in New York or Washington, but we still see polling as one small but important component of our overall coverage. We wrote more than 200 stories and blog posts about the 6th District race, breaking news of developments large and small, examining issues and providing the kind of depth and context that only experienced journalists who know the community could offer.

Polling is not for the faint of heart. The uncertainty won’t go away, but we can and will learn from each poll we commission, even the misses.