News reporting of survey results has become increasingly problematic over the years. Very few stories report the methodological details that the American Association for Public Opinion Research (AAPOR) argues are necessary to understand the reliability and validity of survey results. Almost all "cherry-pick" which results to include in stories (and which to leave out). They often rephrase the questions when reporting on them - sometimes for clarity or due to space limitations, sometimes to fit particular storylines. And when news organizations run their own surveys, or pay others to do the surveys, they often suggest topics or specific questions and wording (again, to fit existing themes, memes, or storylines).
Despite all that, however, most people could trust that the survey results themselves were real.
A recent story in the Salt Lake Tribune suggests that even that last bit of trust may be misplaced. The story describes how the editors at the Tribune thought that one set of survey results they had commissioned was "wrong," in the sense of being inconsistent with other survey findings. So they told the research firm conducting the survey to "fix" the results. And the research firm did - by changing how it weighted the survey responses. The reasons for the change, and the mechanism (changing the weighting), were apparently not disclosed when the paper printed a story on the "adjusted" results.
Now there are many reasons one poll result may be inconsistent with others, even if the surveys are competently done and reported accurately. First, something could have actually changed people's opinions, and the survey responses accurately reflect that change. Second, there could be differences in the target population or sample weighting - the difference in survey results may be linked to differences in whom the results are generalized to (for example, adults, registered voters, or likely voters; or sampling only urban voters rather than voters from all the areas the candidates will represent). Here again the difference is likely "real," but the results are about different things and shouldn't be directly compared in the first place.
The inconsistent survey might also result from a "bad" sample - an outlier, in the sense that it is not representative. It's easy to forget that when generalizing from a sample to a larger population, you start with the assumption that the sample is representative - and there's always a chance that assumption is wrong. That's what things like confidence intervals are for. The most common, the 95% confidence interval, still means there's a 1 in 20 chance that your sample is really not representative. And in a big election campaign, with dozens of surveys and samples, it's not that unlikely that you'll get one that is way off.
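To put that "1 in 20" in perspective, here's a small back-of-the-envelope calculation. It's only a sketch: the figure of 30 polls is a number I'm assuming for illustration, and it treats the polls as independent, which real campaign polls never quite are.

```python
# Illustrative only: how likely is it that at least one of several independent,
# competently run surveys produces a "1 in 20" miss at the 95% confidence level?
# The count of 30 surveys is an assumed figure for a busy election campaign.

confidence_level = 0.95
n_surveys = 30  # hypothetical number of polls over a campaign

# Chance a single survey's confidence interval misses the true value
miss_prob = 1 - confidence_level  # 0.05, the "1 in 20" chance

# Assuming the surveys are independent, the chance that at least one misses:
p_at_least_one_outlier = 1 - (confidence_level ** n_surveys)

print(f"Chance a single survey misses: {miss_prob:.0%}")
print(f"Chance at least one of {n_surveys} surveys misses: {p_at_least_one_outlier:.0%}")
# -> roughly 79%, so an occasional way-off sample is expected, not shocking
```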
If that were the case here, it's not unreasonable to conclude that that specific sample is problematic and unreliable, and the question becomes how best to deal with the survey and its results. In a later discussion of the issue, reporters at the Tribune asked some "experts" about handling such a case. An ethics expert at the Poynter Institute said the paper should have asked about the "inconsistent" results before publishing the initial report, and should have informed readers of any concerns. A professor at Brigham Young University who focuses on political polling also said that a big shift in numbers should trigger some concern, and that the lack of transparency in reporting the methods and the issue was "troubling."
But the newspaper went beyond raising questions about validity and informing its readers of potential problems. It asked its pollster for a fix. The pollster indicated that the sample was, at least in terms of political affiliation, representative of the Utah electorate. But that particular poll focused on local Salt Lake City elections, and the pollster said that "intuitively" you would expect that urban area to have more Democrats - so, after the paper questioned the results, he changed the sample weighting formula to match his "intuition."
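To make concrete how much a weighting change alone can move a topline number, here's a minimal sketch of post-stratification weighting by party affiliation. Every figure in it is invented for illustration; it is not the pollster's actual data, targets, or formula.

```python
# A minimal sketch of post-stratification weighting by party affiliation.
# All numbers here are invented for illustration; they are NOT the pollster's
# actual data, weighting targets, or method.

# Hypothetical unweighted sample: share of respondents in each party group,
# and the share of each group supporting Candidate A.
sample_share = {"Democrat": 0.35, "Republican": 0.40, "Independent": 0.25}
support_for_a = {"Democrat": 0.80, "Republican": 0.20, "Independent": 0.50}

def weighted_topline(target_share):
    """Reweight each group from its sample share to a target share,
    then recompute overall support for Candidate A."""
    weights = {g: target_share[g] / sample_share[g] for g in sample_share}
    total = sum(sample_share[g] * weights[g] for g in sample_share)
    return sum(sample_share[g] * weights[g] * support_for_a[g]
               for g in sample_share) / total

# Weighting target matching (hypothetical) statewide party affiliation:
statewide_target = {"Democrat": 0.35, "Republican": 0.40, "Independent": 0.25}
# A more "intuitive" target for an urban electorate, with more Democrats:
urban_target = {"Democrat": 0.50, "Republican": 0.25, "Independent": 0.25}

print(f"Topline under statewide weights: {weighted_topline(statewide_target):.1%}")
print(f"Topline under 'intuitive' urban weights: {weighted_topline(urban_target):.1%}")
# The underlying responses never change, yet the reported topline moves
# by several points purely because the weighting targets changed.
```

In this made-up example the topline shifts from about 48.5% to about 57.5% with no change in what anyone actually told the interviewers - which is exactly why the choice of weighting targets needs to be disclosed, not adjusted after the fact to match expectations.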
As an occasional public opinion researcher (and AAPOR member), I have a problem with "fixing" a bad sample. Changing sample weightings won't fix the sample if it is that rare "bad" one; it just adds more layers of potential error and distortion - particularly if you change it to a more "intuitive" sample breakdown, or because the news organization funding the survey thinks the results should be something else. I've been down that road. Back when I worked for a consulting firm, I was asked to "fix" an analysis - the client needed the value of a group of broadcast stations to be 30% higher than our models indicated. By shifting some of our growth estimates to the high end (rather than using median estimates) and tweaking some financial forecasting assumptions, I came close (25% higher); but I also warned my bosses that while that model was defensible, it was different from our standard model and was likely to be questioned. They went ahead with the "fixed" number and analysis, and the IRS did question it (and ended up not accepting it as a valid estimate of "fair market value").
If it looks like one survey result is an outlier, the ethical action from a research perspective is either to use it - but clearly label it as such, discuss why it might be an outlier, and explain what that suggests about the accuracy and validity of the findings - or to drop it as unrepresentative. In journalistic terms: report the results along with concerns about their accuracy and representativeness (basically what the Poynter and BYU experts recommended), or choose not to report the results if you have reason to suspect their accuracy or representativeness - and tell news consumers why.
That's not what the Tribune did. It reported the problematic results initially as accurate, then asked the research firm to "fix the numbers," and then reported the adjusted numbers as accurate, saving any discussion of the issues or the process for a follow-up piece - a piece that was most likely prompted by questions about why the two sets of results were so glaringly different, and why the adjusted numbers were supposed to be any more accurate than the initial ones.
That's bad enough, but from the way the Tribune editor and the head of the research firm talked about it, it appeared that "fixing" poll results in this manner was not uncommon, nor considered unethical or something readers needed to be told about. It makes me wonder just how common this practice is, and how much trust we can place in sponsored survey results.
Sources:
"It's Now Public: Editors Rejigger Polls," Commentary Magazine
"Revised poll: Crockett-McAdams race is a virtual tie," Salt Lake Tribune