Wednesday, October 24, 2012

Pardon My Stats - Misreporting

One of my fears about the potentially dismal future of journalism is the rampant display of ignorance in reporting almost anything touching on science, research, polling, and statistics.  The source of today's rant is a headline:
Survey: Obama Scores Higher As 'True Leader'
The problem starts with the first word: the finding being reported isn't from a survey at all - it's from a series of focus groups.  Those focus groups were neither random nor representative of the general population (for instance, they included equal numbers of Republicans and Democrats, but no independent voters at all).  Focus groups also differ significantly from standard surveys in that question wording is often inconsistent and can be heavily influenced by lead-in questions and responses - so there may be little consistency in the wording participants are actually responding to.  Furthermore, people in focus groups give their answers publicly, so there's a strong chance of "socially desirable" or "bandwagon" responses (a bias toward giving the answer the moderator wants, or simply going along with what others in the group say).
The most glaring problem, though, is that these respondents weren't randomly selected, and thus whatever answers they provide pertain only to themselves.  There is no scientific or statistical basis for generalizing this sample of 400 participants to the U.S. population.  Furthermore, we know from the stated 200-Republican/200-Democrat makeup that the sample isn't even remotely representative.  The lack of random sampling also means there is no valid basis for standard statistical analysis - the machinery researchers use to judge whether models or differences are statistically significant.
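For contrast, here is roughly what the sampling math would look like if this had been a genuine simple random sample of 400 - a hypothetical sketch in Python, since none of these assumptions hold for the focus groups in question:

```python
import math

# Hypothetical: 95% margin of error for a simple random sample of n = 400,
# using the most conservative proportion assumption (p = 0.5).
n = 400
p = 0.5
z = 1.96  # z-score for 95% confidence

margin_of_error = z * math.sqrt(p * (1 - p) / n)
print(f"Margin of error: +/- {margin_of_error:.1%}")  # roughly +/- 4.9%
```

That familiar "plus or minus 5 points" exists only because random selection makes the math meaningful; with recruited focus-group participants, the formula has nothing to stand on.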
Another hint that this really isn't a survey: the researchers provide none of the background on methodology, or the precise question wording, that the American Association for Public Opinion Research recommends as the minimum disclosure needed to understand and evaluate survey results.  Of these, the most serious omission for political polling is the precise wording of questions and the nature of any lead-in questions - context and wording alone can swing survey results by as much as 30-40%.

So forget any sound basis for these results representing what "voters" think - what can you say about this sample's responses to some general inquiries?  Will either candidate "offer them a better quality of life"?  (Let's set aside for the moment that neither candidate is capable of personally doing so, and that quality of life is a vague term covering a wide range of possibilities.)  Two-thirds of the sample had no opinion - and the researchers report that among those with an opinion, "President Obama has the higher score by a 'significant margin'."  First, there's no score here - just a count of how many gave that response, and the researchers provide neither counts nor percentages.  Assuming they've at least got the order right, roughly 133 of the 400 expressed an opinion, which means at least 67 thought President Obama could give them a better life.  But did Obama outpoll Romney by 5 or by 25?  We don't know, because they don't say.
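To make that arithmetic concrete, here is a back-of-the-envelope sketch; the two splits shown are invented, since the researchers report no counts at all:

```python
# Back-of-the-envelope arithmetic for the "better quality of life" item.
# Reported facts: 400 participants, about two-thirds with no opinion,
# and Obama ahead of Romney among the rest. Everything else is unknown.
total = 400
no_opinion = round(total * 2 / 3)       # ~267
with_opinion = total - no_opinion       # ~133

# "Obama higher" is consistent with a one-person edge...
narrow_split = (67, 66)
# ...or with a lopsided result. Both splits are pure invention.
wide_split = (110, 23)

print(f"Expressed an opinion: {with_opinion} of {total}")
print(f"Both {narrow_split} and {wide_split} fit the description 'higher'")
```

Both splits satisfy the reported claim, yet they tell completely different stories - which is exactly why counts or percentages matter.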
Then there's the "true leader" item.  What makes a "true leader" - a decisive authoritarian, a strict follower of some creed (a political philosophy, a religion, or a moral/ethical code), or a true democrat who follows majority opinion?  The researchers indicate that the numbers (or scores) for Obama are again "significantly higher" than Romney's, although Obama's numbers have been falling and Romney's rising.  But there is apparently no precise number, or any indicator of significance, that the researchers are willing to share.
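If "significant" is meant in its statistical sense, the researchers would need to release counts that could support something like a two-proportion test.  Here is a minimal sketch with made-up numbers (the real counts were never published), and even this would still presuppose the random sampling these focus groups lack:

```python
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test: could this gap plausibly be chance?"""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts - the actual figures were never reported.
z, p = two_proportion_z(75, 400, 58, 400)
print(f"z = {z:.2f}, p = {p:.3f}")  # with these invented counts, not significant
```

Without the underlying counts, "significantly higher" is just a phrase, not a finding.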
If you want to stick to what is "scientifically" observable, here it is: the researchers asked a non-random, non-representative group of people some questions about the two main presidential candidates.  Many had no real opinions or preferences, some liked Obama, somewhat fewer liked Romney, and some are shifting their preferences from Obama to Romney.  From that, the researchers presume to offer interpretations that aren't supported by their methods and results.  Certainly, no claim about "voters" in any general sense is demonstrably supported by the results as described.
The other thing "research reports" like this tell me is that these aren't researchers I'd trust to know, or validly describe, the real world.  They seem better at telling clients what the clients want to hear.

As for the reporter and the story: they seem to accurately repackage the information they were provided - and the reporter does note that the researchers provide no real numbers, and does place quote marks around one use of "significant margin."  This might suggest some awareness that this "research" may be more spin than science.  But apparently not enough awareness to question whether to pass along - as generalizable survey research - vague results and unsupportable generalizations about what "voters" think.  It's easy, and makes for good headlines and conclusions, to talk about the opinions and attitudes of "voters" rather than of just this one sample of 400.  But doing so is at a minimum sloppy, and at worst a clear display of the reporter's ignorance of the science of public opinion research.  And that harms the reporter's and the news organization's credibility and value as journalists.

Source - Survey: Obama Scores Higher As 'True Leader', Media Daily News
