Deciphering Stats Reporting

 Questions to Ask When Reading a Research Report or Article

  • What are the main factors being studied in the research?
     
  • Does the study describe things with statistics or does it draw conclusions based on statistics?
     
  • Does the report say there are links between various factors in the research or does it go further and imply a causal relationship between those factors?
     
    • NOTE: Causality is extremely tough to demonstrate; when a research report infers or states causality, be wary.
       
  • Does the study claim to "prove" something?
     
    • NOTE: Usually, the only way we can "prove" anything directly is to obtain data from every member of a population. Research results based on a sample can't prove a thing; at best they support a conclusion with some stated degree of confidence.
       
  • Does the study get its data from an entire population or from a sample? If the latter, then:
     
    • What is the population being generalized to? Is it the US, or the state, or the city, or some other group?
       
    • What's the size of the sample used? Was it large enough? Typically, to generalize validly to the general US population with any meaningful degree of confidence in the result, you'll need some 400 people in the sample or more; 1,000 randomly selected subjects is typical.
       
    • What was the sampling error? Usually this is quoted as plus or minus some percent. (The sketch below shows how that figure relates to sample size.)
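
      A minimal sketch of how that plus-or-minus figure relates to sample size, assuming a simple random sample, a proportion near 50%, and a 95% confidence level (the figures and the 95% level are illustrative assumptions, not taken from any particular study):

      import math

      def margin_of_error(n, p=0.5, z=1.96):
          # Half-width of an approximate 95% confidence interval for a
          # proportion; p = 0.5 is the worst (widest) case, and z = 1.96 is
          # the normal critical value for 95% confidence.
          return z * math.sqrt(p * (1 - p) / n)

      for n in (100, 400, 1000):
          print(f"n = {n:5d}: about +/-{margin_of_error(n) * 100:.1f} percentage points")

      Running this prints roughly +/-9.8 points for n = 100, +/-4.9 for n = 400, and +/-3.1 for n = 1000, which is why samples of 400 to 1,000 turn up so often in national polls.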
       
  • Can you identify any biases present that weren't controlled for?
     
    • Were the participants in the study volunteers? Volunteers can react or respond very differently from non-volunteers, especially in survey research. So the results of any study performed solely on a group of volunteers are usually suspect.
       
    • Who sponsored the research? Do they have a vested interest in how the study turned out? If so, be wary of the reported results and, where possible, look for evidence that the sponsor might have "tampered" with them. That happens all too often, although it's by no means inevitable.
       
    • A sample should be representative of the population for its results to accurately reflect what might happen in the population. Consequently, if the sample isn't drawn properly, there'll be a problem generalizing from the sample's results to the population at large.

      So . . . what kind of sampling was used? A true random sample? A stratified random sample? If so, that's good. If the sample's drawn some other way, however, the results are in danger of being biased. The question to ask yourself is: Is the conclusion stated by the writer warranted based on the sampling scheme used? 
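
      The difference between those two "good" schemes can be sketched in a few lines of Python (the population mix and sample sizes here are made up purely for illustration):

      import random

      random.seed(1)
      # Made-up population of 10,000 people: 70% urban, 30% rural.
      urban, rural = ["urban"] * 7000, ["rural"] * 3000
      population = urban + rural

      # Simple random sample: every member has the same chance of selection.
      srs = random.sample(population, 500)

      # Stratified random sample: draw from each stratum in proportion to
      # its share of the population, so neither group is under-represented.
      stratified = random.sample(urban, 350) + random.sample(rural, 150)

      for name, sample in (("simple random", srs), ("stratified", stratified)):
          rural_share = sample.count("rural") / len(sample)
          print(f"{name:14s}: rural share of sample = {rural_share:.1%}")

      Both schemes draw every subject at random; the stratified version additionally guarantees each region its proportional share of the sample. A convenience sample (say, whoever happened to walk past an interviewer) carries no comparable guarantee.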
       
  • Can you identify any confounding factors that muddle the conclusions?
     
    • If you can think of any other factor or factors that might have caused the reported results . . . and if the researchers haven't controlled for those factors . . . then those are "confounding factors," and the results of the research are potentially compromised.
       
    • If even one uncontrolled confounding factor exists, it's tough to tell whether the factor being investigated, the uncontrolled confounding factor, or an interaction between the two is responsible for the results obtained. (The toy simulation below shows how easily this can happen.)
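
      A toy simulation (entirely made-up numbers; statistics.correlation requires Python 3.10 or later) of how an uncontrolled confounder can create an apparent link between two variables that have no direct effect on each other:

      import random
      from statistics import correlation

      random.seed(0)
      z = [random.gauss(0, 1) for _ in range(5000)]   # the hidden confounder
      x = [zi + random.gauss(0, 1) for zi in z]       # X is driven only by Z
      y = [zi + random.gauss(0, 1) for zi in z]       # Y is driven only by Z

      print(f"correlation(X, Y) = {correlation(x, y):.2f}")   # about 0.5, not 0

      X and Y look related only because both respond to Z; a report that never measured Z could easily mistake that link for a direct effect.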
       
  • What statistics did they use to describe their data? Did they choose the most appropriate ones for the question they were investigating? Or did they choose ones that tended to support some favored hypothesis?
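
    As one common example of how that choice can slant a summary, here is a sketch with made-up salary figures, where the mean and the median tell very different stories about the "typical" value:

    from statistics import mean, median

    # Made-up salaries at a small firm; one large value skews the distribution.
    salaries = [32_000, 35_000, 38_000, 41_000, 45_000, 48_000, 350_000]

    print(f"mean   = {mean(salaries):,.0f}")    # about 84,143 -- pulled up by the outlier
    print(f"median = {median(salaries):,.0f}")  # 41,000 -- closer to the typical employee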
     
  • What kind of statistical tests did they apply to infer their conclusions? Can you figure out what they were from what's presented in the article? Were they the correct ones? Which ones should they have used?
     
  • Were the results presented as "statistically significant"? If not, the differences could have been due to chance rather than due to the factor being examined in the research study.
     
    • If the research is based on a sample and the article reports differences, but doesn't indicate whether the differences are "statistically significant", be suspicious.
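
      As a minimal sketch of what "statistically significant" refers to, here is one common test -- a two-proportion z-test -- run on made-up survey counts (purely an illustration; it's not a claim about which test any particular study should have used):

      from math import sqrt
      from statistics import NormalDist

      x1, n1 = 260, 500   # group A: 260 "yes" answers out of 500
      x2, n2 = 235, 500   # group B: 235 "yes" answers out of 500

      p1, p2 = x1 / n1, x2 / n2
      p_pool = (x1 + x2) / (n1 + n2)                        # pooled proportion under the null
      se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))  # standard error of the difference
      z = (p1 - p2) / se
      p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided p-value

      print(f"difference = {p1 - p2:.1%}, z = {z:.2f}, p = {p_value:.3f}")

      Here the 5-point gap gives p of roughly 0.11, so by the usual 0.05 cutoff it would not be called statistically significant: a difference that size could plausibly arise from sampling chance alone.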
       
  •  How were the data presented? 
     
    • All percentages? Percentages need to be backed up by numbers (measurements) so you can tell how meaningful they really are. Be suspicious if all that's reported are percentages.
    • All numbers (measurements)? This is usually good.
    • Or a combination of both? Both aid meaningful interpretation of the data.
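
      A tiny illustration (made-up clinic figures) of why percentages need those backing counts:

      # The same headline percentage can rest on very different amounts of evidence.
      results = [("Clinic A", 2, 4), ("Clinic B", 500, 1000)]   # (name, successes, patients)

      for name, successes, patients in results:
          print(f"{name}: {successes / patients:.0%} success rate "
                f"({successes} of {patients} patients)")

      Both clinics report a 50% success rate, but only the counts reveal how much evidence stands behind each figure.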
       
  • Were there adjectives or other "descriptors" used to characterize the numbers or differences between numbers? If so, that's a dead giveaway that someone's trying to influence you or lie to you . . . or that they're biased somehow in their interpretation.
     
    • As an example, suppose the Cleveland Cavaliers beat the Detroit Pistons 102-97 in a basketball game. One of Cleveland's TV stations might describe the difference of 5 points as a "crushing victory," while one of Detroit's TV stations might say that the Cavaliers "edged out" the Pistons. Both TV stations are describing the same event . . . and their respective biases show in how the difference of 5 is being described. In and of itself, however, the difference of five is just that: a difference of five. Objective reporting would have given the score and omitted the descriptors.
        
  • Look at the graphs they used (if any):
     
    •  Do they have a zero point on the graph? If not, the graph will probably not give you enough "context" to judge how different various measures are. Be suspicious if there's no zero point on the graph and a case is being made for "large differences" among the various factors being presented.
       
    • Is the graph's scaling "fair"? By choosing inconsistent scaling (such as 10's on the "y" axis of one graph and 1000's on the "y" axis of another graph the first is being compared to), the presenter can mislead you so that you arrive at the wrong interpretation.
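
      A small matplotlib sketch (made-up figures; assumes matplotlib is installed) of the same two numbers plotted with and without a zero point, showing how the axis choice changes the visual impression:

      import matplotlib.pyplot as plt

      labels, values = ["Brand A", "Brand B"], [96, 98]

      fig, (ax_trunc, ax_zero) = plt.subplots(1, 2, figsize=(8, 3))

      ax_trunc.bar(labels, values)
      ax_trunc.set_ylim(95, 99)    # no zero point: the bars look wildly different
      ax_trunc.set_title("Truncated axis")

      ax_zero.bar(labels, values)
      ax_zero.set_ylim(0, 100)     # zero point included: the gap stays in proportion
      ax_zero.set_title("Axis starting at zero")

      plt.tight_layout()
      plt.show()

      The left panel makes a 2-point gap look enormous; the right panel shows it for what it is.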
       

After you've asked the above questions, assess the total impact of the article by asking yourself:

    • Are the conclusions of the study valid? Why or why not?
    • Is the article's headline supported by what's presented in the article? Should you believe the result?
    • Is there anything in the study that might prompt further worthwhile research?

    If even one of these questions can't be answered with a "yes," then the study--as it's presented in the article--is worthless.

These same questions can be applied to any research effort or report. Use them to determine how much you should "buy into" the claimed results.


Copyright 1998 Rich Hamper. All Rights Reserved.

Last Updated: Sunday, January 20, 2008