Statistics Don't Lie--People Do

If the data behind a piece of research are "good" data, if the appropriate statistical tools are used correctly, and if the analyses are handled the way they should be, then statistics will objectively and consistently point to what is likely to be a "true" result.

HOWEVER, people do lie in their use of statistics. How?

In Data Collection, by

  • Falsifying data

  • Choosing the wrong data--choosing something that looks like valid support for an argument but really isn't

In the Analysis of Research Based on Samples (e.g., surveys), by

  • Reporting differences that aren't "statistically significant" . . . as if they were. If the results aren't statistically significant, then as far as statistics is concerned, the differences aren't "real". (See the significance-test sketch after this list.)
     
  • Unjustifiably generalizing to a population based on a sample that's not representative of the population (for example, using a sample consisting of just American women to generalize to the total population of the U.S.).
     
  • Implying links between factors when such a conclusion isn't statistically justified by the research data. Be wary when you read hedging keywords such as "suggests"; "may", as in "may be linked to" or "may help explain"; and "some", as in "in some people" or "in some cases".
     
  • Never mentioning the risks associated with "false positives"--getting a positive result when the true result is negative--or "false negatives"--getting a negative result when the true result is positive--when recommending a course of action based on a study's conclusions (medical research studies do this all too often).
  • False positives and false negatives are always possible in any test procedure, so the standard practice is to do additional testing when a test comes back positive. If those follow-up tests arrive at the same diagnosis consistently, all well and good. (The second sketch after this list shows how often a positive result can be false when the condition being tested for is rare.)

    However, if they don't, and the positive result turns out to be a misdiagnosis (a "false positive"), you'll inevitably have been put through tests costing additional time and money, and sometimes involving additional pain or complications. You need to know all of this up front, before the first test is performed.
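
For example, here's a minimal sketch in Python of the kind of significance check that should stand behind any claim that two groups really differ. The group scores and the conventional 0.05 cutoff are invented assumptions, purely for illustration:

    # Hypothetical data: did a "treatment" group really score higher than a control group?
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    control = rng.normal(loc=100.0, scale=15.0, size=50)    # assumed control-group scores
    treatment = rng.normal(loc=103.0, scale=15.0, size=50)  # assumed treatment-group scores

    t_stat, p_value = stats.ttest_ind(control, treatment)
    print(f"Difference in means: {treatment.mean() - control.mean():.2f}")
    print(f"p-value: {p_value:.3f}")

    # Only report the difference as "real" if it clears the pre-chosen threshold.
    if p_value < 0.05:
        print("Statistically significant at the 0.05 level")
    else:
        print("Not significant--the difference may be nothing but sampling noise")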
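
And here's a minimal sketch of why those follow-up tests matter so much: Bayes' rule with assumed numbers (a condition affecting 1 in 1,000 people, and a test with 99% sensitivity and 95% specificity). Even with a test that accurate, roughly 98% of all positive results are false positives:

    # How likely is a single positive result to be a true positive?
    prevalence = 0.001    # P(has condition): 1 in 1,000 (assumed)
    sensitivity = 0.99    # P(tests positive | has condition) (assumed)
    specificity = 0.95    # P(tests negative | no condition) (assumed)

    # Total probability of a positive: true positives plus false positives.
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

    # Bayes' rule: P(has condition | tests positive).
    ppv = sensitivity * prevalence / p_positive

    print(f"P(tests positive): {p_positive:.4f}")        # about 0.051
    print(f"P(has condition | positive): {ppv:.3f}")     # about 0.019--only ~2%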

 

In the Presentation of the Results in a Graph or Chart, by

  • Showing no zero point on a graph's axis (see the axis sketch after this list)
     
  • Breaking the axis so that a zero point appears to be present while small changes are still made to look bigger than they really are
     
  • Omitting labels on one or more axes, leaving the reader to conclude the wrong thing
     
  • Uneven/deceitful plot scaling
     
  • Reporting results as percentages without indicating the actual numerical bases (i.e., "detached statistics")
     
  • Quoting percentages of percentages (usually done purposely to confuse the reader and make results seem more "dramatic" than they really are; see the relative-vs.-absolute sketch after this list)
     
  • Selectively reporting statistics that support a desired result while omitting those that don't
     
  • Tacking on adjectives and adverbs when describing numbers and the differences between them (e.g., "a meager 4%", "a huge 4% difference") to influence your interpretation of the results. These are strategies used in marketing, politics, and propaganda. Numbers are just numbers; they don't have attributes.
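
To see how the first two bullets work in practice, here's a minimal sketch in Python with matplotlib (the sales figures are invented) that plots the same four numbers twice--once with a zero baseline, once with a truncated axis. The right-hand panel makes a roughly 3% rise look like a several-fold jump:

    import matplotlib.pyplot as plt

    years = [2004, 2005, 2006, 2007]
    sales = [98.0, 99.0, 100.0, 101.0]    # invented figures: about a 3% total rise

    fig, (honest, misleading) = plt.subplots(1, 2, figsize=(8, 3))

    honest.bar(years, sales)
    honest.set_ylim(0, 110)               # zero baseline: the change looks modest
    honest.set_title("Axis starts at 0")

    misleading.bar(years, sales)
    misleading.set_ylim(97, 102)          # truncated axis: the same change looks dramatic
    misleading.set_title("Axis starts at 97")

    for ax in (honest, misleading):
        ax.set_xlabel("Year")
        ax.set_ylabel("Sales")

    plt.tight_layout()
    plt.show()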
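
And a minimal sketch (again, invented numbers) of the "percentages of percentages" game: a risk that moves from 2% to 3% can honestly be reported as "up 50%", which sounds far more dramatic than the one-percentage-point change it actually is:

    # The same change, reported two different ways.
    old_rate = 0.02    # assumed baseline risk: 2%
    new_rate = 0.03    # assumed new risk: 3%

    relative_change = (new_rate - old_rate) / old_rate * 100    # a % of a %
    absolute_change = (new_rate - old_rate) * 100               # percentage points

    print(f"Relative change: {relative_change:.0f}%")                     # "up 50%!"
    print(f"Absolute change: {absolute_change:.0f} percentage point(s)")  # just 1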

By Reporting an "Expert's" View

All "experts" are NOT created equal. Supporting or critiquing the results of a research study by quoting an "expert" should send up an immediate warning flag in your mind as far as the credibility of the study's conclusions are concerned. An expert's opinion is just that--opinion--it's not objective science . . . or necessarily even remotely valid.

RULE OF THUMB: Be critical in your reading of any newspaper, magazine, website, or report. If any of the above tactics are used, be suspicious of the results. Also, be suspicious of any newscast that uses such tactics in reporting research.
