If the research data are "good" data, the appropriate statistical tools are used correctly, and the analyses are handled the way they should be, statistics will objectively and consistently point to what is likely to be a "true" result.

HOWEVER, people do lie in their use of statistics. How?

In Data Collection, by:

  • Falsifying data
  • Choosing the wrong data: choosing something that looks like valid support for an argument, but really isn't
  • “Cherry-picking” the data: selecting a subset of the data collected such that it supports a “statistically significant” result (see the sketch at the end of this section).

Sometimes they even make their data conveniently unavailable, so that other researchers are unable to verify the results.
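
To make "cherry-picking" concrete, here is a minimal sketch in Python using made-up, simulated data. Every subgroup is pure noise, yet at a 5% significance threshold roughly one subgroup in twenty will look "significant" by chance alone; reporting only those hits, and never mentioning the rest, is the lie.

    # A minimal sketch (simulated data): test many noise-only subgroups
    # and report only the ones that happen to look "significant".
    import random
    from statistics import mean, stdev

    random.seed(42)
    n_groups, n_obs = 20, 30

    # Every subgroup is drawn from the same noise distribution;
    # there is no real effect anywhere in this data.
    groups = [[random.gauss(0, 1) for _ in range(n_obs)] for _ in range(n_groups)]

    hits = []
    for i, g in enumerate(groups):
        # Crude one-sample t statistic against the true mean of 0.
        t = mean(g) / (stdev(g) / n_obs ** 0.5)
        if abs(t) > 2.05:  # roughly the two-sided 5% cutoff for n = 30
            hits.append((i, round(t, 2)))

    # On average, about 1 in 20 noise-only subgroups crosses a 5% threshold
    # by chance alone. Reporting only the hits, and staying silent about the
    # other tests, is cherry-picking.
    print(f"{len(hits)} of {n_groups} noise-only subgroups look 'significant':", hits)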


In the Analysis of Research Based on Samples, by:

  • Reporting differences that aren't "statistically significant" . . . as if they were. If the results aren't statistically significant, as far as statistics is concerned, the differences aren't "real".
  • Unjustifiably generalizing to a population based on a sample that's not representative of the population (for example, using a sample consisting of just American women to generalize to the total population of the U.S.).
  • Implying links between factors when such a conclusion is not statistically justified by the research data. Be wary when you read hedge words such as "suggests"; "may", as in "may be linked to" or "may help explain"; or "some", as in "in some people" or "in some cases".
  • Never mentioning the risks associated with "false positives" (getting a positive result when the result is really negative) or "false negatives" (getting a negative result when the result is really positive) when recommending a course of action based on the conclusions of the study done; medical research studies all too often do this. A worked example follows at the end of this section.

    False positives and false negatives are always possible in any test procedure, so the standard procedure is to do additional testing if a test comes in positive. If those subsequent tests consistently arrive at the same diagnosis, all is well and good.

    However, if they don't, and the positive result turns out to be a misdiagnosis (a "false positive"), you'll inevitably have been put through tests that involve additional time, money, and sometimes additional pain or complications. You really need to know about all of this up front, before the first test is performed.
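
To see how large the false-positive risk can be, here is a minimal sketch in Python applying Bayes' rule to hypothetical numbers (1% prevalence, 99% sensitivity, 95% specificity; none of these figures describe any real test). Even with a seemingly excellent test, most positives here are false.

    # A minimal sketch (hypothetical numbers): when a condition is rare,
    # a single positive result is often more likely wrong than right.
    sensitivity = 0.99   # P(test positive | actually positive)
    specificity = 0.95   # P(test negative | actually negative)
    prevalence  = 0.01   # P(actually positive) in the tested population

    # Bayes' rule: P(actually positive | test positive)
    p_positive_test = (sensitivity * prevalence
                       + (1 - specificity) * (1 - prevalence))
    p_true_positive = (sensitivity * prevalence) / p_positive_test

    print(f"P(test comes back positive)     = {p_positive_test:.3f}")
    print(f"P(actually positive | positive) = {p_true_positive:.3f}")
    # With these numbers only about 1 in 6 positives is a true positive;
    # the other 5 are "false positives" headed for follow-up testing.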


In the Presentation of the Results in a Graph or Chart by:

  • Showing no zero point on graphs, making small changes seem bigger than they really are (see the sketch after this list)
  • Breaking the y-axis, without appropriate warning, so that a zero point can still be presented. This is better than no zero point, but it still makes small changes seem bigger than they are; a reader who doesn't register the break can be deceived in the same fashion as by a no-zero-point graph
  • Omitting labels on one or more axes, leaving the reader to conclude the wrong thing
  • Using uneven or deceptive scaling on the plot axes
  • Reporting results as percentages without indicating the actual numerical bases (i.e., "detached statistics"); "75% preferred Brand X" means little if only four people were surveyed
  • Quoting percentages of percentages, usually purposely used to confuse the reader and make results seem more "dramatic" than they really are; a drop from 2% to 1% can be billed as "a 50% reduction" when it is only one percentage point
  • Selectively choosing to report statistics that support a desired result rather than those that wouldn't
  • Tacking on adjectives and adverbs when describing numbers and differences between numbers (e.g., a meager 4%, a huge 4% difference) to influence your interpretation of the results. These are strategies used in marketing, politics, and propaganda. Numbers are just numbers; they don't have attributes.
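
As an illustration of the zero-point trick, here is a minimal sketch using Python with matplotlib and made-up numbers. Both panels plot the same 2% difference; only the y-axis changes, and with it the visual impression.

    # A minimal sketch (made-up numbers): the same data plotted with a
    # truncated y-axis and with a zero-based y-axis.
    import matplotlib.pyplot as plt

    labels = ["Brand A", "Brand B"]
    values = [98, 100]  # a 2% difference

    fig, (ax_trunc, ax_zero) = plt.subplots(1, 2, figsize=(8, 3))

    ax_trunc.bar(labels, values)
    ax_trunc.set_ylim(97, 101)   # no zero point: Brand B appears to tower over A
    ax_trunc.set_title("Truncated axis (deceptive)")

    ax_zero.bar(labels, values)
    ax_zero.set_ylim(0, 110)     # zero point shown: the bars look nearly equal
    ax_zero.set_title("Zero-based axis (honest)")

    plt.tight_layout()
    plt.show()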


By Reporting an "Expert's" View

All "experts" are NOT created equal. Supporting or critiquing the results of a research study by quoting an "expert" should send up an immediate warning flag in your mind as far as the credibility of the study's conclusions are concerned. Such an “expert's “opinion is just that–opinion–it's not objective science . . . or necessarily even remotely valid.

Also be aware that true expertise in one area does NOT make a person an expert in other areas. Question the knowledge and credentials of all “experts” before regarding them as authorities in any field outside their own.

RULE OF THUMB: Be critical in your reading of any newspaper, magazine, website, or report. If any of the above tactics are used, be suspicious of the results. Also, be suspicious of any newscast that uses such tactics in reporting research.