Posts

Showing posts with the label Type I error

Christie Aschwanden's "The Case Against Early Cancer Detection"

I love counterintuitive data that challenges commonly held beliefs. And there is a lot of counterintuitive health data out there (for example, data questioning the health benefits of taking vitamins, or data that led to a revolution in how we put our babies to sleep AND cut the incidence of SIDS in half). This story by Aschwanden for fivethirtyeight.com discusses efficacy data for various kinds of cancer screening. Short version of the article: early cancer screening detects non-cancerous lumps and abnormalities in the human body, which in turn leads to additional and invasive tests and procedures to ensure that an individual really is cancer-free or to remove growths that are not life-threatening (but that expose the individual to all the risks associated with surgery). Specific examples: 1) Diagnoses of thyroid cancer in South Korea have increased because the disease is being screened for more often; however, deaths due to thyroid cancer have NOT increased (see figure below)...
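To make the false-positive logic behind this concrete for a class, here is a quick back-of-the-envelope sketch in Python (the numbers are hypothetical, not from Aschwanden's article): when the condition being screened for is rare, even a fairly accurate test flags far more healthy people than sick ones.

```python
# Hypothetical illustration (made-up numbers, not from the article):
# why screening a mostly healthy population yields many false alarms.

population     = 100_000   # people screened
prevalence     = 0.005     # 0.5% have a cancer that would ever become lethal
sensitivity    = 0.90      # screen flags 90% of true cancers
false_pos_rate = 0.05      # screen also flags 5% of healthy people

true_cancers   = population * prevalence
healthy_people = population - true_cancers

true_positives  = true_cancers * sensitivity
false_positives = healthy_people * false_pos_rate

print(f"Flagged, actually have cancer:   {true_positives:,.0f}")
print(f"Flagged, actually healthy:       {false_positives:,.0f}")
print(f"Share of flags that are false:   "
      f"{false_positives / (true_positives + false_positives):.0%}")
```

With these made-up numbers, roughly nine out of ten people flagged by the screen do not have a dangerous cancer, yet each of them is a candidate for follow-up testing and procedures.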

Regina Nuzzo's "Scientific method: Statistical errors"

This article from Nature is an excellent primer on the concerns surrounding the use of p-values as the great gatekeeper of statistical significance. The article includes historical perspective on how p-values came to be so widely used, as well as some discussion of solutions and alternative measures of significance. It also provides good examples of failed attempts at replication (good examples of Type I errors) and a shout-out to the Open Science Framework folks. Personally, I have revised my class for the fall to include more discussion and use of effect sizes. I think this article may be a bit above an undergraduate introductory statistics class, but it could be useful for us as instructors as well as a good reading for advanced undergraduates and graduate students.
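If you want a quick in-class demonstration of why effect sizes matter alongside p-values, here is a small Python sketch (simulated data, not from the article) showing that a trivially small difference between groups becomes "statistically significant" once the sample is large enough, while Cohen's d stays tiny.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def cohens_d(a, b):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = (a.var(ddof=1) + b.var(ddof=1)) / 2
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Two groups whose true means differ by a trivial 0.05 standard deviations.
for n in (50, 50_000):
    a = rng.normal(0.00, 1, n)
    b = rng.normal(0.05, 1, n)
    t, p = stats.ttest_ind(a, b)
    print(f"n per group = {n:>6}:  p = {p:.4f},  Cohen's d = {cohens_d(b, a):.3f}")
```

The p-value collapses toward zero as n grows, but the effect size stays near 0.05, which is the point students need to see before they treat p < .05 as meaning "important."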

The Economist's "Unlikely Results"

A great, foreboding video (here is a link to the same video on YouTube in case you hit the paywall) about the actual size and implications of Type I errors in scientific research. This video does a great job of illustrating what p < .05 means in the context of thousands of experiments. Here is an article from The Economist on the same topic.
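The arithmetic behind the video is easy to reproduce. Here is a rough Python sketch using illustrative numbers (1,000 experiments, 10% true effects, alpha = .05, power = .80; these are assumptions for the demo, not figures quoted from the video) showing how a large share of "significant" results can still be false positives.

```python
# Back-of-the-envelope sketch of the video's point (illustrative numbers):
# even with p < .05, a large share of "significant" findings can be false
# when most hypotheses tested are actually false.

experiments  = 1000
true_effects = 100             # only 10% of tested hypotheses are real
alpha        = 0.05            # Type I error rate
power        = 0.80            # chance of detecting a real effect

false_positives = (experiments - true_effects) * alpha   # 900 * .05 = 45
true_positives  = true_effects * power                    # 100 * .80 = 80

significant = false_positives + true_positives
print(f"'Significant' results:      {significant:.0f}")
print(f"...that are actually false: {false_positives:.0f}")
print(f"False discovery rate:       {false_positives / significant:.0%}")
```

Under these assumptions, more than a third of the "significant" results are false positives even though every individual test used the conventional .05 cutoff.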

The Washington Post's "GAO says there is no evidence that a TSA program to spot terrorists is effective" (Update: 3/25/15)

The Transportation Security Administration (TSA) implemented SPOT training to teach airport security employees how to spot problematic and potentially dangerous individuals via behavioral cues. The intervention has cost the U.S. government over $1 billion. It doesn't seem to work. By discussing this with your class, you can cover the importance of program evaluation as well as validity and reliability. The actual government-issued report goes into great detail about how the program evaluation data were collected to demonstrate that SPOT isn't working. The findings (especially the table and figure below) do a nice job of demonstrating the lack of reliability and the lack of validity. This whole story also implicitly demonstrates that the federal government is hiring statisticians with strong research methods backgrounds to conduct program evaluations (= jobs for students). Here is a summary of the report from the Washington Post. Here is a short summary and video about the report from ...
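For the reliability piece, one classroom-friendly angle is chance-corrected agreement: two screeners can "agree" most of the time simply because they both rarely flag anyone. Here is a small Python sketch with entirely hypothetical data (not taken from the GAO report) computing Cohen's kappa for two made-up officers.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Chance-corrected agreement between two raters on binary calls."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    observed = np.mean(r1 == r2)
    p_yes = r1.mean() * r2.mean()              # both say "flag" by chance
    p_no  = (1 - r1.mean()) * (1 - r2.mean())  # both say "no flag" by chance
    expected = p_yes + p_no
    return (observed - expected) / (1 - expected)

rng = np.random.default_rng(1)
# Two hypothetical officers who each flag about 10% of travelers,
# independently of one another (i.e., no shared signal).
officer_a = rng.binomial(1, 0.10, 500)
officer_b = rng.binomial(1, 0.10, 500)

print(f"Raw agreement: {np.mean(officer_a == officer_b):.0%}")
print(f"Cohen's kappa: {cohens_kappa(officer_a, officer_b):.2f}")
```

Raw agreement comes out around 80% while kappa hovers near zero, a handy way to show students why percent agreement alone is a poor measure of reliability.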

Cracked's "The five most popular ways statistics are used to lie to you"

If you aren't familiar with cracked.com, it is a website that puts together lists. Some are pretty amusing (6 Myths About Psychology That Everyone (Wrongly) Believes, 6 Things Your Body Does Every Day That Science Can't Explain). And some are even educational, like "The five most popular ways statistics are used to lie to you" from cracked.com. The list contains good points to encourage critical thinking in your students. Some of the specific points it touches upon: 1) when it is more appropriate to use the median than the mean, 2) false positives, 3) absolute versus relative changes in amount, 4) probability, and 5) correlation does not equal causation. And you'll get mad street cred points from undergraduates for using a Cracked list. Trust me.
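Points 1 and 3 are easy to demonstrate in a few lines of Python. Here is a hypothetical sketch (made-up salary and risk numbers, not from the Cracked piece) showing how one outlier drags the mean away from the median and how a dramatic relative increase can be a tiny absolute one.

```python
import statistics

# Point 1: one outlier drags the mean but barely moves the median.
salaries = [32_000, 35_000, 37_000, 40_000, 41_000, 1_000_000]  # made-up data
print(f"Mean salary:   ${statistics.mean(salaries):,.0f}")
print(f"Median salary: ${statistics.median(salaries):,.0f}")

# Point 3: a scary "doubled risk" can be a tiny absolute change.
baseline_risk = 0.0001     # 1 in 10,000 (hypothetical)
new_risk      = 0.0002     # "risk has increased 100%!"
print(f"Relative increase: {(new_risk - baseline_risk) / baseline_risk:.0%}")
print(f"Absolute increase: {new_risk - baseline_risk:.4%} "
      f"(about 1 extra case per 10,000)")
```

The mean salary lands near $200,000 while the median stays around $38,500, and the "100% increase" in risk amounts to one additional case per 10,000 people.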

NPR's "Data linking aspartame to cancer risk are too weak to defend, hospital says"

This story from NPR is a good example of 1) the media misinterpreting statistics and research findings, 2) Type I errors, and 3) the fact that peer-reviewed does not mean perfect. Here is a print version of the story, and here is the radio/audio version... (Note: the two links don't take you to the exact same stories; the print version provides greater depth, but the radio version eats up class time when you forget to prep enough for class AND it doesn't require any pesky reading on the part of your students.)