When we talk about false positives in psych stats, it is usually in the context of NHST, which is abstract and tricky to understand, no matter how many normal curves you draw on the dry-erase board. We also tend to frame it in really statsy terms, like alpha and beta, which are also tricky and sort of abstract, no matter how many times you repeat .05 .05 .05.
In all things statistics, I think that abstract concepts are best understood in the context of real-life problems. I also think that statistics instructors need to emphasize not just statistics but statistical thinking and reasoning in real life. To continue on a theme from my last post, students need to understand that the lessons in psych stats aren't just for performing statistics and getting a good grade, but also for improving general critical thinking and problem-solving in day-to-day life.
I also think that our in-class examples can be too sterile. They may explain Type I/II error accurately, but we tend to ask our students to consider only the balance between false positives and false negatives, not all of the external factors that may be at play when deciding how many false positives/negatives we are willing to tolerate in a given circumstance.
This NPR story about Type II error is a great example of psych stats thinking applied to a real and urgent problem. Specifically, it describes the Type II error rate the FDA tolerates when it comes to manufacturers developing new COVID-19 at-home tests.
The problem: The FDA's standards are such that they will not approve any COVID-19 test if it has too many false negatives. The story states that, with some proposed tests, Type II errors occur because the test doesn't catch the virus early in its progression. This is all reasonable thinking, right? False negatives are bad, if not downright deadly, when it comes to COVID-19 testing: an infected person who tests negative may keep going out and spreading the virus.
But test manufacturers argue that if the test a) can be performed and interpreted at home, b) provides rapid results and c) is cheap enough that you could use one every other day, false negatives aren't that big of a deal.
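You can put numbers on the manufacturers' logic. Suppose (hypothetically; these figures are mine for illustration, not from the NPR story) a cheap at-home test misses an infection 30% of the time. If an infected person retests every other day, and the misses are roughly independent, the chance that every test misses shrinks fast. A quick Python sketch:

```python
# Hypothetical illustration: a cheap at-home test that misses an
# infection 30% of the time (a per-test Type II error rate of .30).
# The .30 figure is made up for teaching purposes, not from the FDA.
per_test_false_negative = 0.30

# If an infected person retests every other day, and the misses are
# (roughly) independent, the chance that EVERY test comes back as a
# false negative shrinks multiplicatively.
for n_tests in range(1, 5):
    miss_all = per_test_false_negative ** n_tests
    print(f"{n_tests} test(s): P(all false negatives) = {miss_all:.3f}")

# 1 test(s): P(all false negatives) = 0.300
# 2 test(s): P(all false negatives) = 0.090
# 3 test(s): P(all false negatives) = 0.027
# 4 test(s): P(all false negatives) = 0.008
```

The independence assumption is doing a lot of work there (a test that misses you early in the infection will probably miss you again tomorrow), but it shows why "cheap and frequent" can compete with "accurate but slow."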
In messy reality, the really accurate tests (low Type II error) are taking 7-10 days to get back to people. Our medical testing system is overwhelmed. It takes medical personnel (who are also overwhelmed) to administer those low Type II error tests.
So...maybe a higher Type II error rate is very tolerable under such circumstances. Which perfectly demonstrates how messy the real world is, right? I love that and think your students could have a great debate about that topic. Usually, when we ask, "How do you reduce Type II error?" the textbook answer is "Increase Type I error," which just means setting a larger alpha (say, .10 instead of .05). What do the scientists suggest here? How might increased Type I error rates hurt people in this situation?
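If you'd rather show that trade-off than just assert it, here is a minimal simulation sketch. The effect size, sample size, and number of simulations are arbitrary choices for the classroom, not anything from the story:

```python
# Minimal sketch of the alpha/beta trade-off: for a fixed study design,
# tolerating more Type I error (a larger alpha) lowers Type II error.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(seed=1)
n_sims, n_per_group, true_effect = 5000, 30, 0.5  # arbitrary: d = 0.5

# Simulate many two-group studies where a real effect exists, so every
# non-significant result is a Type II error (a miss).
p_values = np.array([
    ttest_ind(rng.normal(0.0, 1.0, n_per_group),
              rng.normal(true_effect, 1.0, n_per_group)).pvalue
    for _ in range(n_sims)
])

for alpha in (0.05, 0.10):
    type_2_rate = np.mean(p_values >= alpha)
    print(f"alpha = {alpha:.2f} -> Type II error rate ~ {type_2_rate:.2f}")
```

Students can watch the exact same study design produce fewer misses just because we agreed to tolerate more false alarms, and then argue about whether that trade is worth it for COVID testing.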
I think a lot of us are struggling with how many COVID examples we should use in our classes as we prepare for Fall 2020. I am, too. I think we should use them, but also lighter examples (go to my blog and search for Taco Bell, Dennis Quaid, and/or salsa, for a few silly examples).