

Hausmann et al.'s Using Smartphone Crowdsourcing to Redefine Normal and Febrile Temperatures in Adults: Results from the Feverprints Study

As described in Wired's pop piece, the average body temperature for healthy adults isn't 98.6℉. Instead, the data suggest that it is 97.7℉. Here is a link to the original study by Hausmann, Berna, Gujral, Ayubi, Hawkins, Brownstein, & Dedeoglu. 1. This is an excellent theoretical example for explaining a situation where a one-sample t-test could answer your research question. 2. I created fake data that jibe with the results, so you can conduct the test with your students. This data set mimicked the original findings for healthy adults (M = 97.7, SD = .72) and was generated with Andy Luttrell's Data Generator for Teaching Statistics. 97.39 97.45 97.96 97.35 96.74 99.66 98.21 99.02 96.78 97.70 96.90 97.29 97.99 97.73 98.18 97.78 97.17 97.34 97.56 98.13 97.77 97.07 97.13 9...
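If you want to run the numbers yourself (or have students check their hand calculations), here is a minimal sketch of the one-sample t-test in Python with SciPy. The temps list below contains only the values visible above, since the full data set is truncated here, and 98.6 is the textbook value the sample is tested against; swap in the complete generated data when you use it in class.

```python
# Minimal sketch of a one-sample t-test against the textbook 98.6 F value.
# `temps` holds only the values shown above; substitute the full
# generated data set for classroom use.
from scipy import stats

temps = [97.39, 97.45, 97.96, 97.35, 96.74, 99.66, 98.21, 99.02, 96.78,
         97.70, 96.90, 97.29, 97.99, 97.73, 98.18, 97.78, 97.17, 97.34,
         97.56, 98.13, 97.77, 97.07, 97.13]

t_stat, p_value = stats.ttest_1samp(temps, popmean=98.6)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates the sample mean differs reliably from 98.6 F.
```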

The Tonight Show: Nick Jonas scores as Joe Jonas on Buzzfeed quiz.

Explain validity to your students AND earn some "I'm still hip!" street cred using this Tonight Show clip that features a Buzzfeed quiz AND exactly one Jonas brother. Nick Jonas took a "Which Jonas Brother are you?" Buzzfeed quiz. He scored as Joe Jonas. Ergo, the Buzzfeed assessment measure is not valid: it does not properly assess what it purports to assess. Watch the video for yourself. If you want to take this example a step further, you could have your students take the original quiz, discuss the questions and their ability to discern which Jonas Brother is which, describe Nick Jonas as a Nick Jonas Subject Matter Expert, or consider whether social desirability got in the way of Nick answering the questions honestly, etc. Another thing I've noticed as my blog and I have aged together: There are now generations of Buzzfeed quiz assessments that provide great examples for different age groups: Gen X: Shirley Manson did not score as Shirley ...

Watson's For Women Over 30, There May Be A Better Choice Than The Pap Smear

Emily Watson, writing for NPR, describes medical research by Ogilvie, van Niekerk, & Krajden. This research provides a timely, topical example of false positives, false negatives, and medical research, and it gets your students thinking a bit more flexibly about measurement. The research speaks to a debate in medicine: Which method of cervical cancer detection is more accurate, the traditional Pap smear or an HPV screening? The Pap smear works by scraping cells off of the cervix and having a human examine them to detect abnormal cervical cancer cells. The HPV test, as the name suggests, detects HPV. Since HPV causes 99% of cervical cancers, its presence signals a clinician to perform further screening, usually a colposcopy. The findings: Women over 30 benefit more from the HPV test. How to use this example in class: - This is a great example of easy-to-follow research methodology and efficacy testing in medicine. A question existed: Which is better, the Pap or the HPV test? The questi...
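To make the false positive / false negative distinction concrete before getting into the study itself, here is a minimal sketch using made-up screening counts (not figures from Ogilvie et al.) that shows how each error rate is computed from a simple results table.

```python
# Made-up screening counts (NOT from the study) illustrating the two
# ways a screening test can be wrong.
true_positives  = 90    # disease present, test positive
false_negatives = 10    # disease present, test negative -> missed cases
false_positives = 50    # disease absent, test positive -> needless follow-up
true_negatives  = 850   # disease absent, test negative

false_negative_rate = false_negatives / (true_positives + false_negatives)
false_positive_rate = false_positives / (false_positives + true_negatives)

print(f"False negative rate: {false_negative_rate:.1%}")  # 10.0%
print(f"False positive rate: {false_positive_rate:.1%}")  # 5.6%
```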

My favorite real world stats examples: The ones that mislead with real data.

This is a remix of a bunch of posts. I brought them together because they fit a common theme: examples that use actual data that researchers collected but still manage to lie or mislead. So, lying with facts. These examples hit upon a number of themes in my stats classes: 1) statistics in the wild, 2) teaching our students to sniff out bad statistics, and 3) vivid examples are easier to remember than boring examples. Here we go: Making Graphs: Fox News using accurate data and inaccurate charts to make unemployment look worse than it is. Misleading with Central Tendency: The mean cost of a wedding in 2004 might have been $28K...if you assume that all couples used all possible services and paid for all of those services. Also, maybe the median would have been the more appropriate measure to report (a quick sketch below shows why with made-up numbers). Don't like the MPG for the vehicles you are manufacturing? Try testing your cars under ideal, non-real-world conditions to fix that. Then get fined by the EPA. Mis...
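Here is the promised sketch for the mean-vs-median point, using made-up wedding costs (not the actual 2004 survey data): a few lavish weddings drag the mean well above what a typical couple spends, while the median stays put.

```python
# Made-up wedding costs (NOT the 2004 survey data): most couples spend
# modestly, a few spend lavishly, and the mean chases the lavish few.
import statistics

costs = [8_000, 9_500, 11_000, 12_000, 13_500, 15_000, 16_000,
         18_000, 60_000, 95_000]

print("mean:  ", statistics.mean(costs))    # ~25,800 -- pulled up by the outliers
print("median:", statistics.median(costs))  # 14,250  -- the "typical" wedding
```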

Wilke's regression line CIs via GIFs

A tweet straight up solved a problem I encountered while teaching. The problem: How can I explain why the confidence interval area for a regression line is curved when the regression line itself is straight? This comes up when I use my favorite regression example. It explains regression AND the power that government funding has over academic research. TL;DR: Relative to the number of Americans who die by gun violence, there is a disproportionately low amount of a) federal funding and b) research publications aimed at better understanding gun violence deaths, compared with the funding and publishing devoted to other common causes of death in America. Why? The Dickey Amendment to a 1996 federal spending bill. See the graph below: https://jamanetwork.com/journals/jama/article-abstract/2595514 The gray area here is the confidence interval region for the regression line. And I had a hard time explaining to my students why the regression line, which is straight, doesn't have a perfectly rectangula...
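The short version of the answer: the confidence band is narrowest near the mean of x and flares out toward the extremes, because uncertainty about the slope swings the fitted line more the farther you move from the center of the data. Here is a minimal sketch (my own illustration with made-up data, not Wilke's GIF code) that shows the same idea numerically: refit the line to many bootstrap resamples and compare how much the predictions wobble at the center of the x range versus at its edge.

```python
# Minimal sketch: the spread of plausible regression lines is wider at the
# edge of the x range than at its center, which is why the CI band curves.
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.uniform(0, 10, n)
y = 2 + 0.5 * x + rng.normal(0, 1, n)      # made-up data with slope 0.5

x_eval = np.array([x.mean(), x.max()])     # center vs. edge of the x range
preds = []
for _ in range(1000):                      # refit on bootstrap resamples
    idx = rng.integers(0, n, n)
    slope, intercept = np.polyfit(x[idx], y[idx], 1)
    preds.append(intercept + slope * x_eval)
preds = np.array(preds)

print("spread of fitted lines at mean(x):", preds[:, 0].std().round(3))
print("spread of fitted lines at max(x): ", preds[:, 1].std().round(3))
# Each plausible line pivots roughly around (mean(x), mean(y)), so small
# slope changes barely move predictions at the center but swing them
# substantially at the edges -- hence the hourglass-shaped band.
```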

Using Fortnite to explain percentiles

So, Fortnite is a super popular, first-person-shooter, massively multiplayer online game. I only know this because my kid LOVES Fortnite. With the free version, called Battle Royale, a player parachutes onto an island, scours for supplies, and tries to kill the other players. Like, there is way more to it than that, but this is my limited, 39-year-old mother-of-two explanation. And, admittedly, I don't game, so please don't rake me over the coals if I'm not using the proper Fortnite terminology to describe things! Anyway, my brain thinks in statistics examples. So I noticed that each Battle Royale match starts with 100 players. See the screenshot: This player is parachuting onto the island at the beginning of the skirmish, and there are still 100 players left since the game is just starting and no one has been eliminated. Well, when we introduce our students to the normal curve and percentiles and z-scores and such, we tell them that the normal curve represen...
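If you want to push the percentile framing a little further with students, here is a minimal sketch (my own illustration, not from the game) of the arithmetic: with 100 players per match, a surviving player's percentile is just the number of players already eliminated.

```python
# With 100 players per match, the percent of the field you have outlasted
# is simply the count of eliminated players.
def survival_percentile(players_remaining, match_size=100):
    """Percent of the field a surviving player has already outlasted."""
    eliminated = match_size - players_remaining
    return 100 * eliminated / match_size

print(survival_percentile(100))  # 0.0  -> match just started, no one out yet
print(survival_percentile(25))   # 75.0 -> you've outlasted 75% of the field
print(survival_percentile(1))    # 99.0 -> last one standing territory
```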

Talking to your students about operationalizing and validating patient pain.

Patti Neighmond, reporting for NPR, wrote a piece on how the medical establishment's method for assessing patient pain is evolving. This is a good example of why it can be so tricky to operationalize the abstract. Here, the abstract notion is pain. And the story discusses shortcomings of the traditional numeric, Wong-Baker pain scale, as well as alternatives or complements to the pain scale. No one is vilifying the scale, but recent research suggests that what a patient reports and how a medical professional interprets that report are not necessarily the same thing. From Dr. John Markman's unpublished research: I think this could also be a good example of testing for construct validity. The researcher asked whether the pain was tolerable and found out that their numerical scale was NOT detecting intolerable pain. This is a psychometric issue. One of the recommendations for better operationalization: asking a patient how pain affects their ability to perform every day tas...