

Showing posts with the label NPR

Bad credit scores as a predictor of dementia

NPR aired this story by Sarah Boden about the relationship between risky financial behavior and dementia. It consists of Boden interviewing dementia researchers and people caring for individuals with dementia. Before the NPR story, Boden published a related piece for a Pittsburgh NPR station. The Pittsburgh piece is a more formal report with many links to helpful information. Among the research Boden describes is this study by Nicholas et al. (2020), which finds that people exhibit poor financial decision-making up to six years before a dementia diagnosis. Here is a press release about the study, in case you want to give more advanced students a primer or earlier undergraduate students a plain-language summary of the research. The audio version of this story is very compelling. It includes interviews with several people who have been left heavily in debt because of poor decisions made by family members before their diagnosis. It also offers some solutions that could be implemented ...

Suicide hotline efficacy data: Assessment, descriptive data, t-tests, correlation, regression examples abound

ASIDE: THIS IS MY 500th POST. PLEASE CLAP. Efficacy data about a mental health intervention? Yes, please. This example has so much potential in a psych stats classroom. Or an abnormal/clinical classroom, or research methods. Maybe even human factors, because three digits are easier to remember than ten? This post was inspired by an NPR story by Rhitu Chatterjee. It is all about America's mental health emergency hotline's switch from a 10-digit phone number to the much easier-to-remember three digits (988), and the various ways that the government has measured the success of this change. How to use this (and related material) in class: 1) Assessment. In the NPR interview, they describe how several markers have improved: wait times, dropped calls, etc. Okay, so the NPR story sent me down a rabbit hole of looking for this data so we can use it in class. Here is the federal government's website about 988 and a link to their specific 988 performance data,...
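
If you track down the 988 performance data, even basic descriptives can carry the class discussion. A minimal sketch in Python, using invented monthly numbers rather than the real federal data:

```python
# Hypothetical monthly 988 performance numbers for class discussion.
# These values are invented; pull the real ones from the federal
# 988 performance-data page.
import statistics

answer_rate = [0.70, 0.74, 0.81, 0.85, 0.88, 0.90]  # proportion of calls answered
wait_seconds = [140, 120, 95, 70, 55, 45]           # average wait time

print("Mean answer rate:", statistics.mean(answer_rate))
print("Mean wait (s):", statistics.mean(wait_seconds))
# As answer rates climb, wait times drop (in these fake numbers):
print("Correlation:", statistics.correlation(answer_rate, wait_seconds))
```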

Type I/II error in real life: The FDA and the search for an at-home COVID-19 test

When we talk about false positives in psych stats, it is usually in the context of NHST, which is abstract and tricky to understand, no matter how many normal curves you draw on the dry erase board. We also tend to frame it in really statsy terms, like alpha and beta, which are also tricky and sort of abstract, no matter how many times you repeat .05 .05 .05. In all things statistics, I think that abstract concepts are best understood in the context of real-life problems. I also think that statistics instructors need to emphasize not just statistics but statistical thinking and reasoning in real life. To continue a theme from my last post, students need to understand that the lessons in psych stats aren't just for performing statistics and getting a good grade, but also for improving general critical thinking and problem-solving in day-to-day life. I also think that our in-class examples can be too sterile. They may explain Type I/II error accurately, but we tend to only ask our stude...
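
One way to make alpha less abstract is to simulate it. A sketch (my own illustration, not from the FDA story): draw many pairs of samples from the same population, t-test each pair, and count how often we get a false positive.

```python
# Simulate the Type I error rate: both groups come from the SAME
# population, so every "significant" result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
alpha = 0.05
false_positives = 0
n_experiments = 10_000

for _ in range(n_experiments):
    a = rng.normal(loc=100, scale=15, size=30)
    b = rng.normal(loc=100, scale=15, size=30)  # same population as a
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

# Should land near 0.05 -- which is exactly what alpha means.
print(false_positives / n_experiments)
```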

NPR's The Pandemic Is Pushing Scientists To Rethink How They Read Research Papers

This NPR story by Richard Harris describes science's struggle to keep up with the massive amount of COVID-19 research, much of which is coming out of China. How does science, and society, judge the quality of these papers? How to use in class: 1. How do scientists assess the quality of research? By reading pre-registered reports and pre-prints: The report explains pre-prints and pre-registration! The good: The research gets out faster. Reviewers can compare the pre-planned analysis to the actual analysis. The bad: The media gets too excited about pre-prints. The report describes the totally overwhelming number of pre-prints for COVID-19-related research. One of the scientists interviewed in the piece describes how he used pre-registration information to assess a COVID-19 research paper. 2. How do scientists assess the quality of an article? By the author and their academic affiliation. The report describes the bias that may exist when we lean on author/affiliation heuristics i...

Using Pew Research Center Race and Ethnicity data across your statistics curriculum

In our stats classes, we need MANY examples to convey both the theories behind and the computation of statistics. These examples should be memorable. Sometimes they can make our students laugh, and sometimes they can be couched in research. They should always make our students think. In this spirit, I've collected three small examples from the Pew Research Center's Race and Ethnicity archive (I hope to add more examples as time permits). I don't know if any data collection firm is above reproach, but Pew Research is pretty close. They are non-partisan, they share their research methodology, and they ask hard questions about ethnicity and race. If you use these examples in class, I think it is crucial to present them within context: They illustrate statistical concepts, and they also demonstrate outcomes of racism. 1. "Most Blacks say someone has acted suspicious of them or as if they weren't smart" Lessons: Racism, ANOVA theory: between-group dif...
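
For the ANOVA angle, here is a minimal one-way ANOVA sketch on invented scores (not Pew's actual numbers), just to pair the between-group idea with output students can see:

```python
# Hypothetical one-way ANOVA: does a score differ across three groups?
# Invented data for illustration; the Pew report gives percentages by
# group that you could discuss alongside a toy analysis like this.
from scipy import stats

group_a = [52, 61, 58, 49, 55]
group_b = [70, 68, 75, 72, 66]
group_c = [60, 63, 59, 65, 61]

f, p = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f:.2f}, p = {p:.4f}")
```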

Planet Money's The Modal American

While teaching measures of central tendency in Intro stats, I have shrugged and said: "Yeah, mean and average are the same thing, I don't know why there are two words. Statisticians say mean, so we'll say mean in this class." I now have a better explanation than that non-explanation, as verbalized by this podcast: "Average" is thrown around colloquially and can refer to the mode, while the mean can always be defined with a formula. This is a fun podcast that describes mode vs. mean, but it also describes the research rabbit hole we sometimes go down when a seemingly straightforward question becomes downright intractable. Here, the question is: What is the modal American? The Planet Money team, with the help of FiveThirtyEight's Ben Casselman, eventually had to go non-parametric and divide people into broad categories and figure out which category had the biggest N. Here is the description of how they divided people up: And, like, they had SO MANY CELLS in their des...
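
The "find the biggest cell" idea is easy to demo in code. A sketch using invented categories (Planet Money's real variables differed; these are placeholders):

```python
# Find the modal combination of categories, Planet Money style.
# The people and categories below are invented placeholders.
from collections import Counter

people = [
    ("30-44", "married", "suburban"),
    ("30-44", "married", "suburban"),
    ("18-29", "single", "urban"),
    ("30-44", "married", "rural"),
    ("30-44", "married", "suburban"),
]

counts = Counter(people)
modal_cell, n = counts.most_common(1)[0]
print(f"Modal cell: {modal_cell}, N = {n}")
```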

A big metaphor for effect sizes, featuring malaria.

TL;DR: Effect size interpretation requires more than numeric interpretation of the effect size. You need to think about what would be considered a big-deal, real-life change worth pursuing, given the real-world implications of your data. For example, there is a malaria vaccine with a 30% success rate undergoing a large-scale trial in Malawi. If you consider that many other vaccines have much higher success rates, 30% seems like a relatively small "real world" impact, right? However, two million people are diagnosed with malaria every year. If science could help 30% of two million, the relatively small effect of 30% is a big deal. Hell, a 10% reduction would be wonderful. So, a small practical effect, like "just" 30%, is actually a big deal, given the issue's scale. How to use this news story: a) Interpreting effect sizes beyond Cohen's numeric recommendations. b) A primer on large-scale medical trials and their ridiculously large n-sizes and tra...
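
The back-of-the-envelope arithmetic is worth putting on the board, using the figures cited above:

```python
# Scale turns a "small" effect into a big one: 30% of two million cases.
annual_cases = 2_000_000  # figure cited in the post
for efficacy in (0.10, 0.30):
    averted = int(annual_cases * efficacy)
    print(f"{efficacy:.0%} efficacy -> {averted:,} cases averted")
# 10% efficacy -> 200,000 cases averted
# 30% efficacy -> 600,000 cases averted
```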

Free beer (data)!

I am absolutely NOT above pandering to undergraduates. For example, I use beer-related examples to illustrate t-tests, correlation/regression, curvilinear relationships, and data mining/re-purposing. Here is some more. This data was collected to estimate how much more participants would pay for their beer if their beer was created in an environmentally sustainable manner. The answer? $1.30/six pack more. And 59% of respondents said that they would pay more for sustainable beer. NPR talked about it, as well as ways that breweries are going green. Here is a link to the original research. How to use in class: 1) The original research is shared via an open-access journal. So, an opportunity to talk about open-access research journals. 2) The data was collected via mTurk, another ancillary topic to discuss with your budding research methodologists. 3) The authors of the original study shared their beer survey data! Analyze to your heart's content. 4) How c...
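
If students download the shared data, a hedged starting point (the file and column names below are hypothetical placeholders; consult the authors' actual codebook):

```python
# Sketch for exploring the shared beer survey data. The file name and
# column names are hypothetical placeholders; use the codebook that
# accompanies the authors' actual data file.
import pandas as pd

df = pd.read_csv("beer_survey.csv")  # hypothetical file name
print(df["willingness_to_pay"].describe())               # descriptives
print(df["willingness_to_pay"].corr(df["env_concern"]))  # correlation
```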

Watson's For Women Over 30, There May Be A Better Choice Than The Pap Smear

Emily Watson, writing for NPR, describes medical research by Ogilvie, vanNiekerk, & Krajden. This research provides a timely, topical example of false positives, false negatives, and medical research, and it gets your students thinking a bit more flexibly about measurement. This research provides valuable information about a debate in medicine: Which method of cervical cancer detection is more accurate, the traditional Pap smear or an HPV screening? The Pap smear works by scraping cells off of the cervix and having a human examine them for abnormal, potentially cancerous cells. The HPV test, indeed, detects HPV. Since HPV causes 99% of cervical cancers, its presence signals a clinician to perform further screening, usually a colposcopy. The findings: Women over 30 benefit more from the HPV test. How to use this example in class: - This is a great example of easy-to-follow research methodology and efficacy testing in medicine. A question existed: Which is better, Pap or HPV test? The questi...
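
False positives and false negatives fall out naturally from a 2x2 screening table. A sketch with invented counts, not the Ogilvie et al. results:

```python
# Screening-test accuracy from a hypothetical 2x2 table.
# Counts are invented for illustration, not from the actual study.
true_pos, false_neg = 90, 10   # people who truly have the condition
false_pos, true_neg = 50, 850  # people who truly do not

sensitivity = true_pos / (true_pos + false_neg)  # catches real cases
specificity = true_neg / (true_neg + false_pos)  # clears healthy people
print(f"Sensitivity = {sensitivity:.2f}, Specificity = {specificity:.2f}")
```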

Talking to your students about operationalizing and validating patient pain.

Patti Neighmond, reporting for NPR, wrote a piece on how the medical establishment's method for assessing patient pain is evolving. This is a good example of why it can be so tricky to operationalize the abstract. Here, the abstract notion is pain. The story discusses shortcomings of the traditional numeric, Wong-Baker pain scale, as well as alternatives or complements to the pain scale. No one is vilifying the scale, but recent research suggests that what a patient reports and how a medical professional interprets that report are not necessarily the same thing. The story draws on Dr. John Markman's unpublished research. I think this could also be a good example of testing for construct validity. The researchers asked if the pain was tolerable and found out that their numerical scale was NOT detecting intolerable pain. This is a psychometric issue. One of the recommendations for better operationalization: Asking a patient how pain affects their ability to perform everyday tas...

Chi-square example via dancing, empathetic babies

Don't you love it when research backs up your lifestyle? My kids LOVE dancing. We have been able to get both kids hooked on OK Go and Queen and Metallica. The big kid's favorite song is "Tell Me Something Good" by Chaka Khan, and the little kid prefers "Master of Puppets". We all like to dance together. (Pictured: my kids, husband, and sister dancing.) Now, research suggests that our big, loud group activity may increase empathy in our kids. NPR summarized Dr. Laura Cirelli's research looking at 14-month-olds and whether they 1) helped or 2) did not help a stranger who either 1) danced in sync with them or 2) danced, but not in sync, with the child. She found (in multiple studies) that kids offer more assistance after they danced in sync with an adult. How to use in class: 1) Here is fake chi-square (test of independence) data you can use in class. It IS NOT the data from the research, but it mimics the findings. "Synced?" re...
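
Running a chi-square test of independence on fabricated counts like these takes only a few lines. A sketch (counts invented to mimic the sync-leads-to-helping pattern, not Cirelli's actual data):

```python
# Chi-square test of independence on invented counts that mimic the
# pattern in the research (synced dancing -> more helping). Not the
# study's actual data.
import numpy as np
from scipy import stats

#                 helped  did not help
table = np.array([[40,    10],   # danced in sync
                  [22,    28]])  # danced out of sync

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```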

Stein's, "Could probiotics protect kids from a downside of antibiotics?"

Your students have heard of probiotics. In pill form, in yogurt, and if you are a psychology major, there is even rumbling that probiotics and gut health are linked to mental health. But this is still an emerging area of research. And NPR did a news story about a clinical trial that seeks to understand how probiotics may or may not help eliminate GI problems in children who are on antibiotics. Ask any parent, and they can tell you how antibiotics, which are wonderful, can mess with a kid's belly, right when they are already sick. Science is trying to provide some insight into the health benefits of probiotics in this specific situation. The story spells out the methodology. How to use in class: 1) What I love about this example is that the research is happening now, and very officially, as an FDA clinical trial. So talk to your students about clinical trials, which I think you can then relate back to why it is good to pre-register your non-FDA research, with explicit research m...

Teach t-tests via "Waiting to pick your baby's name raises the risk for medical mistakes"

So, I am very pro-science, but I have a soft spot in my heart for medical research that improves medical outcomes without actually requiring medicine, expensive interventions, etc. And after spending a week in the NICU with my youngest, I'm doubly fond of a way of helping the littlest and most vulnerable among us. One such example was published in the journal Pediatrics and written up by NPR. In this case, they found that fewer mistakes are made when not-yet-named NICU babies are given more distinct rather than less distinct temporary names. Unnamed babies are an issue in the NICU, as babies can be born very early or under challenging circumstances, and the babies' parents aren't ready to name their kids yet. Traditionally, hospitals would use the naming convention "BabyBoy Hartnett", but several started using "JessicasBoy Hartnett" as part of this intervention. So, distinct first and last names instead of just last names. They measured patie...
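
For the t-test itself, a sketch comparing error counts under the two naming conventions (the numbers are invented placeholders, not the Pediatrics results):

```python
# Hypothetical comparison of monthly error counts under the old
# ("BabyBoy Hartnett") vs. new ("JessicasBoy Hartnett") naming
# conventions. Numbers are invented; the study tracked wrong-patient
# errors before and after the switch.
from scipy import stats

old_convention = [12, 15, 11, 14, 13, 16]  # errors per month (fake)
new_convention = [7, 9, 6, 8, 10, 7]       # errors per month (fake)

t, p = stats.ttest_ind(old_convention, new_convention)
print(f"t = {t:.2f}, p = {p:.4f}")
```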

Harris's "Scientists Are Not So Hot At Predicting Which Cancer Studies Will Succeed"

This NPR story is about reproducibility in science that ISN'T psychology and the limitations of expert intuition, and it is a summary of a recent research article from PLOS Biology (so open science that isn't psychology, too!). Thrust of the story: Cancer researchers may be having a similar problem to psychologists in terms of replication. I've blogged about this issue before, in particular, concerns with replication in cancer research, possibly due to the variability with which lab rats are housed and fed. So, this story is about a study in which 200 cancer researchers, post-docs, and graduate students took a look at six pre-registered cancer study replications and guessed which studies would successfully replicate. The participants systematically overestimated the likelihood of replication. However, researchers with high h-indices were more accurate than the general sample. I wonder if the high h-indices uncover super-experts or super-researchers who have be...

Parents May Be Giving Their Children Too Much Medication, Study Finds

Factorial ANOVA example ahead! With a lovely interaction. And I have a one-year-old and a 4.5-year-old and they are sickly daycare kids, so this example really spoke to me. NPR did a story about a recent publication that studied how we administer medicine to our kids and provides evidence for a few things I've suspected: Measuring cups for kid medicine are a disaster AND syringes allow for more accurate dosing, especially if the dose is small. The researchers wanted to know if parents properly dosed liquid medicine for their kids. The researchers used a 3 (dosage: 2.5, 5.0, 7.5 ml) x 3 (modality: small syringe, big syringe, medicine cup) design. While they didn't use factorial ANOVA in their analysis, this example can still be used to conceptually explain factorial ANOVA. Their findings: How to use in class: -An easy-to-follow conceptual example of factorial ANOVA (again, they didn't use that analysis in the original paper, but the table above illustrates factorial ANO...
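
To show students what the factorial analysis would look like, here is a simulated 3x3 sketch (the dosing errors are generated data, not the published findings):

```python
# Conceptual 3x3 factorial ANOVA sketch mirroring the design
# (dose x modality). The dosing-error data below is simulated,
# not the published results.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = []
for dose in ["2.5 ml", "5.0 ml", "7.5 ml"]:
    for modality in ["small syringe", "big syringe", "cup"]:
        # Simulate larger errors for cups, especially at small doses,
        # to produce the kind of interaction the story describes.
        base = 5 if modality == "cup" else 2
        bump = 3 if (modality == "cup" and dose == "2.5 ml") else 0
        for _ in range(20):
            rows.append({"dose": dose, "modality": modality,
                         "error": rng.normal(base + bump, 1.5)})

df = pd.DataFrame(rows)
model = ols("error ~ C(dose) * C(modality)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects + the interaction term
```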

Harris' "Reviews Of Medical Studies May Be Tainted By Funders' Influence"

This NPR story is a summary of the decisively titled "The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses", authored by Dr. John Ioannidis. The NPR story provides a very brief explanation of meta-analyses and systematic reviews. It explains that they were originally used as a way to make sense of many conflicting research findings coming from a variety of different researchers. But these very influential publications are now being sponsored, and possibly influenced, by Big Pharma. This example explains conflicts of interest and how they can influence research outcomes. In addition to financial relationships, the author also cites ideological allegiances as a source of bias in meta-analysis. In addition to Dr. Ioannidis, Dr. Peter Kramer was interviewed. He is a psychiatrist who defends the efficacy of antidepressants. He suggests that researchers who believe that placebos are just as effective as anti-depressants tend to analy...

Harris' "How Big A Risk Is Acetaminophen During Pregnancy?"

This study, which found a link between maternal Tylenol usage during pregnancy and ADHD, has been making the rounds, particularly in the Academic Mama circles I move in. Being pregnant is hard. For just about every malady, the only solution is to stay hydrated. With a compromised bladder. But at least pregnant women have Tylenol for sore hips and bad backs. For a long time, this has been the only safe OTC pain reliever available to pregnant women. But a recent research article has cast doubt on this advice. A quick read of this article makes it sound like you are cursing your child with a lifetime of ADHD if you take Tylenol. And this article has become click-bait fodder. But these findings have some pretty big caveats. Harris published this reaction piece at NPR. It is a good teaching example of media hype vs. incremental scientific progress and the third (or fourth or fifth) variable problem. It also touches on absolute vs. relative risk. NOTE: There are well-documente...
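
Absolute vs. relative risk is easy to demonstrate with made-up numbers. A sketch (the baseline rate and relative risk below are illustrative, not the study's estimates):

```python
# Absolute vs. relative risk with invented numbers -- not the study's
# actual estimates. A 30% relative increase can be a small absolute
# change when the baseline risk is low.
baseline_risk = 0.05   # hypothetical base rate of a diagnosis
relative_risk = 1.30   # hypothetical 30% relative increase

exposed_risk = baseline_risk * relative_risk
absolute_increase = exposed_risk - baseline_risk
print(f"Exposed risk: {exposed_risk:.3f}")            # 0.065
print(f"Absolute increase: {absolute_increase:.3f}")  # 0.015, i.e., 1.5 points
```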

Bichell's "A Fix For Gender-Bias In Animal Research Could Help Humans"

This news story demonstrates both that research methods are federally monitored and that best practices can change over time. For a long time, women were not used in large-scale pharmaceutical trials. Why did they omit women? They didn't want to accidentally expose pregnant women to new drugs, and because of fears that fluctuations in female hormones over the course of a month would affect research results. Which always makes me think of this scene from Anchorman: But I digress. This has been corrected, and female participants are now included in clinical trials. But many of the animal trials that occur prior to human trials still use mostly male animals. And, again, policies have changed to correct for this. This NPR story details the whole history of this sex bias in research. Part of why this bias has been so detrimental to women is that women report more side effects to drugs than men do. So, by catching such gender differences earlier with animal models, the...

Shapiro's "New Study Links Widening Income Gap With Life Expectancy"

This story is pretty easy to follow: Life expectancy varies by income level. The story becomes a good example for a statistics class because, in the interview, the researcher describes a multivariate model, one in which multiple independent variables (drug use, medical insurance, smoking, income, etc.) could be used to explain the disparity that exists in lifespan between people with different incomes. As such, this story could be used as an example of multivariate regression. And The Third Variable Problem. And why correlation isn't enough. In particular, this part of the interview (between interviewer Ari Shapiro and researcher Gary Burtless) refers to the underlying data, the Third Variable Problem, and the amount of variability that can be assigned to the independent variables he lists. SHAPIRO: Why is this gap growing so quickly between life expectancy of rich and poor people? BURTLESS: We don't know. More affluent Americans tend to engage...
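
To make the multivariate model concrete, a sketch of a multiple regression on simulated data (the predictors echo the variables Burtless lists; the data itself is invented):

```python
# Multiple regression sketch: predict lifespan from several of the
# independent variables mentioned in the interview. Data is simulated
# for illustration, not the study's actual dataset.
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "income": rng.normal(50, 15, n),   # thousands of dollars
    "smoker": rng.integers(0, 2, n),   # 1 = smokes
    "insured": rng.integers(0, 2, n),  # 1 = has medical insurance
})
df["lifespan"] = (70 + 0.15 * df["income"] - 5 * df["smoker"]
                  + 3 * df["insured"] + rng.normal(0, 5, n))

model = ols("lifespan ~ income + smoker + insured", data=df).fit()
print(model.summary())  # each slope shows that variable's unique contribution
```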

Science Friday's "Spot the real hypothesis"

Annie Minoff delves into the sins of ad hoc hypotheses using several examples from evolutionary science (including evolutionary psychology). I think this is a fun way to introduce this issue in science and explain WHY a hypothesis is important for good research. This article provides three ways of conveying that ad hoc hypotheses are bad science. 1) A video of a speaker lecturing on the absurd logic behind ad hoc hypothesizing (here, evolutionary explanations for the mid-life "spare tire" that many men struggle with). NOTE: This video is from an annual event at MIT, BAHFest (the Festival of Bad Ad Hoc Hypotheses), if you want more bad ad hoc hypotheses to share with students. 2) A quiz in which you need to guess which of the ad hoc explanations for an evolutionary finding is the real explanation. 3) A more serious reading to accompany this video is Kerr's "HARKing: Hypothesizing After the Results Are Known" (1998), a comprehensive takedown of this practice.