
Posts

Showing posts with the label effect sizes

Using pulse rates to determine the scariest of scary movies

  The Science of Scare project, conducted by MoneySuperMarket.com, recorded participants' heart rates while they watched fifty horror movies to determine the scariest of scary movies. Below is a screenshot of the original variables and data for 12 of the 50 movies, provided by MoneySuperMarket.com: https://www.moneysupermarket.com/broadband/features/science-of-scare/

Here is my version of the data in Excel format. It includes the original data plus four additional columns (so you can run more analyses on the data):
- Year of release
- Rotten Tomatoes rating
- Does this movie have a sequel (yes or no)?
- Is this movie a sequel (yes or no)?

Here are some ways you could use this in class:
1. Correlation: Rotten Tomatoes rating does not correlate with the overall scare score (r = 0.13, p = 0.36). A minimal sketch of this analysis appears below.
2. Within-subject research design: Baseline, average, and maximum heart rates are reported for each film.
3. ...
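If you want to recreate that correlation in class with something other than Excel, here is a minimal Python sketch. The file name and column names ("science_of_scare.xlsx", "RT_Rating", "Scare_Score") are hypothetical stand-ins for whatever your download actually contains:

```python
# Minimal sketch: does the Rotten Tomatoes rating track the overall scare score?
# The file name and column names below are hypothetical stand-ins.
import pandas as pd
from scipy import stats

movies = pd.read_excel("science_of_scare.xlsx")    # hypothetical file name
r, p = stats.pearsonr(movies["RT_Rating"], movies["Scare_Score"])
print(f"r = {r:.2f}, p = {p:.2f}")                 # the post reports r = 0.13, p = 0.36
```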

Using data about antidepressant efficacy to illustrate Cohen's d, demonstrate why you need a control group, and talk about interactions.

This example is from The Economist and behind a paywall. However, it is worth using one of your free monthly views to see these visualizations of how much improvement participants experience. That said, whenever I talk about antidepressants in class, I remind my students MANY TIMES that I'm not that kind of psychologist, and even if I were, I'm not their psychologist. Instead, they should direct any and all medication questions to their own psychologist. This blog post was inspired by "Antidepressants are over-prescribed, but genuinely help some patients" from The Economist, which was in turn inspired by "Response to acute monotherapy for major depressive disorder in randomized, placebo-controlled trials submitted to the US FDA: individual participant data analysis", by M.B. Stone et al., BMJ, 2022; "Selective publication of antidepressant trials and its influence on apparent efficacy: updated comparisons and meta-analyses of newer versus older trials", ...
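If you want students to see why the placebo arm matters before you even get to interactions, here is a minimal sketch with entirely invented numbers: both groups improve from baseline, so the drug's effect is the drug-vs.-placebo difference, summarized as Cohen's d.

```python
# Minimal sketch (all numbers invented): why the placebo arm matters.
# Both groups improve from baseline, so the drug's effect is the
# drug-vs-placebo difference, summarized here as Cohen's d.
import numpy as np

rng = np.random.default_rng(42)
drug = rng.normal(loc=10.0, scale=7.0, size=200)      # symptom improvement, drug arm
placebo = rng.normal(loc=8.0, scale=7.0, size=200)    # the placebo "works" too

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

print(f"drug arm improved {drug.mean():.1f} points, placebo {placebo.mean():.1f};")
print(f"Cohen's d for drug vs. placebo = {cohens_d(drug, placebo):.2f}")
```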

A recent research article that ACTUALLY USES ANALYSES WE TEACH IN INTRO STATS

 You have to walk before you can run, right? The basics we teach in Psych Stats help our students walk, but they are not typical of published psychology research. It is difficult for Psych Stats instructors to find good examples of our analyses in recently published research (for an exception, check out Open Stats Lab). A recent publication caught my eye because I love sending people mail (scroll down to find my list of recommended, envelope-friendly surprises). Liu, P. J., Rim, S., Min, L., & Min, K. E. (2022). The surprise of reaching out: Appreciated more than we think. Journal of Personality and Social Psychology. Advance online publication. https://doi.org/10.1037/pspi0000402 Spoiler alert: People love being surprised by mail. Like, more than the sender thinks the receiver will be surprised. I was delighted to discover that this interesting paper consists of multiple studies that use what we teach in Psych Stats. Check out this article s...

Interpreting effect sizes: An Olympic-sized metaphor

First, a pun: American athlete Athing Mu broke the American record for the 800m. I guess you could say...that Mu is anything but average!! HAHAAAHAHAHHA. https://twitter.com/Notawful/status/1409456926497423363 Anyway. It is late June 2021, and my Twitter feed is filled with amazing athletes qualifying for the Olympics. Athletes like Sydney McLaughlin. That picture was taken after McLaughlin a) qualified for the 2021 Olympics AND b) broke the 400m hurdles world record. Which is amazing. Now, here is where I think we could explain effect size interpretation. How big was McLaughlin's margin over the previous record? From SpectrumNews1: McLaughlin broke the world record by less than a second. But it was a world record, so "less than a second" is a huge deal. Similarly, we may have Cohen's small-medium-large recommendations when interpreting effect sizes, but we always need to interpret an effect size within context. Does a small effect size finding explain more variance than any pre...
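One way to make the "context matters" point concrete in class: convert Cohen's benchmarks into variance explained. A quick sketch, using the standard d-to-r conversion, which assumes two equal-sized groups:

```python
# Sketch: putting Cohen's benchmarks in context by converting d to
# variance explained, via r = d / sqrt(d^2 + 4) (two equal-sized groups).
import math

for d in (0.2, 0.5, 0.8):   # Cohen's small / medium / large
    r = d / math.sqrt(d**2 + 4)
    print(f"d = {d}: r = {r:.2f}, variance explained = {r**2:.1%}")
```

Even a "large" d = 0.8 explains under 14% of the variance, which is exactly why the number alone cannot tell you whether a finding matters.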

Help your students understand effect sizes using voter behaviors

Interpreting effect sizes requires more than rules of thumb. Interpretation requires deeper knowledge about the investigated topic, an idea we must convey to our students. For example, in presidential elections in the United States, the winner is usually selected by a slim margin. As such, if you can get even small numbers of voters who don't usually vote to vote, it can have a large real-world effect on an election. This is what Vote Forward is trying to do, and I'll explain how you can use their work to explain effect sizes in your stats classes. This is Vote Forward: Okay. So they are organizing letter-writing campaigns in advance of the 2020 General Election. NOTE: The organization is left-leaning, but many of its campaigns ask letter-writers to share non-partisan messages. Vote Forward has tested whether or not writing letters to unlikely voters actually gets people to vote, and they shared the results of those efforts: Their findings, which aren...
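Here is a back-of-the-envelope sketch of the "small effect, big N" logic. The letter count and turnout lift below are invented for illustration; they are not Vote Forward's reported results:

```python
# Back-of-the-envelope sketch: a "small" effect times a big N.
# The letter count and turnout lift are invented for illustration;
# they are NOT Vote Forward's reported results.
letters_sent = 1_000_000
turnout_lift = 0.008          # 0.8 percentage points: tiny by rule-of-thumb standards

extra_votes = letters_sent * turnout_lift
print(f"{extra_votes:,.0f} extra votes")   # 8,000 votes can decide a close state
```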

The Washington Post's "The coronavirus pandemic and loss of aircraft data are taking a toll on weather forecasting"

The Washington Post, and numerous other media outlets, recently wrote about an unintended consequence of COVID-19 and the sudden drop-off in commercial flights: fewer data points for weather forecasts (PDF). Due to the coronavirus, commercial flights are down: How does this affect weather forecasts? Data is constantly being collected from commercial flights, and that data is used to predict future weather. Ways to use this in class:
- A conceptual example of multivariate modeling: Wind speed... temperature... humidity... lots of different data points, from lots of different elevations, come into play when making our best guess at the weather. This is a non-math, abstract way to discuss such multivariate models.
- A conceptual example of effect sizes/real-world effects: In the article, they clearly spell out the magnitude of the data loss. That is pretty easy to track, since we can count the number of flights that have been canceled. More complex is determining the effect size of this data loss....

Pedagogy article recommendation: "Introducing the new statistics in the classroom."

I usually blog about funny examples for teaching statistics, but this example is for the teachers themselves. Normile, Bloesch, Davoli, & Scheer's recent publication, "Introducing the new statistics in the classroom" (2019), is very aptly and appropriately titled. It is a rundown of p-values, effect sizes, and confidence intervals. Such reviews exist elsewhere, but this one is just so short and precise. Here are a few of the highlights:
1) The article concisely explains what isn't great about NHST and what is frequently misunderstood about it.
2) It offers actual guidelines for teaching the new statistics in Psychological Statistics/Introduction to Statistics, including ideas for doing so without completely redesigning your class.
3) It also highlights one of the big reasons that I am so pro-JASP: effect sizes are easy to locate and use.

A big metaphor for effect sizes, featuring malaria.

TL;DR: Effect size interpretation requires more than numeric interpretation of the effect size. You need to think about what would count as a big-deal, real-life change worth pursuing, given the real-world implications of your data. For example, there is a malaria vaccine with a 30% success rate undergoing a large-scale trial in Malawi. If you consider that many other vaccines have much higher success rates, 30% seems like a relatively small "real world" impact, right? However, two million people are diagnosed with malaria every year. If science could help 30% of two million, that is 600,000 people, and the relatively small effect of 30% is a big deal. Hell, a 10% reduction would be wonderful. So a small practical effect, like "just" 30%, is actually a big deal, given the issue's scale. How to use this news story:
a) Interpreting effect sizes beyond Cohen's numeric recommendations.
b) A primer on large-scale medical trials and their ridiculously large n-sizes and tra...

NYT's "What's going on in this graph?"

The New York Times maintains The Learning Network, which contains news content that fits well into a variety of classrooms teaching a variety of topics. Recently, they shared a good stats example. They created curves illustrating global climate change over time. The top graph illustrates a normal curve, with normal temperature as the modal value. But as we shift forward in time, hot days become modal and the curves overlap less and less. Sort of like the classic illustration of what a small-to-medium effect size looks like in terms of distribution overlap. This graph is part of the NYT's "What's going on in this graph?" series, which is created and shared in partnership with the American Statistical Association.
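If you want to quantify that overlap for students, here is a short sketch using the overlapping coefficient for two equal-variance normal distributions separated by Cohen's d:

```python
# Sketch: how much do two normal curves overlap at a given Cohen's d?
# For equal-variance normals, the overlapping coefficient is 2 * Phi(-|d| / 2).
from scipy.stats import norm

for d in (0.2, 0.5, 0.8):
    overlap = 2 * norm.cdf(-abs(d) / 2)
    print(f"d = {d}: the distributions overlap by {overlap:.0%}")
```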

Teaching the "new statistics": A call for materials (and sharing said materials!)

This blog is usually dedicated to sharing ideas for teaching statistics, and I will share some ideas for teaching. But I'm also asking you to share YOUR ideas for teaching statistics. Specifically, your ideas for teaching the new statistics: effect sizes, confidence intervals, etc. The following email recently came across the Society for the Teaching of Psychology listserv from Robert Calin-Jageman (rcalinjageman@dom.edu):

"Is anyone out there incorporating the "New Statistics" (estimation, confidence intervals, meta-analysis) into their stats/methods sequence? I'm working with Geoff Cumming on putting together an APS 2017 symposium proposal on teaching the New Statistics. We'd love to hear back from anyone who has already started or is about to. Specifically, we'd love to:
* Collect resources you'd be willing to share (syllabi, assignments, etc.)
* Collect narratives of your experi...

Totilo's "Antonin Scalia's landmark defense of violent video games"

A great example using a topic relevant to your students (video games), involving developmental psychology (the effect of violent media on children), and a modern event (Scalia's passing) to demonstrate the importance of both research psychology and statistics. This article extensively quotes Scalia's majority opinion in Brown v. Entertainment Merchants Association, the 2011 U.S. Supreme Court case that decided against California's attempt to regulate the sale of violent video games to minors (the full opinion is embedded in the article). Why did Scalia decide against regulating violent video games in the same manner that the government regulates alcohol and cigarette sales? In part, because of research and statistics. Of particular use to an instructor of statistics are the sections in which Scalia cites shaky psychological research and argues that correlational research cannot be used to make causal arguments... ...Scalia also discusses effect sizes... ...

One article (Kramer, Guillory, & Hancock, 2014), three stats/research methodology lessons

The original idea for using this article this way comes from Dr. Susan Nolan's presentation at NITOP 2015, entitled "Thinking Like a Scientist: Critical Thinking in Introductory Psychology." I think that Dr. Nolan's idea is worth sharing, and I'll reflect a bit on how I've used this resource in the classroom. (For more good ideas from Dr. Nolan, check out her books: Psychology, Statistics for the Behavioral Sciences, and The Horse that Won't Go Away (about critical thinking).) Last summer, the Proceedings of the National Academy of Sciences published an article entitled "Experimental evidence of massive-scale emotional contagion through social networks." The gist: Facebook manipulated participants' News Feeds to increase the number of positive or negative status updates that each participant viewed. The researchers subsequently measured the number of positive and negative words that the participants used in their own status updates. They fou...
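One of the stats lessons practically writes itself: with a sample that big, even a minuscule effect is statistically "significant." Here is a sketch with invented numbers (the real study enrolled roughly 689,000 participants in total):

```python
# Sketch: with hundreds of thousands of participants per condition,
# even a minuscule difference is statistically "significant."
# The effect size and group sizes here are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 350_000                                     # per condition
a = rng.normal(loc=5.00, scale=2.0, size=n)     # % positive words, condition A
b = rng.normal(loc=5.02, scale=2.0, size=n)     # condition B: d = 0.01
t, p = stats.ttest_ind(a, b)
print(f"d = 0.01, yet p = {p:.2g}")             # significant, but is it meaningful?
```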

Kristopher Magnusson's "Interpreting Cohen's d effect size"

Kristopher Magnusson (previously featured on this blog for his interactive illustration of correlation) also has a helpful illustration of effect size. While this example probably includes some information that goes beyond an introductory understanding of effect size (via Cohen's d), I think it still does a great job of illustrating how effect size measures, essentially, the magnitude of the difference between groups (not how improbable those differences are). See below for a screenshot of the tool. http://rpsychologist.com/d3/cohend/, created by Kristopher Magnusson
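A companion demo for the classroom makes that "magnitude, not improbability" distinction concrete. In this sketch (all numbers invented), the same five-point difference yields about the same d at every sample size, while p keeps shrinking as n grows:

```python
# Sketch: d measures magnitude; p measures improbability.
# The same five-point difference yields (about) the same d at every n,
# while p keeps shrinking as the sample grows. All numbers invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
for n in (20, 200, 2000):
    a = rng.normal(100, 15, n)   # e.g., test scores
    b = a + 5                    # shift every score by the same 5 points
    pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    d = (b.mean() - a.mean()) / pooled_sd
    t, p = stats.ttest_ind(a, b)
    print(f"n = {n:>4}: d = {d:.2f}, p = {p:.4g}")
```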

Geoff Cumming's "The New Statistics: Estimation and Research Integrity"

Geoff Cumming gave a talk at APS 2014 about the "new statistics" (reduced emphasis on p-values, greater emphasis on confidence intervals and effect sizes, for starters). This workshop is now available, online and free, from APS. The three-hour talk has been divided into five sections, and each section comes with a "Table of Contents" to help you quickly navigate all of the information contained in the talk. While some of this talk is too advanced for undergraduates, I think that portions of it, like his explanations of why p-values are so popular, p-hacking, and confidence intervals, can be nice additions to an Introduction to Statistics class.
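If you want a quick classroom demo of the estimation mindset Cumming advocates, here is a sketch (invented data) that reports a mean difference with its 95% confidence interval instead of a bare p-value:

```python
# Sketch (invented data): an estimation-style report, i.e., a mean
# difference with its 95% confidence interval rather than a bare p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
diff = rng.normal(loc=3.0, scale=10.0, size=50)   # e.g., post - pre scores
mean = diff.mean()
low, high = stats.t.interval(0.95, df=len(diff) - 1, loc=mean, scale=stats.sem(diff))
print(f"Mean difference = {mean:.1f}, 95% CI [{low:.1f}, {high:.1f}]")
```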

Slate & Rojas-LeBouef's "Presenting and Communicating Your Statistical Findings: Model Writeups"

Holy smokes. This e-book (distributed for free via OpenStax) contains sample results sections for multiple statistical tests, which is helpful but not particularly unique. There are other resources for creating APA results sections (I love U. Washington's resources), but I feel that this book is particularly useful in that:
1) It addresses how to include effect sizes in tests (most of the results section examples I have been able to find neglect this increasingly important aspect of data analysis).
2) The writers translate SPSS output into results sections.
3) The writers aren't psychologists, but they are APA compliant (and even point out instances when their figures and tables aren't APA compliant).
4) It is gloriously free.
The only shortcoming is that they don't provide examples for more types of data analyses. The book does, however, cover chi-square, correlation, t-test, and ANOVA, which is most of what is covered in introductory statistics courses. I think th...

Research Wahlberg

" Mark Wahlberg as Research Scholar. Boom." Follow on Facebook or at twitter via  @ ResearchMark  

Nature's "Policy: Twenty tips for interpreting scientific claims" by William J. Sutherland, David Spiegelhalter, & Mark Burgman

This very accessible summary lists the ways people fib with, misrepresent, and overextend data findings. It was written to give non-research folks (in particular, lawmakers) a cheat sheet of things to consider before embracing or rejecting research-driven policy and laws. It is a sound list, covering plenty of statsy topics (p-values, the importance of replication), but what I really like is that the article doesn't criticize researchers as the source of the problem. It places the onus on each person to properly interpret research findings. The list also emphasizes the importance of data-driven change.

Changes in standards for data reporting in psychology journals

Two prominent psychology journals are changing their standards for publication in order to address several long-standing debates in statistics (p-values v. effect sizes, and point estimates of the mean v. confidence intervals). Here are the details of the changes that the Association for Psychological Science is making to its gold-standard publication, Psychological Science, in order to improve transparency in data reporting. Some of the big changes include mandatory reporting of effect sizes and confidence intervals, and inclusion of any scales or measures that returned non-significant results. This might be useful in class when describing why p-values and means are imperfect, the old p-value v. effect size debate, and how one can bend the truth with statistics via research methodology (glossing over or completely neglecting N.S. findings). These examples are also useful in demonstrating to your students that the issues we discuss in class have real-world ramifications and aren't be...

Stephen Colbert vs. Daryl Bem = effect size vs. statistical significance

Daryl Bem on The Colbert Report. I love me some Colbert Report. So imagine my delight when he interviewed social psychologist Daryl Bem. Bem is famous for his sex role inventory as well as his psi research. Colbert interviewed him about his 2011 Journal of Personality and Social Psychology article, "Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect," which demonstrated a better-than-chance ability to predict an outcome. Here, the outcome was guessing which side of a computer screen would contain an erotic image (yes, Colbert had a field day with this; yes, please watch the clip in its entirety before sharing it with a classroom of impressionable college students). Big deal? Needless to say, Colbert reveled in poking fun at the "Time Traveling Porn" research. However, the interview is of some educational value because it a) does a good job of describing the research methods used in the study. Additionally, b) h...
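To preview the effect size vs. significance punchline for students: a hit rate only a few points above chance is a tiny effect, yet with enough trials it clears p < .05. The counts below are invented for illustration, not Bem's actual trial totals:

```python
# Sketch: a hit rate a few points above chance is a tiny effect, yet with
# enough trials it is statistically significant. Counts are invented.
from scipy import stats

hits, trials = 1007, 1900                 # ~53% hit rate
result = stats.binomtest(hits, trials, p=0.5)
print(f"hit rate = {hits/trials:.1%}, p = {result.pvalue:.3f}")
```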