
A big metaphor for effect sizes, featuring malaria.

TL;DR: Effect size interpretation requires more than reading off a number. You need to think about what would count as a big, real-life change worth pursuing, given the real-world implications of your data.

For example, a malaria vaccine with a 30% success rate is undergoing a large-scale trial in Malawi. Considering that many other vaccines have much higher success rates, 30% seems like a relatively small "real world" impact, right?

However, over 200 million people are diagnosed with malaria every year. If science could help 30% of 200 million, the numerically small effect of 30% is a big deal. Hell, a 10% reduction would be wonderful. So, a small practical effect, like "just" 30%, is actually a big deal, given the issue's scale.

How to use this news story:
a) Interpreting effect sizes beyond Cohen's numeric recommendations.
b) A primer on large-scale medical trials, their ridiculously large sample sizes, and the transition from lab to real-world trials.

So, I heard this story about a new malaria vaccine trial in Malawi while getting ready for work and listening to the NPR Hourly News update, which is very on-brand for me.

The aspect of this report that really caught my attention: in the lab, this vaccine "only" has a 30% success rate. That made me think of the struggle of understanding (and explaining!) effect sizes in stats class.


Weren't p-values sort of nice in that they were either/or? You were significant, whatever that means, or not. The binary was comfortable.

But now we are using effect sizes, among other methods, to determine whether research findings are "big" enough to get excited about. And effect size interpretation is a wee bit arbitrary, right? They can be nil, small, medium, or large, corresponding to the result of your effect size calculation. What does that mean? When does it count? When is your research worth implementing, acting on, or replicating? Like, when COHEN says it is? Even Cohen warned about his own rules of thumb: "This is an operation fraught with many dangers" (1977).

In addition to the rules of thumb, you need to know what you are measuring, what you are doing with your data in real life, and what counts as a big deal for whatever you are measuring.
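If you want to make those rules of thumb concrete for students, here is a minimal sketch in Python (the function names and the example numbers are mine, purely for illustration) of computing Cohen's d from two group summaries and attaching his conventional labels:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def label(d):
    """Cohen's (1977) conventional benchmarks -- the 'rules of thumb'."""
    d = abs(d)
    if d < 0.2:
        return "nil"
    elif d < 0.5:
        return "small"
    elif d < 0.8:
        return "medium"
    return "large"

# Hypothetical class example: two groups of 50, means 10 vs. 8, both SDs = 4
d = cohens_d(10, 8, 4, 4, 50, 50)
print(d, label(d))  # d = 0.5, a "medium" effect by the benchmarks
```

The point of the demo is the limitation: the function spits out a label without knowing whether the outcome is quiz points or malaria cases, which is exactly the gap the rules of thumb cannot fill.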

So, is a 30% success rate a big deal? Or, when is a small numeric effect actually a big-deal real-life effect?

WHO and the Bill and Melinda Gates Foundation think the malaria vaccine has big real-world potential. Malaria is awful. It kills more kids than adults (see below). There are over 200 million cases globally, with almost half a million deaths annually. For a problem of this magnitude, a possible 30% reduction would be massive.
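The back-of-the-envelope arithmetic behind that claim is worth showing students explicitly. A rough sketch (the rounded numbers come from the figures above; treat them as illustrative, not official WHO statistics):

```python
# Rough, illustrative numbers (rounded from the figures cited in the post):
cases_per_year = 200_000_000    # "over 200 million cases globally"
deaths_per_year = 450_000       # "almost half a million deaths annually"
efficacy = 0.30                 # the vaccine's "only" 30% success rate

# A numerically modest effect applied at this scale:
cases_potentially_prevented = efficacy * cases_per_year
print(f"{cases_potentially_prevented:,.0f} cases")  # -> 60,000,000 cases
```

A "small" 30% effect turns into tens of millions of cases, which is the whole metaphor in one multiplication.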

Look at the shaded confidence interval! Well done, NPR!

Aside from my big effect size metaphor, this article can also be used in class as it describes a pilot program that takes the malaria vaccine trials from the lab to the messy real world:


