
A big metaphor for effect sizes, featuring malaria.

TL;DR: Interpreting an effect size requires more than reading the number. You need to think about what would count as a big-deal, real-life change worth pursuing, given the real-world implications of your data.

For example, a malaria vaccine with a 30% success rate is undergoing a large-scale trial in Malawi. Considering that many other vaccines have much higher success rates, 30% seems like a relatively small "real world" impact, right?

However, over 200 million people are diagnosed with malaria every year. If science could help 30% of 200 million, that relatively small effect is a big deal. Hell, a 10% reduction would be wonderful. So, a small practical effect, like "just" 30%, is actually a big deal, given the issue's scale.

How to use this news story:
a) Interpreting effect sizes beyond Cohen's numeric recommendations.
b) A primer on large-scale medical trials, their ridiculously large sample sizes, and the transition from lab to real-world trials.

So, I heard this story about a new malaria vaccine trial in Malawi while getting ready for work and listening to the NPR Hourly News update, which is very on-brand for me.

The aspect of this report that really caught my attention: In the lab, this vaccine "only" has a 30% success rate. This makes me think of the struggle of understanding (and explaining!) effect sizes in stats class.


Weren't p-values sort of nice in that they were either/or? You were significant, whatever that means, or not. The binary was comfortable.

But now we are using effect sizes, among other methods, to determine whether research findings are "big" enough to get excited about. And effect size interpretation is a wee bit arbitrary, right? Depending on the result of your calculation, an effect can be labeled nil, small, medium, or large. But what does that mean? When does it count? When is your research worth implementing, acting on, or replicating? Even Cohen hedged about his own rules of thumb: "This is an operation fraught with many dangers" (1977).

In addition to the rules of thumb, you need to know what you are measuring, what your data will be used for in real life, and what counts as a big deal for the thing you are measuring.
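Cohen's cutoffs are easy to code up, which is part of what makes them so tempting. Here is a minimal Python sketch; the group means and standard deviation are made up for illustration, and the cutoff values are Cohen's conventional benchmarks:

```python
def cohens_d(mean1, mean2, pooled_sd):
    """Standardized mean difference between two groups."""
    return (mean1 - mean2) / pooled_sd

def cohen_label(d):
    """Cohen's conventional rules of thumb -- the 'fraught with danger' part."""
    d = abs(d)
    if d < 0.2:
        return "nil"
    elif d < 0.5:
        return "small"
    elif d < 0.8:
        return "medium"
    return "large"

# Hypothetical treatment vs. control means with a pooled SD of 10
d = cohens_d(52.0, 50.0, 10.0)
print(d, cohen_label(d))  # a "small" effect... but small for whom?
```

The point of the exercise: the function spits out a label in microseconds, but the label says nothing about whether a standardized difference of 0.2 matters for the outcome you actually care about.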

So, is a 30% success rate a big deal? Or, when is a small numeric effect actually a big-deal real-life effect?

WHO and the Bill & Melinda Gates Foundation think the malaria vaccine has big real-world potential. Malaria is awful, and it kills more kids than adults (see below): over 200 million cases globally, with almost half a million deaths annually. For a problem of this magnitude, a possible 30% reduction would be massive.
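The scale argument is just arithmetic. A back-of-the-envelope Python sketch, using the rough annual case count cited above and assuming (a big simplification) that a 30% success rate translates directly into 30% of cases prevented:

```python
# Rough figures from the story above; real epidemiology is messier.
annual_cases = 200_000_000   # global malaria cases per year
efficacy = 0.30              # the "only" 30% success rate

# Simplifying assumption: efficacy maps one-to-one onto cases prevented.
cases_prevented = annual_cases * efficacy
print(f"{cases_prevented:,.0f} cases potentially prevented per year")
```

Even under these crude assumptions, a "small" 30% effect works out to tens of millions of cases, which is why nobody at WHO is shrugging at it.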

Look at the shaded confidence interval! Well done, NPR!

Aside from my big effect size metaphor, this article can also be used in class as it describes a pilot program that takes the malaria vaccine trials from the lab to the messy real world:


