A big metaphor for effect sizes, featuring malaria.

TL;DR: Interpreting an effect size takes more than reading the number off a table. You need to ask what would count as a big deal, a real-life change worth pursuing, given the real-world implications of your data.

For example, there is a malaria vaccine with a 30% success rate undergoing a large-scale trial in Malawi. Considering that many other vaccines have much higher success rates, 30% seems like a relatively small "real world" impact, right?

However, over 200 million people are diagnosed with malaria every year. If science could help 30% of 200 million, that relatively small effect is a big deal. Hell, a 10% reduction would be wonderful. So a small practical effect, like "just" 30%, is actually a big deal, given the issue's scale.
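To make the scale argument concrete, here is the back-of-the-envelope arithmetic, using the roughly 200 million annual cases cited below (a rounded illustrative figure, not an exact count):

```python
# Illustrative arithmetic only: case count is a rounded placeholder
# for the WHO's "over 200 million cases globally" figure.
annual_cases = 200_000_000

for efficacy in (0.10, 0.30):
    prevented = int(annual_cases * efficacy)
    print(f"{efficacy:.0%} efficacy -> ~{prevented:,} cases prevented per year")
```

Even the "wonderful" 10% scenario is twenty million cases. The percentage is small; the absolute impact is not.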

How to use this news story:
a) Interpreting effect sizes beyond Cohen's numeric recommendations.
b) A primer on large-scale medical trials and their ridiculously large n-sizes and transitioning from lab to real-world trials.

So, I heard this story about a new malaria vaccine trial in Malawi while getting ready for work and listening to the NPR Hourly News update, which is very on-brand for me.

The aspect of this report that really caught my attention: in the lab, this vaccine "only" has a 30% success rate. That got me thinking about the struggle of understanding (and explaining!) effect sizes in stats class.


Weren't p-values sort of nice in that they were either/or? You were significant, whatever that means, or you weren't. The binary was comfortable.

But now we are using effect sizes, among other methods, to decide whether research findings are "big" enough to get excited about. And effect size interpretation is a wee bit arbitrary, right? Results can be nil, small, medium, or large, depending on what your effect size calculation spits out. What does that mean? When does it count? When is your research worth implementing, or acting on, or replicating? Like, when COHEN says it is? About his own rules of thumb, Cohen warned: "This is an operation fraught with many dangers" (1977).

In addition to the rules of thumb, you need to know what you are measuring, what you will do with your data in real life, and what counts as a big deal for the thing you are measuring.

So, is a 30% success rate a big deal? Or, when is a small numeric effect actually a big-deal real-life effect?

WHO and the Bill & Melinda Gates Foundation think the malaria vaccine has big real-world potential. Malaria is awful. It kills more kids than adults (see below), with over 200 million cases globally and almost half a million deaths annually. For a problem of this magnitude, a possible 30% reduction would be massive.

Look at the shaded confidence interval! Well done, NPR!

Aside from my big effect size metaphor, this article can also be used in class, as it describes a pilot program that takes the malaria vaccine trials from the lab to the messy real world.


