A simple tool operationalizes post-childbirth hemorrhaging and saves lives.

 https://www.npr.org/sections/goatsandsoda/2023/05/10/1175303067/a-plastic-sheet-with-a-pouch-could-be-a-game-changer-for-maternal-mortality

https://www.bmj.com/content/381/bmj.p1055

I love this study because it is based on research that will save women's lives without costing a lot of money. I love it.

Headline from the original research: "Simple intervention for severe blood loss after childbirth is found to save lives."


Here is a link to the original study. I learned about it from an NPR story about the research by Rhitu Chatterjee.

I also love it because it is an accessible example of a bunch of statistics concepts: dependent variables...operationalizing variables...why cross-cultural research and solutions aren't just lip service to diversity...how control groups in medical research are very different from control groups in psychology research...absolute vs. relative risk.

-Dependent variables/operationalized variables: This study clearly illustrates the power of measurement and operationalization. The researchers wanted a better way to assess post-childbirth hemorrhaging, so they created a calibrated plastic drape to measure blood loss directly after childbirth. The blood pools in a plastic pouch, which is far easier to measure than an estimate of blood lost based on blood on the floor, gurney, etc.


https://www.researchgate.net/figure/Photograph-of-the-calibrated-blood-collection-drape_fig1_281634929

-Cross-cultural research is essential. An intervention that works in one part of the world may or may not work in other parts of the world. Innovations can come from parts of the world that are under-represented in medical journals. Previous research on the drape was conducted in Thailand and India.

-Control group =/= The Nothing Happens Group. In psychology research, the control group usually means nothing is happening: a placebo. That is different from what a control group is in medical research. As in many medical studies, the control group in this study received standard care, while the intervention group received standard care plus the experimental treatment bundle.

For the study, the researchers trained hospital providers in the intervention group to treat women with a set of treatments recommended by the World Health Organization. The treatments include massaging the uterus to help it contract, which would stop the bleeding, and administering the drug Oxytocin, which also helps the uterus contract, and the drug Tranexamic acid, which promotes clotting. Other treatments include IV fluids, which replace lost fluids from bleeding, and a physical exam to check for sources of bleeding.  WHO recommends bundling these treatments, "which means that all the effective treatments need to be given at once in somebody who was bleeding," explains Coomarasamy. "So there isn't any time lost."  The providers in the control group of hospitals provided care as usual, which normally involves using one of the treatments at a time, then trying another ... and another.
A description of the control group used in this study.


Absolute vs. Relative Risk: I believe absolute vs. relative risk is the most critical probability lesson we can teach in introductory psychology. This study provides an excellent example.


An example of absolute vs. relative risk that isn't shady. The first paragraph below describes the findings in terms of relative risk.

A more precise measurement of blood loss with a 'drape'

The study, carried out in 80 hospitals across four African countries, used a simple device called a "drape" to collect and measure the amount of blood from women who have just given birth. The device, combined with a bundle of treatment options recommended by the World Health Organization, reduced the number of women experiencing severe bleeding by 60%. The study also found a reduction in maternal deaths from bleeding.


The same summary also describes the findings in terms of absolute risk.
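To make the arithmetic concrete for students, here is a minimal Python sketch of the difference between absolute and relative risk reduction. The rates below are hypothetical, chosen only so that the relative reduction works out to the 60% figure quoted above; the study's actual numbers are in the linked paper.

```python
# Hypothetical rates (NOT the study's actual figures), chosen so the
# relative risk reduction comes out to 60% for illustration.
control_rate = 0.040     # 4.0% of control-group women have the outcome
treatment_rate = 0.016   # 1.6% of intervention-group women have the outcome

# Absolute risk reduction: the raw difference in rates.
absolute_risk_reduction = control_rate - treatment_rate

# Relative risk reduction: that difference as a fraction of the control rate.
relative_risk_reduction = absolute_risk_reduction / control_rate

print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")  # 2.4 percentage points
print(f"Relative risk reduction: {relative_risk_reduction:.1%}")  # "a 60% reduction"
```

The teaching point: "a 60% reduction" sounds dramatic, but the same result expressed in absolute terms (2.4 percentage points in this made-up example) gives students a very different feel for the size of the effect. Both are accurate; they just answer different questions.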



Also, the interview with the author touches on 1) how research can explain some but not all of the variance, 2) ecological validity, and 3) how real-life interventions are evaluated not just by efficacy/data but by cost and ease of implementation.



