This NPR story is about reproducibility in science that ISN'T psychology and the limitations of expert intuition, and it summarizes a recent research article from PLOS Biology (so, open science that isn't psychology, too!).
Thrust of the story: Cancer researchers may be facing a replication problem similar to the one psychologists face. I've blogged about this issue before, in particular, concerns about replication in cancer research, possibly due to variability in how lab rats are housed and fed.
So, this story is about a study in which 200 cancer researchers, post-docs, and graduate students looked at six pre-registered cancer study replications and guessed which studies would successfully replicate. The participants systematically overestimated the likelihood of replication. However, researchers with high h-indices were more accurate than the general sample. I wonder if the high h-indices uncover super-experts, or super-researchers who have been around the block and are a bit more cynical about the ability of any research finding to replicate.
How to use in a stats class: False positives: The original research didn't replicate (this time, maybe), and the experts judging replicability were overly optimistic. Also, one might wonder if there are potential cancer treatments we don't know about because of false negatives.
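If you want a hands-on demo for that stats class, here's a minimal simulation sketch (my own illustration, not from the NPR story or the PLOS Biology article) showing where false positives come from: even when there is no real effect at all, about 5% of studies will hit p < .05, and those are the "findings" most likely to get published and then fail to replicate.

```python
# Illustrative simulation: many two-group studies where the null is TRUE
# (both groups drawn from the same distribution). Roughly 5% will still
# come out "significant" at alpha = .05 -- these are false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_per_group = 10_000, 30

false_positives = 0
for _ in range(n_studies):
    # Two groups sampled from the SAME population: no real effect exists.
    a = rng.normal(loc=0, scale=1, size=n_per_group)
    b = rng.normal(loc=0, scale=1, size=n_per_group)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

rate = false_positives / n_studies
print(f"False-positive rate with no true effect: {rate:.3f}")
```

Students can tweak alpha or the sample size and see that the false-positive rate tracks alpha, not sample size, which is a nice springboard into why replication (and pre-registration) matters.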
How to use in a research class: The failure to replicate may be evidence of publication bias. Replication is necessary for good science. Experts aren't perfect.