Monday, October 31, 2016

Harris' "Reviews Of Medical Studies May Be Tainted By Funders' Influence"

This NPR story is a summary of the decisively titled "The Mass Production of Redundant, Misleading, and Conflicted Systematic Reviews and Meta-analyses" authored by Dr. John Ioannidis.

The NPR story provides a very brief explanation of meta-analysis and systematic reviews. It explains that they were originally used as a way to make sense of many conflicting research findings coming from a variety of different researchers. But these very influential publications are now being sponsored and possibly influenced by Big Pharma.

This example explains conflicts of interest and how they can influence research outcomes. Beyond financial relationships, the author also cites ideological allegiances as a source of bias in meta-analysis. In addition to Dr. Ioannidis, the story interviews Dr. Peter Kramer, a psychiatrist who defends the efficacy of antidepressants. He suggests that researchers who believe placebos are just as effective as antidepressants tend to analyze meta-analysis data in ways that support that belief.
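
If you want to show students what "pooling conflicting findings" actually looks like, here is a minimal sketch of a fixed-effect meta-analysis using inverse-variance weights. The effect sizes and variances below are invented for illustration; they are not from any study discussed in the NPR story.

```python
# A minimal fixed-effect meta-analysis sketch: pool several study effect
# sizes using inverse-variance weights. All numbers are made up.
import numpy as np

effects = np.array([0.30, -0.05, 0.42, 0.10])   # hypothetical study effect sizes
variances = np.array([0.04, 0.09, 0.06, 0.02])  # hypothetical sampling variances

weights = 1.0 / variances                        # more precise studies count more
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE = {pooled_se:.3f})")
print(f"95% CI: [{pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f}]")
```

The single pooled estimate is exactly what makes these papers so influential, and why a sponsor's thumb on the scale matters so much.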

Ways to use in class:
-Meta-analysis as a way to sort out conflicting research findings.
-An example of conflict of interest.
-An example of experimenter bias (in the form of both the conflict of interest and the individuals who believe that antidepressants are ineffective).
-If you are like me and teach lots of pre-PT/OT/PA and nursing students, this is an applicable example for that crowd.
-Confirmation bias

Thursday, October 27, 2016

Turner's "E Is For Empathy: Sesame Workshop Takes A Crack At Kindness" and the K is for Kindness survey.

This NPR story is about a survey conducted by the folks at Sesame Street. The survey asked parents and teachers about kindness: whether kids are kind, whether the world is kind, how they define kindness, and so on.

The NPR story is a roundabout way of explaining how we operationalize variables, especially in psychology. And the survey itself provides examples of forced-choice research questions and dichotomous responses that could have been Likert-type scales.
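
To make that operationalization point concrete, here is a small sketch contrasting a forced-choice (yes/no) item with a Likert-type version of the same question. The item wording and responses are invented, not taken from the Sesame Workshop survey.

```python
# Hypothetical responses to "Is the world a kind place for children?"
# The same construct operationalized two ways: a dichotomous forced-choice
# item versus a 5-point Likert-type item. Invented data for illustration.
import statistics

yes_no = [1, 0, 0, 1, 0, 0, 1, 0]   # 1 = "yes", 0 = "no"
likert = [4, 2, 2, 5, 3, 1, 4, 2]   # 1 = strongly disagree ... 5 = strongly agree

print(f"Forced choice: {sum(yes_no) / len(yes_no):.0%} said yes")
print(f"Likert-type:   mean = {statistics.mean(likert):.2f}, "
      f"sd = {statistics.stdev(likert):.2f}")
# The Likert version preserves intensity of opinion that a yes/no item throws away.
```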

The NPR Story:

The Children's Television Workshop, the folks behind Sesame Street, has employees in charge of research and evaluation (a chance to plug off-the-beaten-path stats jobs to your students). They surveyed parents and teachers to figure out what it means to be kind when you are a kid.

The main findings are summarized here. Parents and teachers are worried that the world isn't kind and doesn't emphasize kindness. But both groups think that kindness is more important than academic achievement.


http://kindness.sesamestreet.org/view-the-results/ 

Monday, October 24, 2016

Hancock's "Skip The Math: Researchers Paint A Picture Of Health Benefits And Risks"

Two scientists, Lazris and Rifkin, want to better illustrate the risks and benefits associated with preventative medicine. They do so by asking people to imagine a theater filled with 1,000 people, then describing the costs and benefits of different preventative procedures in terms of how many people in the theater would be saved or would perish, based on current efficacy data.

One such video can be viewed here and illustrates the absolute and relative risks associated with mammography. They are attempting to demystify statistics and better explain the risks and benefits by showing an animated theater filled with 1,000 women, and showing how many women actually have their lives saved by mammograms (see screen shot below)...



...as well as the number of women who received false positives over the course of a lifetime...

[Screen shot of the video, which is trying a new way to illustrate risk.]
...the video also illustrates how a "20% reduction in breast cancer deaths" can actually be equal to 1 life saved out of 1,000.



This video touches on the confusion about relative versus absolute risk as well as the actual effectiveness of preventative medicine (and why it is so important to conduct efficacy research for medical interventions). I have a few discussion days with my Honors students and a discussion board with my online students that involve this piece from fivethirtyeight.com, which questions whether methods of early cancer detection save lives or just uncover non-cancerous variation within the human body. This topic leads to lively discussions.
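
The arithmetic behind the "20% reduction" headline is easy to show students directly. The baseline death rates below are placeholder values chosen to make the numbers work out, not the video's exact figures.

```python
# How a "20% reduction in breast cancer deaths" can equal roughly one life
# saved per 1,000 women screened. Baseline rates are placeholder values
# for illustration, not the video's exact figures.
per_1000 = 1000
deaths_without_screening = 5   # assumed deaths per 1,000 without mammography
deaths_with_screening = 4      # assumed deaths per 1,000 with mammography

relative_risk_reduction = (deaths_without_screening - deaths_with_screening) / deaths_without_screening
absolute_risk_reduction = (deaths_without_screening - deaths_with_screening) / per_1000

print(f"Relative risk reduction: {relative_risk_reduction:.0%}")   # the headline number
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%} "
      f"({deaths_without_screening - deaths_with_screening} life per 1,000 women)")
```

Same data, two very different-sounding numbers, which is exactly the confusion the theater illustration is designed to clear up.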

How to use in class:
-Relative risk
-Absolute risk
-False positives
-Medical examples (and I have plenty of pre-medical professional students)
-An example of why we teach our students to make graphs and charts. Sometimes, data is better shared via illustration
-Using statistics to inform important real-life decisions

Monday, October 17, 2016

Pew Research's "The art and science of the scatterplot"

Sometimes, we need to convince our students that taking a statistics class changes the way they think for the better.

This example demonstrates that one seemingly simple skill, interpreting a scatter plot, is tougher than it looks. Pew Research conducted a survey on scientific thinking in America (here is a link to that survey) and found that only 63% of American adults can correctly interpret the linear relationship illustrated in the scatter plot below. And that 63% came out of a survey with multiple-choice responses!
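
If you want to rebuild a similar reading exercise for class, here is a quick sketch that draws a scatter plot with a clear positive linear relationship. The data are simulated, not Pew's, and the variable names are just placeholders.

```python
# Simulated data (not Pew's) for an in-class scatter plot reading exercise:
# a clear positive linear relationship with some noise.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 60)            # placeholder predictor, e.g., hours studied
y = 2.0 * x + rng.normal(0, 3, 60)    # noisy linear trend in the outcome

plt.scatter(x, y)
plt.xlabel("Predictor")
plt.ylabel("Outcome")
plt.title("What happens to y as x increases?")
plt.show()
```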


How to use in class:
-Show your students that a major data collection/survey firm decided that interpreting statistics was an appropriate question on their ten-item quiz of scientific literacy.
-Show your students that many randomly selected Americans can't interpret a scatter plot correctly.

And for us instructors:
-Maybe a seemingly simple task like the one in this survey isn't as intuitive as we think it is!

Monday, October 10, 2016

Pew Research's "Growing Ideological Consistency"

This interactive tool from Pew Research illustrates left and right skew as well as the median and longitudinal data. The x-axis indicates how politically consistent (as determined by a survey of political issues) self-identified Republicans and Democrats are across time. Press the button and you can animate the data, or cut up the data so you only see one party or only the most politically active Americans.

http://www.people-press.org/2014/06/12/section-1-growing-ideological-consistency/#interactive
The data for both political parties go from being normally distributed in 1994 to skewed by 2014. And you can watch what happens to the median as the political winds change (and perhaps remind your students why the mean would be the less desirable measure of central tendency for this example). I think it is interesting to see the relative unity in political thought (as demonstrated by more Republicans and Democrats indicating mixed political opinions) in the wake of 9/11, and the shift toward more politically consistent (divided?) opinions in the more recent past.
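
To reinforce the mean-versus-median point, here is a small simulated demo (not Pew's actual data) showing how the mean drifts once a distribution skews while the median stays put.

```python
# Why the median is preferred over the mean once a distribution skews.
# Simulated "ideological consistency" scores, not Pew's actual data.
import numpy as np

rng = np.random.default_rng(0)
symmetric = rng.normal(0, 2, 10_000)                # 1994-style: centered, symmetric
skewed = rng.gamma(shape=2, scale=2, size=10_000)   # 2014-style: piled up on one side

for name, scores in [("symmetric", symmetric), ("skewed", skewed)]:
    print(f"{name:>9}: mean = {np.mean(scores):5.2f}, median = {np.median(scores):5.2f}")
# In the symmetric case the mean and median agree; in the skewed case the mean
# gets pulled toward the tail while the median stays with the bulk of the cases.
```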

Depending on how deep you feel like going with this example, it can also illustrate research methods for your students as Pew has been gathering this research for years. 

Monday, October 3, 2016

Dr. Barry Marshall as an example of Type II error.

I just used this example in class and I realized that I never shared it on my blog. I really love this example of Type II error (and some other stuff, too). So here it goes.

http://www.achievement.org/autodoc/page/mar1int-1