Monday, January 25, 2016

Statistics/RM videos from The Economist

TED isn't the only source of videos for teaching statistics. The Economist also makes animated videos that are lousy with data. One easy, no-paywall source for such videos is The Economist's Videographic playlist on YouTube (there is a limit on the number of article views per month at their website).

One really statsy video from The Economist that I've featured previously on this blog explains the real-life implications of Type I/Type II errors in research (and, specifically, how they lead to errors in published research).

The other videos may not be as directly related to the teaching of statistical topics, but they do illustrate data. Topics range from American union membership trends to this video about world population growth. As you may have inferred from the source, many of these videos focus on national and global economic information, but all of the videos do present data that you can integrate into your classes.

Some are more applicable to teaching statistics: This video describes why we have so much data and keep on generating more data. Others are particularly applicable to social/interpersonal psychology, like this illustration of how "like likes like" in terms of education level (and how this may contribute to income inequality).

Like likes like

Others are about non-teaching topics but very relatable, like this video about shifting world age demographics or this one explaining why textbooks in America are so expensive.

These videos demonstrate to your students a) how data can be used to make a logical argument, b) how illustrated data can help to visualize a compelling story, and c) that statistics are used by people who do not work in explicitly stats-y careers.

Monday, January 18, 2016

Explaining between and within group differences using Pew Research data on religion/climate change

I am a big fan of Pew Research Center. They collect, share, and summarize data about a wide variety of topics. In addition to providing very accessible summaries of their findings, they also provide more in-depth information about their data collection techniques, including the original materials used in their data collection and very thorough explanations of their methods.

One topic Pew studies is religion and the attitudes (religious and secular) held by people of different religions. And it got me thinking that I could use their data to explain the within- and between-group differences at the heart of a conceptual understanding of ANOVA.

Specifically, Pew gathered data looking at between-group differences in beliefs in global climate change by religion...

Chart created by Pew Research

...and belief in climate change within just Catholics, divided up by political affiliation.

Chart created by Pew Research

The questionnaires differed slightly between the two surveys. However, both groups were asked whether or not global warming is caused by human activity. The data table illustrates the between-group differences among religions in their views on climate change, while the bar graphs demonstrate that, within one group (Catholics), there is a fair amount of variability in beliefs about climate change based upon political affiliation.

How to use this in class? Well, Catholics, as a group, report 45% agreement with the idea that climate change is caused by human activity, which is pretty different from White Evangelicals, who report 28% agreement with this statement. So those groups would be significantly different, right? But wait...62% of Catholic Democrats agree that global climate change is caused by human activity, while only 24% of Catholic Republicans agree with this statement. That is an awful lot of within-group variance.
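If your students like to see the arithmetic, the between/within distinction can be sketched in a few lines of Python. This is a minimal sketch, not Pew's actual data: the subgroup percentages (62% and 24%) come from the charts above, but the sample sizes (n = 100 per subgroup) and the simulated 0/1 responses are invented for illustration.

```python
# Illustrating between- vs. within-group variability using the Catholic
# Democrat (62%) and Catholic Republican (24%) percentages above.
# Group sizes of n = 100 are made up for the demo.
import numpy as np

rng = np.random.default_rng(0)

def binary_sample(p, n=100):
    """Simulate n respondents; 1 = agrees climate change is human-caused."""
    return rng.binomial(1, p, n)

groups = {
    "Catholic Democrats": binary_sample(0.62),
    "Catholic Republicans": binary_sample(0.24),
}

all_resp = np.concatenate(list(groups.values()))
grand_mean = all_resp.mean()

# Between-group sum of squares: how far each subgroup's mean sits
# from the overall "Catholic" mean.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())

# Within-group sum of squares: the spread of responses inside each subgroup.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())

print(f"Overall agreement: {grand_mean:.0%}")
print(f"SS between: {ss_between:.1f}, SS within: {ss_within:.1f}")
```

The overall "Catholic" mean lands near the 45% from the table, while the two subgroup means sit far from it, which is exactly the point: a single group average can hide a lot of within-group variability.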

More data on many more religions is available from Pew.

Monday, January 11, 2016

Stein's "Is It Safe For Medical Residents To Work 30-Hour Shifts?"

This story describes 1) an efficacy study that 2) touches on some I/O/health psychology research and 3) has gained the unwanted attention of government regulatory agencies charged with protecting research participants.

The study described in this story is an efficacy study that questions a 2003 decision by the Accreditation Council for Graduate Medical Education. Specifically, that decision capped the number of hours that first-year medical residents can work at 80 per week, with a maximum shift length of 16 hours. The PIs want to test whether or not these limits improve resident performance and patient safety. They are doing so by assigning residents to either 16-hour maximum shifts or 30-hour maximum shifts. However, the research participants didn't have the option to opt out of this research. Hence, an investigation by the federal government.

So, this is interesting and relevant to the teaching of statistics, research methods, I/O, and health psychology for a number of reasons.

1) As an I/O instructor, it is nice to double dip with a research methods example that studies an I/O topic (shift work/night shift and employee well-being).

2) Efficacy research must be conducted because intuition isn't always right. Here, they question whether a 30-hour shift is really worse than multiple 16-hour night shifts over the course of a week. The PIs argue that the longer shifts lead to more consistent care (your doctor doesn't change in the middle of your care), which may lead to fewer mistakes and better patient care.

3) This multi-location study looked at two different conditions: maximum 16-hour shift versus maximum 30-hour shifts.

4) None of the medical residents or their patients consented to be part of this research study, nor were the residents able to opt out without leaving their residency. The research is being investigated by the federal government, even though it was classified as "minimal risk" (and they use that applicable IRB term in the story).

Monday, January 4, 2016

Oster's "Everybody Calm Down About Breastfeeding"

I just had a baby. Arthur Francis joined our family last week. Don't mind the IV line on his head, he is a happy, chubby little boy.

Now, I am the mother of a newborn and a toddler. And I have certainly been inundated by the formula-versus-breast-feeding debate. In case you've missed out on this, the debate centers around piles and piles of data indicating that breast-fed babies enjoy a wealth of developmental outcomes denied to their formula-fed peers. Which means there is a lot of pressure to breast feed (and some women feel a lot of guilt when they can't or do not want to breast feed).

However, the data that support breast feeding also show that breast feeding is much more common among educated, wealthy white women with high IQs. And being born to such a woman probably affords a wealth of socioeconomic advantages beyond simply breast milk. These issues, as well as mixed research findings, are reviewed in Emily Oster's "Everybody Calm Down About Breastfeeding," written for FiveThirtyEight.

I think that this would be a good way to introduce the idea of covariates/the third-variable problem to your students. For example, what are we to conclude when breast feeding is associated with higher IQ in children, but women with high IQs are more likely to breast feed?
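The third-variable problem is easy to demonstrate with a toy simulation. In the sketch below, every number is invented: maternal IQ drives both the decision to breast feed and the child's IQ, while breast feeding itself has zero true effect. The raw correlation still comes out positive, and only shrinks toward zero once you control for the mother's IQ.

```python
# Toy simulation of the third-variable problem: mom's IQ influences both
# breast feeding and child IQ; breast feeding has NO effect of its own.
# All parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(42)
n = 5000

mom_iq = rng.normal(100, 15, n)

# Higher-IQ mothers are (in this toy world) more likely to breast feed.
breastfed = (rng.normal(mom_iq, 15) > 100).astype(float)

# Child IQ depends on mom's IQ only -- there is no breast feeding term.
child_iq = 0.5 * mom_iq + rng.normal(50, 10, n)

raw_r = np.corrcoef(breastfed, child_iq)[0, 1]

# Partial correlation controlling for mom's IQ: correlate the residuals
# left over after regressing each variable on mom_iq.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

partial_r = np.corrcoef(residuals(breastfed, mom_iq),
                        residuals(child_iq, mom_iq))[0, 1]

print(f"raw r = {raw_r:.2f}, controlling for mom's IQ r = {partial_r:.2f}")
```

Students can see the "breast feeding effect" appear and disappear depending on whether the covariate is in the model, which is the whole argument of the Oster piece in miniature.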

It is also an example of a situation in which creating a truly randomized study is exceedingly difficult. This could also be used as a springboard for a bigger discussion of how influential socioeconomic status is in our life choices. Not to brag, but I'm a highly educated woman with a high IQ. I also happen to be white. And I work in an environment in which I control my own schedule, have an office with a door that shuts, have enough money to pay for a breast pump, etc. It is relatively easy for me to breast feed. What is a poor woman to do in a work environment in which her employer is not obliged to give her time off to pump, a discreet location to pump, or a secure refrigerator or freezer to store her breast milk?

Another thing to love about this story is all of the links to the cited research. If you combined the FiveThirtyEight article with an original research article, it might be a way to give your students a security blanket as they take on a big, bad research article.