Monday, July 27, 2015

One article (Kramer, Guillory, & Hancock, 2014), three stats/research methodology lessons

The original idea for using this article this way comes from Dr. Susan Nolan's presentation at NITOP 2015, entitled "Thinking Like a Scientist: Critical Thinking in Introductory Psychology". I think that Dr. Nolan's idea is worth sharing, and I'll reflect a bit on how I've used this resource in the classroom. (For more good ideas from Dr. Nolan, check out her books: Psychology, Statistics for the Behavioral Sciences, and The Horse That Won't Go Away, which is about critical thinking.)

Last summer, the Proceedings of the National Academy of Sciences published an article entitled "Experimental evidence of massive-scale emotional contagion through social networks". Gist: Facebook manipulated participants' News Feeds to increase the number of positive or negative status updates that each participant viewed. The researchers then measured the number of positive and negative words that the participants used in their own status updates. They found statistically significant effects and, thus, support for the claim that emotional contagion (the spreading of emotions) can occur via Facebook.

I assure you, your students are very familiar with Facebook. Additionally, emotional contagion theory is pretty easy to understand. As such, the article itself is accessible and interesting to students.

Pedagogically, there are three stats/research methodology lessons here: p-values vs. effect sizes, how to create a misleading graph, and informed consent in the age of Big Data.

Lesson 1. This article is a good example of p-values vs. effect size. For one analysis, the p-value is gorgeous (< .001) and the effect size is itty-bitty (.001). Guess why? N = 689,003. Here is an additional resource if you want to delve further into p-values vs. effect sizes with your students.
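If you want to show students why that happens, here's a minimal simulation sketch (invented data, not the paper's actual analysis): with N in the hundreds of thousands, even a trivially small group difference sails past p < .001 while the standardized effect size stays tiny.

```python
# Minimal sketch: a huge N makes a tiny effect "significant" (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 689_003 // 2  # roughly half the study's N per group

# Hypothetical groups whose true means differ by only 0.01 standard deviations
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.01, scale=1.0, size=n)

t, p = stats.ttest_ind(treated, control)

# Cohen's d: standardized mean difference
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
d = (treated.mean() - control.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.2g}, d = {d:.3f}")
# Typical run: p well below .001, d around 0.01 -- significant but negligible
```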

Lesson 2. The figures overemphasize the findings because the y-axis doesn't start at zero. See below.
http://www.pnas.org/content/111/24/8788.full.pdf
If you look at the graph in passing, the differences seem...reasonable, especially the one in the upper right-hand quadrant. However, if you look at the actual numbers on the y-axis, the differences are of little practical value. This disparity also harkens back to the p-value vs. effect size issue: the findings are statistically significant, but their practical implications are unimpressive.
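To recreate the effect for students, here's a sketch (with made-up numbers on roughly the paper's scale, not its actual values) that plots the same two bars twice, once with a truncated y-axis and once with a zero-based one:

```python
# Same data, two y-axes: truncated (misleading) vs. zero-based (honest).
# The values below are invented for illustration.
import matplotlib.pyplot as plt

conditions = ["Control", "Reduced positivity"]
pct_positive_words = [5.25, 5.15]  # hypothetical % of positive words used

fig, (ax_trunc, ax_zero) = plt.subplots(1, 2, figsize=(8, 3))

ax_trunc.bar(conditions, pct_positive_words)
ax_trunc.set_ylim(5.0, 5.3)  # truncated axis: the difference looks dramatic
ax_trunc.set_title("Axis starts at 5.0")

ax_zero.bar(conditions, pct_positive_words)
ax_zero.set_ylim(0, 6)  # zero-based axis: the difference nearly disappears
ax_zero.set_title("Axis starts at 0")

for ax in (ax_trunc, ax_zero):
    ax.set_ylabel("% positive words")

plt.tight_layout()
plt.show()
```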

Lesson 3. The researchers sorta-kinda obtained informed consent. How did they go about doing so? They argued that by agreeing to the Facebook Terms of Service, users are providing consent to experimental research. However, the participants 1) were not aware that they were part of this particular study and 2) were never given the option to opt out of it. Several good pieces have been written on this aspect of the study, including ones in The Washington Post (.pdf here) and The Wall Street Journal (.pdf here). Of particular interest here (to me, at least) is the disconnect between industry statisticians, who crunch numbers and manipulate user experiences EVERY DAY as part of their jobs, and social psychologists, who view the exact same practices with a greater emphasis on research ethics and participant rights. This all resulted in the "Editorial Expression of Concern and Correction" that has been appended to the source article. Facebook also claims to have changed its research process as a result of this study (described in the WSJ article).

How I used this in my classes:

I used the graph during the first few weeks of class as an example of how NOT to create a bar graph. I also used the study itself as a review of research methods, IV, DV, etc.

I used this as a discussion point in my Honors Psychological Statistics class (the topic of the week's discussion was research ethics and this was one of several case studies) and it seemed to engage the students. We discussed User Agreements, practical ways to increase actual reading of User Agreements, and whether or not this was an ethical study (in terms of data collection as well as potential harm to participants).

In the future, I think I'll have my students go over the federal guidelines for informed consent and compare those standards to Facebook's attempt to gain informed consent via user agreement.

Aside: I learned about this article and how to use it in the classroom at the National Institute for the Teaching of Psychology. Guys, go to NITOP. Totally worth your time and money. Also, it's family friendly if you and your partner and/or kids would like a trip to Florida in early January. NITOP is accepting proposals for various submission formats until October 1st (with some exceptions).

Monday, July 20, 2015

"Correlation is not causation", Parts 1 and 2

Jethro Waters, Dan Peterson, Ph.D., Laurie McCollough, and Luke Norton made a pair of animated videos (1, 2) that explain why correlation does not equal causation and how we can perform lab research in order to determine if causal relationships exist.

I like them a bunch. Specific points worth liking:

-Illustrations of scatter plots for significant and non-significant relationships.

Data does not support the old wives' tale that everyone goes a little crazy during full moons.

-Explains the Third Variable problem.
Simple, pretty illustration of the perennial correlation example, the relationship between ice cream sales (X) and deaths by drowning (Y), and the third variable, hot weather (Z), that drives the relationship. (A simulation sketch of this idea appears after this list.)
-In addition to arguing that correlation =/= causation, the video suggests ways to study a correlational relationship via more rigorous research methods (here, violent video games and violent behavior).
Video games (X) influence aggression (Y) via the moderator of personality (Z)


In order to test the video game hypothesis without relying on diary/retrospective data collection, the video describes how one might design a controlled study instead.

-Finally, at the end of the video, they provide citations to the research used in the video. You could take this example a step further and have your students look at the source research.
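As promised above, here is a small simulation sketch of the third variable problem (all numbers invented for illustration): hot weather (Z) drives both ice cream sales (X) and drownings (Y), so X and Y correlate even though neither causes the other, and the correlation all but vanishes once Z is controlled.

```python
# Third variable problem: Z causes both X and Y; X and Y do not cause each other.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

z = rng.normal(size=n)                        # daily temperature (standardized)
x = 0.8 * z + rng.normal(scale=0.6, size=n)   # ice cream sales
y = 0.8 * z + rng.normal(scale=0.6, size=n)   # drowning deaths

print(f"r(X, Y) = {np.corrcoef(x, y)[0, 1]:.2f}")  # sizable, around .6

def residualize(v, z):
    """Remove the linear effect of z from v."""
    slope, intercept = np.polyfit(z, v, 1)
    return v - (slope * z + intercept)

r_partial = np.corrcoef(residualize(x, z), residualize(y, z))[0, 1]
print(f"r(X, Y | Z) = {r_partial:.2f}")  # near zero once Z is controlled
```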

Special thanks to Rajiv Jhangiani for introducing me to this resource!

Tuesday, July 14, 2015

Free online research ethics training

Back in the day, I remember having to complete an online research ethics course in order to serve as an undergraduate research assistant at Penn State.

I think that such training could be used as an exercise/assessment in a research methods class or an advanced statistics class.

NOTE: These examples are sponsored by American agencies and, thus, teach participants about American laws and rules. If you have information about similar training in other countries (or other free options for American researchers), please email me and I will add the link.

Online Research Ethics Course from the U.S. Department of Health and Human Services' Office of Research Integrity.

Features: Six different learning modules, each with a quiz and certificate of completion. These sections include separate quizzes on the treatment of human and animal test subjects. Other portions also address ethical relationships between PIs and RAs and broader issues of professional responsibility when reporting results.

National Institutes of Health's Protecting Human Research Participants

Features: Requires free registration. Four different, quizzed learning modules. This one includes some lessons about the historical need for IRBs, the Belmont Report, and the need to respect and protect our participants. It also provides a certificate at the end of the training.

Monday, July 13, 2015

Dread Fall 2015 Semester

It's coming, guys.

But let's get ahead of it. I thought I would re-share some resources that you may want to consider working into your curriculum this year. I picked out a few lessons and ideas that also require a bit of forethought and planning, especially if they become assessment measures for your class.

Center for Open Science workshops:

As previously discussed on this blog, COS offers free consultation (face-to-face or online) to faculty and students in order to teach us about open science practices. They provide guidance about more traditional statistical issues, like power calculations and conducting meta-analyses, in addition to lessons tailored to introducing researchers to the Open Science Framework.


Take your students to an athletic event, talk about statistics and sports:

I took my students to a baseball game and worked some statsy magic. You can do it, too. If not a trip to the ballpark, an on-campus or televised athletic event will work just fine.

Statistics/research methods discussions:

I have just added a new searchable label to this blog: discussion prompts. Such items are typically news stories that are great for generating conversations about statistics/research-related topics. This is something to think about prior to the beginning of the semester, in case you want to integrate one of the prompts into your assessments (discussion boards, small group discussions, discussion days, reflective writing, etc.).

Formal, free, online research ethics training:

This post describes two free resources that provide thorough research ethics training (as well as assessment quizzes and training completion certificates, two things that are especially helpful if you want to use these as a class assignment).

First day of class persuasion:

Statistics are everywhere and a part of every career. We know this. Here are some resources for trying to convince your students.

Monday, July 6, 2015

Ben Blatt's "Bad Latitude" and "You Live in Alabama. Here’s How You’re Going to Die"

Ben Blatt of Slate mined Centers for Disease Control and Prevention data in order to provide us with 13 different maps of the United States showing mortality information for each state. Below, information on disproportionately common causes of death in each state.

While the maps are morbid and interesting, the story behind them (read it here) makes this a good example of how easily data can be misrepresented by maps.

The story, along with the maps, unveils several issues that statisticians/researchers must consider when presenting descriptive statistics. In this instance, Blatt had to sort through the data and eliminate the most common causes of death (heart disease, cancer, etc.) in order to uncover data unique to each state.


Relatedly, he highlights the fact that "disproportionately" does not mean "most":

"But this map—like many maps which purport to show attributes meant to be “distinct” or “disproportionate”—can be misleading if not read properly. For one thing, you cannot make comparisons between states. Looking at this map, you probably would not guess that Utah has the sixth-highest diabetes rate in the country. Diabetes just happens to be the one disease that affects Utah most disproportionately. Louisiana has a higher diabetes death rate than any state, but is affected even more disproportionately by kidney disease."

Additionally, some of the maps highlight the difference between the mean and the median...



"It should be noted that each state’s rate is compared with the national average, not the median. That’s why it’s possible for 30 states to have more deaths than the national average."

Anyway, I think this could be useful in class because the titles of the maps are attention-grabbing, but it also forces students to think through some unique statistical issues.


Or, put bluntly and in a manner inappropriate for the undergraduate classroom:


http://xkcd.com/1138/