Monday, July 27, 2015

One article (Kramer, Guillory, & Hancock, 2014), three stats/research methodology lessons

The original idea for using this article this way comes from Dr. Susan Nolan's presentation at NITOP 2015, entitled "Thinking Like a Scientist: Critical Thinking in Introductory Psychology". I think that Dr. Nolan's idea is worth sharing and I'll reflect a bit on how I've used this resource in the classroom. (For more good ideas from Dr. Nolan, check out her books, Psychology, Statistics for the Behavioral Sciences, and The Horse that Won't Go Away (about critical thinking)).

Last summer, the Proceedings of the National Academy of Sciences published an article entitled "Experimental evidence of massive-scale emotional contagion through social networks". Gist: Facebook manipulated participants' News Feeds to increase the number of positive or negative status updates that each participant viewed. The researchers then measured the number of positive and negative words that the participants used in their own status updates. They found statistically significant effects and, thus, support for the idea that emotional contagion (the spreading of emotions) can occur via Facebook.
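If you want to show students concretely what the dependent variable looked like, here is a minimal sketch of that kind of word-count measure. The actual study used the LIWC word lists to classify words; the tiny word sets and the function name below are just placeholders for illustration.

```python
# Illustrative sketch of the study's dependent variable: the percentage of
# positive and negative words in a status update. The real study used LIWC
# word lists; these tiny word sets are made-up stand-ins.
POSITIVE = {"happy", "love", "great", "good", "fun"}
NEGATIVE = {"sad", "hate", "awful", "bad", "hurt"}

def emotion_word_rates(status: str) -> tuple[float, float]:
    """Return (% positive words, % negative words) in one status update."""
    words = status.lower().split()
    if not words:
        return 0.0, 0.0
    pos = 100 * sum(w in POSITIVE for w in words) / len(words)
    neg = 100 * sum(w in NEGATIVE for w in words) / len(words)
    return pos, neg

print(emotion_word_rates("had a great day love you all"))  # ~(28.6, 0.0)
```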

I assure you, your students are very familiar with Facebook. Additionally, emotional contagion theory is pretty easy to understand. As such, the article itself is accessible and interesting to students.

Pedagogically, the article offers three stats/research methodology lessons (p-values vs. effect sizes, how to create a misleading graph, and informed consent in the age of Big Data).

Lesson 1. This article is a good example of p-values vs. effect sizes. For one analysis, the p-value is gorgeous (< .001) and the effect size is itty-bitty (.001). Guess why? N = 689,003. Here is an additional resource if you want to delve further into p-values vs. effect sizes with your students.
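If you want to demonstrate this in class, a quick simulation (made-up data, not the study's) shows how a difference of a hundredth of a standard deviation becomes wildly "significant" once the sample is Facebook-sized:

```python
# A minimal simulation of how a huge N makes a trivially small effect
# statistically significant. Group means differ by only 0.01 standard
# deviations, but with ~344,500 participants per group, p is tiny.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 344_500  # roughly half of N = 689,003 in each group
control = rng.normal(loc=5.00, scale=1.0, size=n)
treatment = rng.normal(loc=5.01, scale=1.0, size=n)

t, p = stats.ttest_ind(treatment, control)
pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
d = (treatment.mean() - control.mean()) / pooled_sd  # Cohen's d

print(f"p = {p:.2e}, d = {d:.3f}")  # p lands far below .001; d stays ~0.01
```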

Lesson 2. The figures overemphasize the findings because the y-axis doesn't start at zero. See below.
http://www.pnas.org/content/111/24/8788.full.pdf
If you look at the graph in passing, the differences seem...reasonable, especially the one in the upper right-hand quadrant. However, if you look at the actual numbers on the y-axis, the differences are not of great practical value. This disparity also harkens back to the p-value vs. effect size issue, as the statistically significant differences are of very little practical importance.
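If you want students to see the trick for themselves, here is a small sketch (with made-up numbers, not the article's data) that plots the same two bars twice: once with a truncated y-axis and once with a zero baseline.

```python
# Demonstrates how a truncated y-axis exaggerates a tiny difference.
# The two bars differ by well under 1%, but the left panel makes the
# gap look dramatic. Values are hypothetical, for illustration only.
import matplotlib.pyplot as plt

labels = ["Control", "Experimental"]
values = [5.24, 5.27]  # hypothetical "% positive words" group means

fig, (ax_bad, ax_good) = plt.subplots(1, 2, figsize=(8, 3))

ax_bad.bar(labels, values)
ax_bad.set_ylim(5.23, 5.28)   # truncated axis: looks like a big effect
ax_bad.set_title("Misleading: y-axis starts at 5.23")

ax_good.bar(labels, values)
ax_good.set_ylim(0, 6)        # zero baseline: the honest picture
ax_good.set_title("Honest: y-axis starts at 0")

for ax in (ax_bad, ax_good):
    ax.set_ylabel("% positive words")

plt.tight_layout()
plt.show()
```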

Lesson 3. The researchers sorta-kinda obtained informed consent. How did they go about doing so? The researchers argued that the Facebook Terms of Service is a form of consent to experimental research. However, the participants were not aware that they were part of this particular study and were never given the option to opt out of it. Several good pieces have been written on this aspect of the study, including ones in The Washington Post (.pdf here) and The Wall Street Journal (.pdf here). Of particular interest here (to me, at least) is the disconnect between industry statisticians, who crunch numbers and manipulate user experiences EVERY DAY as part of their jobs, and social psychologists, who view the exact same practices through the lens of research ethics and participant rights. This all resulted in the "Editorial Expression of Concern and Correction" that has been appended to the source article. Facebook also claims to have changed its research process as a result of this study (described in the WSJ article).

How I used this in my classes:

I used the graph during the first few weeks of class as an example of how NOT to create a bar graph. I also used the study itself as a review of research methods concepts (IVs, DVs, etc.).

I also used this as a discussion point in my Honors Psychological Statistics class (the topic of the week's discussion was research ethics and this was one of several case studies) and it seemed to engage the students. We discussed User Agreements, practical ways to increase actual reading of User Agreements, and whether or not this was an ethical study (in terms of data collection as well as potential harm to participants).

In the future, I think I'll have my students go over the federal guidelines for informed consent and compare those standards to Facebook's attempt to gain informed consent.

Aside: I learned about this article and how to use it in the classroom at the National Institute for the Teaching of Psychology. Guys, go to NITOP. Totally worth your time and money. Also, family friendly if you and your partner and/or kids would like a trip to Florida in early January. NITOP is accepting proposals for various submission formats until October 1st (with some exceptions).

Monday, July 20, 2015

"Correlation is not causation", Parts 1 and 2

Jethro Waters, Dan Peterson, Ph.D., Laurie McCollough, and Luke Norton got together and made a pair of animated videos (1, 2) that explain why correlation does not equal causation and how we can perform lab research to determine whether causal relationships exist.

I like them a bunch. Specific points worth liking:

-Illustrations of scatter plots and significant and non-significant relationships (e.g., the data do not support the idea that society goes a little crazy during full moons).

-Explains the Third Variable problem, with a simple, pretty illustration of the perennial correlation example: the ice cream sales (X):death by drowning (Y) relationship, and the third variable, hot weather (Z), that drives the relationship. (A simulation sketch of this idea appears after this list.)
-In addition to discussing correlation =/= causation, the video makes suggestions for studying a correlational relationship via more rigorous research methods (here, violent video games:violent behavior), e.g., video games (X) influencing aggression (Y) via the moderator of personality (Z).


The video also describes how one might design a research study to test the video game hypothesis without relying on diary/retrospective data collection.

-Also, they cite actual research in these videos.
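As promised above, here is a minimal simulation of the Third Variable problem (all numbers made up): hot weather drives both ice cream sales and drownings, so the two correlate strongly, but the partial correlation controlling for weather collapses to zero.

```python
# Third-variable demo: Z (hot weather) causes both X (ice cream sales)
# and Y (drownings), so X and Y correlate even though neither causes
# the other. Controlling for Z makes the X-Y relationship vanish.
import numpy as np

rng = np.random.default_rng(0)
n = 365
weather = rng.normal(size=n)                    # Z: daily temperature
ice_cream = 2.0 * weather + rng.normal(size=n)  # X: caused by Z only
drownings = 1.5 * weather + rng.normal(size=n)  # Y: caused by Z only

r_xy = np.corrcoef(ice_cream, drownings)[0, 1]
r_xz = np.corrcoef(ice_cream, weather)[0, 1]
r_yz = np.corrcoef(drownings, weather)[0, 1]

# Partial correlation of X and Y, controlling for Z
r_xy_z = (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

print(f"r(ice cream, drownings)           = {r_xy:.2f}")    # strong, ~.7
print(f"r(ice cream, drownings | weather) = {r_xy_z:.2f}")  # near zero
```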

Special thanks to Rajiv Jhangiani for introducing me to this resource!

Tuesday, July 14, 2015

Free online research ethics training

Back in the day, I remember having to complete an online research ethics course in order to serve as an undergraduate research assistant at Penn State.

I think that such training could be used as an exercise/assessment in a research methods class or an advanced statistics class.

NOTE: These courses are sponsored by American agencies and, thus, teach participants about American laws and regulations. If you have information about similar training in other countries (or other free options for American researchers), please email me and I will add the link.

Online Research Ethics Course from the U.S. Department of Health and Human Services' Office of Research Integrity.

Features: Six different learning modules, each with a quiz and certificate of completion. These sections include separate quizzes on the treatment of human and animal test subjects. Other portions also address ethical relationships between PIs and RAs and broader issues of professional responsibility when reporting results.

National Institutes of Health's Protecting Human Research Participants

Features: Requires free registration. Four different learning modules, each with a quiz. This one includes lessons about the historical need for IRBs, the Belmont Report, and the need to respect and protect our participants. It also provides a certificate at the end of the training.