Monday, December 5, 2016

Christie Aschwanden's "You Can’t Trust What You Read About Nutrition"


FiveThirtyEight provides lots of beautiful pictures of spurious correlations found in their own in-house study.
At the heart of this article are the limitations of a major tool used in nutritional research, the Food Frequency Questionnaire (FFQ). The author does a mini-study, enlisting the help of several co-workers and fivethirtyeight.com readers. They track their own food for a week and reflect on how difficult it is to properly estimate and recall food intake (perhaps a mini-experiment you could do with your own students?).

And she shares the spurious correlations she found in her own mini-research:



Aschwanden also discusses how much noise and lack of consensus there is in real, published nutritional research (a good argument for why we need replication!):

http://fivethirtyeight.com/features/you-cant-trust-what-you-read-about-nutrition/

How to use in class:
-Shortcomings of survey research, especially survey research that relies on accurate memories
-Spurious correlations (and p-values!), as in the simulation sketch below this list
-Correlation does not equal causation
-Why replication is necessary
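If you want to make the spurious-correlation point hands-on, here is a minimal Python sketch (my own illustration, not from the article; the variable names and sample sizes are invented) that generates pure-noise "FFQ" data and counts how many food-outcome correlations come out "significant" at p < .05 anyway:

# A classroom simulation; all numbers below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(538)
n_people, n_foods, n_outcomes = 54, 20, 10           # hypothetical survey dimensions

foods = rng.normal(size=(n_people, n_foods))          # pure noise "FFQ" items
outcomes = rng.normal(size=(n_people, n_outcomes))    # pure noise "health" outcomes

false_positives = 0
for i in range(n_foods):
    for j in range(n_outcomes):
        r, p = stats.pearsonr(foods[:, i], outcomes[:, j])
        if p < .05:
            false_positives += 1

total = n_foods * n_outcomes
print(f"{false_positives} of {total} food-outcome correlations were 'significant' "
      f"even though everything was random noise (about {0.05 * total:.0f} expected).")

Students can rerun it with a different seed and watch a different batch of "findings" appear, which also sets up the replication point nicely.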


Also included is an amusing video that shows what it is like to be a participant in a nutrition study. It details the FFQ, or Food Frequency Questionnaire. And the video touches on serving sizes and portions, and how difficult it may be for many of us to properly estimate (per the example) how many cups of spare ribs we consume per week.

Monday, November 28, 2016

Teaching the "new statistics": A call for materials (and sharing said materials!)

This blog is usually dedicated to sharing ideas for teaching statistics. And I will share some ideas for teaching. But I'm also asking you to share YOUR ideas for teaching statistics. Specifically, your ideas for teaching the new statistics: effect size, confidence intervals, etc.

The following email recently came across the Society for the Teaching of Psychology listserv from Robert Calin-Jageman (rcalinjageman@dom.edu).

"Is anyone out there incorporating the "New Statistics" (estimation, confidence intervals, meta-analysis) into their stats/methods sequence?
I'm working with Geoff Cumming on putting together an APS 2017 symposium proposal on teaching the New Statistics.  We'd love to hear back from anyone who has already started or is about to.  Specifically, we'd love to:
        * Collect resources you'd be willing to share (syllabi, assignments, etc.)
        * Collect narratives of your experience (the good, the bad, the unexpected)
        * Know what tips/suggestions you might have for others embarking on the transition
We'll use responses to help shape our symposium proposal (and if you're interested in possibly joining, let us know).
In addition, we're curating resources, tips, on a "Getting started teaching the New Statistics" page on the OSF : https://osf.io/muy6u/"


I'll start by sharing a few examples I have successfully used in class and have previously blogged about. Here is a post about a Facebook research study (Kramer, Guillory, & Hancock, 2014) that demonstrates how large sample sizes lead to statistical significance but very small effect sizes. This study also demonstrates how to mislead with graphs, and it raises the debate over whether Terms of Service agreements are the same thing as informed consent.
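To make that significance-versus-effect-size point concrete, here is a small Python simulation (the numbers are invented stand-ins, not the actual Kramer et al. data): with hundreds of thousands of participants per condition, even a trivial mean difference yields a tiny p-value while Cohen's d stays negligible.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2014)
n_per_group = 300_000                                # hypothetical group size

control = rng.normal(loc=5.00, scale=2.0, size=n_per_group)
treated = rng.normal(loc=5.03, scale=2.0, size=n_per_group)  # tiny true shift

t, p = stats.ttest_ind(treated, control)

pooled_sd = np.sqrt((treated.var(ddof=1) + control.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

print(f"t = {t:.2f}, p = {p:.2g}, Cohen's d = {cohens_d:.3f}")
# Typical run: p far below .05, d around 0.015; statistically significant,
# practically meaningless.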

And I use this Colbert interview with Daryl Bem, in which Bem is basically arguing for p-values without ever saying "p-values," and Colbert is arguing for effect size/clinical significance without ever saying those words. I follow up this video by sharing a table from the much-debated Bem (2011) JPSP article that displays, again, small p-values and small effect sizes. NOTE: This interview is about the Bem research that used erotic imagery as stimuli, so the tone of the interview might be a little racy for in-class use at some universities/high school statistics classes.

Finally, I use Kristoffer Magnusson's website to illustrate quite a few statistical principles, including Cohen's d.
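For instructors who want numbers to go with the visualization, here is a rough sketch (my own, not Magnusson's code) of the standard normal-theory quantities his interactive Cohen's d page displays:

from scipy.stats import norm

def describe_cohens_d(d):
    """Common interpretations of a standardized mean difference d."""
    u3 = norm.cdf(d)                       # Cohen's U3: share of group 2 above group 1's mean
    overlap = 2 * norm.cdf(-abs(d) / 2)    # overlapping area of the two distributions
    superiority = norm.cdf(d / 2 ** 0.5)   # P(random group-2 score beats a group-1 score)
    return u3, overlap, superiority

for d in (0.2, 0.5, 0.8):                  # Cohen's small/medium/large benchmarks
    u3, ovl, ps = describe_cohens_d(d)
    print(f"d = {d}: U3 = {u3:.0%}, overlap = {ovl:.0%}, "
          f"probability of superiority = {ps:.0%}")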

So, I am sharing this here to reach out to all of you statistics instructors: 1) to see if you are interested in sharing your ideas for the APS symposium/OSF resource, 2) to encourage you to look out for the symposium if you are attending APS next year, 3) to alert you to the great OSF resource listed above, and, in the spirit of the holiday season, 4) to share, share, share.

Monday, November 21, 2016

Chokshi's "How Much Weed Is in a Joint? Pot Experts Have a New Estimate"

Alright, stick with me. This article is about marijuana dosage, and it provides good examples of how researchers go about quantifying their variables in order to properly study them. The article also highlights the importance of Subject Matter Experts in the process and how one research question can have many stakeholders.

As the title states, the main question raised by this article is "How much weed is in a joint?" Why is this so important? Researchers in medicine, addictions, developmental psychology, criminal justice, etc. are trying to determine how much pot a person is actually consuming when most drug use surveys measure marijuana use by the joint.

How to use in a statistics class: