Monday, November 28, 2016

Teaching the "new statistics": A call for materials (and sharing said materials!)

This blog is usually dedicated to sharing ideas for teaching statistics. And I will share some ideas for teaching. But I'm also asking you to share YOUR ideas for teaching statistics. Specifically, your ideas for teaching the new statistics: effect size, confidence intervals, etc.

The following email recently came across the Society for the Teaching of Psychology listserv from Robert Calin-Jageman:

"Is anyone out there incorporating the "New Statistics" (estimation, confidence intervals, meta-analysis) into their stats/methods sequence?
I'm working with Geoff Cumming on putting together an APS 2017 symposium proposal on teaching the New Statistics.  We'd love to hear back from anyone who has already started or is about to.  Specifically, we'd love to:
        * Collect resources you'd be willing to share (syllabi, assignments, etc.)
        * Collect narratives of your experience (the good, the bad, the unexpected)
        * Know what tips/suggestions you might have for others embarking on the transition
We'll use responses to help shape our symposium proposal (and if you're interested in possibly joining, let us know).
In addition, we're curating resources and tips on a "Getting started teaching the New Statistics" page on the OSF:"

I'll start by sharing two examples I have successfully used in class and have previously blogged about. Here is a post about a Facebook research study (Kramer, Guillory, & Hancock, 2014) that demonstrates how large sample sizes can produce statistical significance even with very small effect sizes. The study also demonstrates how to mislead with graphs, and it raises the debate over whether or not Terms of Service agreements are the same thing as informed consent.
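To let students see that large-n point for themselves, here is a minimal simulation (my own sketch, not the Facebook study's actual analysis): two groups that differ by a trivial 0.02 standard deviations still produce a microscopic p-value once the samples are huge.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                         # enormous sample, in the spirit of the Facebook study
a = rng.normal(0.00, 1.0, n)          # control group
b = rng.normal(0.02, 1.0, n)          # true difference: 0.02 SD -- trivially small

# z-test on the difference in means (the normal approximation is fine at this n)
se = math.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
z = (b.mean() - a.mean()) / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Cohen's d: the mean difference in pooled-standard-deviation units
d = (b.mean() - a.mean()) / math.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
print(f"p = {p:.2g}, d = {d:.3f}")    # p is microscopic, d is tiny
```

Students can shrink `n` and watch the p-value climb while the effect size barely moves, which is exactly the lesson.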

And I use this Colbert interview with Daryl Bem, in which Bem is basically arguing for p-values without ever saying "p-values," and Colbert is arguing for effect size/clinical significance without ever saying those words. I follow up the video by sharing a table from the much-debated Bem (2011) JPSP article that displays, again, small p-values and small effect sizes. NOTE: This interview concerns the Bem research that used erotic imagery as stimuli, so the tone of the interview might be a little racy for in-class use at some universities/high school statistics classes.

Finally, I use Kristoffer Magnusson's website to illustrate quite a few statistical principles, including Cohen's d.

So, I am sharing this here to reach out to all of you statistics instructors: 1) to see if you are interested in sharing your ideas for the APS symposium/OSF resource, 2) to encourage you to look out for the APS symposium if you are attending next year, 3) to alert you to the great OSF resource listed above, and, in the spirit of the holiday season, 4) to urge you to share, share, share.

Monday, November 21, 2016

Chokshi's "How Much Weed Is in a Joint? Pot Experts Have a New Estimate"

Alright, stick with me. This article is about marijuana dosage, and it provides good examples of how researchers go about quantifying their variables in order to study them properly. The article also highlights the importance of subject matter experts in the process, and how one research question can have many stakeholders.

As the title states, the main question raised by this article is "How much weed is in a joint?" Why is this so important? Researchers in medicine, addictions, developmental psychology, criminal justice, etc. are trying to determine how much pot a person is probably smoking, since most drug use surveys measure marijuana use by the joint. How to use in a statistics class:

Wednesday, November 16, 2016

The Onion's "Study: Giving Away “I Voted” Burger Instead Of Sticker Would Increase Voter Turnout By 80%"


A very funny example of conflict of interest, as this satirical study was sponsored by Red Robin. Click through to the original content to read how the study replaced "I Voted" stickers with "thick Red Robin Gourmet Cheeseburger complete with pickle relish, tomatoes, onions, lettuce, mayonnaise, and their choice of cheese".

Monday, November 14, 2016

Johnson & Wilson's The 13 High-Paying Jobs You Don’t Want to Have

This is a lot of I/O and personality and a little bit of stats. But it does demonstrate correlation and percentiles, and it is interactive.

For this article from Time, Johnson and Wilson used participants' scores on a very popular vocational selection tool, the Holland Inventory (sometimes called the RIASEC), along with participant salary information, to see if there is a strong relationship between salary and personality-job fit. There is not.

How to use in class:

-Show your students what a weak correlation looks like when expressed via scatter plot. Seriously. I spend a lot of time looking for examples for teaching statistics. And there are all sorts of significant positive and negative correlation examples out there. But good examples of non-relationships are a lot rarer.

-If you teach I/O, this fits nicely into personality-job fit lecture. If you don't teach I/O but are a psychologist, this still applies to your field and may introduce your students to the field of I/O.

-This example is interactive in a few ways. Johnson and Wilson got their data from a previous Time article that included the RIASEC survey. The survey, via Time, also returns the respondents' results. That makes the example more self-relevant, and it also gives your students a bit of vocational advice.

Additionally, there is a search feature so that you can look up a job title and find the personality-job fit percentile for that job. Or, you can cursor over any of the dots on the scatter plot to get the job title, salary, and personality-job fit for that job title.
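If you want a quick in-class companion to that scatter plot, you can simulate what "no relationship" looks like. This is invented data, not Time's actual numbers: salaries and fit percentiles are generated independently, so the true correlation is zero.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
salary = rng.normal(50_000, 10_000, n)  # made-up salaries
fit = rng.uniform(0, 100, n)            # made-up fit percentiles, drawn independently

# With independent variables, r should hover near zero
r = np.corrcoef(salary, fit)[0, 1]
print(f"r = {r:.3f}")
```

Plotting `salary` against `fit` with matplotlib's `scatter` gives exactly the kind of shapeless cloud the Time graphic shows, which students rarely get to see next to the usual textbook correlations.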

Wednesday, November 9, 2016

CNN, exit polls, and chi-square examples.

CNN posted a whole mess of exit polling data that illustrates how different demographics voted last night. And through my "I teach too many stats classes" lens, I see many examples of chi-square.

I think they work at a conceptual level to clearly illustrate how chi-square looks at people falling into different categories, then tests whether the distribution of people across those categories differs from what chance alone would predict.

If you actually wanted to test these using chi-square, I would suggest you first delete the other/no answer column (or else they will all come out as statistically significant, I would wager).
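For students who want to see the arithmetic, here is the chi-square test of independence computed by hand on a hypothetical exit-poll-style table (these counts are invented, not CNN's actual numbers):

```python
# Hypothetical counts: rows = men/women, columns = candidate A/candidate B
observed = [[520, 480],
            [450, 550]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# chi-square = sum over cells of (observed - expected)^2 / expected
chi_sq = 0.0
for i in range(2):
    for j in range(2):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_sq += (observed[i][j] - expected) ** 2 / expected

print(f"chi-square = {chi_sq:.2f}")  # 9.81, well above 3.84, the df = 1 critical value at alpha = .05
```

Walking through the expected counts (here, 485 and 515 in each row) makes the "what chance would predict" idea concrete before students ever touch software.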

EDIT (11/14/16): Daniel Findley made a video demonstrating how to use Excel to conduct chi-square tests on the marital status data. Check it out here.

Monday, November 7, 2016

Collin's "America’s most prolific wall punchers, charted"

Collin gleaned some archival data about ER visits in America from the US Consumer Product Safety Commission. For each ER visit, there is a brief description of the reason for the visit. Collin queried punching-related injuries. His Method section, below, describes how he set the parameters for his operationalized variable. With a bit of explaining, you could also describe how Collin took qualitative data (the written description of the injury) and converted it into quantitative data:

Then he made some charts.

The age distribution of wall punchers is right skewed, and it could probably be used in a Developmental Psychology class to illustrate poor judgment in adolescents as well as the emergence of the prefrontal cortex/executive thinking skills in one's early 20s.

The author also looked at wall punching by month of the year and uncovered a fairly uniform distribution.

How to use in class:
-Taking qualitative data and coding it (here, turning "ER visit" into "Wall Punch: Yes or No")
-Uniform distributions
-Method sections
-Archival data
-Criteria for operationalizing a variable when coding data
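That first bullet, coding qualitative narratives into a quantitative variable, can be sketched in a few lines. These narratives are invented stand-ins for the CPSC descriptions, and the keyword rule is a crude stand-in for Collin's actual query:

```python
# Made-up ER narratives (not the actual CPSC data)
narratives = [
    "23YOM PUNCHED A WALL AFTER ARGUMENT, HAND FRACTURE",
    "45YOF TRIPPED ON RUG, WRIST SPRAIN",
    "17YOM PUNCHED WALL AT SCHOOL, BROKEN 5TH METACARPAL",
    "30YOM LACERATED FINGER SLICING BAGEL",
]

def is_wall_punch(description):
    """Crude inclusion criteria: the narrative mentions both punching and a wall."""
    text = description.upper()
    return "PUNCH" in text and "WALL" in text

coded = [is_wall_punch(n) for n in narratives]
print(coded)                                            # [True, False, True, False]
print(sum(coded), "wall punches out of", len(coded), "visits")
```

Having students argue over the inclusion criteria (does "HIT WALL WITH FIST" count?) is a nice way into the last bullet about operationalizing a variable.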