Friday, January 8, 2021

Chi-square Test of Independence using CNN exit polling data

If you are trying to explain the Chi-Square Test of Independence to your students, here are some timely examples that are political but not polarizing. Well, I don't think they are polarizing. I'm sure there are people out there who disagree. Maybe some of the questions are polarizing? Regardless, it is nice to have an example that uses a current event with easy-to-understand data.

The example comes from CNN. The network conducted exit polling during the 2020 presidential election. I'm sure they didn't intend to provide us with a bunch of chi-square examples, but here we are.

Essentially, CNN divided Biden and Trump voters into many categories with not a parameter to be had. I have included a few of the tables here, but there are many others on the website.

They illustrate different designs (2x2, 2x3, 2x4, etc.) and different magnitudes of difference between expected and observed values. 
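If you want students to run the numbers themselves, here is a minimal sketch of how one of these tables could be analyzed in Python with scipy. The counts below are made-up placeholders, not CNN's actual exit poll figures; swap in the real numbers from the website.

```python
# Chi-square test of independence on a 2x2 table of exit poll counts.
# NOTE: these counts are hypothetical placeholders, NOT CNN's data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows = gender (men, women); columns = candidate (Biden, Trump)
observed = np.array([
    [450, 550],   # men:   Biden, Trump (hypothetical)
    [570, 430],   # women: Biden, Trump (hypothetical)
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
print("expected counts:\n", expected.round(1))
```

With the real exit poll percentages, students would first convert them to approximate counts using the reported sample size, which is a nice little exercise in its own right.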






Monday, December 21, 2020

We should teach intro stats students about relative vs. absolute risk

You know what bugs me? How much time different intro stats textbooks spend talking about probability: lots of A-not-B stuff*, lots of probability associated with the normal distribution, etc. But we don't take advantage of the discussion to warn our students about the evils of relative vs. absolute risk. #statsliteracy

Relative risk is the most clickbaity abuse of statistics that there is. Well, maybe causal claims based on correlational data are more common. But I think relative risk is used to straight-up scare people, possibly changing their behaviors and choices.

I thought of it most recently when The Daily Mail (bless) used relative risk to explain the difference in COVID-19 risk between dog owners and non-dog owners.


Here is the data described in the headline, straight from the original paper:


Really, Daily Mail? How dare you.

I think the cleverest, trickiest, sneakiest ways to mislead with data don't involve lying with data at all. Most truncated y-axes display actual data. Data sets with sampling error are still data sets. And relative risk and absolute risk express the same data, but relative risk sounds scary while absolute risk doesn't.
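A tiny worked example makes the framing trick obvious. The risks below are invented for illustration; they are not the numbers from the dog-ownership paper or the Daily Mail headline.

```python
# Same two proportions, framed as relative vs. absolute risk.
# These risks are hypothetical, not the dog-owner study's results.
risk_exposed = 0.006      # hypothetical: 0.6% of dog owners infected
risk_unexposed = 0.004    # hypothetical: 0.4% of non-owners infected

relative_risk = risk_exposed / risk_unexposed        # 1.5 -> "50% higher risk!"
absolute_difference = risk_exposed - risk_unexposed  # 0.002 -> 0.2 percentage points

print(f"Relative risk: {relative_risk:.2f} (the '50% increase' headline)")
print(f"Absolute difference: {absolute_difference:.3%} (about 2 extra cases per 1,000 people)")
```

Same data, two very different headlines.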

Another example I like involves Gerd Gigerenzer, the well-known cognitive psychologist. In this video, he describes an instance in which relative risk headlines scared women away from oral contraceptives, which was followed by a rise in abortions. All of which could have been avoided entirely if women had been given absolute risk information about birth control pills.

You can take issue with Gerd's claim that abortion negatively affects women, but I think that most people would agree that women being scared out of taking their birth control pills is a bad thing. And that newspapers need to be responsible, and editors must avoid using relative risk to scare readers.


I know we have a lot of ground to cover in Intro Stats. But we are doing our students a disservice if we aren't preparing them to deal with the dirty data they will so often encounter in real life. This topic doesn't take long to cover: you already have to talk about probability, the video is short, and the idea is easy to explain.


*And probability has its time and place. Go read The Drunkard's Walk. You'll love it. But I just sit and think a lot about how we should use our precious Intro to Stats time. And I think we should point out how classroom topics play out in real life. 

Monday, December 14, 2020

Ways to use funny meme scales in your stats classes

Have you ever heard of the theory that there are multiple people worldwide thinking about the same novel thing at the same time? It is the multiple discovery hypothesis of invention. Like, multiple great minds around the world were working on calculus at the same time.

Well, I think a bunch of super-duper psychology professors were all thinking about scale memes and pedagogy at the same time. Clearly, this is just as impressive as calculus.


Who were some of these great minds?

1) Dr. Molly Metz maintains a curated list of hilarious "How you doing?" scales.

2) Dr. Esther Lindenström posted about using these scales as student check-ins.

3) I was working on a blog post about using such scales to teach the basics of variables.


So, I decided to create a post about three ways to use these scales in your stats classes: 

1) Teaching the basics of variables.

2) Nominal vs. ordinal scales. 

3) Daily check-in with your students. 


1. Teach your students the basics of variables

Here is a slide I use in my own classes to introduce scales, response options, scores, and anchors.


I also use this particular scale because it illustrates the fact that all scales need a "Not Applicable" option. Vegetarians, Jews, and Muslims don't want any of this bacon, thankyouverymuch. The lack of an n/a option is my psychometric pet peeve.

If you are a stats professor who uses a Beginning of the Year Survey, you could add one of these scales to that survey and then use it in class the way I do.

When I use the silly bacon scale, I usually pair it with the more widely used Wong-Baker scale to show that images can be used as anchors in serious contexts for serious reasons. Such anchors are helpful for non-English speakers, for injured patients who can't speak, and when you need to operationalize the abstract. Many of your students have seen the Wong-Baker scale, so you are taking your silly example and grounding it in reality.

2. Teach your students the difference between nominal and ordinal data:

Some of these scales are ordinal, like Dr. Karen Errichetti's scale of Fauci:


Other scales are nominal, like this scale of cat:




Do your students see the difference? When I teach my students nominal versus ordinal, I teach them that order matters with ordinal. The Fauci scale runs from "Super happy Zoom moment" to "What the hell did that guy just say?" face. On the other hand, the cats are expressing many different feelings, but not necessarily from smallest to largest of the same emotion.
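If your students are also picking up a little software, here is a small sketch of how that distinction shows up in code, using pandas categoricals. The category labels are my own shorthand paraphrases of the meme scales, not anything official.

```python
import pandas as pd

# Ordinal: the Fauci faces run from best to worst, so order is meaningful.
fauci = pd.Series(pd.Categorical(
    ["great", "ok", "worried", "what did he just say", "ok"],
    categories=["great", "ok", "worried", "what did he just say"],
    ordered=True,
))
print(fauci.min(), "->", fauci.max())   # min/max respect the defined order

# Nominal: the cats express different feelings with no smallest-to-largest order.
cats = pd.Series(pd.Categorical(
    ["sleepy", "chaotic", "smug", "confused", "smug"],
    ordered=False,
))
print(cats.value_counts())   # counting categories is fine; min/max would be meaningless
```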

3. Care about your students and use these scales to check in with students AND take attendance.

During the Fall 2020 semester, I didn't have attendance points, but I had daily Microassignments, which students could complete within 24 hours of the class time. 

Sometimes, the Microassignments were review questions covering the previous lecture.

Sometimes, they asked students to enter the degrees of freedom or an effect size from an example we did in class.

Sometimes, I asked my students to report their current mood using one of these silly scales. The students needed to give me the number that corresponded with their mood and at least one sentence describing why. I told them the one sentence could be "None of your business". 

(I could also see setting up a multiple-choice question in Bb, gathering the data from students, and sharing descriptive data with the class.)
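For instance, here is a rough sketch of how you might summarize those responses. The file name and column name are hypothetical and would depend on how your LMS exports the question.

```python
# Summarize class mood check-ins for sharing back with students.
# "mood_checkin_export.csv" and "mood_rating" are hypothetical names --
# adjust to match your own Bb (or other LMS) export.
import pandas as pd

responses = pd.read_csv("mood_checkin_export.csv")
mood = responses["mood_rating"]                     # e.g., a 1-5 scale point

print(mood.describe())                    # n, mean, SD, quartiles for the class
print(mood.value_counts().sort_index())   # how many students picked each scale point
```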

The scales made some students smile. They provided a chance for students to type "I'm feeling OK" or "I have three exams this week, and I can't sleep" or "I just found out that I was in contact with someone with COVID and I'm being tested tomorrow". 

In any event, if you do a check-in with your students using one of these scales, I would recommend grading them promptly. You never know when one of your students needs a little encouragement, empathy, or a referral to your student counseling center or student success center.