
Posts

Izadi's "Black Lives Matter and America’s long history of resisting civil rights protesters"

Elahe Izadi, writing for The Washington Post, shared polling data from the 1960s. The data focused on public opinion about different aspects of the civil rights movement (the March on Washington, the Freedom Riders, etc.). The old data was used to draw parallels between the mixed support for the Civil Rights Movement of the 1960s and the mixed support for current civil rights protests, specifically Black Lives Matter. Here is the Washington Post story on the polling data, the civil rights movement, and Black Lives Matter. The story is the source of all the visualizations below. Here is the original polling data. https://img.washingtonpost.com/wp-apps/imrs.php?src=https://img.washingtonpost.com/blogs/the-fix/files/2016/04/2300-galluppoll1961-1024x983.jpg&w=1484 https://img.washingtonpost.com/wp-apps/imrs.php?src=https://img.washingtonpost.com/blogs/the-fix/files/2016/04/2300-galluppoll1963-1024x528.jpg&w=1484 I think this is timely data. And...

Yau's "Divorce and Occupation"

Nathan Yau, writing for Flowing Data, provides a good example of correlation, the median, and correlation not equaling causation in his story, "Divorce and Occupation". Yau looked at the relationship between occupation and divorce in a few ways. He used a variation on the violin plot to illustrate how each occupation's divorce rate falls around the median divorce rate. Who has the lowest rate? Actuaries. They really do know how to mitigate risk. You could also discuss why the median divorce rate is provided instead of the mean divorce rate. Again, the actuaries deserve attention, as they would probably throw off the mean. https://flowingdata.com/2017/07/25/divorce-and-occupation/ He also looked at how salary relates to divorce, which can serve as a good example of a linear relationship: the more money you make, the lower your chances of divorce. And an intuitive exception to that trend? Clergy members. https://flowingdata.com/2017/07/25/divorce...
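If you want to make that mean-versus-median point concrete in class, here is a minimal Python sketch. The divorce rates are invented for illustration (not Yau's actual numbers); the one actuary-like outlier drags the mean down while leaving the median alone:

```python
# Hypothetical divorce rates (%) for seven occupations.
# The numbers are invented for illustration, not Yau's data.
from statistics import mean, median

rates = [17.0, 30.0, 33.0, 35.0, 37.0, 38.0, 40.0]  # 17% is the actuary-like outlier

print(f"mean:   {mean(rates):.1f}")   # pulled down by the low outlier
print(f"median: {median(rates):.1f}")  # unaffected by how extreme the outlier is
```

Students can make the outlier even more extreme (say, 5%) and watch the mean move while the median stays put.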

Teach t-tests via "Waiting to pick your baby's name raises the risk for medical mistakes"

So, I am very pro-science, but I have a soft spot in my heart for medical research that improves medical outcomes without actually requiring medicine, expensive interventions, etc. And after spending a week in the NICU with my youngest, I'm doubly fond of ways of helping the littlest and most vulnerable among us. One such example was published in the journal Pediatrics and written up by NPR. In this case, researchers found that fewer mistakes are made when not-yet-named NICU babies are given more distinct rather than less distinct temporary names. Unnamed babies are an issue in the NICU: babies can be born very early or under challenging circumstances, and their parents aren't ready to name them yet. Traditionally, hospitals would use the naming convention "BabyBoy Hartnett," but several started using "JessicasBoy Hartnett" as part of this intervention. So, distinct first and last names instead of just last names. They measured patie...
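If you use this story to teach t-tests, an independent-samples t-test computed by hand might look like the sketch below. The monthly error counts are invented for illustration (they are not the study's data), and the grouping is just the comparison the study implies:

```python
# By-hand independent-samples t-test with pooled variance.
# Monthly wrong-patient error counts are invented, not the study's data.
from statistics import mean, variance
from math import sqrt

generic  = [12, 15, 11, 14, 13, 16, 12, 15]  # "BabyBoy Hartnett" convention
distinct = [8, 10, 7, 9, 11, 8, 9, 10]       # distinct-name convention

n1, n2 = len(generic), len(distinct)
pooled_var = ((n1 - 1) * variance(generic) + (n2 - 1) * variance(distinct)) / (n1 + n2 - 2)
t = (mean(generic) - mean(distinct)) / sqrt(pooled_var * (1 / n1 + 1 / n2))
print(f"t({n1 + n2 - 2}) = {t:.2f}")  # a large positive t: fewer errors with distinct names
```

Working through the pooled-variance formula by hand like this, before handing students software, tends to demystify where the t statistic comes from.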

Hedonometer.org

The Hedonometer measures the overall happiness of Tweets on Twitter. It provides a simple, engaging example for Intro Stats since the data is graphed over time, color-coded for the day of the week, and interactive. I think it could also be a much deeper example for a Research Methods class, as the "About" section of the website reads like a journal article's methods section, in that the Hedonometer creators describe their entire process for rating Tweets. This is what the basic table looks like. You can drill into the data by picking a year or a day of the week to highlight. You can also use the sliding scale along the bottom to specify a time period. The website is also kept very up to date, so it is a very topical resource. (Pictured: data for the white supremacist attack in VA.) In the page's "About" section, they address many methodological questions your students might raise about this tool. It is a good example for the process researchers go ...
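If students ask how a single happiness number falls out of millions of Tweets, the gist (as the "About" section describes it) is a frequency-weighted average of per-word happiness ratings. Here is a toy sketch of that idea; the words, ratings, and counts are all invented for illustration, not the Hedonometer's actual word list:

```python
# Toy version of a Hedonometer-style score: average per-word happiness
# ratings, weighted by word frequency. All values invented for illustration.
happiness = {"love": 8.4, "happy": 8.3, "the": 5.0, "war": 2.0}
counts    = {"love": 3, "happy": 1, "the": 10, "war": 2}

score = sum(happiness[w] * counts[w] for w in counts) / sum(counts.values())
print(f"average happiness: {score:.2f}")
```

Changing a few counts (more "war", less "love") and recomputing is a quick way to show why the real graph dips on days with bad news.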

Sonnad and Collin's "10,000 words ranked according to their Trumpiness"

I finally have an example of Spearman's rank correlation to share. This is a political example, looking at how Twitter language usage differs in US counties based upon the proportion of votes that Trump received. The example was created by Jack Grieve, a linguist who uses archival Twitter data to study how we speak. Previously, I blogged about his work analyzing which obscenities are used in different zip codes in the US. He created maps of his findings, color-coded by the z-score for the frequency of each word. So, a z-score example. Southerners really like to say "damn." On Twitter, at least. But on to the Spearman's example. More recently, he conducted a similar analysis, this time looking for trends in word usage based on the proportion of votes Trump received in each US county. NOTE: The screen shots below don't do justice to the interactive graph. You can cursor over any dot to view the word as well as the cor...
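For students who want to see the mechanics, here is a minimal sketch of Spearman's rho computed from ranks, using the formula for untied ranks. The county-level numbers are invented for illustration, not Grieve's data:

```python
# Spearman's rho by hand: rank each variable, then apply
# rho = 1 - 6*sum(d^2) / (n*(n^2 - 1))  (valid when there are no tied ranks).
# Values invented for illustration, not the actual county data.

trump_share = [0.72, 0.65, 0.51, 0.44, 0.30]  # share of votes for Trump
word_freq   = [9.1, 7.4, 8.0, 3.2, 2.5]       # uses per 10k words of some term

def ranks(xs):
    """Return the rank (1 = smallest) of each value, assuming no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

rx, ry = ranks(trump_share), ranks(word_freq)
n = len(rx)
d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
rho = 1 - 6 * d2 / (n * (n ** 2 - 1))
print(f"rho = {rho:.2f}")
```

The nice teaching point: Spearman's only sees the ordering, so making the top county's word frequency wildly larger changes nothing, which is exactly why it suits skewed Twitter counts.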

The Economist's "Ride-hailing apps may help to curb drunk driving"

I think this is a good first-day-of-class example. It shows how data can make a powerful argument, that argument can be persuasively illustrated via data visualization, AND, maybe, it is a soft sell of a way to keep your students from drunk driving. It also touches on issues of public health, criminal justice, and health psychology. This article from The Economist succinctly illustrates the decrease in drunk-driving incidents over time using graphs. The article is based on a working paper by PhD student Jessica Lynn (name twin!) Peck. https://cdn.static-economist.com/sites/default/files/imagecache/640-width/20170408_WOC328_2.png Also, maybe your students could brainstorm third variables that could explain the change. Also, New Yorkers: What's the deal with Staten Island? Did they outlaw Uber? Love drunk driving?

Kim Kardashian-West, Buzzfeed, and Validity

So, I recently shared a post detailing how to use the Cha-Cha Slide in your Intro Stats class. Today? Today, I will provide you with an example of how to use Kim Kardashian to explain test validity. So. Kim Kardashian-West stumbled upon a Buzzfeed quiz that will determine if you are more of a Kim Kardashian-West or more of a Chrissy Teigen. She Tweeted about it, see below. https://twitter.com/KimKardashian/status/887881898805952514 And she went and took the test, BUT SHE DIDN'T SCORE AS A KIM!! SHE SCORED AS A CHRISSY! See below. https://twitter.com/KimKardashian/status/887882791488061441 So, this test purports to assess one's Kim Kardashian-West-ness or one's Chrissy Teigen-ness. And it failed to measure what it claimed to measure, as Kim didn't score as a Kim. So, not a valid measure. No word on how Chrissy scored. And if you teach people in their 30s, you could always use this example of the time Garbage's Shirley Manson...

Hickey's "The Ultimate Playlist Of Banned Wedding Songs"

I think this blog just peaked. Why? I'm giving you a way to use the Cha-Cha Slide ("Everybody clap your hands!") as a tool to teach basic descriptive statistics. Here is a list of the most frequently banned-from-weddings songs: Most Intro Stats teachers could use this within the first week of class to describe rank-order data, interval data, qualitative data, quantitative data, and the author's choice of percentage frequency data instead of straight frequency. Additionally, Hickey, writing for FiveThirtyEight, surveyed two dozen wedding DJs about the songs banned at some 200 weddings, so you can chat about research methodology as well. Finally, as a Pennsylvanian, it makes me so sad that people ban the Chicken Dance! How can you possibly dislike the Chicken Dance enough to ban it? Is this a class thing?
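That frequency-versus-percentage-frequency distinction is easy to demo in a few lines. A toy sketch with real song titles but invented ban counts (not Hickey's survey results):

```python
# Raw frequencies vs. percentage frequencies for a hypothetical DJ survey.
# Song titles are real; the counts are invented for illustration.
from collections import Counter

bans = ["Chicken Dance", "Cha-Cha Slide", "Chicken Dance",
        "Cupid Shuffle", "Chicken Dance", "Cha-Cha Slide"]

counts = Counter(bans)
total = sum(counts.values())
for song, n in counts.most_common():
    print(f"{song:15s} {n:2d}  {100 * n / total:.0f}%")
```

Percentages let you compare across surveys of different sizes, which is presumably why Hickey reported them instead of raw counts.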

de Frieze's "‘Replication grants’ will allow researchers to repeat nine influential studies that still raise questions"

In my stats classes, we talk about the replication crisis. When introducing the topic, I use this reading from NOBA. I think it is also important for my students to think about how science could create an environment where replication is more valued. The Dutch Organization for Scientific Research has come up with a solution: it is providing grants to nine groups to either 1) replicate famous findings or 2) reanalyze famous findings. This piece from Science details the effort. The Dutch Organization for Scientific Research provides more details on the grant recipients, which include several researchers replicating psychology findings. How to use in class: Again, talk about the replication crisis. Ask your students to generate ways to make replication more valued. Then, give them a bit of faith in psychology/science by sharing this information on how science is on it. From a broader view, this could introduce the idea of grants to your undergraduates or get yo...

Harris's "Scientists Are Not So Hot At Predicting Which Cancer Studies Will Succeed"

This NPR story is about reproducibility in science that ISN'T psychology and about the limitations of expert intuition; it summarizes a recent research article from PLOS Biology (so, open science that isn't psychology, too!). Thrust of the story: Cancer researchers may be having a replication problem similar to psychology's. I've blogged about this issue before, in particular, concerns with replication in cancer research, possibly due to the variability with which lab rats are housed and fed. So, this story is about a study in which 200 cancer researchers, post-docs, and graduate students looked at six pre-registered cancer study replications and guessed which studies would successfully replicate. The participants systematically overestimated the likelihood of replication. However, researchers with high h-indices were more accurate than the general sample. I wonder if the high h-indices uncover super-experts or super-researchers who have be...

Domonoske's "50 Years Ago, Sugar Industry Quietly Paid Scientists To Point Blame At Fat"

This NPR story discusses research detective work published in JAMA. The JAMA article examined a very influential NEJM review article that investigated the link between diet and coronary heart disease (CHD), specifically, whether sugar or fat contributes more to CHD. The review, written by Harvard researchers decades ago, pinned CHD on fatty diets. But the researchers took money from Big Sugar (which sounds like... a drag queen or a CB handle) and communicated with Big Sugar while writing the review. This piece discusses how conflict of interest shaped food research and our beliefs about the causes of CHD for decades, and how conflict of interest and institutional/journal prestige shaped this narrative. It also touches on how industry, namely sugar interests, discounted research finding a sugar:CHD link while promoting and funding research finding a fat:CHD link. How to use in a Research Methods class: -Conflict of interest. The funding received by the researchers from th...

Chris Wilson's "The Ultimate Harry Potter Quiz: Find Out Which House You Truly Belong In"

Full disclosure: I have no chill when it comes to Harry Potter. Despite my great bias, I still think this psychometrically created (with help from psychologists and Time Magazine's Chris Wilson!) Hogwarts House Sorter is a great example of scale building, validity, descriptive statistics, electronic consent, etc. for stats and research methods. How to use in a Research Methods class: 1) The article details how the test drew upon the Big Five inventory. And it talks smack about the Myers-Briggs. 2) The article also uses simple language to give a rough sketch of how statistics were used to pair you with your house: the "standard statistical model" is a regression line, the "affinity for each House is measured independently," etc. While you are taking the quiz itself, there are some RM/statsy lessons: 3) At the end of the quiz, you are asked to contribute some more information. It is a great example of leading response options ...
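Point 2 can be made concrete with a tiny least-squares sketch. This is obviously not Wilson's actual model, just the "regression line" idea with an invented trait score predicting an invented house affinity:

```python
# Minimal least-squares regression line: predicting affinity for one house
# from one personality trait. All numbers invented for illustration.
from statistics import mean

trait    = [2.0, 3.5, 5.0, 6.5, 8.0]  # hypothetical Big Five-style trait score
affinity = [20, 35, 48, 66, 79]       # hypothetical affinity for one house

mx, my = mean(trait), mean(affinity)
slope = (sum((x - mx) * (y - my) for x, y in zip(trait, affinity))
         / sum((x - mx) ** 2 for x in trait))
intercept = my - slope * mx
print(f"affinity ~ {slope:.1f} * trait + {intercept:.1f}")
```

In the real sorter, per the article, each house's affinity is modeled independently, so you can picture four such lines (one per house) and the quiz assigning you to whichever line predicts the highest affinity.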

APA's "How to Be A Wise Consumer of Psychological Research"

This is a nice, concise handout from APA that touches on the main points for evaluating research, in particular, research that has been distilled by science reporters. It may be a bit light for a traditional research methods class, but I think it would be good for the research methods section of most psychology electives, especially if your students are working through source materials. The article mostly focuses on evaluating proper sampling techniques. It also has a good list of questions to ask yourself when evaluating research. This also has the implicit lesson of introducing psychology undergraduates to the APA website and the type of information shared at APA.org (including, but not limited to, this glossary of psychology terms).