
Posts

Showing posts with the label psychometrics

Ranked-choice voting, explained by CNN using ice cream

This one is for all of my psychometric instructors. CNN created an engaging, interactive website that explains ranked-choice voting using ice cream flavor preferences. It was created in response to the 2025 NYC mayoral primary, but it uses ice cream flavors instead of human candidates, which makes for a good explainer that may have a home in your classroom. https://www.cnn.com/interactive/2025/06/politics/ranked-choice-voting-explained-dg/ First, you rank your top five favorite ice cream flavors out of a field of ten. Then, you can view all users' ranking data and see how the distribution changes when the least popular flavor, Rocky Road, is eliminated and the Rocky Road voters' votes are redistributed to their next choices. The redistribution goes on, round after round... Finally, you get to see the winner: chocolate. Ranked-choice voting is one of those concepts that is way, way easier to explain with a bit of animation and a very simple premise. I couldn't capture it in my screenshots, but the flavor elimination and redistribution are...
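If you want to recreate the eliminate-and-redistribute mechanics in class, here is a minimal Python sketch of an instant-runoff tally. The `instant_runoff` function name, the flavors, and the ballots below are all invented for illustration; this is not CNN's data or code.

```python
from collections import Counter

def instant_runoff(ballots):
    """Count each ballot toward its highest-ranked surviving option,
    eliminate the last-place option, and repeat until one option
    holds a majority of the remaining votes."""
    remaining = {choice for ballot in ballots for choice in ballot}
    while True:
        tally = Counter(
            next(c for c in ballot if c in remaining)
            for ballot in ballots
            if any(c in remaining for c in ballot)  # skip exhausted ballots
        )
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes > total / 2 or len(tally) == 1:
            return leader, tally
        # Eliminate the least popular surviving flavor (sorry, Rocky Road).
        remaining.discard(min(tally, key=tally.get))

# Each hypothetical ballot is one voter's ranking, favorite flavor first.
ballots = [
    ["chocolate", "vanilla"],
    ["rocky road", "chocolate"],
    ["vanilla", "strawberry", "chocolate"],
    ["chocolate"],
    ["strawberry", "vanilla"],
]
winner, final_tally = instant_runoff(ballots)
print(winner)  # -> chocolate
```

Run on these five toy ballots, the sketch reproduces the story in miniature: Rocky Road is eliminated in round one, its vote moves to that voter's next choice (chocolate), and chocolate crosses the majority line.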

Sampling Error (Taylor's Version)

Friends. You don't know what finding fun stats blog content has been like over the last few years. All of the data writers/websites I followed were always writing about, explaining, and visualizing COVID or political data (rightfully so). I prefer examples about puppies, lists of songs banned from wedding receptions, and ghosts. Memorable examples stick in my students' heads and don't presuppose any knowledge about psychological theory. Between the lack of silly data and my own life as a professor, mom of two, wife, and friend, my number of posts during The Rona definitely dipped. But now, as the crocuses bloom in Erie, PA, the earth, and I, are finding new life and new examples. Nathaniel Rakich, writing for FiveThirtyEight, produced a whole piece USING TAYLOR SWIFT TO EXPLAIN POLLING/SAMPLING ERRORS. Specifically, this article tackles three different polling firms and how they went about asking Americans which Taylor Swift album is their favorite Taylor Swift album....
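For a companion back-of-the-envelope exercise, the standard margin of error for a simple random sample is z * sqrt(p(1-p)/n). Here is a quick sketch, with poll numbers I made up rather than any of the three firms' actual results:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% confidence half-width for a sample proportion,
    assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 30% of 1,000 fans name the same favorite album.
p, n = 0.30, 1000
print(f"{p:.0%} ± {margin_of_error(p, n):.1%}")  # -> 30% ± 2.8%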

How to investigate click-bait survey claims

Michael Hobbes shared a Tweet from Nick Gillespie about an essay from The Bulwark, a Tweet that plays fast and loose with Likert-type scale interpretation. The way Hobbes and his Twitter followers break down the issues with this headline provides a lesson on how to examine suspicious research clickbait that doesn't pass the sniff test. First off, who says "close to one in four"? And why are they invoking the attempt on Salman Rushdie's life, which did not happen on a college campus and is unrelated to high-profile campus protests of controversial speakers? Hobbes dug into the survey cited in the Bulwark piece. The author of the Bulwark piece interpreted the data by collapsing across response options on a Likert-type response scale, which can be done responsibly, I think. "Very satisfied" and "satisfied" are both happy customers, right? But this is suspicious. Other Twitter users questioned the question and how it may leave room for i...
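A quick way to show students how the collapsing move works: tally some made-up response counts (these are not the cited survey's actual numbers) and watch the headline figure change depending on where you draw the line.

```python
# Hypothetical Likert-type responses to a single survey item (n = 1,000).
counts = {
    "strongly agree": 40,
    "somewhat agree": 190,
    "neither agree nor disagree": 270,
    "somewhat disagree": 300,
    "strongly disagree": 200,
}
total = sum(counts.values())

# Collapsing both "agree" options produces the clickbait-ready figure...
agree = counts["strongly agree"] + counts["somewhat agree"]
print(f"collapsed: {agree / total:.0%} agree")  # -> 23%, "close to one in four"

# ...while strong agreement alone tells a much quieter story.
print(f"strong only: {counts['strongly agree'] / total:.0%}")  # -> 4%
```

Same data, two defensible summaries; the lesson is to ask which response options got lumped together before believing the headline.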

Freakonomics Radio's "America's Math Curriculum Doesn't Add Up"

"I believe that we owe it to our children to prepare them for a world they will encounter, a world driven by data. Basic data fluency is a requirement, not just for most good jobs, but for navigating life more generally." -Steven Levitt Preach it, Steve. This edition of the Freakonomics podcast featured guest host Steven Levitt. He dedicated his episode to providing evidence for an overhaul of America's K-12 math curriculum. He argues that our kids need more information on data fluency. I'm not one to swoon over a podcast dedicated to math curriculums, but this one is about the history of how we teach math, the realities of the pressures our teachers face, and solutions. It is fascinating. You need to sit and listen to the whole thing, but here are some highlights: Our math curriculum was designed to help America fight the Space Race (yes, the one back in the 1960s). For a world without calculators. And not much has changed. Quick idea for teaching regr...

Nextdoor.com's polls: A lesson in psychometrics, crankiness

If you are unaware, Nextdoor.com is a social network that brings together total strangers because they live in the same neighborhood. And it validates your identity and your address, so even though you don't really know these people, you know where they live, what their names are, and maybe even what they look like, since users have the option to upload a photo. Needless to say, it is a train wreck. Sure, people do give away free stuff, seek out recommendations for home improvements, etc. But it is mostly complaining or non-computer-savvy people using computers. One of the things you can do is create a poll. Or, more often, totally screw up a poll. Here are some of my favorites. In the captions, I have given some ideas of how they could be used as examples in RM or psychometrics: "This is actually a pretty good scale." "A lesson in human factors/ease of user interface use?" "Response options are lacking and open to interpretation." "Sometimes, you don't need a poll at a...

A psychometrics mega remix: Hilarious scales and anchors

I am avoiding grading and trying to make this here blog more usable, so I am consolidating all of my funny scale examples into one location. Feast your eyes on this!

https://earther.com/we-finally-know-what-hot-as-balls-really-means-1825713726
http://hyperboleandahalf.blogspot.com/2010/02/boyfriend-doesnt-have-ebola-probably.html
http://notawfulandboring.blogspot.com/2018/01/this-is-very-silly-example-for.html

We Rate Dogs, Psychometrics, and Operationalization.

This is a very silly example for psychometrics. It highlights how hard it is to quantify certain things, but we keep on trying. While psychologists struggle with creating scales to rate things like intelligence, aggression, and anxiety, WeRateDogs struggles with encompassing all that is good about dogs on a 1-10 rating scale. See below. WeRateDogs is a Twitter account. And they rate dogs. And every single dog is rated between 12 and 15 out of 10 points, because every dog is a very good dog. But...I do see the psychometric flaw of their rating system. And so did Twitter user Brant, WeRateDogs' Reviewer 2. And Brant is right, right? The flaw in the rating system is part of the gimmick of the account, but it is psychometrically inaccurate. It would be a funny class exercise to create a rubric for a true rating scale for a dog.