

Showing posts with the label psychometrics

Sampling Error (Taylor's Version)

Friends. You don't know what finding fun stats blog content has been like over the last few years. All of the data writers/websites I followed were always writing about, explaining, and visualizing COVID or political data (rightfully so). I prefer examples about puppies, lists of songs banned from wedding receptions, and ghosts. Memorable examples stick in my students' heads and don't presuppose any knowledge about psychological theory. Due to the lack of silly data and my own life as a professor, mom of two, wife, and friend, my number of posts during The Rona definitely dipped. But now, as the crocuses bloom in Erie, PA, the earth, and I, are finding new life and new examples. Nathaniel Rakich, writing for FiveThirtyEight, wrote a whole piece USING TAYLOR SWIFT TO EXPLAIN POLLING/SAMPLING ERROR. Specifically, the article tackles three different polling firms and how they went about asking Americans which Taylor Swift album is their favorite Taylor Swift album....
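If you want students to see sampling error rather than just read about it, here is a minimal Python sketch (my own, not from the FiveThirtyEight piece, and the 30% figure is made up) that simulates many polls of the same population and checks how often they land within the usual 95% margin of error:

```python
import math
import random

def simulate_poll(true_prop, n, seed=None):
    """Draw one poll of n respondents from a population in which
    true_prop of people favor a given album; return the sample proportion."""
    rng = random.Random(seed)
    favor = sum(1 for _ in range(n) if rng.random() < true_prop)
    return favor / n

def margin_of_error(p, n, z=1.96):
    """Textbook 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Pretend 30% of Americans truly favor a given album (a made-up number).
true_prop = 0.30
estimates = [simulate_poll(true_prop, n=1000, seed=s) for s in range(200)]

# Roughly 95% of simulated polls should fall within the margin of error.
moe = margin_of_error(true_prop, n=1000)
hits = sum(abs(e - true_prop) <= moe for e in estimates)
print(f"margin of error: +/-{moe:.3f}; {hits}/200 polls within it")
```

The punchline for class: every poll uses defensible methodology, yet no two polls agree exactly, and that disagreement is quantifiable in advance.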

How to investigate click-bait survey claims

Michael Hobbes shared a Tweet from Nick Gillespie about an essay from The Bulwark. That Tweet plays fast and loose with Likert-type scale interpretation. The way Hobbes and his Twitter followers break down the issues with this headline provides a lesson on how to examine suspicious research clickbait that doesn't pass the sniff test. First off, who says "close to one in four"? And why evoke the attempt on Salman Rushdie's life, which did not happen on a college campus and is unrelated to high-profile campus protests of controversial speakers? Hobbes dug into the survey cited in the Bulwark piece. Its author interpreted the data by collapsing across response options on a Likert-type response scale. That can be done responsibly, I think: "very satisfied" and "satisfied" are both happy customers, right? But this one is suspicious. Other Twitter users questioned the question itself and how it may leave room for i...
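To make the collapsing trick concrete for students, here is a small Python sketch with entirely hypothetical response counts (not the survey's actual data) showing how a "top-2 box" summary can manufacture a "close to one in four" headline:

```python
from collections import Counter

# Hypothetical Likert-type response counts, invented for illustration only.
responses = Counter({
    "strongly agree": 40,
    "agree": 190,
    "neither": 270,
    "disagree": 300,
    "strongly disagree": 200,
})
total = sum(responses.values())

# Collapsing the top two options yields the clickbait-friendly number...
top2 = responses["strongly agree"] + responses["agree"]
print(f"'Agree or strongly agree': {top2 / total:.0%}")

# ...while the strongest endorsement, reported alone, tells another story.
print(f"'Strongly agree' alone: {responses['strongly agree'] / total:.0%}")
```

With these made-up numbers, the collapsed figure is 23% ("close to one in four!") even though only 4% strongly agreed, which is exactly the kind of gap a careful reader should go looking for.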

Freakonomics Radio's "America's Math Curriculum Doesn't Add Up"

"I believe that we owe it to our children to prepare them for a world they will encounter, a world driven by data. Basic data fluency is a requirement, not just for most good jobs, but for navigating life more generally." -Steven Levitt Preach it, Steve. This edition of the Freakonomics podcast featured guest host Steven Levitt. He dedicated his episode to providing evidence for an overhaul of America's K-12 math curriculum. He argues that our kids need more instruction in data fluency. I'm not one to swoon over a podcast dedicated to math curricula, but this one is about the history of how we teach math, the realities of the pressures our teachers face, and solutions. It is fascinating. You need to sit and listen to the whole thing, but here are some highlights: Our math curriculum was designed to help America fight the Space Race (yes, the one back in the 1960s). For a world without calculators. And not much has changed. Quick idea for teaching regr...

Nextdoor.com's polls: A lesson in psychometrics, crankiness

If you are unaware, Nextdoor.com is a social network that brings together total strangers because they live in the same neighborhood. And it validates your identity and your address, so even though you don't really know these people, you know where they live, what their name is, and maybe even what they look like, as you have the option to upload a photo. Needless to say, it is a train wreck. Sure, people do give away free stuff, seek out recommendations for home improvements, etc. But it is mostly complaining or non-computer-savvy people using computers. One of the things you can do is create a poll. Or, more often, totally screw up a poll. Here are some of my favorites. In the captions, I have given some ideas of how they could be used as examples in RM or psychometrics. This is actually a pretty good scale. A lesson in human factors/ease of user-interface use? Response options are lacking and open to interpretation. Sometimes, you don't need a poll at a...

A psychometrics mega remix: Hilarious scales and anchors

I am avoiding grading and trying to make this here blog more usable, so I am consolidating all of my funny scale examples into one location. Feast your eyes on this! https://earther.com/we-finally-know-what-hot-as-balls-really-means-1825713726 http://hyperboleandahalf.blogspot.com/2010/02/boyfriend-doesnt-have-ebola-probably.html http://notawfulandboring.blogspot.com/2018/01/this-is-very-silly-example-for.html

We Rate Dogs, Psychometrics, and Operationalization.

This is a very silly example for psychometrics. It highlights how hard it is to quantify certain things, but we keep on trying. While psychologists struggle with creating scales to rate things like intelligence, aggression, and anxiety, WeRateDogs struggles with encompassing all that is good about dogs on a 1-10 rating scale. See below. WeRateDogs is a Twitter account. And they rate dogs. And every single dog is rated between 12 and 15 out of 10 points, because every dog is a very good dog. But...I do see the psychometric flaw of their rating system. And so did Twitter user Brant, WeRateDogs' Reviewer 2. And Brant is right, right? The flaw in the rating system is part of the gimmick of the account, but psychometrically inaccurate. It would be a funny class exercise to create a rubric for a true rating scale for a dog.

'Nowhere To Sleep': Los Angeles Sees Increase In Young Homeless

Anna Scott, reporting for NPR, described changes to the homeless census in LA. It applies to stats/RM because an improvement in survey methodology led to a big change in the city's estimate of the number of homeless young adults. I also think this is a good piece for teaching because the story keeps coming back to Japheth Greg Dyer, a homeless college student who aged out of foster care and was sort of tossed into the world on his own. Straight from NPR: Homelessness hasn't necessarily increased dramatically. Instead, these findings seem to indicate that they finally have a reliable way to count young adult homelessness due to a better understanding of young adults. The dramatic increase is methodological.