
Posts

Showing posts with the label operationalized variables

Whataburger Index: Operationalizing power outages in hurricane-ravaged Texas.

As a stats nerd, I love it when clever people make our lives easier by finding easy, indirect ways to estimate the thing they want to measure. As a statistics instructor, I find such examples engaging, as they encourage students to think critically and nurture their statistical literacy. Like the Waffle House Index. TL;DR: During weather emergencies, the federal government tracks whether or not Waffle Houses are open as a proxy for the severity of damage in a community. Waffle Houses are tough as hell, and if they close, a community needs help. Below is a map of Waffle Houses. https://www.scrapehero.com/store/wp-content/uploads/maps/Waffle_House_USA.png Due to Hurricane Beryl, the people of Houston, Texas, discovered an even more accurate measure of the severity of electricity outages: The Whataburger Index: https://www.facebook.com/photo/?fbid=8242206945824619&set=gm.2698315720337038&idorvanity=1416658058502817 Certainly, Waffle House exists in Texas. 126...

The Unstoppable Pop of Taylor Swift: Data visualizations, variable operationalization, and DATA DATA DATA

The unstoppable pop of Taylor Swift (reuters.com) Here are some ideas for using this to teach statistics: Data visualizations and visualization guides: With cats, y'all. And the Taylor Swift handwriting font. I love the whole vibe of this as well as how they explain their data visualizations. Operationalizing things: The page describes three Spotify metrics for music: acousticness, danceability, and emotion. The data visualization contains a numeric value for each metric and a description of the metric's meaning. DATA!: Okay. This is an excellent example of things already. And it is delightful. Then I thought, "Oh, wouldn't it be fun if this was in spreadsheet form!" (I think that A LOT, friends). But, as I write a book and my syllabi, I don't have time for that, BUT A REDDITOR DID HAVE TIME FOR THAT. Dr. Doon created a spreadsheet with 18 columns of Spotify data for each song. It doesn't include the Midnights data but is still a fantastic amount of dat...

A simple tool operationalizes post-childbirth hemorrhaging and saves lives.

https://www.npr.org/sections/goatsandsoda/2023/05/10/1175303067/a-plastic-sheet-with-a-pouch-could-be-a-game-changer-for-maternal-mortality https://www.bmj.com/content/381/bmj.p1055 I love this study, in and of itself, because it is based on research that will save women's lives without spending a lot of money. I love it. Here is a link to the original study. I learned about it from an NPR story about the research by Rhitu Chatterjee. I also love it because it is an accessible example of a bunch of statistics things: Dependent variables...operationalizing variables...why cross-cultural research and solutions aren't just lip service to diversity...how control groups in medical research are very different from control groups in psychology research...absolute vs. relative risk. -Dependent variables/operationalized variables: This study clearly illustrates the power of measurement and operationalization. The researchers wanted to create a way to better assess post-childbirth h...
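If you want to walk students through the absolute vs. relative risk distinction from this example, here is a minimal sketch with invented event rates (these numbers are placeholders, not the trial's actual results):

```python
# Hypothetical event rates for illustration only -- not the trial's data.
control_rate = 0.040      # proportion of control-group births with severe hemorrhage
treatment_rate = 0.016    # proportion in the intervention group

absolute_risk_reduction = control_rate - treatment_rate   # 0.024, i.e., 2.4 percentage points
relative_risk = treatment_rate / control_rate             # 0.40
relative_risk_reduction = 1 - relative_risk               # 0.60, i.e., "60% lower risk"

print(f"Absolute risk reduction: {absolute_risk_reduction:.3f}")
print(f"Relative risk: {relative_risk:.2f} ({relative_risk_reduction:.0%} relative reduction)")
```

The same treatment can sound modest (2.4 points) or dramatic (60% lower), which is exactly the distinction worth teaching.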

Bad credit scores as a predictor of dementia

NPR aired this story by Sarah Boden about the relationship between risky financial behavior and dementia. It consists of Boden interviewing people caring for individuals with dementia and dementia researchers. Before the NPR story, Boden published a related piece for a Pittsburgh NPR station. The Pittsburgh piece is a more formal report with many links to helpful information. Among the research Boden describes is this study by Nicholas et al. (2020), which finds that people exhibit poor financial decision-making up to six years before a dementia diagnosis. Here is a press release about the study, in case you want to give more advanced students a primer or earlier undergraduates a summary for understanding the research. The audio version of this story is very compelling. It includes interviews with several people who have been left heavily in debt because of poor decisions made by family members before their diagnosis. It also offers some solutions that could be implemented ...

Data collection via wearable technology

This article from The Economist, "Data from wearable devices are changing disease surveillance and medical research," has a home in your stats or RM class. It describes how Fitbits and Apple Watches can be used to collect baseline medical data for health research. I like it because it is very accessible but still goes into detail about specific research issues related to this kind of data: -How does one operationalize the outcome variable? Pulse, temperature, etc., serve as proxies for underlying problems. Changes in heart rate have predicted the onset of COVID and the flu. -Big samples be good! One of the reasons this data works as well as it does is that it is harvested from a massive number of people using these devices. -The article gives examples of well-designed experiments that use wearable technology. However, often with massive data collection via tech, the data drives the hypothesis, not the other way around. In our psychology classes, we discuss NHST and the proper w...
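If you want to give students a concrete feel for the proxy idea, here is a minimal sketch, with invented resting heart rates, of flagging days that drift well above a person's own baseline:

```python
# Invented numbers illustrating the proxy idea: a resting heart rate well above
# a person's own baseline can flag a possible illness before symptoms appear.
from statistics import mean, stdev

resting_hr = [62, 61, 63, 60, 62, 61, 63, 62, 60, 61,   # baseline days
              70, 72, 71]                                # later days to check

baseline = resting_hr[:10]
mu, sigma = mean(baseline), stdev(baseline)

for day, hr in enumerate(resting_hr[10:], start=11):
    z = (hr - mu) / sigma
    if z > 2:  # crude rule of thumb: more than 2 SDs above this person's baseline
        print(f"Day {day}: resting HR {hr} is {z:.1f} SDs above baseline -> flag")
```

Real surveillance models are far more sophisticated, but the logic of comparing each person to their own baseline is the same.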

"You're wrong about" podcast and data about human trafficing

"The answer is always more spreadsheets." -Michael Hobbes The good news: 1) This isn't a COVID example. 2) This is one of those examples that gets your students to think more abstractly about some of the tougher, fundamental questions in statistics. Precisely: How do we count things in the very, very messy real world? What are the ramifications of miscounting messy things? 3) The example comes in the form of the very engaging podcast You're Wrong About , hosted by Michael Hobbes and Sarah Marshall. @yourewrongabout The bad news: The example is about human trafficking, so not nearly as fluffy as my hotdog or seagull posts. That said, this episode of the You're Wrong About podcast, or even just the first 10 minutes of the episode, reveals how hard it can be to count and operationalize a variable that seems pretty clear cut: The number of children who are trafficked in America every year.  The You're Wrong About podcast takes misunderstood, widely reported event...

A bunch of pediatricians swallowed Lego heads. You can use their research to teach the basics of research methods and stats.

As a research-parent-nerd joke before Christmas, six doctors swallowed Lego heads and recorded how long it took to pass them. Why? To inform parents about the lack of danger associated with your kid swallowing a tiny toy. I encourage you to use it as a class example because it is short, it describes its research methodology very clearly, it uses a within-subjects design, and it reports a couple of means, standard deviations, and even a correlation. TL;DR: https://dontforgetthebubbles.com/dont-forget-the-lego/ In greater detail: Note the use of a within-subjects design. They also operationalized their DV via the SHAT (Stool Hardness and Transit) scale. *Yeah. So here is the Bristol Stool Chart mentioned in the above excerpt. Please don't click on the link if you are eating or have a sensitive stomach. Research outcomes, including means and standard deviations: An example of a non-significant correlation, with the SHAT score on the y-axi...
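For a quick in-class demo of the descriptive statistics the write-up mentions, here is a minimal sketch with invented numbers (not the authors' actual data):

```python
# Invented within-subjects data: each doctor swallowed a Lego head and we record
# how many days it took to retrieve it, plus a SHAT (Stool Hardness and Transit) score.
from statistics import mean, stdev
import math

retrieval_days = [1.1, 1.7, 2.5, 1.9, 3.3]
shat_scores    = [4.2, 3.9, 4.5, 4.0, 3.8]

print(f"Mean retrieval time: {mean(retrieval_days):.2f} days (SD = {stdev(retrieval_days):.2f})")

def pearson_r(x, y):
    """Pearson correlation computed by hand, so students can see the formula."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

print(f"r(SHAT score, retrieval time) = {pearson_r(shat_scores, retrieval_days):.2f}")
```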

The Waffle House Index, a great example of creative measurement methods.

Alright, this example is a little more abstract, but stick with me. When you perform statistics, you are measuring or counting something. And sometimes the thing you want to measure is pretty straightforward. The number of sick days an employee takes. GPA. Parts per million of some thingy in the water. But sometimes statisticians, especially psychologists, have to get a little creative and indirect with the way we measure a thing. Like the MMPI. IQ tests are our best bet at encompassing someone's intelligence but are still not perfect. Sometimes, a statistician needs to find an approximation or proxy for the actual thing they are measuring. To explain this, show your students how the Federal Emergency Management Agency uses the Waffle House Index to determine how severely damaged a town is following a hurricane or tornado. http://www2.philly.com/philly/news/weather/hurricane-florence-waffle-house-index-20180912.html If you are one of the uninitiated, Waffle Houses are...

Talking to your students about operationalizing and validating patient pain.

Patti Neighmond, reporting for NPR, wrote a piece on how the medical establishment's method for assessing patient pain is evolving. This is a good example of why it can be so tricky to operationalize the abstract. Here, the abstract notion is pain. The story discusses shortcomings of the traditional numeric, Wong-Baker pain scale, as well as alternatives or complements to the pain scale. No one is vilifying the scale, but recent research suggests that what a patient reports and how a medical professional interprets that report are not necessarily the same thing. From Dr. John Markman's unpublished research: I think this could also be a good example of testing for construct validity. The researcher asked if the pain was tolerable and found out that their numerical scale was NOT detecting intolerable pain. This is a psychometric issue. One of the recommendations for better operationalization: asking a patient how pain affects their ability to perform everyday tas...

Wilson's "Why Are There So Many Conflicting Numbers on Mass Shootings?"

This example gets students thinking about how we operationalize variables. Psychologists operationalize a lot of abstract stuff. Intelligence. Grit. But what about something that seems more firmly grounded and countable, like whether or not a crime meets the criteria for a mass shooting? How do we define mass shooting? As shared in this article by Chris Wilson for Time Magazine, the official definition is 1) three or more people 2) killed in a public setting. That is per the current federal definition of a mass shooting. But that isn't universally accepted by media outlets. The article shares different metrics used for identifying a mass shooting, depending on which source is being used. Whether or not to count a dead shooter in the total number killed. Whether or not the victims were randomly selected. I think the most glaring example from the article has to do with the difference that this definition makes on mass shooting counts: You could also discuss wi...
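To make the definitional point concrete for students, here is a minimal sketch, using invented incident records, of how two defensible definitions produce different counts from the same data:

```python
# Invented incident records: "killed" counts victims only, excluding the shooter.
incidents = [
    {"killed": 3, "shooter_died": False, "public": True},
    {"killed": 4, "shooter_died": True,  "public": True},
    {"killed": 3, "shooter_died": True,  "public": False},
    {"killed": 5, "shooter_died": False, "public": True},
]

def count_mass_shootings(records, min_killed, include_shooter, public_only):
    total = 0
    for r in records:
        killed = r["killed"] + (1 if include_shooter and r["shooter_died"] else 0)
        if killed >= min_killed and (r["public"] or not public_only):
            total += 1
    return total

# Two reasonable operationalizations, two different counts from identical data
print(count_mass_shootings(incidents, min_killed=3, include_shooter=True,  public_only=False))  # 4
print(count_mass_shootings(incidents, min_killed=4, include_shooter=False, public_only=True))   # 2
```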

Collin's "America’s most prolific wall punchers, charted"

Collin gleaned some archival data about ER visits in America from the US Consumer Product Safety Commission. For each ER visit, there is a brief description of the reason for the visit. Collin queried punching-related injuries. His Method section, shown below, describes how he set the parameters for his operationalized variable. With a bit of explaining, you could also describe how Collin took qualitative data (the written description of the injury) and converted it into quantitative data: http://qz.com/582720/americas-most-prolific-wall-punchers-charted/ Then he made some charts. The distribution of wall punchers' ages is right-skewed. It could probably also be used in a Developmental Psychology class to illustrate poor judgment in adolescents as well as the emergence of the prefrontal cortex/executive thinking skills in one's early 20s. http://qz.com/582720/americas-most-prolific-wall-punchers-charted/ The author looked at wall punching by month of the year and uncovered a fairly uniform d...
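If you want to demonstrate the querying step itself, here is a minimal sketch with invented narratives (Collin worked from the actual CPSC injury records; the keywords below are just a guess at the kind of parameters involved):

```python
# Invented ER narratives; the keyword list IS the operationalized variable here.
from collections import Counter

visits = [
    {"age": 17, "narrative": "PT PUNCHED A WALL AFTER ARGUMENT, HAND PAIN"},
    {"age": 45, "narrative": "FELL OFF LADDER, WRIST FX"},
    {"age": 22, "narrative": "PUNCHED WALL, POSSIBLE BOXER'S FRACTURE"},
    {"age": 19, "narrative": "HIT WALL WITH FIST, SWOLLEN KNUCKLES"},
]

KEYWORDS = ("PUNCH", "FIST")

wall_punchers = [v for v in visits
                 if "WALL" in v["narrative"]
                 and any(k in v["narrative"] for k in KEYWORDS)]

print(Counter(v["age"] for v in wall_punchers))   # qualitative text -> quantitative counts by age
```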

Turner's "E Is For Empathy: Sesame Workshop Takes A Crack At Kindness" and the K is for Kindness survey.

This NPR story is about a survey conducted by the folks at Sesame Street. The survey asked parents and teachers about kindness: whether kids are kind, whether the world is kind, how they define kindness, etc. The NPR story is a roundabout way of explaining how we operationalize variables, especially in psychology. And the survey itself provides examples of forced-choice research questions and dichotomous responses that could have been Likert-type scales. The NPR Story: The Children's Television Workshop, the folks behind Sesame Street, have employees in charge of research and evaluation (a chance to plug off-the-beaten-path stats jobs to your students). And they did a survey to figure out what it means to be kind when you are a kid. They surveyed parents and teachers to do so. The main findings are summarized here. Parents and teachers are worried that the world isn't kind and doesn't emphasize kindness. But both groups think that kindness is more important than academic a...

Aschwanden's "Science Isn't Broken: It's just a hell of a lot harder than we give it credit for"

Aschwanden (for fivethirtyeight.com) did an extensive piece that summarizes the data/p-hacking/what's-wrong-with-statistical-significance crisis in statistics. There is a focus on the social sciences, including some quotes from Brian Nosek regarding his replication work. The report also draws attention to Retraction Watch and the Center for Open Science, as well as retractions of findings (as an indicator of fraud and data misuse). The article also describes our funny bias of sticking to early, big research findings even after those findings are disproved (the example used here is the breakfast eating:weight loss relationship). The whole article could be used for a statistics or research methods class. I do think that the p-hacking interactive tool found in this report could be an especially useful illustration of How to Lie with Statistics. The "Hack your way to scientific glory" interactive piece demonstrates that if you fool around enough with your operationalized...
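For instructors who want a non-interactive version of the same lesson, here is a minimal simulation sketch (assuming numpy and scipy are installed): with purely random data, testing enough operationalizations of an outcome will eventually turn up p < .05.

```python
# Null data only: any "significant" result below is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50
group = rng.integers(0, 2, n)            # two arbitrary groups, no real difference
outcomes = rng.normal(size=(n, 100))     # 100 different ways to "operationalize" the DV

significant = 0
for j in range(outcomes.shape[1]):
    _, p = stats.ttest_ind(outcomes[group == 0, j], outcomes[group == 1, j])
    if p < 0.05:
        significant += 1

print(f"{significant} of 100 null comparisons came out 'significant' at p < .05")
```

With a 5% false-positive rate per test, you expect roughly 5 spurious "findings" out of 100, which is the whole point of the interactive piece.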

Thomas B. Edsall's "How poor are the poor?"

How do we count the number of poor people in America? How do we operationalize "poor"? That is the psychometric topic of this opinion piece from the New York Times (.pdf of same here). This article outlines several ways of defining poor in America, including: 1) "Jencks’s methodology is simple. He starts with the official 2013 United States poverty rate of 14.5 percent. In 2013, the government determined that 45.3 million people in the United States were living in poverty, or 14.5 percent of the population. Jencks makes three subtractions from the official level to account for expanded food and housing benefits (3 percentage points); the refundable earned-income tax credit and child tax credit (3 points); and the use of the Personal Consumption Expenditures index instead of the Consumer Price Index to measure inflation (3.7 percentage points)." 2) "Other credible ways to define poverty paint a different picture. One is to count all those living ...
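As a quick arithmetic check of the figures quoted above:

```python
# Jencks's adjustments, as quoted in the article
official_rate = 14.5                 # 2013 official poverty rate, in percent
adjustments = [3.0, 3.0, 3.7]        # food/housing benefits, tax credits, PCE vs. CPI inflation
print(f"Adjusted poverty rate: {official_rate - sum(adjustments):.1f}%")   # 4.8%
```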