
Posts

Baby Name Wizard's NameVoyager

UPDATE (12/8/23): YOOOOOOOOO if you got to this post, I suggest that you check out this update for an up-to-date link to this tool. Here is the Baby Name Wizard's NameVoyager, which provides illustrations of trends in baby names, using data from the 1880s to the present. It is a good tool for demonstrating why graphs can be more engaging than tables when presenting data. When I use this in class, I compare the NameVoyager data display to more traditionally presented data from the Social Security Administration. Additionally, I teach in a computer lab, so my students were able to search for their own names, which makes the example more self-relevant. Yup. I am one of many, many Jessicas who are around my age.

NPR's "Will Afghan polling data help alleviate election fraud?"

This story details the application of American election polling techniques to Afghanistan's fledgling democracy. Essentially, international groups are attempting to poll Afghans prior to their April 2014 presidential elections so as to combat voter fraud and raise awareness about the election. However, how do researchers go about collecting data in a country where few people have telephones, many people are illiterate, and just about everyone is wary of strangers approaching them and asking them sensitive questions about their political opinions? The story also touches on issues of social desirability as well as the decisions a researcher makes regarding the kinds of response options to use in survey research. I think that this would be a good story to share with a cranky undergraduate research methods class that thinks that collecting data from the undergraduate convenience sample is really, really hard. Less snarkily, this may be useful when teaching multiculturalism or ...

A.V. Club's "Shirley Manson takes BuzzFeed's "Which Alt-Rock Grrrl Are You?" quiz, discovers she's not herself"

Lately, there have been a lot of quizzes popping up on my Facebook feed ("What breed of dog are you?", "What character from Harry Potter are you?"). As a psychologist who tinkers in statistics, I have pondered the psychometric properties of such quizzes and concluded that these quizzes were probably not properly vetted in peer-reviewed journals. Now I have a tiny bit of evidence to support that conclusion. What better way to ensure that a scale is valid than by using the standard of concurrent validity (popular in I/O psychology)? This actually happened when renowned Shirley Manson Subject Matter Expert, Shirley Manson, lead singer of the band Garbage, took the "Which Alt-rock Grrrl are you?" quiz and didn't score as herself (as she posted on Facebook and reported by A.V. Club). From Facebook, via A.V. Club. An excellent example of an invalid test (or a lack of concurrent validity, for you I/O types).

Anecdote is not the plural of data: Using humor and climate change to make a statistical point

Variations upon a theme...good for spicing up a PowerPoint...inspired by living in the #1 snowiest city (population > 100K, 2014) in the United States. Property of xkcd.com. https://thenib.com/can-t-stand-the-heat-4d5650fd671b

Time's "Can Time predict your politics?" by Jonathan Haidt and Chris Wilson

This scale, created by Haidt and Wilson, predicts your political leanings based upon seemingly unrelated questions. Screen grab from time.com. You can use this in a classroom to 1) demonstrate interactive, Likert-type scales and 2) illustrate face validity (or the lack thereof). I think it would also be 3) useful for a psychometrics class discussing scale building. Finally, 4) the update at the end of the article mentions both the n-size and the correlation coefficient for their reliability study, allowing you to discuss those concepts with students. For more about this research, try yourmorals.org

NPR's "In Pregnancy, What's Worse? Cigarettes Or The Nicotine Patch?"

This story discusses the many levels of analysis required to get to the bottom of the hypothesis stated in the title of this story. For instance, are cigarettes or the patch better for mom? The baby? If the patch isn't great for either but still better than smoking, what sort of advice should a health care provider give to their patient who is struggling to quit smoking? What about animal model data? I think this story also opens up the conversation about how few medical interventions are tested on pregnant women (understandably so), and, as such, researchers have to opt for more observational research studies when investigating medical interventions for protected populations.

Shameless self-promotion 2

Here is a link to a recent co-authored publication that used Second Life to teach students about virtual data collection as well as the broader trend in psychology of studying how virtual environments influence interpersonal interactions. Specifically, students replicated evolutionary psychology findings using Second Life avatars. We also discuss best practices for using Second Life in the classroom as well as our partial replication of previously established evolutionary psychology findings (Clark & Hatfield, 1989; Buss, Larsen, Westen, & Semmelroth, 1992).

Changes in standards for data reporting in psychology journals

Two prominent psychology journals are changing their standards for publication in order to address several long-standing debates in statistics (p-values v. effect sizes and point estimates of the mean v. confidence intervals). Here are the details for changes that the Association for Psychological Science is making for its gold-standard publication, Psychological Science, in order to improve transparency in data reporting. Some of the big changes include mandatory reporting of effect sizes and confidence intervals, and the inclusion of any scales or measures that were non-significant. This might be useful in class when describing why p-values and means are imperfect, the old p-value v. effect size debate, and how one can bend the truth with statistics via research methodology (and by glossing over/completely neglecting N.S. findings). These examples are also useful in demonstrating to your students that the issues we discuss in class have real-world ramifications and aren't be...
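If you want a concrete classroom demo of why a p-value alone can mislead, here is a minimal sketch of the kind of example I have in mind (the sample sizes, means, and standard deviations are made-up numbers for illustration, not anything from the journals' announcements): with a huge N, a trivially small group difference comes out "significant," while the effect size and confidence interval tell the more honest story.

```python
# Made-up classroom demo: with a huge N, a tiny difference is "significant"
# (p < .05), but the effect size and confidence interval show it is trivial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50_000                                   # hypothetical huge sample per group
group_a = rng.normal(loc=100.0, scale=15.0, size=n)
group_b = rng.normal(loc=100.5, scale=15.0, size=n)   # true difference of only 0.5 points

t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Cohen's d using the pooled standard deviation
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

# 95% confidence interval for the mean difference
diff = group_b.mean() - group_a.mean()
se_diff = np.sqrt(group_a.var(ddof=1) / n + group_b.var(ddof=1) / n)
t_crit = stats.t.ppf(0.975, df=2 * n - 2)
ci_low, ci_high = diff - t_crit * se_diff, diff + t_crit * se_diff

print(f"p = {p_value:.4f}")                  # almost certainly < .05 at this N
print(f"Cohen's d = {cohens_d:.3f}")         # but the effect is tiny (~0.03)
print(f"95% CI for the difference: [{ci_low:.2f}, {ci_high:.2f}] points")
```

Reporting the effect size and interval alongside the p-value is exactly the kind of added transparency the new standards are after.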

The United Nation's "2013 World Happiness Report"

I am teaching positive psychology for the first time this semester. One way to quickly teach students that this isn't just Happy Psych. 101 is to show them convincing data collected by an international organization (here, the United Nations) that demonstrates the link between positive psychology and the well-being of nations. This data isn't just for a positive psychology class: You could also use it more broadly to demonstrate how research methods have to be adjusted when data is collected internationally (see item 4) and as examples of different kinds of data analysis (as described under item 1). 1) Report on international happiness data from the United Nations. If you look through the data collected, there is a survival analysis related to longevity and affect on page 66. A graphic on page 21 describes factors that account for global variance in happiness levels across countries. There is also a lot of data about mental health care spending in different nations. 2 ...

The Economist's "Unlikely Results"

A great, foreboding video (here is a link to the same video on YouTube in case you hit the paywall) about the actual size and implications of Type II errors in scientific research. This video does a great job of illustrating what p < .05 means in the context of thousands of experiments. Here is an article from The Economist on the same topic. From The Economist.
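If you want to recreate this kind of arithmetic on the board, here is a rough sketch. The specific numbers (1,000 hypotheses, 10% of them true, 80% power, alpha = .05) are illustrative assumptions of my own rather than figures quoted from the video, but they show how a field using p < .05 can still end up with a large share of false findings among its "significant" results.

```python
# Back-of-the-envelope arithmetic for "what does p < .05 mean across
# thousands of studies?" -- the numbers here are illustrative assumptions.
n_hypotheses = 1000       # hypotheses tested across a field
prop_true = 0.10          # assume only 10% of tested hypotheses are actually true
alpha = 0.05              # conventional false-positive (Type I) rate
power = 0.80              # conventional statistical power (1 - Type II rate)

n_true = n_hypotheses * prop_true            # 100 true effects
n_false = n_hypotheses - n_true              # 900 null effects

true_positives = n_true * power              # 80 real effects detected
false_negatives = n_true * (1 - power)       # 20 real effects missed (Type II errors)
false_positives = n_false * alpha            # 45 "significant" findings that are wrong

share_wrong = false_positives / (false_positives + true_positives)
print(f"Of {false_positives + true_positives:.0f} significant results, "
      f"{share_wrong:.0%} are false positives.")   # roughly a third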

The Atlantic's "Congratulations, Ohio! You Are the Sweariest State in the Union"

While it isn't hypothesis-driven research data, this data was collected to see which states are the sweariest. The data collection itself is interesting and a good, teachable example. First, the article describes previous research that looked at swearing by state (typically, using publicly available data via Twitter or Facebook). Then, they describe the data collection used for the current research: "A new map, though, takes a more complicated approach. Instead of using text, it uses data gathered from ... phone calls. You know how, when you call a customer service rep for your ISP or your bank or what have you, you're informed that your call will be recorded? Marchex Institute, the data and research arm of the ad firm Marchex, got ahold of the data that resulted from some recordings, examining more than 600,000 phone calls from the past 12 months—calls placed by consumers to businesses across 30 different industries. It then used call mining technology to isola...

Washington Post's "GAO says there is no evidence that a TSA program to spot terrorists is effective" (Update: 3/25/15)

The Transportation Security Administration (TSA) implemented SPOT training in order to teach airport security employees how to spot problematic and potentially dangerous individuals via behavioral cues. This intervention has cost the U.S. government $1 billion+. It doesn't seem to work. By discussing this with your class, you can discuss the importance of program evaluations as well as validity and reliability. The actual government-issued report goes into great detail about how the program evaluation data was collected to demonstrate that SPOT isn't working. The findings (especially the table and figure below) do a nice job of demonstrating the lack of reliability and the lack of validity. This whole story also implicitly demonstrates that the federal government is hiring statisticians with strong research methods backgrounds to conduct program evaluations (= jobs for students). Here is a summary of the report from the Washington Post. Here is a short summary and video about the report from ...

The New York Times' "As ‘Normal’ as Rabbits’ Weights and Dragons’ Wings"

The Central Limit Theorem, explained using bunnies and dragons. Brilliant. I don't use this to introduce the topic, but I do use it to review the topic. Property of Shu-Yi Chiou.
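If you want a quick hands-on follow-up to the video, here is a minimal simulation sketch (my own illustration, not taken from the video) that makes the same point: means of samples drawn from a badly skewed distribution still pile up into a roughly normal shape.

```python
# Quick Central Limit Theorem demo: sample means from a skewed (exponential)
# distribution become approximately normal, even though the population isn't.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
population = rng.exponential(scale=2.0, size=100_000)   # very skewed "population"

sample_size = 30
n_samples = 5_000
sample_means = [rng.choice(population, size=sample_size).mean()
                for _ in range(n_samples)]

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].hist(population, bins=60)
axes[0].set_title("Skewed population")
axes[1].hist(sample_means, bins=60)
axes[1].set_title(f"Means of {n_samples} samples (n = {sample_size})")
plt.tight_layout()
plt.show()
```

Students can re-run it with a smaller sample size (say, n = 2) to watch the sampling distribution get lumpier, which makes a nice review exercise after the video.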