
Posts

The United Nations' "2013 World Happiness Report"

I am teaching positive psychology for the first time this semester. One way to quickly teach students that this isn't just Happy Psych 101 is to show them convincing data collected by an international organization (here, the United Nations) that demonstrates the link between positive psychology and the well-being of nations. This data isn't just for a positive psychology class: you could also use it more broadly to demonstrate how research methods have to be adjusted when data is collected internationally (see item 4) and as examples of different kinds of data analysis (as described under item 1). 1) Report on international happiness data from the United Nations. If you look through the data collected, there is a survival analysis related to longevity and affect on page 66. A graphic on page 21 describes factors that account for global variance in happiness levels across countries. There is also a lot of data about mental health care spending in different nations. 2 ...
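If you want a quick classroom demo of the kind of analysis behind that page 21 graphic, here is a minimal sketch (with made-up numbers, not the report's actual data or its actual model) of using regression to see how much cross-country variance in happiness a couple of predictors can account for:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150  # hypothetical countries

# Hypothetical predictors, loosely in the spirit of the report's factors
gdp = rng.normal(0, 1, n)
social_support = rng.normal(0, 1, n)
happiness = 5 + 0.6 * gdp + 0.4 * social_support + rng.normal(0, 0.5, n)

# Ordinary least squares fit (intercept + two predictors)
X = np.column_stack([np.ones(n), gdp, social_support])
beta, *_ = np.linalg.lstsq(X, happiness, rcond=None)

# R^2: share of the variance in happiness the predictors account for
resid = happiness - X @ beta
r2 = 1 - resid.var() / happiness.var()
print(round(r2, 2))
```

With these simulated effect sizes the predictors account for well over half of the variance, which is the sort of "how much is explained" statement the report's graphic makes.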

The Economist's "Unlikely Results"

A great, foreboding video (here is a link to the same video at YouTube in case you hit the paywall) about the actual size and implications of false-positive (Type I) errors in scientific research. This video does a great job of illustrating what p < .05 means in the context of thousands of experiments. Here is an article from The Economist on the same topic. From The Economist
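If you want students to run the video's argument themselves, here is a small simulation I'd sketch (my own numbers, mirroring the video's setup): when only a minority of tested hypotheses are true, p < .05 still lets a surprising fraction of false positives through.

```python
import numpy as np

rng = np.random.default_rng(42)
n_tests, prior_true, power, alpha = 10_000, 0.10, 0.80, 0.05

# 10% of tested hypotheses are actually true
is_true = rng.random(n_tests) < prior_true

# True effects reach significance with probability = power;
# null effects reach significance with probability = alpha
significant = np.where(is_true,
                       rng.random(n_tests) < power,
                       rng.random(n_tests) < alpha)

false_pos = np.sum(significant & ~is_true)
true_pos = np.sum(significant & is_true)
print(f"{false_pos / (false_pos + true_pos):.0%} of significant findings are false")
```

Roughly a third of the "significant" results come out false here, even though every single test used the conventional .05 cutoff.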

The Atlantic's "Congratulations, Ohio! You Are the Sweariest State in the Union"

While it isn't hypothesis-driven research, this data was collected to see which states are the sweariest. The data collection itself is interesting and a good, teachable example. First, the article describes previous research that looked at swearing by state (typically using publicly available data via Twitter or Facebook). Then, they describe the data collection used for the current research: "A new map, though, takes a more complicated approach. Instead of using text, it uses data gathered from ... phone calls. You know how, when you call a customer service rep for your ISP or your bank or what have you, you're informed that your call will be recorded? Marchex Institute, the data and research arm of the ad firm Marchex, got ahold of the data that resulted from some recordings, examining more than 600,000 phone calls from the past 12 months—calls placed by consumers to businesses across 30 different industries. It then used call mining technology to isola...
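The Marchex data itself is proprietary, but the core counting step is simple enough to sketch for students. A toy version (entirely made-up transcripts; just illustrating a per-state rate computed from mined text):

```python
from collections import Counter

# Hypothetical call transcripts tagged by caller's state
calls = [
    ("OH", "this damn bill is wrong"),
    ("OH", "fix it now please"),
    ("WA", "thank you so much"),
]
SWEARS = {"damn", "hell"}  # tiny illustrative lexicon

swear_counts = Counter()
word_counts = Counter()
for state, text in calls:
    words = text.split()
    word_counts[state] += len(words)
    swear_counts[state] += sum(w in SWEARS for w in words)

# Swears per word, by state (a rate, so call volume doesn't bias the ranking)
per_word = {s: swear_counts[s] / word_counts[s] for s in word_counts}
print(per_word)
```

Normalizing by total words rather than ranking raw counts is the key methodological point: states with more calls would otherwise win by volume alone.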

The Washington Post's "GAO says there is no evidence that a TSA program to spot terrorists is effective" (Update: 3/25/15)

The Transportation Security Administration implemented SPOT training in order to teach airport security employees how to spot problematic and potentially dangerous individuals via behavioral cues. This intervention has cost the U.S. government more than $1 billion. It doesn't seem to work. By discussing this with your class, you can discuss the importance of program evaluations as well as validity and reliability. The actual government-issued report goes into great detail about how the program evaluation data was collected to demonstrate that SPOT isn't working. The findings (especially the table and figure below) do a nice job of demonstrating the lack of reliability and the lack of validity. This whole story also implicitly demonstrates that the federal government is hiring statisticians with strong research methods backgrounds to conduct program evaluations (= jobs for students). Here is a summary of the report from the Washington Post. Here is a short summary and video about the report from ...
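For the reliability piece specifically, a classroom-sized sketch (hypothetical ratings, not the GAO's data) of Cohen's kappa shows how an evaluator would quantify whether two screeners even agree with each other:

```python
import numpy as np

# Hypothetical flag/no-flag decisions from two screeners on the same
# 10 passengers (1 = flagged, 0 = not flagged)
a = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])
b = np.array([0, 0, 1, 1, 0, 0, 0, 0, 0, 1])

po = np.mean(a == b)                    # observed agreement
p_flag = a.mean() * b.mean()            # chance both flag
p_pass = (1 - a.mean()) * (1 - b.mean())  # chance both pass
pe = p_flag + p_pass                    # agreement expected by chance

kappa = (po - pe) / (1 - pe)            # Cohen's kappa
print(round(kappa, 2))  # → 0.05
```

A kappa near zero means the screeners agree barely more often than coin flips would, which is exactly the kind of finding that undermines a behavioral-detection program.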

The New York Times' "As 'Normal' as Rabbits' Weights and Dragons' Wings"

The Central Limit Theorem, explained using bunnies and dragons. Brilliant. I don't use this to introduce the topic, but I do use it to review the topic. Property of Shu-Yi Chiou
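As a follow-up to the video, I'd have students verify the theorem numerically. A minimal sketch (my own example, using a skewed exponential population in place of the video's lopsided rabbit weights):

```python
import numpy as np

rng = np.random.default_rng(7)

# A strongly right-skewed population: definitely not normal
population = rng.exponential(scale=2.0, size=100_000)

# Take 10,000 samples of size 30 and record each sample's mean
sample_means = rng.choice(population, size=(10_000, 30)).mean(axis=1)

# CLT: the means cluster around the population mean (~2.0) with spread
# roughly sigma / sqrt(n), and their histogram looks bell-shaped even
# though the population itself is skewed
print(round(sample_means.mean(), 1), round(sample_means.std(), 2))
```

Plotting a histogram of `population` next to one of `sample_means` makes the punchline visual: the raw data is lopsided, the distribution of means is not.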

Burr Settles's "On “Geek” Versus “Nerd”"

Settles decided to investigate the difference between being a nerd and being a geek via a pointwise mutual information (PMI) analysis (using archival data from Twitter). Specifically, he measured the association/closeness between various hashtag descriptors (see below) and the words nerd and geek. Settles provides a nice description of his data collection and analysis on his blog. A good example of archival data use as well as PMI.
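PMI itself fits in a few lines, so students can try it on toy data. A sketch with a tiny invented "corpus" (Settles used millions of real tweets):

```python
import math
from collections import Counter

# Toy word/hashtag co-occurrences (hypothetical)
tweets = [
    "geek #collecting", "geek #toys", "nerd #physics",
    "nerd #math", "geek #math", "nerd #books",
]

word_counts = Counter()
pair_counts = Counter()
for t in tweets:
    word, tag = t.split()
    word_counts[word] += 1
    word_counts[tag] += 1
    pair_counts[(word, tag)] += 1

n = len(tweets)

def pmi(word, tag):
    # PMI = log2 [ P(word, tag) / (P(word) * P(tag)) ]
    # positive -> the pair co-occurs more than chance predicts
    return math.log2((pair_counts[(word, tag)] / n) /
                     ((word_counts[word] / n) * (word_counts[tag] / n)))

print(pmi("nerd", "#physics"), pmi("geek", "#toys"))
```

High PMI between, say, nerd and #physics means the two show up together far more often than their individual frequencies would predict, which is exactly the "closeness" Settles plotted.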

Joshua Katz's visualizations of American dialect data (edited 11/30)

I love American dialects. There might be a Starbucks in every city, but our regions are still uniquely identifiable by the way we talk. Joshua Katz (a graduate student in statistics at NC State) created graphical representations of data from Cambridge that identified dialect differences in how Americans speak. Here is a story about the maps and here are the maps themselves. AND: You can even take the Dialect Similarity Quiz that tells you (via map) what parts of the country tend to have language patterns like your own. I think this demonstrates that 1) graphs are interesting ways of conveying information, 2) data can be used to make predictions (of what portion of the U.S. you hail from), and 3) statisticians and social scientists gather interesting and varied data. Mmmmmmmmmmmmmmmmm...hoagies... Edited to add: The Atlantic has created a video that contains the audio of folks providing examples of their awesome accents whilst completing the original survey.
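To give students intuition for how a quiz like that could predict where you're from, here is a deliberately simplified sketch (made-up regional profiles; Katz's actual method uses k-nearest-neighbor kernel smoothing over the full survey, not this bare matching rule):

```python
# Hypothetical regional answer profiles for three survey questions
regions = {
    "Northeast": {"soft_drink": "soda", "long_sandwich": "hoagie", "you_plural": "you guys"},
    "South":     {"soft_drink": "coke", "long_sandwich": "sub",    "you_plural": "y'all"},
}

# One hypothetical quiz-taker's answers
respondent = {"soft_drink": "soda", "long_sandwich": "hoagie", "you_plural": "y'all"}

def similarity(profile, answers):
    # fraction of questions where the respondent matches the region's profile
    return sum(profile[q] == answers[q] for q in profile) / len(profile)

# Predict the region whose typical answers best match the respondent's
best = max(regions, key=lambda r: similarity(regions[r], respondent))
print(best)  # → Northeast
```

Even this crude matcher gets the idea across: your answers form a vector, each region has a typical vector, and the prediction is just "whichever region you resemble most."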