Monday, December 30, 2013

The United Nations' "2013 World Happiness Report"

I am teaching positive psychology for the first time this semester. One way to quickly teach students that this isn't just Happy Psych. 101 is to show them convincing data collected by an international organization (here, the United Nations) that demonstrates the link between positive psychology and the well-being of nations.

This data isn't just for a positive psychology class: You could also use it more broadly to demonstrate how research methods have to be adjusted when data is collected internationally (see item 4) and as examples of different kinds of data analysis (as described under item 1).

1) Report on international happiness data from the United Nations.

If you look through the report, there is a survival analysis relating longevity to affect on page 66. A graphic on page 21 breaks down the factors that account for variance in happiness levels across countries. There is also a good deal of data on mental health care spending in different nations. (For a rough sense of what a survival analysis like this looks like in code, see the sketch after this list.)

2) A quick summary of a few data points from National Geographic.

It includes points that have been made previously in positive psychology circles: a) living someplace with nice weather doesn't lead to happiness, and b) once basic financial stability has been achieved, happiness doesn't increase in proportion to income.

3) Data, visualized, via Huffington Post.

4) Another rich source of well-being data comes from the Gallup polling organization. Gallup collects data on various aspects of wellness, including subjective well-being as well as health data from the U.S. and around the world. Included are weekly polls on whether Americans feel that they are thriving, struggling, or suffering, as well as information on how well-being data is collected internationally.

5) Finally, The Onion's take on America's ranking as the 17th happiest country in the world.
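
If you want to give students a rough sense of what the report's survival analysis is doing, here is a minimal Python sketch (using the third-party lifelines package). The data, group labels, and effect size below are entirely made up for illustration; they are not the report's data or its actual model:

```python
# A minimal sketch with hypothetical data (not the UN report's):
# compare survival curves for people reporting high vs. low positive affect.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
n = 200

# Hypothetical follow-up times (years) and death indicators for two groups.
high_affect_time = rng.exponential(scale=25, size=n)
low_affect_time = rng.exponential(scale=18, size=n)
high_affect_died = rng.random(n) < 0.7   # True = death observed, False = censored
low_affect_died = rng.random(n) < 0.7

kmf = KaplanMeierFitter()

kmf.fit(high_affect_time, event_observed=high_affect_died, label="High positive affect")
ax = kmf.plot_survival_function()

kmf.fit(low_affect_time, event_observed=low_affect_died, label="Low positive affect")
kmf.plot_survival_function(ax=ax)  # overlay the two estimated survival curves
```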

Monday, December 23, 2013

The Economist's "Unlikely Results"

A great, foreboding video (here is a link to the same video at YouTube in case you hit the paywall) about the actual size and implications of false positives and false negatives (Type I and Type II errors) in scientific research. This video does a great job of illustrating what p < .05 really means in the context of thousands of experiments.

Here is an article from The Economist on the same topic.


From The Economist
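
If you want students to work through the logic themselves, here is a minimal simulation sketch in Python. The inputs (10% of tested hypotheses true, 80% power, α = .05) are the stock numbers used in illustrations like this one, assumed here rather than taken from any particular study:

```python
# Minimal sketch: what fraction of "significant" results are false positives
# when thousands of hypotheses are tested? (Stock illustration numbers, assumed.)
import numpy as np

rng = np.random.default_rng(42)

n_hypotheses = 10_000
p_true = 0.10    # assume 10% of tested hypotheses are actually true
power = 0.80     # probability a true effect reaches p < .05
alpha = 0.05     # probability a null effect reaches p < .05 anyway

is_true = rng.random(n_hypotheses) < p_true
significant = np.where(is_true,
                       rng.random(n_hypotheses) < power,   # true effects detected
                       rng.random(n_hypotheses) < alpha)   # null effects "detected"

false_positives = significant & ~is_true
false_discovery_rate = false_positives.sum() / significant.sum()

print(f"Significant results: {significant.sum()}")
print(f"...of which false positives: {false_positives.sum()} ({false_discovery_rate:.0%})")
# Roughly a third of the "significant" results are false positives,
# even though every individual test used p < .05.
```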

Monday, December 16, 2013

The Atlantic's "Congratulations, Ohio! You Are the Sweariest State in the Union"

While it isn't hypothesis-driven research, this data was collected to determine which states are the sweariest, and the data collection itself is an interesting, teachable example. First, the article describes previous research that looked at swearing by state (typically using publicly available data from Twitter or Facebook). Then it describes the data collection used for the current research:

"A new map, though, takes a more complicated approach. Instead of using text, it uses data gathered from ... phone calls. You know how, when you call a customer service rep for your ISP or your bank or what have you, you're informed that your call will be recorded? Marchex Institute, the data and research arm of the ad firm Marchex, got ahold of the data that resulted from some recordings, examining more than 600,000 phone calls from the past 12 months—calls placed by consumers to businesses across 30 different industries. It then used call mining technology to isolate the curses therein, cross-referencing them against the state the calls were placed from."

Nice big sample size, archival data, AND data collected in the very naturalistic setting of folks calling and complaining to companies. You could also discuss how this data may be more representative of the average American than data collected only from folks who use Facebook or Twitter.
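
For students who want to see the shape of the analysis itself, here is a minimal Python sketch of the aggregation step. The word list, transcripts, and states below are invented; Marchex's actual call-mining technology is proprietary and certainly more sophisticated:

```python
# Minimal sketch of the aggregation step: count curse words per call,
# then compute a per-state rate. All data below is made up for illustration.
import pandas as pd

CURSE_WORDS = {"darn", "heck"}  # stand-ins for the real word list

calls = pd.DataFrame({
    "state": ["OH", "OH", "WA", "SC", "SC", "SC"],
    "transcript": [
        "this darn router is broken",
        "what the heck is this charge",
        "thanks for your help",
        "have a wonderful day",
        "thank you so much",
        "heck of a wait time",
    ],
})

def curse_count(text: str) -> int:
    """Count how many words in a transcript appear in the curse list."""
    return sum(word in CURSE_WORDS for word in text.lower().split())

calls["curses"] = calls["transcript"].apply(curse_count)

# Average curses per call by state (the article reports rates relative to call volume).
rate_by_state = calls.groupby("state")["curses"].mean().sort_values(ascending=False)
print(rate_by_state)
```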

In addition to swearing, they also analyzed the data for courtesy. Way to go, South Carolina!

From The Atlantic

Monday, December 9, 2013

The Washington Post's "GAO says there is no evidence that a TSA program to spot terrorists is effective" (Update: 3/25/15)

The Transportation Security Administration (TSA) implemented the SPOT (Screening of Passengers by Observation Techniques) program in order to teach airport security employees how to spot problematic and potentially dangerous individuals via behavioral cues. The intervention has cost the U.S. government more than $1 billion. It doesn't seem to work.

Bringing this into class lets you discuss the importance of program evaluation as well as validity and reliability. The actual government-issued report goes into great detail about how the program evaluation data were collected to demonstrate that SPOT isn't working. The findings (especially the table and figure below) do a nice job of demonstrating the lack of reliability and the lack of validity. The whole story also implicitly demonstrates that the federal government hires statisticians with strong research methods backgrounds to conduct program evaluations (= jobs for students).

Here is a summary of the report from the Washington Post.

Here is a short summary and video about the report from CBS.

Here is the actual report.


This table from the official report demonstrates a lack of inter-rater reliability.
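
One way to put a number on that kind of disagreement in class is Cohen's kappa. Here is a minimal Python sketch with made-up screener ratings (the GAO report does not report a kappa; the values below are purely illustrative):

```python
# Minimal sketch: inter-rater reliability via Cohen's kappa on made-up ratings.
# 1 = "refer this passenger", 0 = "do not refer" for two hypothetical screeners.
from sklearn.metrics import cohen_kappa_score

screener_a = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
screener_b = [0, 0, 1, 0, 1, 0, 0, 0, 1, 1]

kappa = cohen_kappa_score(screener_a, screener_b)
print(f"Cohen's kappa = {kappa:.2f}")  # values near 0 mean agreement is near chance
```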



This figure from the report demonstrates a lack of validity in terms of SPOT referrals leading to arrests (it also demonstrates the concept of false positives/Type I errors).
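
The same figure can double as a quick worked example of base rates. With hypothetical counts (substitute the real ones from the figure in class), you can compute how rarely a SPOT referral ends in an arrest:

```python
# Minimal sketch with hypothetical counts (read the real ones off the figure):
# how many SPOT referrals actually end in an arrest?
referrals = 30_000   # hypothetical number of SPOT behavioral referrals
arrests = 200        # hypothetical number of those referrals ending in arrest

false_positives = referrals - arrests
positive_predictive_value = arrests / referrals

print(f"False positives: {false_positives}")
print(f"Share of referrals ending in arrest: {positive_predictive_value:.2%}")
```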

UPDATE (3/25/15): As reported by Brian Naylor (for NPR), the ACLU is suing for access to this efficacy data. The ACLU argues that the SPOT program has led to racial profiling, and it filed a Freedom of Information Act request in order to examine the data itself. The update also describes in greater detail the debate about whether people are actually good lie detectors and includes a brief interview with psychologists Nicholas Epley and Anne Kring, making this a good applied social psychology example.

Monday, December 2, 2013