Monday, June 29, 2015

Scott Keeter's "Methods can matter: Where web surveys produce different results than phone interviews"

Pew recently revisited the question of how survey modality can influence survey responses. In particular, this study administered both web- and telephone-based surveys asking participants about their attitudes towards politicians, perceptions of discrimination, and satisfaction with life.

As summarized in the article, the big differences are:

"1) People expressed more negative views of politicians in Web surveys than in phone surveys." 




"2) People who took phone surveys were more likely than those who took Web surveys to say that certain groups of people – such as gays and lesbians, Hispanics, and blacks – faced “a lot” of discrimination." 


"3) People were more likely to say they are happy with their family and social life when asked by a person over the phone than when answering questions on the Web."  



The social psychologist in me likes this as an example of social desirability bias. When speaking directly to another human being, we report greater life satisfaction, we are less critical of politicians, and we are more sympathetic towards members of minority groups.

The statistician in me thinks this is a good example for discussing sources of error in research. Even a completely conscientious researcher using valid, reliable measures may have their data affected by how it is collected. It might be interesting to ask students to generate lists of research topics (say, market research about cereal preference versus opinions about abortion) and discuss whether they think you could get "true" answers via telephone or web surveys. What is a "true" answer, and how could we evaluate or measure it? How could we come up with an implicit or behavioral measure of something like satisfaction with family life, then test which survey modality is most congruent with that measure? What do students think would happen if you used face-to-face interviews or paper-and-pencil surveys in a classroom of people completing surveys? (A quick simulation of this kind of mode effect is sketched below.)
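If you want to make the mode effect concrete in class, a quick simulation shows how a response bias tied to survey modality can shift estimates even when the underlying measure is perfectly valid. This is a minimal sketch with made-up numbers (the mean, spread, sample size, and the size of the desirability nudge are all invented for illustration, not Pew's values):

```python
import random

random.seed(42)

# Hypothetical "true" life-satisfaction scores on a 1-7 scale
# (mean, SD, and n are invented for illustration).
true_scores = [random.gauss(4.5, 1.2) for _ in range(1000)]

def clamp(x):
    """Keep responses on the 1-7 scale."""
    return min(7.0, max(1.0, x))

def phone_response(score, desirability_bias=0.5):
    """Talking to a live interviewer nudges reports upward
    (social desirability); the bias size is an assumption."""
    return clamp(score + desirability_bias)

def web_response(score):
    """Self-administered web survey: no interviewer, no nudge."""
    return clamp(score)

phone_mean = sum(map(phone_response, true_scores)) / len(true_scores)
web_mean = sum(map(web_response, true_scores)) / len(true_scores)

print(f"Phone mean: {phone_mean:.2f}")  # inflated by the desirability nudge
print(f"Web mean:   {web_mean:.2f}")    # closer to the 'true' mean of 4.5
```

Same people, same "true" attitudes, two different sample means. Students can play with the bias size or direction and see which survey questions they think would produce the biggest gaps.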

Additionally, you can't call yourself a proper stats geek unless you follow Pew Research Center on Twitter (@pewresearch) or Facebook. So many good examples of interesting data!

Wednesday, June 24, 2015

Statsy pictures/memes for not awful PowerPoints

I take credit for none of these. A few have been posted here before.

by Raymond Biesinger, http://fifteen.ca/

Creator unknown, usually attributed to clip art?
http://www.sciencemag.org/content/331/6018.cover-expansion

https://www.flickr.com/photos/lendingmemo/

https://lovestats.wordpress.com/2014/11/10/why-do-kids-and-you-need-to-learn-statistics-mrx/
http://memecollection.net/dmx-statistics/

Monday, June 22, 2015

John Bohannon's "I fooled millions into thinking chocolate helps weight loss. Here's how."

http://io9.com/i-fooled-millions-into-thinking-chocolate-helps-weight-1707251800
This story demonstrates how easy it is to do crap science, get it published in a pay-to-play journal, and market your research to a global audience. Within this story, there are some good examples of Type I error, p-hacking, sensationalist science reporting, and, frankly, our obsession with weight, fitness, and easy fixes. Also, chocolate.


Here is the original story, as told to io9.com by the perpetrator of this very conscientious fraud, John Bohannon. Bohannon ran this con to expose just how open to corruption and manipulation the whole research-publication process can be (see the BioMed Central scandal for another example), especially when it is just the kind of research that is bound to get a lot of media attention (see the LaCour scandal for another).

Bohannon set out to "demonstrate" that dark chocolate can contribute to weight loss. He ran an actual study (n = 26). He went on a fishing expedition, measured 18 different markers of health, and did find a significant relationship between chocolate and lower cholesterol (a good example of a likely Type I error; the arithmetic is sketched below).
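The arithmetic behind that "likely Type I error" is worth making explicit for students. If each of the 18 outcomes is tested at α = .05 and chocolate truly does nothing, the chance of at least one "significant" result is 1 − (1 − .05)^18 ≈ .60. Here is a minimal Python sketch of that calculation, plus a quick Monte Carlo check (it treats the 18 tests as independent, which the real health markers surely were not, but the teaching point survives):

```python
import random

random.seed(1)

ALPHA = 0.05       # conventional significance threshold
N_OUTCOMES = 18    # number of health markers Bohannon measured

# Analytic familywise error rate: if chocolate does nothing, the chance
# that at least one of 18 independent tests comes up "significant"
# anyway is 1 - (1 - alpha)^18.
fwer = 1 - (1 - ALPHA) ** N_OUTCOMES
print(f"Analytic chance of >= 1 fluke: {fwer:.2f}")   # ~0.60

# Monte Carlo check: under the null hypothesis, each test's p-value is
# Uniform(0, 1), so a false positive is just p < alpha by luck.
TRIALS = 100_000
hits = sum(
    any(random.random() < ALPHA for _ in range(N_OUTCOMES))
    for _ in range(TRIALS)
)
print(f"Simulated chance of >= 1 fluke: {hits / TRIALS:.2f}")
```

A coin that comes up "publishable" 60% of the time is a nice hook for discussing why measuring everything and reporting whatever sticks is p-hacking, not science.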

So a manuscript was created. Bohannon describes how quickly the manuscript was accepted at several pay-to-play journals (with no peer review, either, which opens up a class discussion about peer review as well as the gradient of pay-to-play journals, some of which are peer reviewed, many of which are not).

Bohannon then describes how he created a website for "The Institute of Diet and Health" to legitimize his research (to be clear, this Institute does not exist), as well as a press release for his study. Then the media did his work for him: once one outlet picked up the story, hundreds of others followed. One glimmer of hope: while the media just ran with the story, Bohannon notes that the discussion boards attached to the different media outlets actually yielded intelligent discussions that picked apart the flaws of the study.

So, I think that the whole io9.com piece would be a good reading assignment for a statistics or research methods class. Additionally, if you are looking to use this story in class, here is an NPR interview with Bohannon.