Monday, July 31, 2017

Hickey's "The Ultimate Playlist Of Banned Wedding Songs"

I think this blog just peaked. Why? I'm giving you a way to use the Cha Cha Slide ("Everybody clap your hands!") as a tool for teaching basic descriptive statistics.

Most Intro Stats teachers could use this within the first week of class to describe rank-order data, interval data, qualitative data, quantitative data, and the author's choice of percentage frequency data instead of straight frequency counts.
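If you want students to do the counts-to-percentages conversion themselves, it's a few lines of Python. A minimal sketch, using made-up ban counts rather than Hickey's actual numbers:

```python
# Hypothetical counts of weddings banning each song (invented for illustration,
# not Hickey's real data).
banned_counts = {"Chicken Dance": 26, "Cha Cha Slide": 22, "Macarena": 14}

# Convert straight frequencies into percentage frequencies.
total = sum(banned_counts.values())
percentages = {song: round(100 * n / total, 1) for song, n in banned_counts.items()}

print(percentages)  # each song's share of all bans, summing to ~100%
```

Students can then compare the raw counts to the percentages and discuss why an author might report one rather than the other.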

Additionally, Hickey, writing for FiveThirtyEight, surveyed two dozen wedding DJs about songs banned at some 200 weddings. So, you can chat about research methodology as well.

Finally, as a Pennsylvanian, it makes me so sad that people ban the Chicken Dance! How can you possibly dislike the Chicken Dance enough to ban it? Is this a class thing? 

Monday, July 24, 2017

de Frieze's "‘Replication grants’ will allow researchers to repeat nine influential studies that still raise questions"

In my stats classes, we talk about the replication crisis. When introducing the topic, I use this reading from NOBA. I think it is also important for my students to think about how science could create an environment where replication is more valued. And the Dutch Organization for Scientific Research has come up with a solution: It is providing grants to nine groups to either 1) replicate famous findings or 2) reanalyze famous findings. This piece from Science details their efforts.

The Dutch Organization for Scientific Research provides more details on the grant recipients, which include several researchers replicating psychology findings.

How to use in class: Again, talk about the replication crisis. Ask your students to generate ways to make replication more valued. Then, give them a bit of faith in psychology/science by sharing this information on how science is on it. From a broader view, this could introduce the idea of grants to your undergraduates or get your graduate students thinking about new avenues for getting their replications funded.

Monday, July 17, 2017

Harris's "Scientists Are Not So Hot At Predicting Which Cancer Studies Will Succeed"

This NPR story is about reproducibility in a science that ISN'T psychology and the limitations of expert intuition, and it summarizes a recent research article from PLOS Biology (so, open science that isn't psychology, too!).

Thrust of the story: Cancer researchers may be having a similar problem to psychologists in terms of replication. I've blogged about this issue before, in particular concerns about replication in cancer research, possibly due to the variability in how lab rats are housed and fed.

So, this story is about a study in which 200 cancer researchers, post-docs, and graduate students took a look at six pre-registered cancer study replications and guessed which studies would successfully replicate. The participants systematically overestimated the likelihood of replication. However, researchers with high h-indices were more accurate than the general sample. I wonder if the high h-indices uncover super-experts, or super-researchers who have been around the block and are a bit more cynical about the ability of any research finding to replicate.

How to use in a stats class: False positives: the original research didn't replicate (this time, at least), and the experts judging replicability were overly optimistic. Also, one might wonder whether there are potential cancer treatments we don't know about because of false negatives.
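To make the false positive idea concrete, students can simulate it. A rough sketch (my own toy simulation, not the PLOS Biology study's method): draw two groups from the same population over and over, and count how often a test at alpha = .05 "finds" a difference anyway.

```python
import random

random.seed(1)

def fake_study(n=30):
    """Simulate one null study: two groups drawn from the SAME population,
    so any 'significant' difference is a false positive."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    # Crude z-style cutoff: with unit variance, the SE of the mean difference
    # is sqrt(2/n), so |diff| beyond 1.96 SEs is "significant" at ~.05.
    return abs(mean_diff) > 1.96 * (2 / n) ** 0.5

false_positives = sum(fake_study() for _ in range(2000))
print(false_positives / 2000)  # should hover near .05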

How to use in a research class: The lack of reproduction may signal evidence of publication bias. Replication is necessary for good science. Experts aren't perfect.

Monday, July 10, 2017

Domonoske's "50 Years Ago, Sugar Industry Quietly Paid Scientists To Point Blame At Fat"

This NPR story discusses research detective work published in JAMA. The JAMA article looked at a very influential NEJM review article that investigated the link between diet and coronary heart disease (CHD), specifically whether sugar or fat contributes more to CHD. The review, written by Harvard researchers decades ago, pinned CHD on fatty diets. But the researchers took money from Big Sugar (which sounds like...a drag queen or CB handle) and communicated with Big Sugar while writing the review article.

This piece discusses how conflict of interest, along with institutional and journal prestige, shaped food research and our beliefs about the causes of CHD for decades. It also touches on how industry, namely sugar interests, discounted research that finds a sugar:CHD link while promoting and funding research that finds a fat:CHD link.

How to use in a Research Methods class:
-Conflict of interest. The funding the researchers received from the sugar lobby was never fully disclosed, and the sugar lobby communicated with the authors while they were writing the review article.
-The article of ill repute was a literature review, which opens up a conversation about how influential review papers are, especially when the authors are from well-reputed institutions and the reviews are printed in well-reputed journals.
-A good example of cherry picking data: articles critical of sugar were held to a different standard.
-I am a psychologist. I discuss the replication crisis in psychology, but other fields (here, nutrition and heart disease research) are susceptible to zeitgeist as well.

Monday, July 3, 2017

Chris Wilson's "The Ultimate Harry Potter Quiz: Find Out Which House You Truly Belong In"

Full disclosure: I have no chill when it comes to Harry Potter.

Despite my great bias, I still think this psychometrically created (with help from psychologists and Time Magazine's Chris Wilson!) Hogwarts House Sorter is a great example for scale building, validity, descriptive statistics, electronic consent, etc. for stats and research methods.

How to use in a Research Methods class:

1) The article details how the test drew upon the Big Five inventory. And it talks smack about the Myers-Briggs.

2) The article also uses simple language to give a rough sketch of how they used statistics to pair you with your house. The "standard statistical model" is a regression line, the "affinity for each House is measured independently", etc.

While you are taking the quiz itself, there are some RM/statsy lessons:

3) At the end of the quiz, you are asked to contribute some more information. It is a great example of leading response options as well as implied, electronic consent.

4) The quiz provides descriptive statistics of how well you fit into each House.

5) There is a debriefing.
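Wilson's description of the model (a regression-style score, with affinity for each House measured independently) can be illustrated with a toy version: one weighted sum of trait scores per House. The traits and weights below are entirely invented for the example, not the quiz's real parameters:

```python
# Invented trait weights for illustration -- NOT the quiz's actual model.
HOUSE_WEIGHTS = {
    "Gryffindor": {"extraversion": 0.8, "openness": 0.3, "agreeableness": 0.1},
    "Ravenclaw":  {"extraversion": 0.1, "openness": 0.9, "agreeableness": 0.2},
    "Hufflepuff": {"extraversion": 0.3, "openness": 0.2, "agreeableness": 0.9},
    "Slytherin":  {"extraversion": 0.6, "openness": 0.4, "agreeableness": -0.2},
}

def house_affinities(traits):
    """Score each House independently as a weighted sum of trait scores,
    mirroring the 'affinity for each House is measured independently' idea."""
    return {
        house: sum(w * traits.get(t, 0) for t, w in weights.items())
        for house, weights in HOUSE_WEIGHTS.items()
    }

scores = house_affinities({"extraversion": 0.2, "openness": 0.9, "agreeableness": 0.5})
best = max(scores, key=scores.get)  # the House with the highest affinity
```

Because each House is scored separately, students can see why the quiz can report how well you fit every House, not just the winner.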

This isn't the first time I've posted about Chris Wilson's statsy interactive pieces for Time magazine.

Teach least squared error, trends over time, and archival data sets via this feature that finds the British equivalent of your first name based on the popularity of your name in the year you were born versus the same-ranked name in England. Bonus: Your students can find out their British name. Mine is Shannon.

Teach percentiles, medians, and I/O psychology's Holland Inventory with this data investigating the relationship between job salary and Holland personality match for the job. Spoiler alert: This data also provides an example of a non-significant correlation. Bonus: Your students can find out their own Holland Inventory type.
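For the medians-and-correlation angle, students can compute both in a few lines. The salary and Holland-match numbers below are invented for illustration, not Wilson's data; note that with only eight points, even a moderate-looking r is non-significant (the critical r at alpha = .05 for n = 8 is about .71):

```python
import statistics

# Invented figures for illustration -- not the Time/PayScale data.
salaries = [41000, 52000, 38000, 67000, 45000, 59000, 48000, 71000]
match_scores = [0.6, 0.2, 0.8, 0.4, 0.7, 0.3, 0.5, 0.6]  # Holland fit, 0-1

def pearson_r(x, y):
    """Pearson correlation from its definition:
    covariance over the product of the standard deviations."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(statistics.median(salaries))                      # the median salary
print(round(pearson_r(salaries, match_scores), 2))      # a non-significant r
```

Having students check the computed r against a critical-value table drives home that "moderate" and "significant" are not the same thing in a tiny sample.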