This scale, created by Haidt and Wilson, predicts your political leanings based upon seemingly unrelated questions.
Screen grab from time.com
You can use this in a classroom to 1) demonstrate interactive, Likert-type scales and 2) discuss face validity (or the lack thereof). 3) I also think it would be useful in a psychometrics class for discussing scale building. 4) Finally, the update at the end of the article mentions both the n-size and the correlation coefficient from their reliability study, allowing you to discuss those concepts with students.
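If you want students to see how a reliability correlation like the one in the article is actually computed, here is a minimal sketch using made-up numbers (the scores below are hypothetical, not data from the Time scale): the same respondents take the scale twice, and test-retest reliability is summarized as the Pearson correlation between the two sets of total scores.

```python
# Hypothetical test-retest reliability check: the same n respondents
# complete a Likert-type scale twice, and reliability is the Pearson
# correlation between their time-1 and time-2 total scores.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Made-up total scale scores (e.g., summed 1-5 Likert items)
time1 = [18, 22, 25, 30, 12, 27, 20, 15]
time2 = [17, 24, 23, 31, 14, 26, 21, 13]

r = pearson_r(time1, time2)
print(f"test-retest r = {r:.2f} (n = {len(time1)})")
```

This also lets you show students why both numbers in the article's update matter: the correlation coefficient describes the strength of the test-retest relationship, while the n-size tells you how much to trust that estimate.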
This story discusses the many levels of analysis required to get to the bottom of the hypothesis stated in its title. For instance, are cigarettes or the patch better for mom? For the baby? If the patch isn't great for either but is still better than smoking, what sort of advice should a health care provider give a patient who is struggling to quit smoking? What about animal model data? I think this story also opens up a conversation about how few medical interventions are tested on pregnant women (understandably so) and how, as a result, researchers have to rely on observational studies when investigating medical interventions for protected populations.
Here is a link to a recent co-authored publication that used Second Life to teach students about virtual data collection, as well as the broader trend in psychology of studying how virtual environments influence interpersonal interactions. Specifically, students used Second Life avatars to partially replicate previously established evolutionary psychology findings (Clark & Hatfield, 1989; Buss, Larsen, Westen, & Semmelroth, 1992). We also discuss best practices for using Second Life in the classroom.
Two prominent psychology journals are changing their standards for publication in order to address several long-standing debates in statistics (p-values vs. effect sizes and point estimates of the mean vs. confidence intervals).
Here are the details of the changes that the Association for Psychological Science is making to its flagship journal, Psychological Science, in order to improve transparency in data reporting.
Some of the big changes include mandatory reporting of effect sizes and confidence intervals, as well as disclosure of any scales or measures that yielded non-significant results. This might be useful in class when describing why p-values and means alone are imperfect, the old p-value vs. effect size debate, and how one can bend the truth with statistics via research methodology (by glossing over or completely neglecting non-significant findings). These examples also demonstrate to your students that the issues we discuss in class have real-world ramifications and aren't being taken lightly by research scientists.
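To make the p-value vs. effect size debate concrete for students, here is a minimal sketch with entirely made-up numbers (the means, SD, and sample sizes below are hypothetical): with a large enough sample, a trivial group difference becomes "statistically significant" even though the effect size stays negligible, while the confidence interval shows how precise the estimate of the difference actually is.

```python
import math

# Hypothetical two-group comparison: a 0.5-point mean difference on a
# 100-point scale (SD = 10), tested at two very different sample sizes.
# Equal SDs and equal group sizes are assumed for simplicity.

def two_sample_summary(mean1, mean2, sd, n):
    """Return the two-tailed z-test p-value, Cohen's d, and a 95% CI
    for the difference between two group means (n per group)."""
    diff = mean1 - mean2
    se = sd * math.sqrt(2 / n)                # standard error of the difference
    z = diff / se
    # Two-tailed p-value from the normal CDF: Phi(z) = 0.5*(1 + erf(z/sqrt(2)))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    d = diff / sd                             # Cohen's d (pooled SD = sd)
    ci = (diff - 1.96 * se, diff + 1.96 * se) # 95% CI for the difference
    return p, d, ci

for n in (50, 50_000):
    p, d, ci = two_sample_summary(50.5, 50.0, sd=10, n=n)
    print(f"n={n:>6}: p={p:.4f}, d={d:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f})")
```

The p-value swings from non-significant to highly significant purely because n grew, while Cohen's d (a standardized effect size) is identical in both cases — which is exactly why the new reporting standards ask for effect sizes and confidence intervals alongside p-values.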
Additionally, the Society for Personality and Social Psychology is implementing similar changes in the Personality and Social Psychology Bulletin, as described here. SPSP is going a step further by requiring open sharing of any data being considered for publication, as described here, and by asking authors to address issues of sample size and power.