
Showing posts with the label best practices in statistics

One article (Kramer, Guillory, & Hancock, 2014), three stats/research methodology lessons

The original idea for using this article this way comes from Dr. Susan Nolan's presentation at NITOP 2015, entitled "Thinking Like a Scientist: Critical Thinking in Introductory Psychology." I think that Dr. Nolan's idea is worth sharing, and I'll reflect a bit on how I've used this resource in the classroom. (For more good ideas from Dr. Nolan, check out her books, Psychology, Statistics for the Behavioral Sciences, and The Horse That Won't Go Away (about critical thinking).) Last summer, the Proceedings of the National Academy of Sciences published an article entitled "Experimental evidence of massive-scale emotional contagion through social networks." The gist: Facebook manipulated participants' News Feeds to increase the number of positive or negative status updates that each participant viewed. The researchers subsequently measured the number of positive and negative words that the participants used in their own status updates. They fou...

Randy McCarthy's "Research Minutia"

This blog post by Dr. Randy McCarthy discusses best practices in organizing and naming data files. These suggestions are probably more applicable to teaching graduate students than undergraduates. They are also the sorts of tips and tricks we use in practice but rarely teach in the classroom (though maybe we should). Included in Randy's recommendations: 1) Maintain consistent naming conventions for frequently used variables (like scale items or compiled scales that you use over and over again in your research). Then create and run the same syntax for those data for the rest of your scholarly career. If you are very, very consistent in the scales you use and the data analyses you run, you can save yourself time by showing a little forethought. 2) Keep and guard a raw version of all data sets. 3) Annotate your syntax. I would change that to HEAVILY annotate your syntax. I even note the dates on which I write code so I can follow my own logic if I have to let a d...
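To make the naming-convention and annotation advice concrete, here is a minimal sketch in base R (the file names, scale name, and reverse-keyed items are hypothetical illustrations of mine, not from Randy's post):

# 2015-03-02: scoring syntax for a hypothetical 10-item optimism scale.
# Naming convention: scale abbreviation + zero-padded item number
# (opt_01 ... opt_10), reused in every study so this script reruns as-is.

raw <- read.csv("study12_raw.csv")             # raw file is read-only; never overwrite it

scored <- raw
scored$opt_03 <- 6 - scored$opt_03             # 2015-03-02: items 3 and 7 are reverse-keyed
scored$opt_07 <- 6 - scored$opt_07
items <- paste0("opt_", sprintf("%02d", 1:10))
scored$opt_total <- rowMeans(scored[, items], na.rm = TRUE)

write.csv(scored, "study12_scored.csv", row.names = FALSE)   # new file; raw data untouched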

Geoff Cumming's "The New Statistics: Estimation and Research Integrity"

Geoff Cumming gave a talk at APS 2014 about the "new statistics" (reduced emphasis on p-values, greater emphasis on confidence intervals and effect sizes, for starters). This workshop is now available, online and free, from APS. The three-hour talk has been divided into five sections, and each section comes with a "Table of Contents" to help you quickly navigate all of the information contained in the talk. While some of this talk is too advanced for undergraduates, I think that portions of it, like his explanations of why p-values are so popular, p-hacking, and confidence intervals, can be nice additions to an Introduction to Statistics class.
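If you want a quick classroom demonstration of that contrast, here is a minimal sketch in R with simulated data (my own illustration, not material from Cumming's talk), putting the lone p-value next to the confidence interval and effect size that the new statistics emphasize:

set.seed(42)
control   <- rnorm(30, mean = 100, sd = 15)    # simulated control group
treatment <- rnorm(30, mean = 108, sd = 15)    # simulated treatment group

tt <- t.test(treatment, control)
tt$p.value     # the p-value alone...
tt$conf.int    # ...versus a 95% CI for the mean difference

# Cohen's d from the pooled standard deviation (equal group sizes)
pooled_sd <- sqrt((var(treatment) + var(control)) / 2)
(mean(treatment) - mean(control)) / pooled_sd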

Center for Open Science's FREE statistical & methodological consulting services

The Center for Open Science (COS) is an organization that seeks "to increase openness, integrity, and reproducibility of scientific research". As a social psychologist, I am most familiar with COS as a repository for experimental data. However, COS also provides free consulting services to teach scientists how to make their own research processes more replication-friendly. As scholars, we can certainly take advantage of these services. As instructors, the kind folks at COS are willing to provide workshops to our students (including, but not limited to, online workshops). Topics that they can cover include: Reproducible Research Practices, Power Analyses, The ‘New Statistics’, Cumulative Meta-analyses, and Using R to create reproducible code (for more information on scheduling, see their availability calendar). I once heard it said that the way you learn how to conduct research and statistics in graduate school will be the way you...
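As a small taste of the power-analysis topic, here is a minimal sketch using the pwr package in R (the effect size and alpha are illustrative choices on my part, not COS recommendations):

# install.packages("pwr")   # one-time install
library(pwr)

# How many participants per group does a two-sample t-test need to detect
# a medium effect (d = .5) with 80% power at alpha = .05?
pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80,
           type = "two.sample", alternative = "two.sided")
# n works out to roughly 64 per group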

Chew and Dillon's "Statistics Anxiety Update: Refining the Construct and Recommendations for a New Research Agenda"

Here are two articles, one from The Observer and one from Perspectives on Psychological Science. The PPS article, by Chew and Dillon, is a call for more research on statistics anxiety in the classroom. Chew and Dillon provide a thorough review of statistics anxiety research, with a focus on antecedents of anxiety as well as interventions (The Observer article is a quick summary of those interventions) and directions for further research. I think Chew and Dillon make a good case for why we should care about statistics anxiety as statistics instructors. As a psychologist who teaches statistics, I find that many of my students are not in math-related majors but can still learn to think like statisticians, improving their critical thinking skills and preparing them for a data- and analytics-driven world after graduation. However, their free-standing anxiety about simply being in a statistics class is a big barrier to this, and I welcome the authors' suggestions regarding the re...

Nature's "Policy: Twenty tips for interpreting scientific claims" by William J. Sutherland, David Spiegelhalter, & Mark Burgman

This very accessible summary lists the ways people fib with, misrepresent, and overextend data findings. It was written as an attempt to give non-research folk (in particular, lawmakers) a cheat sheet of things to consider before embracing or rejecting research-driven policy and laws. It is a sound list, covering plenty of statsy topics (p-values, the importance of replication), but what I really like is that the article doesn't criticize researchers as the source of the problem. It places the onus on each person to properly interpret research findings. The list also emphasizes the importance of data-driven change.

Changes in standards for data reporting in psychology journals

Two prominent psychology journals are changing their standards for publication in order to address several long-standing debates in statistics (p-values v. effect sizes, and point estimates of the mean v. confidence intervals). Here are the details of the changes that the Association for Psychological Science is making to its gold-standard publication, Psychological Science, in order to improve transparency in data reporting. Some of the big changes include mandatory reporting of effect sizes and confidence intervals, and inclusion of any scales or measures that were non-significant. This might be useful in class when describing why p-values and means are imperfect, the old p-value v. effect size debate, and how one can bend the truth with statistics via research methodology (glossing over or completely neglecting N.S. findings). These examples are also useful in demonstrating to your students that the issues we discuss in class have real-world ramifications and aren't be...