

Showing posts with the label the new statistics

Pedagogy article recommendation: "Introducing the new statistics in the classroom."

I usually blog about funny examples for teaching statistics, but this example is for the teachers themselves. Normile, Bloesch, Davoli, & Scheer's recent publication, "Introducing the new statistics in the classroom" (2019), is aptly and appropriately titled. It is a concise rundown of p-values, effect sizes, and confidence intervals. Such reviews exist elsewhere, but this one is just so short and precise. Here are a few of the highlights: 1) The article concisely explains what isn't great, or what is frequently misunderstood, about NHST. 2) It offers actual guidelines for how to explain these ideas in Psychological Statistics/Introduction to Statistics, including ideas for doing so without completely redesigning your class. 3) It also highlights one of the big reasons I am so pro-JASP: easy-to-locate and easy-to-use effect sizes.

Teaching the "new statistics": A call for materials (and sharing said materials!)

This blog is usually dedicated to sharing ideas for teaching statistics, and I will share some ideas here. But I'm also asking you to share YOUR ideas for teaching statistics, specifically your ideas for teaching the new statistics: effect sizes, confidence intervals, etc. The following email recently came across the Society for the Teaching of Psychology listserv from Robert Calin-Jageman (rcalinjageman@dom.edu): "Is anyone out there incorporating the 'New Statistics' (estimation, confidence intervals, meta-analysis) into their stats/methods sequence? I'm working with Geoff Cumming on putting together an APS 2017 symposium proposal on teaching the New Statistics. We'd love to hear back from anyone who has already started or is about to. Specifically, we'd love to: * Collect resources you'd be willing to share (syllabi, assignments, etc.) * Collect narratives of your experi...

Kristopher Magnusson's "Interpreting Cohen's d effect size"

Kristopher Magnusson (previously featured on this blog for his interactive illustration of correlation) also has a helpful illustration of effect size. While this example probably includes some information that goes beyond an introductory understanding of effect size (via Cohen's d), I think it still does a great job of illustrating how effect size measures, essentially, the magnitude of the difference between groups (not how improbable those differences are). See below for a screenshot of the tool. http://rpsychologist.com/d3/cohend/, created by Kristopher Magnusson
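If you want students to see the arithmetic behind the visualization, Cohen's d is just the difference between the group means divided by the pooled standard deviation. Here's a minimal sketch in Python; the exam scores are made up purely for illustration:

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)  # sample variances
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Hypothetical exam scores for two teaching conditions
control = [70, 74, 78, 72, 76]
treatment = [78, 82, 86, 80, 84]
print(round(cohens_d(treatment, control), 2))  # 2.53 -- a large effect by Cohen's benchmarks
```

Note that d speaks to the size of the difference, not to how improbable it is; the same d can come from a significant or a non-significant test depending on sample size.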

Geoff Cumming's "The New Statistics: Estimation and Research Integrity"

Geoff Cumming gave a talk at APS 2014 about the "new statistics" (reduced emphasis on p-values, greater emphasis on confidence intervals and effect sizes, for starters). This workshop is now available, online and free, from APS. The three-hour talk has been divided into five sections, and each section comes with a "Table of Contents" to help you quickly navigate all of the information it contains. While some of this talk is too advanced for undergraduates, I think that portions, like his explanations of why p-values are so popular, p-hacking, and confidence intervals, can be nice additions to an Introduction to Statistics class.
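For instructors who want to demo the confidence-interval piece by hand, the 95% CI for a mean is just mean ± t* × standard error. A minimal Python sketch, where the quiz scores and the t critical value for df = 4 are purely illustrative:

```python
import statistics

def mean_ci(sample, t_crit):
    """Confidence interval for the mean: mean +/- t* x standard error."""
    n = len(sample)
    m = statistics.mean(sample)
    sem = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
    return m - t_crit * sem, m + t_crit * sem

# Hypothetical quiz scores; t* = 2.776 is the .975 quantile of t with df = 4
scores = [70, 74, 78, 72, 76]
low, high = mean_ci(scores, t_crit=2.776)
print(f"95% CI for the mean: [{low:.1f}, {high:.1f}]")  # [70.1, 77.9]
```

The width of the interval, not a yes/no significance verdict, is what Cumming encourages students to focus on.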

Slate & Rojas-LeBouef's "Presenting and Communicating Your Statistical Findings: Model Writeups"

Holy smokes. This e-book (distributed for free via OpenStax) contains sample results sections for multiple statistical tests, which is helpful but not particularly unique. There are other resources for creating APA results sections (I love U. Washington's resources), but I feel this book is particularly useful in that: 1) It addresses how to include effect sizes in tests (most of the results section examples I have been able to find neglect this increasingly important aspect of data analysis). 2) The writers translate SPSS output into results sections. 3) The writers aren't psychologists, but they are APA compliant (and even point out instances when their figures and tables aren't APA compliant). 4) It is gloriously free. The only shortcoming is that they don't provide examples for more types of data analyses. The book does, however, cover chi-square, correlation, t-test, and ANOVA, so most of what is covered in introductory statistics courses. I think th...

Center for Open Science's FREE statistical & methodological consulting services

The Center for Open Science (COS) is an organization that seeks "to increase openness, integrity, and reproducibility of scientific research". As a social psychologist, I am most familiar with COS as a repository for experimental data. However, COS also provides free consulting services to teach scientists how to make their own research processes more replication-friendly. As scholars, we can certainly take advantage of these services. As instructors, the kind folks at COS are willing to provide workshops for our students (including, but not limited to, online workshops). Topics they can cover include: Reproducible Research Practices, Power Analyses, The 'New Statistics', Cumulative Meta-analyses, and Using R to create reproducible code (for more information on scheduling, see their availability calendar). I once heard it said that the way you learn how to conduct research and statistics in graduate school will be the way you...

Regina Nuzzo's "Scientific method: Statistical errors"

This article from Nature is an excellent primer on the concerns surrounding the use of p-values as the great gatekeeper of statistical significance. The article includes historical perspective on how p-values came to be so widely used, as well as some discussion of solutions and alternative measures of significance. It also provides good examples of failed attempts at replication (good examples of Type I errors) and a shout-out to the Open Science Framework folks. Personally, I have revised my class for the fall to include more discussion and use of effect sizes. I think this article may be a bit above an undergraduate introduction-to-statistics class, but it could be useful for us as instructors, as well as a good reading for advanced undergraduates and graduate students.

Changes in standards for data reporting in psychology journals

Two prominent psychology journals are changing their standards for publication in order to address several long-standing debates in statistics (p-values vs. effect sizes, and point estimates of the mean vs. confidence intervals). Here are the details of the changes the Association for Psychological Science is making to its gold-standard publication, Psychological Science, in order to improve transparency in data reporting. Some of the big changes include mandatory reporting of effect sizes and confidence intervals, and inclusion of any scales or measures that were non-significant. This might be useful in class when describing why p-values and means are imperfect, the old p-value vs. effect size debate, and how one can bend the truth with statistics via research methodology (and by glossing over or completely neglecting N.S. findings). These examples are also useful in demonstrating to your students that the issues we discuss in class have real-world ramifications and aren't be...

The Economist's "Unlikely Results"

A great, foreboding video (here is a link to the same video on YouTube in case you hit the paywall) about the actual size and implication of Type II errors in scientific research. This video does a great job of illustrating what p < .05 means in the context of thousands of experiments. Here is an article from The Economist on the same topic. From The Economist
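The video's core point can be reproduced with back-of-the-envelope arithmetic. Assuming (hypothetically) 1,000 hypotheses tested, of which 10% are true, with α = .05 and power = .80, a surprisingly large share of "significant" results turn out to be false positives:

```python
# Back-of-envelope: what p < .05 implies across many experiments.
# All input numbers are hypothetical, chosen only for illustration.

def false_discovery_rate(n_tests, prop_true, alpha, power):
    """Fraction of 'significant' results that are false positives."""
    n_true = n_tests * prop_true
    n_false = n_tests - n_true
    true_positives = power * n_true    # real effects that get detected
    false_positives = alpha * n_false  # null effects crossing p < alpha
    return false_positives / (true_positives + false_positives)

fdr = false_discovery_rate(n_tests=1000, prop_true=0.10, alpha=0.05, power=0.80)
print(f"{fdr:.0%} of significant results are false positives")  # 36%
```

In other words, p < .05 caps the error rate per test at 5%, but across thousands of experiments the proportion of published "discoveries" that are wrong can be far higher.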

Stephen Colbert vs. Daryl Bem = effect size vs. statistical significance

Daryl Bem on The Colbert Report. I love me some Colbert Report. So imagine my delight when he interviewed social psychologist Daryl Bem. Bem is famous for his sex roles inventory as well as his psi research. Colbert interviewed him about his 2011 Journal of Personality and Social Psychology article, "Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect," which demonstrated a better-than-chance ability to predict an outcome. Here, the outcome was guessing which side of a computer screen would contain an erotic image (yes, Colbert had a field day with this; yes, please watch the clip in its entirety before sharing it with a classroom of impressionable college students). Big deal? Needless to say, Colbert reveled in poking fun at the "Time Traveling Porn" research. However, the interview is of some educational value because it a) does a good job of describing the research methods used in the study. Additionally, b) h...