
Posts

Explaining the replication crisis to undergraduates

If you are unaware, the Noba Project is a collaboration of many, many psychology instructors who create and make freely available textbooks as well as stand-alone chapters (modules) covering a wide variety of psychology topics. You can build a personalized textbook AND access test banks/PowerPoints for the materials offered. Well, one of the new modules covers the replication crisis in psychology. I think it is a thorough treatment of the issue and appropriate for undergraduates.

NFL.com's Football Freakonomics

EDIT: All of this content appears to have been removed from NFL.com. If anyone has any luck finding it, please email me at hartnett004@gannon.edu The NFL and the statistics folks over at Freakonomics got together and made some...learning modules? Let's call them learning modules. They are interactive websites that teach users about very specific questions related to football (like home field advantage, instances when football player statistics don't tell the whole story about a player/team, whether or not firing a head coach improves a failing team, the effects of player injury on team success, etc.) and then answer these questions via statistics. Most of the modules include interactive tables, data, and videos (featuring the authors of Freakonomics) in order to delve into the issue at hand. For example: The Home Field Advantage: This module features a video, as well as an interesting interactive map that illustrates data about the exact sleep lost experienced by ...

Neighmond's "Why is mammogram advice still such a tangle? Ask your doctor."

This news story discusses medical advice regarding when women should begin regular annual mammograms. Of particular interest for readers of this blog: Recommendations for regular mammograms are moving later and later in life because of the very high false positive rate associated with mammograms and subsequent breast tissue biopsies. However, women who have a higher risk (think genetics) are still being advised to have their mammograms earlier in life. Part of the reason that these changes are being made is that previous recommendations (start mammograms at 40) were based on data that was 30-40 years old (efficacy studies/replication are good things!). Also, I generally love counter-intuitive research findings: I think they make a strong argument for why research and data analysis are so very important. I have blogged about this topic before. This piece by Christie Aschwanden contains some nice graphs and charts that demonstrate that enthusiastic preventative care ...
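If you want to show students why a high false positive rate matters so much for a relatively rare condition, a quick Bayes' rule calculation works well. The numbers below are invented for illustration (they are not the actual mammography statistics from the story), but the punchline survives any reasonable values:

```python
# Illustrative only: made-up numbers, not actual mammography statistics.
prevalence = 0.005      # assumed base rate of breast cancer in a younger screening group
sensitivity = 0.85      # assumed P(positive test | cancer)
false_pos_rate = 0.10   # assumed P(positive test | no cancer)

p_positive = sensitivity * prevalence + false_pos_rate * (1 - prevalence)
p_cancer_given_positive = (sensitivity * prevalence) / p_positive

print(f"P(cancer | positive mammogram) = {p_cancer_given_positive:.1%}")
# With these assumed numbers, only about 4% of positive screens reflect actual
# cancer, which is why screening a low-risk group produces so many false alarms.
```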

Come work with me.

Hi, I wanted to post a blog about a job opportunity that is available in my department here at Gannon University. Currently, we are seeking a tenure-track assistant professor who specializes in clinical or counseling psychology and would be interested in teaching theories of personality, psychological assessment, and other specialty undergraduate courses. Gannon is a true undergraduate institution. We teach a 4/4 course load, typically with two and sometimes three unique teaching preps. I started at Gannon in 2009. In that time, I've received thousands of dollars in internal grant funding to pursue my work in the scholarship of teaching. In addition to supporting the scholarship of teaching, Gannon provides internal support so that faculty can create global education opportunities as well as service learning opportunities for our students. For instance, one of my colleagues is currently writing a proposal for a History of Psychology class that would include an educational trip to E...

Smith's "Rutgers survey underscores challenges collecting sexual assault data."

Tovia Smith filed a report with NPR that detailed the psychometric delicacies of trying to measure sexual assault rates on a college campus. I think this story is highly relevant to college students. I also think it provides an example of the challenge of operationalizing variables as well as of self-selection bias. This story describes sexual assault data collected at two different universities, Rutgers and U. Kentucky. The universities used different surveys, had very different participation rates, and had very different findings (20% of Rutgers students met the criteria for sexual assault, while only 5% of Kentucky students did). Why the big differences? 1) At Rutgers, students were paid for their participation and 30% of all students completed the survey. At U. Kentucky, student participation was mandatory and no compensation was given. Sampling techniques were very different, which opens the floor to student discussion about what this might mean for the results. Who m...
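To make the self-selection point concrete, you could run a quick simulation with students. Everything below is hypothetical (the true rate and the response probabilities are made up), but it shows how a voluntary survey can overshoot when the people most affected are also the most likely to respond:

```python
# Hypothetical simulation of self-selection bias in a voluntary campus survey.
import random

random.seed(1)
N = 20_000
true_rate = 0.10            # assumed true victimization rate in the student body

students = [random.random() < true_rate for _ in range(N)]

# Assume students who experienced an assault are more motivated to respond.
p_respond_victim = 0.45
p_respond_other = 0.25

respondents = [v for v in students
               if random.random() < (p_respond_victim if v else p_respond_other)]

survey_estimate = sum(respondents) / len(respondents)
print(f"True rate: {true_rate:.1%}, voluntary-survey estimate: {survey_estimate:.1%}")
# The voluntary survey overestimates the rate because victims opt in more often.
```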

Barry-Jester, Casselman, & Goldstein's "Should prison sentences be based on crimes that haven't been committed yet?"

This article describes how the Pennsylvania Department of Corrections is using risk assessment data in order to predict recidivism, with the hope of using such data to guide parole decisions in the future. So, using data to predict the future is very statsy, demonstrates multivariate modeling, and is a good example for class, full stop. However, this article also contains a cool interactive tool, entitled "Who Should Get Parole?", that you could use in class. It demonstrates how increasing/decreasing alpha and beta changes the likelihood of committing Type I and Type II errors. The tool allows users to manipulate the amount of risk they are willing to accept when making parole decisions. As you change the working definition of a "low" or "high" risk prisoner, a visualization starts up and shows you whether your parolees stay out of prison or come back. From a statistical perspective, users can adjust the definition of a low, medium, and h...
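If you want a quick classroom companion to the interactive tool, here is a rough sketch of the same trade-off. The risk-score distributions are invented for demonstration purposes; the point is just that moving the cutoff trades one kind of error for the other:

```python
# Sketch of the threshold trade-off the interactive tool illustrates;
# the risk-score distributions below are invented for demonstration.
import random

random.seed(42)
recidivists     = [random.gauss(65, 12) for _ in range(1_000)]  # assumed risk scores
non_recidivists = [random.gauss(45, 12) for _ in range(1_000)]

for cutoff in (40, 50, 60, 70):
    # "Deny parole" whenever the risk score is at or above the cutoff.
    type_i  = sum(s >= cutoff for s in non_recidivists) / len(non_recidivists)  # held needlessly
    type_ii = sum(s < cutoff for s in recidivists) / len(recidivists)           # paroled, re-offends
    print(f"cutoff {cutoff}: Type I rate = {type_i:.0%}, Type II rate = {type_ii:.0%}")

# Lowering the cutoff shrinks Type II errors but inflates Type I errors, and vice versa.
```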

r/faux_pseudo's "Distribution of particles by size from a Cracker Jack box"

I love my fellow Reddit data geeks over at r/dataisbeautiful. Redditor faux_pseudo created a frequency chart of the deliciousness found in a box of Cracker Jack. I think it would be funny to ask students to discuss why this graph is misleading (since the units are of different sizes and the popcorn is divided into three columns). You could also discuss why a relative frequency chart might provide a better description. Finally, you could also replicate this in class with Cracker Jack (one box is an insufficient n-size, after all) or try it using individual servings of Trail Mix or Chex Mix in order to recreate this with a smaller, more manageable sample size. Also, as always, Reddit delivers in the Comments section:
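Here is one way the relative frequency version could look if you tally your own box. The counts below are made up; the point is that percentages (and collapsing the three popcorn categories) give a fairer picture than raw counts split across unequal units:

```python
# Hypothetical counts for one box; a relative frequency table removes the
# misleading effect of splitting popcorn into several size categories.
counts = {
    "popcorn (small)": 120,
    "popcorn (medium)": 90,
    "popcorn (large)": 60,
    "peanuts": 25,
    "prize": 1,
}

total = sum(counts.values())
for item, n in counts.items():
    print(f"{item:18s} {n:4d}  {n / total:6.1%}")

# Collapsing the three popcorn rows shows popcorn's true share of the box:
popcorn_share = sum(v for k, v in counts.items() if k.startswith("popcorn")) / total
print(f"all popcorn combined: {popcorn_share:.1%}")
```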

Orlin's "What does probability mean in your profession?"

Math with Bad Drawings is a very accurately entitled blog. Math teacher Ben Orlin illustrates math principles, which means that he occasionally illustrates statistical principles. He dedicated one blog posting to probability, and what probability means in different contexts. He starts out with a fairly standard and reasonable interpretation of p: Then he has some fun. The example below illustrates the gap that can exist between reality and reporting. And then how philosophers handle probability (with high-p statements being "true"). And in honor of the current Star Wars frenzy: And finally...one of Orlin's Twitter followers, JP de Ruiter, came up with this gem about p-values:

Barry-Jester's "What A Bar Graph Can Tell Us About The Legionnaires’ Outbreak In New York" + CDC learning module

Statistics aficionados over at FiveThirtyEight applied statistics (specifically, tools used by epidemiologists) to the Summer of 2015 outbreak of Legionnaires' Disease in New York. This story can be used in class as a way of discussing how simple bar graphs can be modified to display important information about the spread of disease. This news story also includes a link to a learning module from the CDC. It takes the user through the process of creating an Epi curve. Slides 1-8 describe the creation of the curve, and slides 9-14 ask questions and provide interactive feedback that reinforces the lesson about creating Epi curves. Graphs are useful for conveying data, but even one of our old staples, the bar graph, can be specialized to share information about the way that diseases spread. 1) Demonstrates statistics being used in a field that isn't explicitly statistics-y. 2) A little online course via the CDC for your students to learn to...
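If your students want to build their own Epi curve after working through the CDC module, a bar chart of new cases by onset date is all it takes. This is just a minimal sketch with invented dates and counts, not the CDC's example data:

```python
# Minimal sketch of an epi curve: a bar chart of case counts by onset date.
# The dates and counts are invented; the CDC module walks through a real example.
import matplotlib.pyplot as plt

onset_dates = ["Jul 08", "Jul 09", "Jul 10", "Jul 11", "Jul 12", "Jul 13", "Jul 14"]
new_cases   = [2, 5, 11, 19, 14, 7, 3]

plt.bar(onset_dates, new_cases, width=0.9, edgecolor="black")
plt.xlabel("Date of symptom onset")
plt.ylabel("Number of new cases")
plt.title("Epi curve (hypothetical outbreak)")
plt.tight_layout()
plt.show()
```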

U.S. Holocaust Museum's "Deadly medicine, creating the master race" traveling exhibit

Alright. This teaching idea is pretty involved. It is bigger than any one instructor and requires interdepartmental effort as well as support from The Powers that Be at your university. The U.S. Holocaust Museum hosts a number of traveling exhibits. One in particular, "Deadly Medicine: Creating the Master Race", provides a great opportunity for discussions of research ethics, the protection and treatment of human research subjects, and how science can be used to justify really horrible things. I am extraordinarily fortunate that Gannon University's Department of History (with assistance from our Honors program as well as the College of the Humanities, Education, and Social Sciences) has worked hard to bring this exhibit to our institution during the Fall 2015 semester. It is housed in our library through the end of October. How I used it in my class: My Honors Psychological Statistics class visited the exhibit prior to a discussion day about research ethics. In...

An example of when the median is more useful than the mean. Also, Bill Gates.

From Reddit's Instagram...the comments section demonstrates some heart-warming statistical literacy.
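For anyone who wants the numbers-on-the-board version of this classic: have Bill Gates "walk into the bar" and watch what happens to each statistic. The salaries and the net-worth-scale figure below are made-up round numbers, purely for illustration:

```python
# Quick illustration of why the median resists an extreme outlier.
from statistics import mean, median

incomes = [35_000, 42_000, 48_000, 51_000, 55_000, 60_000, 72_000]  # made-up salaries
print(f"before: mean = {mean(incomes):,.0f}, median = {median(incomes):,.0f}")

incomes.append(11_500_000_000)  # Bill Gates walks in (a made-up, net-worth-scale number)
print(f"after:  mean = {mean(incomes):,.0f}, median = {median(incomes):,.0f}")
# The mean explodes into the billions; the median barely moves.
```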

How NOT to interpret confidence intervals/margins of error: Feel the Bern edition

This headline is a good example of a) journalists misrepresenting statistics and b) confidence intervals/margins of error more broadly. See the headline below: In actuality, Bernie didn't exactly take the lead over Hillary Clinton. Instead, a Quinnipiac poll showed that 41% of likely Democratic primary voters in Iowa indicated that they would vote for Sanders, while 40% reported that they would vote for Clinton. If you go to the original Quinnipiac poll, you can read that the actual data has a margin of error of +/- 3.4%, which means that the candidates are running neck and neck. Which, I think, would have still been a compelling headline. I used this as an example just last week to explain applied confidence intervals. I also used this as a round-about way of explaining how confidence intervals are now being used as an alternative/complement to p-values.
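If students ask where the +/- 3.4% comes from, a back-of-the-envelope check is easy. The sketch below assumes a simple random sample and a 95% confidence level, and the sample size is my approximation rather than a number pulled from the poll's methodology statement:

```python
# Back-of-the-envelope check on a reported +/- 3.4% margin of error,
# assuming a simple random sample and 95% confidence.
from math import sqrt

n = 832                     # assumed number of likely Democratic primary voters polled
p = 0.5                     # conservative proportion that maximizes the margin
moe = 1.96 * sqrt(p * (1 - p) / n)
print(f"margin of error = +/- {moe:.1%}")

# Sanders 41% vs. Clinton 40%: the 1-point gap is well inside +/- 3.4%,
# so the poll cannot distinguish the two candidates.
```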

Aschwanden's "Science isn't broken, it is just a hell of a lot harder than we give it credit for"

Aschwanden (for fivethirtyeight.com) did an extensive piece that summarizes the data/p-hacking/what's-wrong-with-statistical-significance crisis in statistics. There is a focus on the social sciences, including some quotes from Brian Nosek regarding his replication work. The report also draws attention to Retraction Watch and the Center for Open Science as well as retractions of findings (as an indicator of fraud and data misuse). The article also describes our funny bias of sticking to early, big research findings even after those research findings are disproved (the example used here is the breakfast eating:weight loss relationship). The whole article could be used for a statistics or research methods class. I do think that the p-hacking interactive tool found in this report could be an especially useful illustration of How to Lie with Statistics. The "Hack your way to scientific glory" interactive piece demonstrates that if you fool around enough with your operationalized...
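For instructors who want a non-interactive version of the same lesson, a tiny simulation makes the point: analyze pure noise in enough different ways and some comparisons will clear p < .05. This is a toy sketch (with a crude normal approximation for the significance cutoff), not the FiveThirtyEight tool's actual method:

```python
# Toy version of the "hack your way to scientific glory" idea: test enough
# arbitrary operationalizations of pure noise and something will hit p < .05.
import random
from math import sqrt
from statistics import mean, stdev

random.seed(0)

def two_sample_t(a, b):
    # Welch-style t statistic; we use |t| > 1.96 as a rough 5% cutoff.
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

n_specs = 100                       # 100 different ways to slice the (null) data
significant = 0
for _ in range(n_specs):
    group1 = [random.gauss(0, 1) for _ in range(30)]
    group2 = [random.gauss(0, 1) for _ in range(30)]
    if abs(two_sample_t(group1, group2)) > 1.96:
        significant += 1

print(f"{significant} of {n_specs} null comparisons came out 'significant'")
# Roughly 5 will by chance alone, which is plenty if you only report the hits.
```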