Monday, November 30, 2015

Explaining the replication crisis to undergraduates

If you are unaware, Noba Project is a collaboration of many, many psychology instructors who create and make freely available textbooks as well as stand-alone chapters (modules) that cover a wide variety of psychology topics. You can build a personalized textbook AND access test banks/PowerPoints for the materials offered.

Well, one of the new modules covers the replication crisis in psychology. I think it is a thorough treatment of the issue and appropriate for undergraduates.

Monday, November 23, 2015

Football Freakonomics

EDIT: All of this content appears to have been removed. If anyone has any luck finding it, please email me.

The NFL and the statistics folks over at Freakonomics got together and made some...learning modules? Let's call them learning modules. They are interactive websites that teach users about very specific questions related to football (like home field advantage, instances when football player statistics don't tell the whole story about a player/team, whether or not firing a head coach improves a failing team, the effects of player injury on team success, etc.) and then answer these questions via statistics.

Most of the modules include interactive tables, data, and videos (featuring the authors of Freakonomics) in order to delve into the issue at hand.

For example:

The Home Field Advantage: This module features a video, as well as an interesting interactive map illustrating the exact amount of sleep lost by teams that travel from one coast to the other, along with which teams have to travel the most during a season. This module uses data to demonstrate that home-field advantage does exist, but it also describes the factors that may cause home-field advantage (player sleep quality, stadium noise, etc.). This opens up a discussion of multivariate statistics and covariates.

How to use in your class: Engaging examples of real life questions that can be answered via data collection and analysis.

Monday, November 16, 2015

Neighmond's "Why is mammogram advice still such a tangle? Ask your doctor."

This news story discusses medical advice regarding dates for recommended annual mammograms for women.

Of particular interest for readers of this blog: Recommendations for regular mammograms are moving later and later in life because of the very high false-positive rate associated with mammograms and the subsequent breast tissue biopsies. However, women who have a higher probability of breast cancer (think genetics) are still being advised to have their mammograms earlier in life. Part of the reason these changes are being made is that previous recommendations (start mammograms at 40) were based on data that were 30-40 years old (efficacy studies/replication are good things!). Also, I generally love counter-intuitive research findings: I think they make a strong argument for why research and data analysis are so very important.
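If you want to show students exactly why a low base rate leads to so many false positives, a quick Bayes' theorem calculation does the trick. Here's a sketch in Python; the prevalence, sensitivity, and false-positive numbers below are illustrative assumptions for class, not actual mammography figures:

```python
# Bayes' theorem: probability of disease given a positive screening result.
# All numbers below are invented for illustration, not real mammography data.
prevalence = 0.005        # 0.5% of the screened population has the disease
sensitivity = 0.90        # P(positive test | disease)
false_pos_rate = 0.10     # P(positive test | no disease)

# Total probability of a positive result (true positives + false positives)
p_positive = prevalence * sensitivity + (1 - prevalence) * false_pos_rate

# Posterior probability: of everyone who screens positive, how many are sick?
p_disease_given_positive = prevalence * sensitivity / p_positive

print(f"P(disease | positive screen) = {p_disease_given_positive:.1%}")
```

Even with a seemingly accurate test, the posterior comes out under 5%: the vast majority of positive screens in this toy example are false alarms, which is the counter-intuitive point students need to see worked out.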

I have blogged about this topic before. This piece by Christie Aschwanden contains some nice graphs and charts demonstrating that enthusiastic preventative care to detect cancer (including breast cancer) isn't necessarily saving any lives. Another piece that touches on this topic is Sharon Begley's extensive article about how more medicine isn't necessarily better medicine. I use this article as a discussion prompt in my online class, which is aimed at adult, working students who are RNs earning their BSNs, and they always have some great responses to it (generally, they view it favorably).

Thursday, November 12, 2015

Come work with me.


I wanted to post a blog about a job opportunity that is available in my department here at Gannon University. Currently, we are seeking a tenure-track assistant professor who specializes in clinical or counseling psychology and would be interested in teaching theories of personality, psychological assessment, and other specialty undergraduate courses.

Gannon is a true undergraduate institution. We teach a 4/4 course load, typically with two and sometimes three unique teaching preps.

I started at Gannon in 2009. In that time, I've received thousands of dollars in internal grant funding to pursue my work in the scholarship of teaching. In addition to supporting the scholarship of teaching, Gannon provides internal support so that faculty can create global education opportunities as well as service learning opportunities for our students. For instance, one of my colleagues is currently writing a proposal for a History of Psychology class that would include an educational trip to Europe. Another colleague will be teaching his Psychology of Poverty class for the first time in the spring. This class requires 30 service learning hours spent at local not-for-profits that serve the poor in our community.

I've also been able to pursue more traditional research opportunities, and the expectations for such research are in line with a university that focuses so much on undergraduate education.

I would say that I am happy here, have a very good work/life balance (I am married with a toddler and another baby on the way), and am fairly compensated for my work. I really like the department I work in, and Gannon provides many leadership and committee opportunities that further enhance my work life.

Erie, PA, is either a small large town or a large small town (~100,000 people). It has most of the amenities that you could want (cool microbrewery scene, fun downtown, lots of outdoorsy fun as we're right on Lake Erie, mall, zoo) and the cost of living is very low. We're also ~two hours away from Pittsburgh, Buffalo, and Cleveland, if you feel like getting out of town.

If you are interested in learning more about the position, click here.

Monday, November 9, 2015

Smith's "Rutgers survey underscores challenges collecting sexual assault data."

Tovia Smith filed a report with NPR detailing the psychometric delicacies of trying to measure sexual assault rates on a college campus. I think this story is highly relevant to college students. I also think it provides a good example of the challenge of operationalizing variables as well as of self-selection bias.

This story describes sexual assault data collected at two different universities, Rutgers and U. Kentucky. The universities used different surveys, had very different participation rates, and had very different findings (20% of Rutgers students met the criteria for sexual assault, while only 5% of Kentucky students did).

Why the big differences?

1) At Rutgers, students were paid for their participation and 30% of all students completed the survey. At U. Kentucky, student participation was mandatory and no compensation was given. The sampling techniques were very different, which opens the floor to student discussion about what this might mean for the results. Who might be drawn to complete a sexual assault survey? Who is enticed by completing a survey for compensation? How might mandatory survey completion affect college students' attitudes towards a survey and their likelihood to take it seriously? Is it ethical to make a survey about something as private as sexual assault mandatory? Is it ethical to make any survey mandatory?
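The self-selection point can be made concrete with a tiny back-of-the-envelope calculation. Here's a toy sketch in Python; every number in it is invented purely for illustration (it is not based on the Rutgers or Kentucky data), but it shows how differential response rates alone can inflate a survey estimate well above the true population rate:

```python
# Toy illustration of self-selection bias (all numbers invented).
# Suppose people who experienced an event are more motivated to respond
# to a voluntary survey about that event than people who did not.
true_rate = 0.10              # true proportion of the population affected
respond_if_affected = 0.50    # response rate among those affected
respond_if_not = 0.25         # response rate among those not affected

# Share of the whole population that ends up in the respondent pool, by group
responders_affected = true_rate * respond_if_affected
responders_not = (1 - true_rate) * respond_if_not

# The survey's estimate is the affected share *among respondents*
survey_estimate = responders_affected / (responders_affected + responders_not)

print(f"true rate: {true_rate:.0%}, survey estimate: {survey_estimate:.1%}")
```

With these made-up response rates, a true rate of 10% comes back from the survey as roughly 18%, nearly double, without anyone misreporting anything. Students can swap in their own guesses for the response rates and watch the estimate move.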

2) Rutgers used a broader definition of sexual assault. For instance, one criterion for sexual assault was having a romantic partner threaten to break up with you if you didn't have sex with them. Jerk move? Absolutely. But should this boorish behavior be lumped into the same category as rape? Again, this brings up room for class discussion about how such definitions may have influenced the research findings. How can we objectively, sensitively define sexual assault?

Here is an additional news story on the survey out of University of Kentucky. Here is more information about Rutgers' survey (you can take a look at the actual survey on p. 44 of this document).

Monday, November 2, 2015

Barry-Jester, Casselman, & Goldstein's "Should prison sentences be based on crimes that haven't been committed yet?"

This article describes how the Pennsylvania Department of Corrections is using risk assessment data to predict recidivism, with the hope of using such predictions to guide parole decisions in the future.

So, using data to predict the future is very statsy, demonstrates multivariate modeling, and makes a good example for class, full stop. However, this article also contains a cool interactive tool, entitled "Who Should Get Parole?", that you could use in class. It demonstrates how shifting the decision threshold changes the likelihood of committing Type I and Type II errors (alpha and beta).

The tool allows users to manipulate the amount of risk they are willing to accept when making parole decisions. As you change the working definition of a "low" or "high" risk prisoner, a visualization will start up, and it shows you whether your parolees stay out of prison or come back.

From a statistical perspective, users can adjust the definitions of low, medium, and high risk prisoners and then see 1) how many people are paroled and reoffend (Type II error: false negative) versus 2) how many people are denied parole but wouldn't have reoffended (Type I error: false positive). When you adjust the risk level (in Column 2 of the tool) and then see your outcomes (in Column 3), it really does illustrate the balance between power and confidence.

Here, I have set the sliding scale so that there is a broad range for designating a prisoner as "Medium Risk". As such, 23% of paroled prisoners land back in jail, and 17% of unparoled prisoners sit in jail even though they wouldn't have re-offended. As we parole more prisoners, we increase the possibility of false negatives (here, folks we parole who re-offend) but have fewer false positives (folks denied parole who wouldn't have re-offended).
Meanwhile, if you have very stringent standards, you will have fewer false negatives (only 10% of those paroled will re-offend), but then you have a lot more false positives (people denied parole who wouldn't have re-offended).
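If you'd like students to play with this trade-off outside the tool itself, the same idea is easy to simulate. Here's a toy sketch in Python with entirely invented risk scores (nothing here comes from the actual Pennsylvania data), using the same framing as above: "positive" means flagged as risky and denied parole, so a paroled prisoner who reoffends is a false negative and a denied prisoner who wouldn't have reoffended is a false positive:

```python
import random

# Toy simulation of a parole threshold (all numbers invented).
# Each prisoner gets a risk score in [0, 1); for simplicity we assume
# the score equals the true probability of reoffending.
random.seed(0)

prisoners = []
for _ in range(10_000):
    score = random.random()                 # assessed risk score
    reoffends = random.random() < score     # higher score -> more likely to reoffend
    prisoners.append((score, reoffends))

def error_rates(threshold):
    """Deny parole at or above `threshold`; return (false_pos, false_neg) rates."""
    denied = [(s, r) for s, r in prisoners if s >= threshold]
    paroled = [(s, r) for s, r in prisoners if s < threshold]
    false_pos = sum(1 for _, r in denied if not r) / len(prisoners)
    false_neg = sum(1 for _, r in paroled if r) / len(prisoners)
    return false_pos, false_neg

for threshold in (0.3, 0.5, 0.7):
    fp, fn = error_rates(threshold)
    print(f"threshold {threshold:.1f}: false positives {fp:.1%}, false negatives {fn:.1%}")
```

Sweeping the threshold makes the see-saw visible: a strict (low) threshold denies parole to lots of people who would never have reoffended, while a lenient (high) threshold paroles more people who come back. There is no threshold that drives both error rates to zero, which is the whole point of the FiveThirtyEight tool.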