Posts

Dvorsky's "Lab Mice Are Freezing Their Asses Off—and That’s Screwing Up Science"

This example can be used to explain why the smallest of details can be so important when conducting research. This piece by Dvorsky summarizes a recently published article that points out a (possible!) major flaw in pre-clinical cancer research using mice. Namely, lab mice aren't being kept at an ideal mouse temperature. This leads to the mice behaving differently than normal to stay warm: They eat more, they burrow more, and their metabolism changes. The researchers go on to explain that there are plenty of other seemingly innocuous factors that can vary from lab to lab, like bedding, food, exposure to light, etc., and that these factors may also affect research findings. Why is this so important? Psychology isn't the only field dealing with a replicability crisis: Mouse researchers are also experiencing difficulties, difficulties that may be the result of all of these seemingly tiny differences in the lab mice used during pre-clinical research. I thin...

Weinberg's "How One Study Produced a Bunch of Untrue Headlines About Tattoos Strengthening Your Immune System"

In my Honors Statistics course, we have discussion days over the course of the semester. One of the discussion topics involves instances when the media has skewed research results (for another example, see this story about fitness trackers). Jezebel writer Caroline Weinberg describes a modest study that found that people who have at least one previous tattoo experience a boost in their immunity when they get subsequent tattoos, as demonstrated via saliva samples of Immunoglobulin A. This is attributed to the fact that, compared to tattoo newbies, tattoo veterans don't experience a cortisol reaction following the tattoo. Small sample size, but a pretty big effect. So, as expected, the media exaggerated these effects...but mostly because the researcher's university's marketing department did so first. Various news outlets stated things like "Sorry, Mom: Getting lots of tattoos could have surprising health benefits" and "Getting multip...

Bichell's "A Fix For Gender-Bias In Animal Research Could Help Humans"

This news story demonstrates both that research methods are federally monitored and that best practices can change over time. For a long time, women were not used in large-scale pharmaceutical trials. Why omit women? Researchers didn't want to accidentally expose pregnant women to new drugs, and they feared that fluctuations in female hormones over the course of a month would affect research results. Which always makes me think of this scene from Anchorman. But I digress. This has since been corrected, and female participants are now included in clinical trials. But many of the animal trials that occur prior to human trials still use mostly male animals. And, again, policies have changed to correct for this. This NPR story details the whole history of this sex bias in research. Part of why this bias has been so detrimental to women is that women report more side effects to drugs than men do. So, by catching such sex differences earlier with animal models, the...

Shapiro's "New Study Links Widening Income Gap With Life Expectancy"

This story is pretty easy to follow: Life expectancy varies by income level. The story becomes a good example for a statistics class because, in the interview, the researcher describes a multivariate model, one in which multiple independent variables (drug use, medical insurance, smoking, income, etc.) could be used to explain the disparity that exists in lifespan between people with different incomes. As such, this story could be used as an example of multivariate regression. And the Third Variable Problem. And why correlation isn't enough. In particular, this part of the interview (between interviewer Ari Shapiro and researcher Gary Burtless) refers to the underlying data, the Third Variable Problem, and the amount of variability that can be assigned to the independent variables he lists. SHAPIRO: Why is this gap growing so quickly between life expectancy of rich and poor people? BURTLESS: We don't know. More affluent Americans tend to engage...
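If you want to make the third-variable point concrete in class, here's a minimal simulation sketch (all numbers invented for illustration; assumes NumPy is available). A hypothetical "smoking" variable drives both income and lifespan, so income correlates with lifespan on its own, but its coefficient shrinks toward zero once smoking enters the multivariate model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Invented data: smoking (the third variable) lowers both income and lifespan.
smoking = rng.binomial(1, 0.5, n).astype(float)
income = 50 - 15 * smoking + rng.normal(0, 5, n)   # in $1,000s
lifespan = 80 - 8 * smoking + rng.normal(0, 3, n)  # income has NO direct effect here

# Bivariate correlation: income appears to "predict" lifespan...
r = np.corrcoef(income, lifespan)[0, 1]

# ...but a multivariate model that also includes smoking shows the
# income coefficient shrinking toward zero.
X = np.column_stack([np.ones(n), income, smoking])
coefs, *_ = np.linalg.lstsq(X, lifespan, rcond=None)

print(round(r, 2))         # clearly positive correlation
print(round(coefs[1], 2))  # income coefficient near zero once smoking is controlled
```

Students can see the punchline in two lines of output: the zero-order correlation is substantial, yet income explains essentially nothing once the confound is in the model.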

Pew Research Center's "The strong relationship between per capita income and internet access, smartphone ownership"

This finding is super-duper intuitive: A strong, positive correlation exists between national per capita income and rates of internet access and smartphone ownership within that nation. Because it is intuitive, it makes a good example for your class when you teach correlation to your baby statisticians. This graph is more engaging than your average graph because the good people at Pew made it interactive. You can see which country is represented by which dot, and you can also see regional trends, as the countries are color-coded by continent/region. For more context and information on this survey, see this more extensive report on the relationship between smartphone/internet access and economic advancement. This report further breaks down technology usage by education level, age, individual income, etc. This data is also useful for demonstrating the distribution of wealth in the world and the variability that exists among countries in the same region/on the same continent.

Shameless Self Promotion

Check out my recent publication in Teaching of Psychology. Whomp, whomp!

Kennedy's "'Everybody Stretches' Without Gravity: Mark Kelly Talks About NASA's Twins Study"

In addition to being an astronaut, Scott Kelly is one half of a pair of twins and a lab rat for NASA researchers studying space travel's effects on the human body. This NPR story details how NASA has been using twin research to learn more about the side effects of prolonged time in space as the agency prepares to go to Mars. Scott and his twin, Mark (also an astronaut!), have been providing all manner of biodata to researchers. In particular, researchers are interested in the effects of weightlessness and exposure to space radiation on aging. This story provides a good example in class, as you can discuss twin AND longitudinal research. I think you could also use this example to introduce the concept of paired t-tests. UPDATE 2/9/2017: Preliminary research is available if you want to flesh out this example. MOAR UPDATES 3/3/21: CHECK OUT this PBS documentary featuring the twins! ESPECIALLY useful for a brief class period: This 2-minute clip that describes the twin ...
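For the paired t-test tie-in, here's a minimal sketch using made-up biomarker numbers (illustrative only, not NASA's data). The key teaching point is that a paired test works on the per-occasion differences between the twins, not on the two raw sets of scores.

```python
from statistics import mean, stdev
from math import sqrt

# Invented biomarker readings: one value per measurement occasion
# for the flight twin and the ground twin.
flight = [4.1, 3.8, 4.5, 4.9, 5.2, 4.7]
ground = [3.9, 3.7, 4.0, 4.1, 4.2, 4.0]

# Paired t-test: analyze the per-occasion differences.
diffs = [f - g for f, g in zip(flight, ground)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))  # df = n - 1

print(round(t, 2))
```

You can then hand students the t statistic and a t-table (df = 5 here) and let them decide whether the twins differ, which makes the pairing logic very concrete.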

Granqvist's "Why Science Needs to Publish Negative Results"

This link is worth it for these pictures alone: I know, right? Perfect for teaching research methods and explaining the positivity bias in publication. These figures also sum up the reasoning behind the new journal described in this article. New Negatives in Plant Science was founded in order to combat the file drawer problem. It publishes non-significant research. It is open access. It publishes commentaries. It even plans special issues for specific controversial topics within Plant Science, which absolutely, positively are NOT my jam. However, the creators of this journal hope that it will serve as a model for other fields. Given the recent flare-up in the Replication Crisis (now Replication War?), this new journal provides a model for ongoing, peer-reviewed replication and debate. I think this journal (or the idea behind it) could be used in a research methods class as a discussion piece. Specifically, how else could we reduce the file dra...

Science Friday's "Spot the real hypothesis"

Annie Minoff delves into the sins of ad hoc hypotheses using several examples from evolutionary science (including evolutionary psychology). I think this is a fun way to introduce this issue in science and explain WHY a hypothesis is important for good research. This article provides three ways of conveying that ad hoc hypotheses are bad science: 1) This video of a speaker lecturing about the absurd logic behind ad hoc testing (here, evolutionary explanations for the mid-life "spare tire" that many men struggle with). NOTE: This video is from an annual event at MIT, BAHFest (Bad Ad Hoc Fest), if you want more bad ad hoc hypotheses to share with students. 2) A quiz in which you need to guess which of the ad hoc explanations for an evolutionary finding is the real explanation. 3) A more serious reading to accompany this video is Kerr's "HARKing: Hypothesizing After the Results are Known" (1998), a comprehensive takedown of this practice.

Why range is a lousy measure of variability
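A quick way to show the point in the title: two data sets with identical ranges can have very different spreads, because the range looks at only the two most extreme scores and ignores everything in between. A minimal sketch with invented exam scores:

```python
from statistics import stdev

# Two invented classes with identical ranges but very different spreads.
clustered = [10, 50, 50, 50, 50, 50, 50, 50, 90]
spread_out = [10, 20, 30, 40, 50, 60, 70, 80, 90]

range_a = max(clustered) - min(clustered)
range_b = max(spread_out) - min(spread_out)

print(range_a, range_b)  # identical ranges: 80 and 80
print(round(stdev(clustered), 1), round(stdev(spread_out), 1))  # SDs differ
```

The range says these two classes are equally variable; the standard deviation, which uses every score, correctly reports that they are not.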

Climate change deniers misrepresent data and get called out

Here is another example of how data visualizations can be accurate AND misleading. I Fucking Love Science broke down a brief Twitter war that started after National Review tweeted the following post in order to argue that global climate change isn't a thing. Note: The y-axis ranged from -10 to 110 degrees Fahrenheit. True, such a temperature range is experienced on planet Earth, but using such an axis distracts from the slow, scary march that is global climate change and doesn't do a very good job of illustrating how discrete changes in temperature map onto increased use of fossil fuels in the increasingly industrialized world. The Twitter-verse responded thusly:

Totilo's "Antonin Scalia's landmark defense of violent video games"

A great example using a topic relevant to your students (video games), involving developmental psychology (the effect of violent media on children), and a modern event (Scalia's passing) to demonstrate the importance of both research psychology and statistics. This article extensively quotes Scalia's majority opinion in Brown v. Entertainment Merchants Association, the 2011 U.S. Supreme Court case that decided against California's attempt to regulate the sale of violent video games to minors (the full opinion is embedded in the article). Why did Scalia decide against regulating violent video games in the same manner that the government regulates alcohol and cigarette sales? In part, because research and statistics. Of particular use to an instructor of statistics are the sections in which Scalia cites shaky psychological research and argues that correlational research cannot be used to make causal arguments... ...Scalia also discusses effect sizes... ...

Stromberg and Caswell's "Why the Myers-Briggs test is totally meaningless"

Oh, Myers-Briggs Type Indicator, you unkillable scamp. This video, from Vox, gives a concise historical perspective on the scale, describes how popular it still is, and summarizes several of the arguments against it. The video explains why the ol' MBTI is not particularly useful. Good for debunking psychology myths and good for explaining reliability (in particular, test-retest reliability) and validity. I like this link in particular because it presents its argument via both video and a smartly formatted website. The text on the website includes links to actual peer-reviewed research articles that refute the MBTI.
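If you want students to see test-retest reliability as a single number, here's a tiny sketch with invented scores (not actual MBTI data): reliability across administrations is just the correlation between time 1 and time 2 for the same people.

```python
from math import sqrt

# Invented scores for the same ten people taking the same
# personality scale five weeks apart.
time1 = [12, 25, 33, 41, 47, 52, 60, 68, 74, 88]
time2 = [30, 20, 55, 35, 70, 40, 45, 80, 50, 65]

def pearson_r(xs, ys):
    """Pearson correlation; for repeated testing, this IS test-retest reliability."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson_r(time1, time2)
print(round(r, 2))  # well below the ~.80 often cited as a rule of thumb
```

A natural discussion follow-up: if a "type" flips for a sizable share of retakers after a few weeks, what does that imply about using the instrument for hiring or career decisions?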