
Posts

Showing posts with the label research methods

Not a particularly statsy example, but still delightful.

I mean. This is the most entertaining research methodology I have ever seen. So, this is barely a statsy example, but it does include data outcomes: n = 175, with some snakes striking the boot (n = 6) and some coiling (n = 3). While PIs might try, no IRB would let you get away with asking your graduate student to step on snakes. Mostly, this is funny. I found his research, too. While I think the fake leg is highly amusing, I think it is great that Morris is a passionate advocate for snake education and for teaching people to be tolerant of snakes they find in the wild. Finally, I heard about this research in an NPR story about snake-handling classes (taught by Morris) in Arizona. A WHOLE CLASS.

Bella the Waitress: A fun hypothesis testing example.

Waitress Bella is on TikTok. She shares her beach looks and hauls, like plenty of other influencers. Recently, though, she shared a series of TikToks that have a home in our statistics and research methods classes. Bella had a hypothesis: she suspected that certain hairstyles influenced her customers to tip her more. So Bella tested her hypothesis over a series of within-subject, n = 1 experiments at work (Bella, 2022a, Bella, 2022b, Bella, 2022c). This isn't a pre-registered paper with open data, but I think this could be a good discussion piece in a research methods or statistics class. I swear that Kate isn't my burner account. If you really, really wanted to test this hypothesis properly, what would that research look like? (See the sketch after these questions.)
1) What external factors influence tips (day of the week, time of day, etc.)?
2) What factors influence reactions to waitstaff (gender, attractiveness, alcohol)?
3) Would you use a within or between research design to study this (different waitstaff wit...
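If your class wants to take question 3 seriously, here is a minimal analysis sketch with entirely made-up numbers (these are not Bella's data, and the hairstyle labels are hypothetical): treat each shift as one observation and compare average tip percentages across two hairstyles.

```python
# Made-up tip percentages, one value per shift -- illustrative only.
from scipy.stats import ttest_ind

pigtails  = [22.1, 25.0, 23.4, 26.2, 24.8]   # hypothetical hairstyle A shifts
hair_down = [18.9, 20.5, 19.7, 21.0, 18.4]   # hypothetical hairstyle B shifts

t, p = ttest_ind(pigtails, hair_down)
print(f"t = {t:.2f}, p = {p:.4f}")  # small p: mean tips differ by hairstyle
```

A good discussion point: this simple two-sample test ignores the confounds in questions 1 and 2 (day of the week, customer mix), which is exactly why students should argue about the design before the analysis.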

Data collection via wearable technology

This article from The Economist, "Data from wearable devices are changing disease surveillance and medical research," has a home in your stats or RM class. It describes how FitBits and Apple Watches can be used to collect baseline medical data for health research. I like it because it is very accessible but still goes into detail about specific research issues related to this kind of data:
-How does one operationalize the outcome variable? Pulse, temperature, etc., serve as proxies for underlying problems. Changes in heart rate have predicted the onset of COVID and the flu.
-Big samples be good! One of the reasons this data works like it does is that it is harvested from a massive number of people using these devices.
-The article gives examples of well-designed experiments that use wearable technology. However, often with massive data collection via tech, the data drives the hypothesis, not the other way around. In our psychology classes, we discuss NHST and the proper w...

Pew Research compares forced-choice versus check-all response options.

This is for my psychometric instructors. The (glorious, beloved) Pew Research Center compared participant behavior when the same question is asked in either a) forced-choice or b) check-all format. Here are the links to the short report and to the long report. What did they find? Response options matter, such that more participants agreed with statements when they were in the forced-choice format. This is interesting for an RM class. I also like that the short report explains the two different kinds of question responses. The article also explores a variety of reasons for these findings, as well as other biases that participants exhibit when responding to questionnaires.
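If you want students to see how such a format difference could be tested, here is a minimal sketch with invented counts (not Pew's actual numbers): a chi-square test comparing agreement rates across the two formats.

```python
# Invented counts for illustration -- not Pew's data.
# Did the share of "agree" responses differ between question formats?
from scipy.stats import chi2_contingency

# rows: format; columns: [agreed, did not agree]
table = [[620, 380],   # forced-choice (hypothetical)
         [450, 550]]   # check-all (hypothetical)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```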

Health and Human Services videos: Explaining research to participants

The U.S. Department of Health and Human Services produced a bunch of great videos to explain topics related to human subjects research. The videos were created as part of a broader outreach effort aimed at explaining the research process to research participants. I think they would fit right into a Research Methods course. Topics include IRBs and research design, and there is also a specific video explaining social science research. All of the videos (along with handouts) are available here. All videos have closed captions as well as Spanish versions.

The Pudding's Colorism

Malaika Handa, Amber Thomas, and Jan Diehn created a beautiful, interactive website, Colorism in High Fashion. It uses machine learning to investigate "colorism" at Vogue magazine. Specifically, it delves into the differences, over time, in cover model skin color, but also into how lighting and photoshopping can change the color of the same woman's skin, depending on the photo. There are soooo many ways to use this in class, ranging from machine learning, to how machine learning can refine old psychology methodology, to variability and within/between-group differences. Read on: 1. I'm a social psychologist. Most of us who teach social psychology have encountered research that uses magazine cover models as a proxy for what our culture emphasizes and values (1, 2, 3). Here, Malaika Handa, Amber Thomas, and Jan Diehn apply this methodology to Vogue magazine covers. And they take this methodology into the age of machine learning by using k-means clustering and pixels to deter...
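For the curious, here is a minimal sketch of the general k-means-on-pixels idea (this is not The Pudding's actual pipeline, and the filename is hypothetical): cluster an image's pixels and report its dominant colors.

```python
# Sketch of the general technique, not The Pudding's actual pipeline.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

image = np.asarray(Image.open("cover.jpg").convert("RGB"))  # hypothetical file
pixels = image.reshape(-1, 3)  # one row per pixel: [R, G, B]

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(pixels)
counts = np.bincount(kmeans.labels_)

# Cluster centers approximate the image's dominant colors.
for i in np.argsort(counts)[::-1]:
    r, g, b = kmeans.cluster_centers_[i].astype(int)
    print(f"RGB({r}, {g}, {b}): {counts[i] / len(pixels):.1%} of pixels")
```

The site's analysis is more involved (it measures the cover model's skin specifically), but this is enough for students to see how "cover model color" can be operationalized from raw pixels.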

The Evolution of Pew Research Center’s Survey Questions About the Origins and Development of Life on Earth

Question wording matters, friends! This example shows how question order and question wording can affect participant responses. This is a good example for all of your research methods and psychometrics students to chew on. Pew Research asked people if they believed in evolution. They did so in three different ways, which led to three different response patterns.
1) Prior to asking about evolution, they asked whether or not the participant believes in God.
2) They asked participants if they believed in evolution. If they said "yes," they then asked whether or not the participant believes that a higher power guides evolution.
3) They asked participants if they believed in evolution and gave three response options:
    a) Don't believe in evolution.
    b) Believe in evolution due to natural selection.
    c) Believe in evolution guided by a higher power.
(The report charts the responses under each question wording.) Oh, the classroom discus...

Nextdoor.com's polls: A lesson in psychometrics, crankiness

If you are unaware, Nextdoor.com is a social network that brings together total strangers because they live in the same neighborhood. And it validates your identity and your address, so even though you don't really know these people, you know where they live and what their names are, and maybe even what they look like, as users have the option to upload a photo. Needless to say, it is a train wreck. Sure, people do give away free stuff, seek out recommendations for home improvements, etc. But it is mostly complaining or non-computer-savvy people using computers. One of the things you can do is create a poll. Or, more often, totally screw up a poll. Here are some of my favorites. In the captions, I have given some ideas of how they could be used as examples in RM or psychometrics. (Captions from the poll screenshots:) This is actually a pretty good scale. A lesson in human factors/ease of user interface use? Response options are lacking and open to interpretation. Sometimes, you don't need a poll at a...

Pew Research's Quiz: How well can you tell factual from opinion statements?

Pew Research created a survey that asks participants to identify news statements as opinions or facts. They had 5,000+ people complete this survey, AND you can complete the survey and see your results. The quiz page includes a description of the quiz AND the research methodology! (One example question from the survey made me think of Ron Swanson.) How to use in Stats/RM:
1. A good way of introducing the truism "The plural of anecdote isn't data." Facts and opinions aren't always the same thing, and distinguishing between the two is key to scientific thinking. Ask your students to think of objective data that could prove or disprove these statements. Get them thinking like researchers, developing hypotheses AND operationalizing those hypotheses.
2. At the end of the quiz, they describe your score in terms of percentiles. Specifically, in terms of the percentages of users who scored above and below you on the quiz items. (A small sketch of that calculation follows below.)
3. You can also access Pew's report of their survey f...
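Here is a minimal sketch of how a percentile like Pew's could be computed, with made-up quiz scores (not Pew's data):

```python
# Made-up scores for illustration -- not Pew's actual data.
from scipy.stats import percentileofscore

all_scores = [3, 4, 4, 5, 5, 5, 6, 7, 8, 8, 9, 10]  # hypothetical quiz scores
my_score = 8

# kind="strict": percentage of respondents who scored below you
pct_below = percentileofscore(all_scores, my_score, kind="strict")
print(f"You scored higher than {pct_below:.0f}% of respondents.")
```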

Stein's "Troubling History In Medical Research Still Fresh For Black Americans"

NPR, as part of their series about discrimination in America, talked about how difficult it is to obtain a diverse research sample when your diverse research sample doesn't trust scientists. This story by Rob Stein is about public outreach attempts to gather a representative sample for a large-scale genetic research study. The story is also about how historical occurrences of research violations live on in the memory of the affected communities. The National Institutes of Health is trying to collect a robust, diverse sample of Americans as part of the All of Us initiative. NIH wants to build a giant, representative database of Americans and information about their health and genetics. As of the air date for this story, African Americans were underrepresented in the sample, and the reason behind this is historical. Due to terrible violations of African American research participants' rights (Tuskegee, Henrietta Lacks), many African Americans are unwilling to partic...

Compound Interest's "A Rough Guide to Spotting Bad Science"

I love good graphic design and lists. This guide to spotting bad science embraces both. And many of the signs of bad science are statistical in nature or involve sketchy methods. Honestly, this could easily be turned into a homework assignment for research evaluation. This comes from Compound Interest (@compoundchem), which has all sorts of beautiful visualizations of chemistry topics, if that is your jam.

Raff's "How to read and understand a scientific paper: a guide for non-scientists"

Jennifer Raff is a geneticist, professor, and enthusiastic blogger. She created a helpful guide for how non-scientists (like our students) can best approach and make sense of research articles. The original article is very detailed and explains how to make sense of expert claims. Personally, I appreciate that this guide was born out of her attempts to debate research with non-scientists. She wants everyone to benefit from science and make informed decisions based on research. I think that is great. I think this would be an excellent way to introduce your undergraduates to research articles in the classroom. I especially appreciated her summary of her steps (see below). This could be turned into a worksheet with ease. Note: I still think your students should chew on the full article before they are ready to answer these eleven questions. http://blogs.lse.ac.uk/impactofsocialsciences/2016/05/09/how-to-read-and-understand-a-scientific-paper-a-guide-for-non-scientists/#author ...

NY Magazine's "Finally, Here’s the Truth About Double Dipping"

New York Magazine's The Science of Us made a brief, funny video that investigates the long-running issue of the dangers of double dipping. It is based on a Scientific American report of an actual published research article about double dipping. Yes, it includes the Seinfeld clip about George double dipping. The video provides a brief example of how to test a research hypothesis by operationalizing the hypothesis, then collecting and analyzing data. Here, the abstract question is how dirty it is to double dip, and they operationalized this question. Research design: The researchers used a design that, conceptually, demonstrates ANOVA logic (the original article contains an ANOVA; the video itself makes no mention of ANOVA). The factor is "Dips," and there are three levels of the factor. Before they double dipped, they took a baseline bacterial reading of each dip. Good science, that. They display the findings in table form (aga...
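To make the ANOVA logic concrete for students, here is a minimal sketch with invented bacterial counts for three hypothetical dip types (these are not the study's data):

```python
# Invented colony counts -- illustrative only, not the study's data.
# One-way ANOVA: factor "Dips," three levels, outcome = bacterial count.
from scipy.stats import f_oneway

dip_a = [1000, 1200, 900, 1100]   # hypothetical level 1
dip_b = [150, 200, 175, 160]      # hypothetical level 2
dip_c = [200, 250, 210, 230]      # hypothetical level 3

F, p = f_oneway(dip_a, dip_b, dip_c)
print(f"F = {F:.2f}, p = {p:.4f}")  # small p: at least one dip mean differs
```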

Turner's "E Is For Empathy: Sesame Workshop Takes A Crack At Kindness" and the K is for Kindness survey.

This NPR story is about a survey conducted by the folks at Sesame Street. That survey asked parents and teachers about kindness: whether kids are kind, whether the world is kind, how they define kindness, etc. The NPR story is a roundabout way of explaining how we operationalize variables, especially in psychology. And the survey itself provides examples of forced-choice research questions and dichotomous responses that could have been Likert-type scales. The NPR story: The Children's Television Workshop, the folks behind Sesame Street, have employees in charge of research and evaluation (a chance to plug off-the-beaten-path stats jobs to your students). And they conducted a survey to figure out what it means to be kind when you are a kid. They surveyed parents and teachers to do so. The main findings are summarized here. Parents and teachers are worried that the world isn't kind and doesn't emphasize kindness. But both groups think that kindness is more important than academic a...

Dvorsky's "Lab Mice Are Freezing Their Asses Off—and That’s Screwing Up Science"

This example can be used to explain why the smallest of details can be so important when conducting research. This piece by Dvorsky summarizes a recently published article that points out a (possible!) major flaw in pre-clinical cancer research using mice: lab mice aren't being kept at an ideal mouse temperature. This leads to the mice behaving differently than normal to stay warm: They eat more, they burrow more, and their metabolism changes. The researchers go on to explain that there are plenty of other seemingly innocuous factors that can vary from lab to lab, like bedding, food, and exposure to light, and that these factors may also affect research findings. Why is this so important? Psychology isn't the only field dealing with a replicability crisis: Animal researchers are also experiencing difficulties. Difficulties that may be the result of all of these seemingly tiny differences in the lab animals used during pre-clinical research. I thin...

Esther Inglis-Arkell's "I Had My Brain Monitored While Looking at Gory Pictures. For Science!"

The writer helped out a PhD candidate by participating in his research, and then described the research process for io9.com readers. I like this because it describes the research process purely from the perspective of a research participant who doesn't know what the exact hypothesis is. This could be useful for explaining what research participation is like to introductory students. You could use it in a methods class by asking students to figure out why the researchers used the procedures they did, and to identify the procedures and scales she describes in her narrative. She describes the informed consent, a personality scale (what do you think the personality scale was trying to assess?), and rating stimuli in two ways (brain scan as well as paper-and-pencil assessment...why do you think they needed both?). Details to like:
-She is participating in psychology research (neuro work that may benefit those with PTSD someday).
-She describes what is entailed when wearing an elect...

McFadden's "Frances Oldham Kelsey, F.D.A. Stickler Who Saved U.S. Babies From Thalidomide, Dies at 101"

This obituary for Dr. Frances Oldham Kelsey tells an important story about research ethics, the pharmaceutical industry, and the importance of government oversight in the drug creation process (.pdf here). (Pictured in the original: Dr. Kelsey receiving the President's Award for Distinguished Federal Civilian Service, the highest honor given to federal employees.) Dr. Kelsey was one of the first officials in the United States to notice (via data!) and raise concerns about thalidomide, the now infamous anti-nausea drug that causes terrible birth defects when administered to pregnant women. The drug was already being widely used throughout Europe, Canada, and the Middle East to treat morning sickness, but Dr. Kelsey refused to approve the drug for widespread use in the US (despite persistent efforts by Big Pharma to push the drug into the US market). Time proved Dr. Oldham Kelsey correct (clinical trials in the US went very poorly), and her persistence, data analysis, and ethics helped to limit the ...

One article (Kramer, Guillory, & Hancock, 2014), three stats/research methodology lessons

The original idea for using this article this way comes from Dr. Susan Nolan's presentation at NITOP 2015, entitled "Thinking Like a Scientist: Critical Thinking in Introductory Psychology." I think that Dr. Nolan's idea is worth sharing, and I'll reflect a bit on how I've used this resource in the classroom. (For more good ideas from Dr. Nolan, check out her books: Psychology, Statistics for the Behavioral Sciences, and The Horse That Won't Go Away, about critical thinking.) Last summer, the Proceedings of the National Academy of Sciences published an article entitled "Experimental evidence of massive-scale emotional contagion through social networks." The gist: Facebook manipulated participants' Newsfeeds to increase the number of positive or negative status updates that each participant viewed. The researchers subsequently measured the number of positive and negative words that the participants used in their own status updates. They fou...
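For students, it can help to see how "number of positive and negative words" might be operationalized in code. Here is a toy sketch with a tiny invented word list (the actual study used established text-analysis software, not this list):

```python
# Toy operationalization -- tiny invented lexicons, not the study's method.
POSITIVE = {"happy", "great", "love", "fun"}
NEGATIVE = {"sad", "awful", "hate", "lonely"}

def emotion_counts(status: str) -> tuple[int, int]:
    """Count positive and negative words in one status update."""
    words = [w.strip(".,!?") for w in status.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return pos, neg

print(emotion_counts("I love this great, sunny day!"))  # (2, 0)
```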