
Posts

Showing posts with the label research methods

Improper data reporting leads to big EPA fines for Kia/Hyundai

On November 3, 2014, Hyundai and Kia were fined a record-setting $100 million for violating the Clean Air Act by cooking their data and misreporting their fuel economy. They used the unethical (cherry-picking) techniques described below by a representative of the federal government: "One was the use of, not the average data from the tests, but the best data. Two, was testing the cars at the temperature where their fuel economy is best. Three -- using the wrong tire sizes; and four, testing them with a tail wind but then not turning around in the other direction and testing them with a head wind. So I think that speaks to the kinds [of] problems that we saw with Hyundai and Kia that resulted in the mismeasurement." Video and quote from Sam Hirsch, acting assistant attorney general. Here is EPA's press release about the fine . How to use it in class: -Hyundai and Kia cherry-picked data, picking out the most flattering data but not the...

Tessa Arias' "The Ultimate Guide to Chocolate Chip Cookies"

I think this very important cookie research is appropriate for the Christmas cookie baking season. I also believe that it provides a good example of the scientific method. Arias started out with a baseline cookie recipe (the Nestle Toll House cookie recipe, which also served as her control group) and modified the recipe in a number of different ways (IVs) in order to study several dependent variables (texture, color, density, etc.). The picture below illustrates the outcome of each recipe modification. For science! http://www.handletheheat.com/the-ultimate-guide-to-chocolate-chip-cookies Also, being a true scientist, she followed her original study with several follow-up studies investigating the effects of different kinds of pans and flours upon cookie outcomes. http://www.handletheheat.com/the-ultimate-guide-to-chocolate-chip-cookies-part-2 I used this example to introduce hypothesis testing to my students. I had them identify the null and alternative ...
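If you want to walk students through one of these recipe comparisons numerically, the natural test is a two-sample t-test. Here is a minimal sketch in Python; the butter manipulation and all the spread measurements below are invented for illustration, not Arias' actual data:

```python
# Hypothetical example: does melted (vs. softened) butter change cookie spread?
# H0: mean spread is the same in both conditions; H1: it differs.
from scipy import stats

softened = [7.0, 7.2, 6.8, 7.1, 6.9]   # control: baseline recipe, spread in cm
melted   = [8.1, 8.4, 7.9, 8.3, 8.0]   # IV manipulation: melted butter

# Independent-samples t-test on the DV (spread)
t_stat, p_value = stats.ttest_ind(melted, softened)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: butter temperature appears to affect spread.")
```

Students can swap in their own (real or invented) measurements for any of the recipe modifications and re-run the same comparison.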

Patti Neighmond's "What is making us fat: Is it too much food or moving too little?"

This NPR story by Patti Neighmond is about determining the underlying cause of the U.S. obesity epidemic. As the name of the segment states, it seems to come down to food consumption and exercise, but which is the culprit? This is a good example for research methods because it describes methodology for examining both sides of the question. The methodology used also provides good examples of archival data usage.

ed.ted.com: TED video + assessment + discussion board

The folks at TED have created ed.ted.com , a website that allows you to use their videos (or any video available via YouTube) to create a lesson around the video. You can create an assessment quiz (and save your students' grades on the assessment). You can also create discussion boards and post your own commentary/links related to the content of the video. I know, right? There are several lessons that relate to statistics and research methods . Here is a shorter video that teaches the viewer how to assess the quality of medical research , and here is a list of TED talks about Data Analysis and Probability . While teaching statistics and research methods is my jam, you can use any old video from YouTube/TED ( like the many talks featuring psychology research ) and create an online lesson and assessment about the talk. Pretty cool! I think these could be used for bonus points, as a quick homework assignment, and as a way to reiterate the more conceptual ideas surroun...

Every baby knows the scientific method

I am the mother of a boundary-testing two-year-old, and my little guy likes to replicate his research findings with me all day long. We're currently trying to pull a sufficient n-size to test his hypothesis of whether or not I will ever let him eat dog food. I don't want to p-hack, but I'm pretty sure that the answer is no.

Public Religion Research Institute's “I Know What You Did Last Sunday” Finds Americans Significantly Inflate Religious Participation

A study performed by The Public Religion Research Institute  used either a) a telephone survey or b) an anonymous web survey to question people about their religious beliefs and religious service habits. The researchers found that the telephone participants reported higher rates of religious behaviors and greater theistic beliefs. The figure below,  from a New York Times summary of the study , visualizes the main findings. The NYT summary also provides figures illustrating the data broken down by religious denomination. Property of the New York Times Participants also vary in their reported religious beliefs based on how they are surveyed (below, the secular are more likely to report that they don't believe in God when completing an anonymous online survey). Property of Public Religion Research Institute  This report could be used in class to discuss psychometrics, sampling, motivation to lie on surveys, social desirability, etc. Additionally, the sour...
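The phone-versus-web comparison at the heart of the study is a difference between two proportions, which students could check with a two-proportion z-test. A stdlib-only sketch with invented counts (the numbers below are illustrative, not PRRI's actual figures):

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                  # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))      # two-sided normal p-value
    return z, p_value

# Hypothetical: 320 of 1000 phone respondents vs. 270 of 1000 web respondents
# report weekly religious attendance.
z, p = two_prop_ztest(320, 1000, 270, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A nice class discussion point: the test tells you the modes differ, but not which mode is closer to the truth; that's where social desirability comes in.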

Marketing towards children: Ethics and research

Slate's The Littlest Tasters This article is more research methods than statistics; it describes the difficulty of determining taste preferences in wee humans who don't speak well, if at all. slate.com The goods for teaching: They mention the FACE scale. The research methods described go beyond marketing research, and this could be useful in a Developmental class to describe approaches used in data collection for children (like asking parents to rate their children's reactions to foods). I've used this as a discussion board prompt when discussing research ethics, both for simply conducting research with children as well as the ethics of marketing (not-so-healthy) foods towards children. Aside: They also describe why kids like Lunchables, which has always been a mystery to me. Apparently, kids are picky about texture and flavor, but they haven't developed a preference for certain foods to be hot or cold. The Huffington Post's " You'll Never Look at ...

io9's "The Controversial Doctor Who Pioneered the Idea of 'Informed Consent'"

This story describes a 1966 journal article arguing that signing an informed consent form isn't the same as truly giving informed consent. I think this is a good example for the ethics section of a research methods class, as it demonstrates some deeply unethical situations in which participants weren't able to give informed consent (prisoners, non-English speakers, etc.). Indeed, the context within which informed consent is obtained is very important. The story also provides historical context on the creation of Institutional Review Boards. The original 1966 article is here .

Jon Mueller's Correlation or Causation website

If you teach social psychology, you are probably familiar with Dr. Jon Mueller's Resources for the Teaching of Social Psychology website .  You may not be as familiar with Mueller's Correlation or Causation website, which keeps a running list of news stories that summarize research findings and either treat correlation appropriately or suggest/imply/state a causal relationship between correlational variables. The news stories run the gamut from research about human development to political psychology to research on cognitive ability. When I've used this website in the past, I have allowed my students to pick a story of interest and discuss whether or not the journalist in question implied correlation or causation. Mueller also provides several ideas (both from him and from other professors) on how to use his list of news stories in the classroom.

Nature's "Policy: Twenty tips for interpreting scientific claims" by William J. Sutherland, David Spiegelhalter, & Mark Burgman

This very accessible summary lists the ways people fib with, misrepresent, and overextend data findings. It was written as an attempt to give non-research folk (in particular, lawmakers) a cheat sheet of things to consider before embracing/rejecting research-driven policy and laws. It is a sound list, covering plenty of statsy topics (p-values, the importance of replication), but what I really like is that the article doesn't criticize researchers as the source of the problem. It places the onus on each person to properly interpret research findings. The list also emphasizes the importance of data-driven change.

NPR's "Will Afghan polling data help alleviate election fraud?"

This story details the application of American election-polling techniques to Afghanistan's fledgling democracy. Essentially, international groups are attempting to poll Afghans prior to their April 2014 presidential elections in order to combat voter fraud and raise awareness about the election. However, how do researchers go about collecting data in a country where few people have telephones, many people are illiterate, and just about everyone is wary of strangers approaching them and asking sensitive questions about their political opinions? The story also touches on issues of social desirability as well as the decisions a researcher makes regarding the kinds of response options to use in survey research. I think that this would be a good story to share with a cranky undergraduate research methods class that thinks that collecting data from the undergraduate convenience sample is really, really hard. Less snarkily, this may be useful when teaching multiculturalism or ...

Changes in standards for data reporting in psychology journals

Two prominent psychology journals are changing their standards for publication in order to address several long-standing debates in statistics (p-values vs. effect sizes, and point estimates of the mean vs. confidence intervals). Here are the details of the changes that the Association for Psychological Science is making to its gold-standard publication, Psychological Science, in order to improve transparency in data reporting. Some of the big changes include mandatory reporting of effect sizes and confidence intervals, and inclusion of any scales or measures that were non-significant. This might be useful in class when describing why p-values and means are imperfect, the old p-value vs. effect size debate, and how one can bend the truth with statistics via research methodology (by glossing over/completely neglecting N.S. findings). These examples are also useful in demonstrating to your students that the issues we discuss in class have real-world ramifications and aren't be...
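To make the "effect sizes and confidence intervals, not just p-values" point concrete for students, here is a short sketch of Cohen's d and a 95% CI for a mean difference, using made-up scores (the numbers are purely illustrative):

```python
import math
from statistics import mean, stdev
from scipy import stats

treatment = [4, 5, 6]   # hypothetical scores
control   = [1, 2, 3]

diff = mean(treatment) - mean(control)
n1, n2 = len(treatment), len(control)
# Pooled standard deviation across the two groups
sp = math.sqrt(((n1 - 1) * stdev(treatment) ** 2 +
                (n2 - 1) * stdev(control) ** 2) / (n1 + n2 - 2))
d = diff / sp                                    # Cohen's d (standardized effect)
# 95% CI for the raw mean difference (equal-variance t interval)
t_crit = stats.t.ppf(0.975, n1 + n2 - 2)
margin = t_crit * sp * math.sqrt(1 / n1 + 1 / n2)
print(f"d = {d:.2f}, 95% CI = ({diff - margin:.2f}, {diff + margin:.2f})")
```

The CI makes the uncertainty visible in the units of the measure, which is exactly what a bare p-value hides.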

The Economist's "Unlikely Results"

A great, foreboding video (here is a link to the same video at YouTube in case you hit the paywall) about the actual size and implications of Type II errors in scientific research. This video does a great job of illustrating what p < .05 means in the context of thousands of experiments. Here is an article from The Economist on the same topic. From TheEconomist
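The video's core arithmetic can be reproduced in a few lines, assuming illustrative numbers like the ones it walks through (1,000 hypotheses, 10% of them true, 80% power, alpha = .05):

```python
# Of all "significant" results, how many are false positives?
total     = 1000    # hypotheses tested
true_rate = 0.10    # fraction of hypotheses that are actually true
power     = 0.80    # P(detect effect | effect is real) = 1 - Type II error rate
alpha     = 0.05    # P(false positive | no real effect)

true_effects = total * true_rate            # 100 real effects
null_effects = total - true_effects         # 900 null effects
true_pos  = true_effects * power            # 80 real effects detected
false_pos = null_effects * alpha            # 45 null effects "detected"

fdr = false_pos / (true_pos + false_pos)    # share of significant results that are false
print(f"Significant results: {true_pos + false_pos:.0f}, "
      f"of which {fdr:.0%} are false positives")
```

With these inputs, roughly a third of the "discoveries" are wrong even though every single test used p < .05, which is exactly the punchline students need to see.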

The Onion's "Son-Of-A-Bitch Mouse Solves Maze Researchers Spent Months Building"

Ha. This story is a good example of just how frustrating research can be, how well-conceived research can go wrong, the ceiling effect, and why you should pre-test measures before going live. "Above, researchers discuss plans for a new maze, since the prick of a mouse, right, destroyed their chances of making any new discoveries whatsoever about the nature of synaptical response." -TheOnion.com

io9's "Rich, educated westerners could be skewing social science studies"

This isn't the first time this issue has been broached. However, this time, it comes with an awesome graphic summarizing the issue. The io9 article also links to various citations on the issue. Here is an accessible, short reading on the same topic written by Sharon Begley.

The Colbert Report's "Texas Gun Training Bill & Free Shotgun Experiment"

The Colbert Report's take on Kyle Copland's research studying whether or not gun ownership lowers crime. Copland's method? Handing out free .22s in high-crime areas (to folks who pass a background check and take a gun safety course). from ColbertNation.com This applies more to a research methods class (Colbert expresses a need for a control group in Copland's research. His suggestion? Sugar guns, as well as a second experimental condition in which EVERYONE is given a gun). However, I imagine that you could show your students this video, pause it before they introduce the research project, and ask your students how we could finally answer the question of whether or not gun ownership lowers crime. Thanks to Chelsea for pointing this out!

Lesson plan: Posit Science and Hypothesis Testing

Here is a basic lesson plan that one could use to teach the hypothesis testing method in a statistics course. I teach in a computer lab, but I think it could be modified for a non-lab setting, especially if you use a smart classroom. The lesson centers on Posit Science, a company that makes web-based games intended to improve memory. Specifically, I use the efficacy testing the company did to provide evidence that its games improve memory, and I draw on material from the company's website when teaching my students about the scientific method. Here is what I do... Property of positscience.com
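For the efficacy-testing part of the lesson, the natural analysis is a pre/post comparison, e.g. a paired-samples t-test. A sketch with invented memory scores (these are not Posit Science's actual data):

```python
from scipy import stats

# Hypothetical memory-test scores for five people, before and after training
pre  = [10, 12, 11,  9, 13]
post = [12, 14, 12, 11, 14]

# H0: no change in mean score. A paired test uses each person as their own control.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Students can then debate what a significant result here does and doesn't prove (no control group, possible practice effects), which ties nicely back to the scientific method.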

Jon Mueller's CROW website

I have been using Mueller's CROW website for years. It is a favorite teaching resource among my fellow social psychologists , with TONS of well-categorized resources for teaching social psychology. This resource is also useful to statistics/research methods instructors out there as it contains a section dedicated to research design with a sub-section for statistics.

io9.com's "Packages sealed with 'Atheist' tape go missing 10x more often than controls"

I originally came across this story via io9.com . More information from the source is available here . Essentially, these high-end German shoes are made by a company of devoted atheists. They even have their mailing materials branded with "atheist". And they had a problem with their packages being lost by the USPS. They ran a wee experiment in which they sent out packages labeled with the Atheist tape vs. not, and found that the Atheist packages went missing at a significantly higher rate than the non-denominational packages. I think this could be used in the classroom because it is a pretty straightforward research design: you can challenge your students to question the research design, have them read through the discussion of this article at the atheistberlin website, or introduce them to Milgram's "lost letter" technique and other novel research methods. Edit: 3/9/2020 If you want to delve further into...
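The shoe experiment boils down to a 2x2 contingency table (tape type by lost/delivered), which a chi-square test handles. A sketch with invented counts chosen to mirror the "10x more often" claim (these are not the company's actual numbers):

```python
from scipy.stats import chi2_contingency

#                 lost  delivered
atheist_tape  = [  50,    950]   # hypothetical: 5% of 'Atheist'-taped packages lost
plain_tape    = [   5,    995]   # hypothetical: 0.5% of plain packages lost

# H0: loss rate is independent of tape type
chi2, p, dof, expected = chi2_contingency([atheist_tape, plain_tape])
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.2g}")
```

This is also a good spot to ask students what confounds the design leaves open (routes, package weight, handling center), since "significant" doesn't settle the mechanism.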

Statistics Meme I

from http://hello-jessica.tumblr.com Who knew that Zoidberg was an ad hoc reviewer?