
Smart's "The differences in how CNN MSNBC & FOX cover the news"

https://pudding.cool/2018/01/chyrons/ This example doesn't demonstrate a specific statistical test. Instead, it demonstrates how data can be used to answer a hotly contested question: Are certain media outlets biased? How can we answer this? Charlie Smart, working for The Pudding, addressed this question via content analysis. Here is how he did it, and here are some of the findings: Yes, Fox News was talking about the Clintons a lot, while over at MSNBC, they discussed the investigation into Russia and the 2016 elections more frequently. While kneeling during the anthem was featured on all networks, it was featured most frequently on Fox. And context matters: What words are associated with "dossier"? How do the different networks contextualize President Trump's tweets? Another reason I like this example: It points out the trends for all three big networks. So, we aren't a bunch of Marxist professors ragging on FOX, and we ar...
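If you want to make the method concrete for students, here is a minimal sketch of the word-frequency side of a content analysis. The chyron strings below are invented for illustration; Smart's actual analysis ran over a large archive of real lower-third captions.

```python
# Minimal sketch of word-frequency content analysis.
# The chyron text is invented; a real analysis would load an archive.
from collections import Counter
import re

chyrons = [
    "RUSSIA INVESTIGATION CONTINUES",
    "CLINTON EMAILS BACK IN THE NEWS",
    "PLAYERS KNEEL DURING ANTHEM",
    "RUSSIA PROBE: NEW SUBPOENAS",
]

# Count how often each word appears across all chyrons
words = Counter()
for line in chyrons:
    words.update(re.findall(r"[a-z']+", line.lower()))

print(words.most_common(3))  # e.g., [('russia', 2), ...]
```

Comparing these counts network-by-network is what lets you ask whether one outlet dwells on a topic more than the others.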

APA's "How to Be A Wise Consumer of Psychological Research"

This is a nice, concise handout from APA that touches on the main points for evaluating research, particularly research that has been distilled by science reporters. It may be a bit light for a traditional research methods class, but I think it would be good for the research methods section of most psychology electives, especially if your students are working through source materials. The article mostly focuses on evaluating for proper sampling techniques. It also includes a good list of questions to ask yourself when evaluating research. This also has an implicit lesson of introducing the APA website to psychology undergraduates and the type of information shared at APA.org (including, but not limited to, this glossary of psychology terms).

Weinberg's "How One Study Produced a Bunch of Untrue Headlines About Tattoos Strengthening Your Immune System"

In my Honors Statistics course, we have discussion days over the course of a semester. One of the discussion topics involves instances when the media has skewed research results (for another example, see this story about fitness trackers). Jezebel writer Caroline Weinberg describes a modest study that found that people who have at least one previous tattoo experience a boost in their immunity when they get subsequent tattoos, as demonstrated via saliva samples of Immunoglobulin A. This is attributed to the fact that, compared to tattoo newbies, tattoo veterans don't experience a cortisol reaction following the tattoo. Small sample size but a pretty big effect. So, as expected, the media exaggerated these effects...but mostly because the researcher's university's marketing department did so first. Various news outlets stated things like "Sorry, Mom: Getting lots of tattoos could have surprising health benefits" and "Getting multip...

Paul Basken's "When the Media Get Science Research Wrong, University PR May Be the Culprit"

Here is an article from the Chronicle of Higher Education (.pdf in case you hit the paywall) about what happens when university PR promotes research findings in a way that exaggerates or completely misrepresents the findings. Several examples of this are included (Smelling farts cures cancer? What?), including an empirical study of how health-related research is translated into press releases (Sumner et al., 2014). The Sumner et al. piece found, among other things, that 40% of the press releases studied contained exaggerated advice based upon research findings. I think that this is an important topic to address as we teach our students not to simply perform statistical analyses, but to be savvy consumers of statistics. This may be a nice reading to couple with the traditional research methods assignment of asking students to find research stories in popular media and compare and contrast the news story with the actual research article. If you would like more di...

Harry Enten's "Has the snow finally stopped?"

This article and figure from Harry Enten (reporting for FiveThirtyEight) provides informative and horrifying data on the median last day of measurable snow in different cities in America. (Personally, I find it horrifying because my median last day of measurable snow isn't until early April.) This article provides easy-to-understand examples of percentiles, interquartile range, use of archival data, and the median. Portland and Dallas can go suck an egg.
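For students who want to see these summary statistics computed, here is a minimal sketch using invented last-snow dates (as day-of-year values), not Enten's archival data.

```python
# Minimal sketch: median, percentiles, and IQR for last-snow dates,
# expressed as day-of-year values. The numbers are invented.
import numpy as np

# Hypothetical day-of-year of the last measurable snow, one per year
last_snow = np.array([81, 95, 102, 88, 110, 97, 93, 105, 84, 99])

median = np.median(last_snow)                # the 50th percentile
p25, p75 = np.percentile(last_snow, [25, 75])
iqr = p75 - p25                              # interquartile range

print(f"median day: {median:.0f}, IQR: {iqr:.1f} days ({p25:.0f}-{p75:.0f})")
```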

Chris Taylor's "No, there's nothing wrong with your Fitbit"

Taylor, writing for Mashable, describes what happens when carefully conducted public health research (published in the Journal of the American Medical Association) becomes attention-grabbing and poorly represented clickbait. Data published in JAMA (Case, Burwick, Volpp, & Patel, 2015) tested the step-counting reliability of various wearable fitness tracking devices and smartphone apps (see the data below). In addition to checking the reliability of various devices, the article makes an argument that, from a public health perspective, lots of people have smartphones but not nearly as many people have fitness trackers. So, a way to encourage wellness may be to encourage people to use the fitness capacities within their smartphones (easier and cheaper than buying a fitness tracker). The authors never argue that fitness trackers are bad, just that 1) some are more reliable than others and 2) the easiest way to get people to engage in more mindful walking...
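To show students the logic of such a reliability check, here is a minimal sketch: compare each device's step count against a directly observed count and report the relative error. The device names and counts are invented, not the JAMA data.

```python
# Minimal sketch of a step-count reliability check.
# Device names and counts are invented for illustration.
observed = 500  # steps actually taken, counted by an observer

device_counts = {
    "Phone app A": 492,
    "Phone app B": 510,
    "Wearable A": 465,
    "Wearable B": 530,
}

for device, steps in device_counts.items():
    error = (steps - observed) / observed
    print(f"{device}: {steps} steps ({error:+.1%} relative error)")
```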

Saturday Morning Breakfast Cereal and statistical thinking

Do you follow Saturday Morning Breakfast Cereal on Facebook or Twitter? Zach Weinersmith's hilarious web comic series frequently touches upon science, research methods, data collection, and statistics. Here are some such comics. Good for spiffing up a PowerPoint, spiffing up an office door (the first comic adorns mine), or (per this post) testing understanding of statistical concepts. http://www.smbc-comics.com/?id=2080 (also a good example of the availability bias!) http://www.smbc-comics.com/?id=3129 http://www.smbc-comics.com/?id=3435 http://www.smbc-comics.com/?id=1744 http://www.smbc-comics.com/?id=2980 http://smbc-comics.com/index.php?id=4084 http://www.smbc-comics.com/comic/2011-08-05 https://www.smbc-comics.com/index.php?id=4127 http://smbc-comics.com/comic/false-positives https://www.smbc-comics.com/comic/relax

Jon Mueller's Correlation or Causation website

If you teach social psychology, you are probably familiar with Dr. Jon Mueller's Resources for the Teaching of Social Psychology website. You may not be as familiar with Mueller's Correlation or Causation website, which keeps a running list of news stories that summarize research findings and either treat correlation appropriately or suggest/imply/state a causal relationship between correlational variables. The news stories run the gamut from research about human development to political psychology to research on cognitive ability. When I've used this website in the past, I have allowed my students to pick a story of interest and discuss whether or not the journalist in question implied correlation or causation. Mueller also provides several ideas (both from him and from other professors) on how to use his list of news stories in the classroom.

Lesson Plan: The Hunger Games t-test review

Hey, nerds: Here is a PPT that I use to review t-tests with my students. All of the examples are rooted in The Hunger Games. My students get a kick out of it, and this particular presentation (along with my Harry Potter themed ANOVA review) is oft-cited as an answer to the question "What did you like the most about this class?" in my end-of-semester reviews. Essentially, I have found various psychological scales, applied them to THG, and present my students with "data" from the characters. For example, the students perform a one-sample t-test comparing Machiavellianism in Capitol leadership versus Rebellion leadership (in keeping with the final book of the series, the difference between the two groups is non-significant). So, as a psychologist, I can introduce my students to various psychological concepts in addition to reviewing t-tests. Note: I teach in a computer lab using SPSS, which would be a necessity for using these exercises. Caveat: I would recommend usi...
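The lesson itself runs in SPSS, but if you teach in Python instead, here is a minimal sketch of a one-sample t-test in the same spirit. All scores and the comparison mean are invented for illustration.

```python
# Minimal sketch: a one-sample t-test in the spirit of the THG review.
# All scores are invented; the real lesson uses SPSS.
from scipy import stats

# Hypothetical Machiavellianism scores for Rebellion leadership (1-5 scale)
rebellion = [4.2, 3.8, 4.5, 4.0, 3.9, 4.4, 4.1]

# Fixed comparison value, e.g., the Capitol leadership mean
capitol_mean = 4.1

t, p = stats.ttest_1samp(rebellion, popmean=capitol_mean)
print(f"t = {t:.2f}, p = {p:.3f}")  # non-significant, per the lesson's design
```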

US News's "Poll: 78 Percent of Young Women Approve of Weiner"

Best. Awful. Headline. Ever. This headline makes it sound like many young women support the sexting, bad-decision-making, former NY representative Anthony Weiner. However, if one takes a moment to read the article, one will learn that the "young women" sampled were recruited from SeekingArrangement.com, a website for women looking for sugar daddies. If you want your brain to further explode, read through the comments section for the article. Everyone is reacting to the headline. Very few people actually read through the article themselves...which provides further anecdotal evidence that most folks can't tell good data from bad (and that part of our job as statistics instructors, in my opinion, is to ameliorate this problem).

Gerd Gigerenzer on how the media interprets data/science

Gerd "I love heuristics" Gigernezer talking about the misinterpretation of research by the medi a (in particular, misinterpretation of data about oral contraceptives leads to increases in abortions). He argues that such misinterpretation isn't just bad reporting, but unethical.

Lesson plan: Teaching margin of error and confidence intervals via political polling

One way of teaching about margin of error/confidence intervals is via political polling data. Here is a good site (mvbarer.blogspot.com) that has a breakdown of polling data taken in September 2012 for the 2012 US presidential election. I like this example because it draws on data from several well-reputed polling sites and includes their point estimates of the mean and their margins of error. This allows for several good examples: a) the point estimates for the various polling organizations all differ slightly (illustrating sampling error), b) the margins of error are provided, and c) it can be used to demonstrate how CIs can overlap, hence muddying our ability to predict outcomes from point estimates of the mean. I tend to follow the previous example with this gorgeous polling data from Muhlenberg College: This is how sampling is done, son! While stats teachers frequently discuss error reduction via big n, Muhlenberg takes it a step further by o...
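If you want students to compute a margin of error themselves, here is a minimal sketch using the standard formula for a proportion. The sample proportion and sample size are invented, not taken from any of the 2012 polls.

```python
# Minimal sketch: the 95% margin of error for a poll proportion,
# using invented numbers (p-hat = 0.50, n = 1,000 respondents).
import math

p_hat = 0.50   # sample proportion supporting a candidate
n = 1000       # sample size
z = 1.96       # critical value for a 95% confidence level

moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - moe, p_hat + moe
print(f"MOE = ±{moe:.1%}; 95% CI: ({lo:.1%}, {hi:.1%})")  # about ±3.1%
```

This makes the big-n point concrete: quadrupling the sample size only halves the margin of error.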

Media Matters' "Today in dishonest Fox News charts"

How to lie with accurate data: note how Fox News used a "creative" graph in order to make an 8.6% unemployment rate look like a 9% unemployment rate. Full story available at Media Matters (which, admittedly, is very left-leaning).
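A classroom demo of the general trick is easy to build: plot the same numbers with an honest y-axis and with a truncated one. Here is a minimal sketch; the monthly rates below are invented, not Fox's actual figures.

```python
# Minimal sketch: identical unemployment numbers plotted honestly
# and with a truncated y-axis. Values are illustrative.
import matplotlib.pyplot as plt

months = ["Mar", "Jun", "Sep", "Nov"]
rate = [8.8, 9.1, 9.0, 8.6]  # invented monthly unemployment rates

fig, (honest, distorted) = plt.subplots(1, 2, figsize=(8, 3))

honest.plot(months, rate, marker="o")
honest.set_ylim(0, 10)                 # axis starts at zero
honest.set_title("y-axis from 0: modest change")

distorted.plot(months, rate, marker="o")
distorted.set_ylim(8.5, 9.2)           # truncated axis
distorted.set_title("truncated y-axis: dramatic change")

for ax in (honest, distorted):
    ax.set_ylabel("Unemployment rate (%)")
plt.tight_layout()
plt.show()
```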

io9's "You're bitching about the wrong things when you read an article about science"

Colorful title aside, this article teaches critical thinking when analyzing scientific writing for validity and reliability. As a social psychologist, I'm especially grateful that they covered the "Study of Duh" criticism. It also addresses the difference between bad science and bad journalism and why one needs to see the source material for research before being in a position to truly evaluate a study.

Newsweek's "What should you really be afraid of?" Update 6/18/15

I use this when introducing the availability heuristic in Intro and Social (good ol' comparison of fatal airline accidents vs. fatal car crashes), but I think it could also be used in a statistics class. For starters, it is a novel way of illustrating data. Second, you could use it to spark a discussion on the importance of data-driven decision making when it comes to public policy/charitable giving. For instance, breast cancer has really good PR, but more women are dying of cardiovascular disease...where should the NSF concentrate its efforts to make the biggest possible impact? More of the same from Curiosity.com: https://pbs.twimg.com/media/Bur_W0hCMAAOidE.png

PHD Comics, 1/20/2010

Jorge Cham of PhD Comics quickly summarizes the problems that can occur as research is translated to the masses via the media. Funny AND touches on CI, sampling, psychometrics, etc.

Newsweek's "About 40 percent of American women have had abortions: The math behind the stats"

This exercise encourages students to think critically about the statistics that they encounter in the media. Note: This data is about abortion. When I use this exercise, I stress to my students that the exercise isn't about being pro-choice or anti-abortion, it is about being anti-bad statistics. Writer Sarah Kliff published an article in Newsweek about de-stigmatizing abortion. In the article, she makes the claim that 40% of American women have had abortions. Some readers questioned this estimate. This is her reply to those readers. I challenge my students to find flaws or possible flaws/points of concern in the mathematics behind the 40% estimate. Some points that they come up with: a) The data doesn't include minors, b) the data doesn't include women who were alive between 1973 (Roe v. Wade) and 2004 but died/moved out of the US before the 2005 census data (which was used in her calculations), c) data estimates that about half of women having an abortion at...

CBS News/New York Times' Poll: Gays in the Military

Words can be powerful and value-laden. This can have an impact upon survey responses, as it did for this survey about attitudes towards gays serving in the military. This survey was taken in 2010, and gays can now openly serve in the military, but I still think this example is a powerful way of teaching the weight of words when creating surveys. I tend to use this as an extra credit, asking my students to respond to two questions.

Hunks of Statistics: Sharon Begley

I decided that I shouldn't limit my Hunks of Statistics list to man-hunks. There are some lady-hunks as well. Like Sharon Begley.

NPR's "Data linking aspartame to cancer risk are too weak to defend, hospital says"

This story from NPR is a good example of 1) the media misinterpreting statistics and research findings, 2) Type I errors, and 3) the fact that peer-reviewed does not mean perfect. Here is a print version of the story, and here is the radio/audio version (note: the two links don't take you to the exact same stories...the print version provides greater depth, but the radio version eats up class time when you forget to prep enough for class AND it doesn't require any pesky reading on the part of your students).
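If you want to show students where Type I errors come from, here is a minimal simulation sketch: run many "studies" where there is no true effect and count how often a t-test still comes out significant. All numbers here are illustrative.

```python
# Minimal sketch: simulate many null-effect studies and count how
# often p < .05 anyway (i.e., Type I errors). Numbers are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
false_positives = 0
n_studies = 1000

for _ in range(n_studies):
    # Two groups drawn from the SAME population: no real effect
    a = rng.normal(loc=0, scale=1, size=30)
    b = rng.normal(loc=0, scale=1, size=30)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(f"{false_positives / n_studies:.1%} of null studies were 'significant'")
# Expect roughly 5% (the conventional Type I error rate).
```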