
Showing posts from March, 2015

Christie Aschwanden's "The Case Against Early Cancer Detection"

I love counterintuitive data that challenge commonly held beliefs. And there is a lot of counterintuitive health data out there (for example, data questioning the health benefits of taking vitamins, or data that led to a revolution in how we put our babies to sleep AND cut incidents of SIDS in half). This story by Aschwanden for fivethirtyeight.com discusses efficacy data for various kinds of cancer screening. Short version of the article: early cancer screening detects non-cancerous lumps and abnormalities in the human body, which in turn leads to additional and invasive tests and procedures, either to ensure that an individual really is cancer-free or to remove growths that are not life-threatening (but that expose the individual to all the risks associated with surgery). Specific examples: 1) Diagnosis of thyroid cancer in South Korea has increased, because it is being tested for more often. However, death due to thyroid cancer has NOT increased (see figure below)...

Izadi's "Tweets can better predict heart disease rates than income, smoking and diabetes, study finds"

Elahe Izadi, writing for the Washington Post, reported on this article by Eichstaedt et al. (2015). The original research analyzed tweet content for hostility and noted the location of each tweet. Data analysis found a positive correlation between regions with lots of angry tweets and the likelihood of dying from a heart attack. The authors of the study note that the median age of Twitter users is below that of the general population in the United States. Additionally, they did not use a within-subject research design. Instead, they argue that patterns of hostility in tweets reflect the underlying hostility of a given region. An excellent example of data mining, health psychology, aggression, research design, etc. Also, another example of using Twitter, specifically, to engage in public health research (see this previous post detailing efforts to use Twitter to close down unsafe restaurants).

Harry Enten's "Has the snow finally stopped?"

This article and figure from Harry Enten (reporting for fivethirtyeight) provide informative and horrifying data on the median last day of measurable snow in different American cities. (Personally, I find it horrifying because my median last day of measurable snow isn't until early April.) This article provides easy-to-understand examples of percentiles, interquartile range, median, and the use of archival data. Portland and Dallas can go suck an egg.
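For a classroom follow-up, the median and interquartile range that Enten's figure displays can be computed in a few lines. The day-of-year values below are hypothetical stand-ins for one city's archival last-snow dates, not the actual NOAA data behind the article.

```python
import statistics

# Hypothetical day-of-year values for the last measurable snowfall across
# several years in one city (illustrative numbers, not Enten's data).
last_snow_days = [78, 85, 91, 96, 100, 103, 107, 112, 120]

median_day = statistics.median(last_snow_days)          # 50th percentile
q1, q2, q3 = statistics.quantiles(last_snow_days, n=4)  # quartile cut points
iqr = q3 - q1                                           # interquartile range

print(median_day, iqr)
```

Half the years fall inside the interquartile range, which is why Enten's figure uses it to show how spread out a city's last-snow dates are around the median.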

Weber and Silverman's "Memo to Staff: Time to Lose a Few Pounds"

Weber and Silverman's article for the Wall Street Journal has lots of good psychy/stats information (here is a .pdf of the article if you hit a paywall). I think it would also be applicable to health and I/O psychology classes. The graph below summarizes the main point of the article: certain occupations have a greater likelihood of obesity than others (a good example of means, descriptive statistics, and graphs that demonstrate variation from the mean). As such, how can employers go about increasing employee wellness? How does this benefit an organization financially? Can data help an employer decide where to focus wellness efforts? The article goes on to highlight various programs implemented by employers to increase employee health (including efficacy studies testing those programs). In addition to the efficacy research example, the article describes how some employers are using various apps to collect data about employee health and...

Das and Biller's "11 most useless and misleading infographics on the internet"

Das and Biller, reporting for io9.com, shared several good examples of bad graphs. The graphs are bad for a variety of reasons; I have highlighted a few below. Non-traditional display of data that creates the illusion that the opposite of the truth is true: note that the y-axis is flipped (0 at the top...huh?), so murders have actually INCREASED since "Stand Your Ground". Cherry-picking data. Confusing data presentation. I think this could be a fun discussion piece for picking apart bad graphs in class, so that your students 1) think critically about all graphs and figures they see and 2) learn how to make truthful graphs. Another fun way to use this in class would be to present these graphs to your students and then ask them to create APA-style-compliant graphs of the same data.

Chris Taylor's "No, there's nothing wrong with your Fitbit"

Taylor, writing for Mashable, describes what happens when carefully conducted public health research (published in the Journal of the American Medical Association) becomes attention-grabbing and poorly represented clickbait. Data published in JAMA (Case, Burwick, Volpp, & Patel, 2015) tested the step-counting reliability of various wearable fitness tracking devices and smartphone apps (see the data below). In addition to checking the reliability of various devices, the article makes an argument that, from a public health perspective, lots of people have smartphones but not nearly as many people have fitness trackers. So, a way to encourage wellness may be to encourage people to use the fitness capacities within their smartphones (easier and cheaper than buying a fitness tracker). The authors never argue that fitness trackers are bad, just that 1) some are more reliable than others and 2) the easiest way to get people to engage in more mindful walking...