Posts

Showing posts with the label efficacy studies

Suicide hotline efficacy data: Assessment, descriptive data, t-tests, correlation, regression examples abound

ASIDE: THIS IS MY 500th POST. PLEASE CLAP. Efficacy data about a mental health intervention? Yes, please. The example has so much potential in a psych stats classroom. Or an abnormal/clinical classroom, or research methods. Maybe even human factors, because three numbers are easier to remember than ten? This post was inspired by an NPR story by Rhitu Chatterjee. It is all about America's mental health emergency hotline's switch from a 10-digit phone number to the much easier-to-remember three digits (988), and the various ways that the government has measured the success of this change. How to use this (and related material) in class: 1) Assessment. In the NPR interview, they describe how several markers have improved: wait times, dropped calls, etc. Okay, so the NPR story sent me down a rabbit hole of looking for this data so we can use it in class. Here is the federal government's website about 988 and a link to their specific 988 performance data,...
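If you want to walk students through one of those before/after comparisons, here is a minimal sketch of an independent-samples t-test on hotline wait times. All the numbers are invented for the classroom; the real performance data live on the federal 988 site linked above.

```python
# Classroom sketch (invented numbers): independent-samples t-test
# comparing hotline wait times, in seconds, before and after the
# switch to the three-digit 988 number.
import statistics

wait_before = [142, 155, 138, 160, 149, 151, 144, 158]  # ten-digit era
wait_after = [48, 52, 45, 50, 47, 55, 49, 51]           # 988 era

n1, n2 = len(wait_before), len(wait_after)
mean1, mean2 = statistics.mean(wait_before), statistics.mean(wait_after)
var1, var2 = statistics.variance(wait_before), statistics.variance(wait_after)

# Pooled-variance t statistic with df = n1 + n2 - 2
pooled_var = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
t_stat = (mean1 - mean2) / (pooled_var * (1 / n1 + 1 / n2)) ** 0.5
print(f"t({n1 + n2 - 2}) = {t_stat:.2f}")
```

Working the formula by hand like this (instead of calling a canned t-test function) also lets students see exactly where the pooled variance and degrees of freedom come from.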

Stein's "Is It Safe For Medical Residents To Work 30-Hour Shifts?"

This story describes 1) an efficacy study that 2) touches on some I/O and health psychology research and 3) has gained the unwanted attention of government regulatory agencies charged with protecting research participants. The study described in this story questions a 2003 decision made by the Accreditation Council for Graduate Medical Education. Specifically, this decision capped the number of hours that first-year medical residents can work at 80 per week, with a maximum shift of 16 hours. The PIs want to test whether or not these limits improve resident performance and patient safety. They are doing so by assigning medical residents to either 16-hour maximum shifts or 30-hour maximum shifts. However, the research participants didn't have the option to opt out of this research. Hence, an investigation by the federal government. So, this is interesting and relevant to the teaching of statistics, research methods, I/O, and health psychology for a numbe...

Christie Aschwanden's "The Case Against Early Cancer Detection"

I love counterintuitive data that challenges commonly held beliefs. And there is a lot of counterintuitive health data out there (for example, data questioning the health benefits associated with taking vitamins, or data that led to a revolution in how we put our babies to sleep AND cut incidents of SIDS in half). This story by Aschwanden for fivethirtyeight.com discusses efficacy data for various kinds of cancer screening. Short version of this article: early cancer screening detects non-cancerous lumps and abnormalities in the human body, which in turn leads to additional and invasive tests and procedures in order to ensure that an individual really is cancer-free or to remove growths that are not life-threatening (but expose an individual to all the risks associated with surgery). Specific examples: 1) Diagnoses of thyroid cancer in South Korea have increased, because it is being tested for more often. However, deaths due to thyroid cancer have NOT increased (see figure below)...
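A toy base-rate calculation (every number below is invented, not from the article) can help students see why screening a low-prevalence population turns up so many findings that are not life-threatening:

```python
# Toy base-rate calculation (all numbers invented for illustration):
# screening a low-prevalence population produces many suspicious
# results in people who do not have a life-threatening cancer.
population = 100_000
prevalence = 0.005            # 0.5% have a life-threatening cancer
sensitivity = 0.95            # the screen catches 95% of true cases
false_positive_rate = 0.10    # 10% of others get a suspicious result

true_cases = population * prevalence
true_positives = true_cases * sensitivity
false_positives = (population - true_cases) * false_positive_rate

# Positive predictive value: of all positive screens, how many are
# actually life-threatening?
ppv = true_positives / (true_positives + false_positives)
print(f"Positive screens: {true_positives + false_positives:.0f}")
print(f"Chance a positive screen is a real threat: {ppv:.1%}")
```

With these made-up rates, fewer than one in twenty positive screens reflects a real threat, and every one of the others invites the follow-up procedures the article describes.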

Weber and Silverman's "Memo to Staff: Time to Lose a Few Pounds"

Weber and Silverman's article for the Wall Street Journal has lots of good psychy/stats information (here is a .pdf of the article if you hit a paywall). I think it would also be applicable to health and I/O psychology classes. The graph below summarizes the main point of the article: certain occupations have a greater likelihood of obesity than others (a good example of means, descriptive statistics, and graphs to demonstrate variation from the mean). As such, how can employers go about increasing employee wellness? How does this benefit an organization financially? Can data help an employer decide where to focus wellness efforts? The article goes on to highlight various programs implemented by employers in order to increase employee health (including efficacy studies to test the effectiveness of the programs). In addition to the efficacy research example, the article describes how some employers are using various apps in order to collect data about employee health and...

Chris Taylor's "No, there's nothing wrong with your Fitbit"

Taylor, writing for Mashable, describes what happens when carefully conducted public health research (published in the Journal of the American Medical Association) becomes attention-grabbing and poorly represented clickbait. Data published in JAMA (Case, Burwick, Volpp, & Patel, 2015) tested the step-counting reliability of various wearable fitness-tracking devices and smartphone apps (see the data below). In addition to checking the reliability of various devices, the article makes an argument that, from a public health perspective, lots of people have smartphones but not nearly as many people have fitness trackers. So, a way to encourage wellness may be to encourage people to use the fitness capabilities within their smartphones (easier and cheaper than buying a fitness tracker). The authors never argue that fitness trackers are bad, just that 1) some are more reliable than others and 2) the easiest way to get people to engage in more mindful walking...
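The reliability check itself is easy to demo in class: compare each device's count against a directly observed step count and express the difference as a relative error. The device names and counts below are invented for illustration, not the JAMA results.

```python
# Sketch of a step-count reliability check (invented device names and
# counts): compare each device's reported steps against a directly
# observed count, as a signed relative error.
observed = 500  # steps counted by a human observer

device_counts = {
    "tracker_A": 492,
    "tracker_B": 530,
    "phone_app": 503,
}

errors = {
    name: 100 * (count - observed) / observed
    for name, count in device_counts.items()
}
for name, pct in errors.items():
    print(f"{name}: {pct:+.1f}% relative error")
```

A nice discussion prompt: a device can be off on any one walk and still be reliable if its error is consistent, which is why the study repeated the walks.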