
Showing posts with the label psychometrics

Stromberg and Caswell's "Why the Myers-Briggs test is totally meaningless"

Oh, Myers-Briggs Type Indicator, you unkillable scamp. This video, from Vox, gives a concise historical perspective on the scale, describes how popular it still is, and summarizes several of the arguments against it, explaining why the ol' MBTI is not particularly useful. Good for debunking psychology myths and for explaining reliability (in particular, test-retest reliability) and validity. I like this link in particular because it presents its argument via both video and a smartly formatted website. The text on the website includes links to actual peer-reviewed research articles that refute the MBTI.
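To make the test-retest point concrete for students, here is a minimal simulation sketch (my own illustration, not from the Vox piece; the single bipolar trait, the noise level, and the cut point are all assumptions) showing how MBTI-style dichotomization flips many people's "type" between two administrations even when the underlying scores correlate reasonably well:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical latent introversion/extraversion scores (one continuous trait).
true_score = rng.normal(0, 1, n)

# Two administrations of the test, each contaminated by measurement error.
time1 = true_score + rng.normal(0, 0.5, n)
time2 = true_score + rng.normal(0, 0.5, n)

# Test-retest reliability: the correlation between the two administrations.
r = np.corrcoef(time1, time2)[0, 1]

# MBTI-style scoring: cut the continuum at 0 to assign a binary "type",
# then count how many people land in a different type the second time.
flipped = np.mean((time1 > 0) != (time2 > 0))

print(f"test-retest r = {r:.2f}")
print(f"proportion whose 'type' flipped between administrations: {flipped:.1%}")
```

Even with a respectable correlation between administrations, people sitting near the cut point get reclassified on retest, which is one of the standard critiques of type-based scoring.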

Smith's "Rutgers survey underscores challenges collecting sexual assault data."

Tovia Smith filed a report with NPR detailing the psychometric delicacies of trying to measure sexual assault rates on a college campus. I think this story is highly relevant to college students, and it also provides an example of the challenge of operationalizing variables as well as of self-selection bias. The story describes sexual assault data collected at two different universities, Rutgers and U. Kentucky. The universities used different surveys, had very different participation rates, and had very different findings (20% of Rutgers students met the criteria for having been sexually assaulted, while only 5% of Kentucky students did). Why the big differences? 1) At Rutgers, students were paid for their participation, and 30% of all students completed the survey. 2) At U. Kentucky, student participation was mandatory and no compensation was given. The sampling techniques were very different, which opens the floor to student discussion about what this might mean for the results. Who m...

Scott Keeter's "Methods can matter: Where web surveys produce different results than phone interviews"

Pew recently revisited the question of how survey modality can influence survey responses. In particular, this study surveyed participants both by web and by telephone about their attitudes towards politicians, perceptions of discrimination, and their satisfaction with life. As summarized in the article, the big differences are: 1) "People expressed more negative views of politicians in Web surveys than in phone surveys." 2) "People who took phone surveys were more likely than those who took Web surveys to say that certain groups of people – such as gays and lesbians, Hispanics, and blacks – faced 'a lot' of discrimination." 3) "People were more likely to say they are happy with their family and social life when asked by a person over the phone than when answering questions on the Web." The social psychologist in me likes this as an example of the Social Desirability Bias. When spea...

Anya Kamenetz's "The Past, Present, And Future of High-Stakes Testing"

Kamenetz (reporting for NPR) talks about her book, Test, which is about the extensive use of standardized testing in our schools. Largely, this is a story about the impact these tests have had on K-12 instruction in the US. However, a portion of the story discusses alternatives to annual testing of every student, including using sampling to assess a school as well as numerous alternate testing methods (stealth testing, assessing child emotional well-being, portfolios, etc.). Additionally, this story touches on some of the implications of living in a Big Data society and what it is doing to our schools. I think this would be a great conversation starter for a research methods or psychometrics course (especially if you are teaching such a class for a School of Education). What are we trying to assess: individual students, or teachers, or schools? What are the benefits and shortcomings of these different kinds of assessments? Can your students come up with...

Pew Research's "Global views on morality"

Pew Research went around the globe and asked folks in 40 different countries whether a variety of behaviors qualified as "Unacceptable", "Acceptable", or "Not a moral issue". See below for a broad summary of the findings. Summary of international morality data from Pew. The data on this website is highly interactive: you can break down the data by specific behavior and by country, and also look at different regions of the world. This data is a good demonstration of why graphs are useful and engaging when presenting data to an audience. Here is a summary of the data from Pew. It nicely describes global trends (extramarital affairs are largely viewed as unacceptable, and contraception is widely viewed as acceptable). How you could use this in class: 1) Comparison of different countries and beliefs about what is right and what is wrong. Good for discussions about multiculturalism, social norms, normative behaviors, the influence of religion ...

minimaxir's "Distribution of Yelp ratings for businesses, by business category"

Yelp distribution visualization, posted by redditor minimaxir. This data distribution example comes from the subreddit r/dataisbeautiful (more on what reddit is here). The posting (started by minimaxir) was prompted by several histograms illustrating customer ratings for various Yelp (customer review website) business categories, as well as the lively reddit discussion in which users attempt to explain why different categories of services have such different distribution shapes and means. At a basic level, you can use this data to illustrate skew, histograms, and the normal distribution. As a more advanced critical thinking activity, you could challenge your students to think of reasons that some data, like auto repair ratings, are skewed. From a psychometric or industrial/organizational psychology perspective, you could describe how customers use rating scales and whether or not people really understand what average is when providing customer feedba...
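For a quick in-class demonstration to pair with the thread, here is a minimal sketch; the rating probabilities are invented for illustration (not taken from Yelp or the reddit post), but they mimic the pattern discussed there: one category piles up at the top of the scale while a love-it-or-hate-it category splits toward the extremes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
stars = np.arange(1, 6)

# Made-up star-rating probabilities for two hypothetical business categories.
categories = {
    "restaurants": [0.03, 0.05, 0.12, 0.40, 0.40],  # negatively skewed
    "auto repair": [0.35, 0.10, 0.05, 0.15, 0.35],  # piled at the extremes
}

for name, probs in categories.items():
    ratings = rng.choice(stars, size=n, p=probs)
    counts = np.bincount(ratings, minlength=6)[1:]  # counts for 1-5 stars
    print(f"{name}: mean rating = {ratings.mean():.2f}")
    for star, count in zip(stars, counts):
        print(f"  {star} stars: {'#' * (count // 200)}")  # crude text histogram
```

The printout makes the point that a single mean hides the shape of the distribution, which is exactly the discussion the reddit thread invites.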

Cory Turner's "A tale of two polls"

Image credit: LA Johnson for NPR. Cory Turner, reporting for NPR, found that differences in survey word choice affected research participants' support of the Common Core in education. The story follows two polling organizations and the exact phrasing they used when they asked participants whether or not they support the Common Core. Support for the Core varied by *20%* based upon the phrasing (highlighted below): Education Next question: "As you may know, in the last few years states have been deciding whether or not to use the Common Core, which are standards for reading and math that are the same across the states. In the states that have these standards, they will be used to hold public schools accountable for their performance. Do you support or oppose the use of the Common Core standards in your state?" (53% support) PDK/Gallup question: "Do you favor or oppose having the teachers in your community use the Common Core State Standards to guide what they teach?" (...

Public Religion Research Institute's "'I Know What You Did Last Sunday' Finds Americans Significantly Inflate Religious Participation"

A study performed by the Public Religion Research Institute used either a) a telephone survey or b) an anonymous web survey to question people about their religious beliefs and religious service habits. The researchers found that the telephone participants reported higher rates of religious behaviors and greater theistic beliefs. The figure below, from a New York Times summary of the study, visualizes the main findings; the NYT summary also provides figures illustrating the data broken down by religious denomination. Property of the New York Times. Participants also vary in their reported religious beliefs based on how they are surveyed (below, the secular are more likely to report that they don't believe in God when completing an anonymous online survey). Property of Public Religion Research Institute. This report could be used in class to discuss psychometrics, sampling, motivation to lie on surveys, social desirability, etc. Additionally, the sour...

Marketing towards children: Ethics and research

Slate's "The Littlest Tasters". More research methods than statistics, this article describes the difficulty of determining taste preferences in wee humans who don't speak well, if at all. The goods for teaching: the article mentions the FACE scale, and the research methods described go beyond marketing research, so this could be useful in a Developmental class to describe approaches used in data collection with children (like asking parents to rate their children's reactions to foods). I've used this as a discussion board prompt when discussing research ethics, both the ethics of simply conducting research with children and the ethics of marketing (not-so-healthy) foods towards children. Aside: they also describe why kids like Lunchables, which has always been a mystery to me. Apparently, kids are picky about texture and flavor, but they haven't developed a preference for certain foods to be hot or cold. The Huffington Post's "You'll Never Look at ...

Nature's "Policy: Twenty tips for interpreting scientific claims" by William J. Sutherland, David Spiegelhalter, & Mark Burgman

This very accessible summary lists the ways people fib with, misrepresent, and overextend data findings. It was written as an attempt to give non-research folk (in particular, lawmakers) a cheat sheet of things to consider before embracing or rejecting research-driven policy and laws. It is a sound list, covering plenty of statsy topics (p-values, the importance of replication), but what I really like is that the article doesn't criticize researchers as the source of the problem. It places the onus on each person to properly interpret research findings. The list also emphasizes the importance of data-driven change.

Time's "Can Time predict your politics?" by Jonathan Haidt and Chris Wilson

This scale, created by Haidt and Wilson, predicts your political leanings based upon seemingly unrelated questions. Screen grab from time.com. You can use this in a classroom to 1) demonstrate interactive, Likert-type scales and 2) illustrate face validity (or the lack thereof). 3) I think this would also be useful for a psychometrics class discussing scale building. Finally, 4) the update at the end of the article mentions both the sample size and the correlation coefficient from their reliability study, allowing you to discuss those concepts with students. For more about this research, try yourmorals.org.

The Onion's "Are tests biased against students who don't give a shit?"

The language is blue, so use at your own risk... but this faux debate is hilarious. I use it in my I/O and statistics classes to illustrate reliability, psychometric concerns related to test takers who are not totally engaged in their task, etc.

PHD Comics, 1/20/2010

Jorge Cham of PhD Comics quickly summarizes the problems that can occur as research findings are translated for the masses via the media. Funny AND touches on confidence intervals, sampling, psychometrics, etc. Property of phdcomics.com.

CBS News/New York Times' Poll: Gays in the Military

Words can be powerful and value-laden, and that can have an impact upon survey responses, as it did for this survey about attitudes towards gays serving in the military. The survey was taken in 2010, and gays can now openly serve in the military, but I think this example is still a powerful way of teaching the weight of words when creating surveys. Property of cbs.com. I tend to use this as extra credit, asking my students to respond to two questions:

Hunks of Statistics: Sharon Begley

I decided that I shouldn't limit my Hunks of Statistics list to man-hunks. There are some lady-hunks as well. Like Sharon Begley. Sharon Begley, from thedailybeast.com

Dilbert, 4/13/04

I like to use this comic for extra credit points on the big Sampling Distribution of the Mean/Central Limit Theorem statistics exams. Property of Scott Adams. Typically, I ask my students to identify the flaw in Dogbert's data collection technique, and the students reply with some variation of 1) the sample size is too small and 2) the data cannot be provided by anyone who has been killed. I did have one smart-ass reply: "Dogs can't talk". I gave him the extra credit points.
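If you want a quick demonstration to go with the exam question, here is a minimal simulation sketch of the Central Limit Theorem (the exponential population and the sample sizes are arbitrary choices of mine): sample means drawn from a decidedly non-normal population still cluster around the population mean, with a spread that shrinks like sigma divided by the square root of n.

```python
import numpy as np

rng = np.random.default_rng(7)

# A skewed "population": exponential, so nothing like a normal curve.
population = rng.exponential(scale=2.0, size=100_000)

for n in (2, 10, 50):
    # Draw many samples of size n and record each sample's mean.
    means = rng.choice(population, size=(10_000, n)).mean(axis=1)
    print(
        f"n={n:3d}: mean of sample means = {means.mean():.2f}, "
        f"SD of sample means = {means.std():.2f}, "
        f"sigma/sqrt(n) = {population.std() / np.sqrt(n):.2f}"
    )
```

As n grows, the SD of the sample means tracks sigma/sqrt(n), which is the heart of the sampling-distribution material the comic riffs on.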

Hyperbole and a Half's "Boyfriend doesn't have Ebola. Probably."

While not a statsy blog, H and a H is HILARIOUS (esp. this post about dogs and moving: I moved across town and my housebroken dog promptly peed all over the house... I can't imagine a cross-country move!). However, the entry "Boyfriend doesn't have Ebola. Probably" IS psychometric-y and hilarious. It critiques the FACES pain scale often used in hospitals. NOTE: I absolutely don't own the images below; they belong to Hyperbole and a Half. Property of Hyperbole and a Half. The language is NSFW, but who gives a fuck about that (see what I did there?). I use this as a discussion board prompt in my online statistics class (which is tailored to professionals seeking their BS in nursing), and those students seem to relate to this posting in terms of their professional life. Non-BSN students have been exposed to this scale via trips to the doctor, too, and can discuss its utility, come up with examples of when a non-verbal scale is particularly useful, ways to improve ...