Monday, December 22, 2014

Justin Wolfers' "A Persuasive Chart Showing How Persuasive Charts Are"

Wolfers (writing for the New York Times) summarizes a study from Wansink and Tal (2014) in which participants were presented with either a) just in-text data about a drug trial or b) the same text plus a bar graph conveying the exact same information. The results can be seen below:


Wolfers/NYT

According to Wansink and Tal, the effect seems to be strongest in people who agreed with the statement "I believe in science". So, a graph makes a claim more "sciencier" and, therefore, more credible? Also, does this mean that science believers aren't being as critical because they already have an underlying belief in what they are reading?

I think this is a good way of conveying the power of graphs to students in a statistics class, as well as the need for better scientific literacy/statistical consumerism/skepticism. I think it is also a good example of the Elaboration Likelihood Model.

Finally, can we take just one moment to discuss the name of the original research article, "Blinded with science: Trivial graphs and formulas increase ad persuasiveness and belief in product efficacy"? I'm always a fan of the "<funny cultural reference>: <serious science-y stuff>" naming convention oft used in scientific articles, and this one is a real gem.


Monday, December 15, 2014

Kristoffer Magnusson's "Interpreting Cohen's d effect size"

Kristoffer Magnusson (previously featured on this blog for his interactive illustration of correlation) also has a helpful illustration of effect size. While this example probably includes some information that goes beyond an introductory understanding of effect size (via Cohen's d), I think it still does a great job of illustrating how effect size measures, essentially, the magnitude of the difference between groups (not how improbable those differences are). See below for a screen shot of the tool.

http://rpsychologist.com/d3/cohend/, created by Kristoffer Magnusson
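If you want to show students exactly what Magnusson's tool is computing, Cohen's d is simple enough to work out in a few lines of Python. Below is a minimal sketch; the two groups' scores are made up for illustration (and the difference is deliberately exaggerated), not data from the visualization.

```python
import statistics

# Made-up scores for two groups (purely illustrative numbers)
group_a = [5.1, 4.8, 6.0, 5.5, 5.9, 4.7, 5.3, 5.6]
group_b = [4.2, 3.9, 4.8, 4.5, 4.1, 3.7, 4.4, 4.6]

def cohens_d(x, y):
    """Cohen's d: the mean difference divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = (((nx - 1) * statistics.variance(x) +
                   (ny - 1) * statistics.variance(y)) / (nx + ny - 2))
    return (statistics.mean(x) - statistics.mean(y)) / pooled_var ** 0.5

d = cohens_d(group_a, group_b)
```

The payoff for students: d is expressed in standard deviation units, so it describes how big the difference is, regardless of sample size, which is exactly what a p-value does not tell you.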

Wednesday, December 10, 2014

UCLA's "What statistical analysis should I use?"

This resource from UCLA is, essentially, a decision-making tree for determining what kind of statistical analysis is appropriate based upon your data (see below).
Screen shot from "What statistical analysis should I use?"
Now, such decision-making trees are available in many statistics textbooks. However, what makes this one special is that each test comes with code/syntax as well as output for SAS, Stata, SPSS, and R. That is helpful to our students (and, let's be honest, to us instructors/researchers as well).

Monday, December 1, 2014

Tessa Arias' "The Ultimate Guide to Chocolate Chip Cookies"

I think this very important cookie research is appropriate for the Christmas cookie baking season. I also believe that it provides a good example of the scientific method.

Arias started out with a baseline recipe (the Nestle Toll House Cookie Recipe, which also served as her control group) and modified it in a number of different ways (IVs) in order to study several dependent variables (texture, color, density, etc.). The picture below illustrates the various outcomes for the different recipe modifications.

For science!

http://www.handletheheat.com/the-ultimate-guide-to-chocolate-chip-cookies


Also, like a true scientist, she followed her original study with several follow-up studies investigating the effects of different kinds of pans and flours upon cookie outcomes.

http://www.handletheheat.com/the-ultimate-guide-to-chocolate-chip-cookies-part-2
I used this example to introduce hypothesis testing to my students. I had them identify the null and alternative hypotheses, the control and experimental groups, etc.

Wednesday, November 26, 2014

Facebook Data Science's "What are we most thankful for?"

Recently, a Facebook craze asked users to list three things they were thankful for each day for five days. Data scientists Winter Mason, Funda Kivran-Swaine, Moira Burke, and Lada Adamic at Facebook analyzed this data to better understand the patterns of gratitude publicly shared by Facebook users.

The data scientists broke the data down by most frequently listed gratitude topic:

Most frequently "liked" gratitude posts (lots of support for our friends in recovery, which is nice to see).


Gender differences in gratitude...here is data for women. The wine gratitude finding for women was not present in the data for men. Ha.


Idiosyncratic data by state. I would say that Pennsylvania's fondness for country music rings true for me.



How to use in class: This example provides several interesting, easy to read graphs, and the graphs show how researchers can break down a single data set in a variety of interesting ways (by gender, by age, by state). Additionally, this data strikes me as a replication of Seligman's gratitude exercise from the positive psychology literature. You could use this example to discuss the ways in which students think this data (specifically, the things Facebook users say they are grateful for) would differ when the data is collected via Facebook's public forum versus when participants keep a private journal. How might a public display of gratitude differ from a private reflection upon gratitude?

Monday, November 24, 2014

Diane Fine Maron's "Tweets identify food poisoning outbreaks"

This Scientific American podcast by Diane Fine Maron describes how the Chicago Department of Public Health (CDPH) used Twitter data to shut down restaurants with health code violations. Essentially, the CDPH monitored Tweets in Chicago, searching for the words "food poisoning". When such a tweet was identified, an official at CDPH messaged the Twitterer in question with a link to an official complaint form website.

The results of this program?

"During a 10-month stretch last year, staff members at the health agency responded to 270 tweets about “food poisoning.” Based on those tweets, 193 complaints were filed and 133 restaurants in the city were inspected. Twenty-one were closed down and another 33 were forced to fix health violations. That’s according to a study in the journal Morbidity and Mortality Weekly Report. [Jenine K. Harris et al, Health Department Use of Social Media to Identify Foodborne Illness — Chicago, Illinois, 2013–2014]"

I think this is a good example of using big data/new media data/archival data in a cheap and novel manner for the public good. It would also be interesting to ask your students how such a method could be employed within their community or campus. What are the big problems at your campus? How could you a) monitor Twitter/Facebook/YikYak/IG accounts and b) come up with a fast solution (like the online complaint forms) in order to follow up informal data collection with more formal data collection?

(PS: HAPPY THANKSGIVING! Don't forget to properly reheat your leftovers!)

Monday, November 17, 2014

Free stats/methods textbooks via OpenStax

OpenStax CNX "is a dynamic non-profit digital ecosystem serving millions of users per month in the delivery of educational content to improve learning outcomes." So: free textbooks that can be easily downloaded, including nearly 7,000 free statistics textbooks as well as over 1,500 research methods texts.


How OpenStax works (via http://cnx.org/about)


I like this format because it is free but also because it is flexible enough that you can pick and choose chapters from different text books to use in a class. Additionally, if you are feeling generous, you can upload your own content to share.

Monday, November 10, 2014

Geoff Cumming's "The New Statistics: Estimation and Research Integrity"

Geoff Cumming
Geoff Cumming gave a talk at APS 2014 about the "new statistics" (reduced emphasis on p-value, greater emphasis on confidence intervals and effect sizes, for starters).

This workshop is now available, online and free, from APS. The three-hour talk has been divided into five sections, and each section comes with a "Table of Contents" to help you quickly navigate all of the information contained in the talk.

While some of this talk is too advanced for undergraduates, I think that there are portions (like his explanations of why p-values are so popular, p-hacking, and confidence intervals) that can be nice additions to an Introduction to Statistics class.



Monday, November 3, 2014

John Venn's Google Doodle

Make pretty Venn diagrams via this archived version of the Google Doodle that celebrated John Venn's 180th birthday.

A good example of a Venn diagram as well as a way to (approximately) illustrate shared variance.
The overlap between vegetation and things that can fly

Monday, October 27, 2014

Nell Greenfieldboyce's "Big Data peeks at your medical records to find drug problems"

NPR's Nell Greenfieldboyce (I know, I thought it would be hyphenated as well) reports on Mini-Sentinel, a government effort to detect adverse side effects associated with prescription drugs as quickly as possible. Specifically, instead of waiting for doctors to voluntarily report adverse effects, the program mines data from insurance companies in order to detect side effects and illnesses being experienced by people on prescription drugs.

Topics covered by this story that may apply to your teaching:

1) Big data
2) Big data solving health problems
3) Data and privacy issues
4) Conflict of interest
5) An example of the federal government pouring lots of money into statistics to make the world a little safer
6) An example of data and statistics being used in not-explicitly-statsy fields and occupations

Free American Psychological Association style tutorials/quiz

Here are two free Flash tutorials about APA style, directly from APA. The first tutorial provides an introduction to APA style, while the second provides a list of changes in the 6th edition.

And here is a free quiz on reference alphabetization, also from the APA Style Blog (you can also download the quiz in PDF format for in-class use).

Also, don't forget about these resources (1, 2) for help crafting results sections in APA style.


Monday, October 20, 2014

Quoctrung Bui's "Who's in the office? The American workday in one graph"

Credit: Quoctrung Bui/NPR
Bui, reporting for NPR, shares an interactive graph that demonstrates when people in different career fields are at the office. Via drop-down menus, you can compare the standard work days of a variety of different fields (here, "Food Preparation and Serving" versus "All Jobs").


If you scoff at pretty visualizations and want to sink your teeth into the data yourself, may I suggest the original government report, the "American Time Use Survey," or a related publication by Kawaguchi, Lee, & Hamermesh (2013).


Demonstrates: Bimodal data, data distribution, variability, work-life balance, different work shifts.




Tuesday, October 7, 2014

Free webinar on Simpson's Paradox teaching example/Bayesian logic for undergraduate statistics


Attend CAUSE Web's free Journal of Statistics Education webinar on 10/21/14 to learn about 1) a classroom example of Simpson's Paradox as well as 2) ways to incorporate Bayesian logic into undergraduate statistics courses.

More information on past JSE webinars available here.

Monday, October 6, 2014

Mara Liasson's "The challenges behind accurate opinion polls"


This radio story by Mara Liasson (reporting for NPR) discusses the surprising primary loss of former Republican House Majority Leader Eric Cantor. It was surprising because internal polling conducted by Cantor's team predicted an easy win, but he lost to a Tea Party favorite, David Brat. The story goes on to describe why it is becoming increasingly difficult to conduct accurate voter polling via telephone and the internet.



Some specific points from this story that teach students about sampling techniques:

1) Sample versus population: One limitation of polling data is the fact that many telephone-based sampling techniques include land lines but ignore the growing population of people who only have cell phones.
2) Response rates for political polling are declining, which undermines the validity of the available samples.
3) Robocalls, while less expensive, have no way of verifying that an actual registered voter is responding to the questions. Additionally, restrictions on placing robocalls to cell phones (but not land lines) create more difficulties for pollsters.

I used this example as a way of introducing sampling error and, eventually, the sampling distribution of the mean/standard error.

I emphasized that Cantor's polling had been conducted by the high-end polling firm McLaughlin and Associates (source). I also explained just how powerful Eric Cantor was: if an extraordinarily rich and powerful person using a very expensive polling firm has to contend with sampling error, then what does that say for the rest of us?
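A quick simulation can make the sampling error point concrete for students: even well-run polls of a few hundred voters bounce around the true level of support. The sketch below uses a made-up electorate with 45% support (my own toy numbers, not Cantor's actual polling data) and runs 1,000 independent polls of 400 voters each.

```python
import random

random.seed(1)

# Hypothetical electorate of 10,000 voters; true support is 45%
population = [1] * 4500 + [0] * 5500  # 1 = supporter, 0 = non-supporter

# Run 1,000 independent polls of n = 400 voters and record each estimate
poll_estimates = [
    sum(random.sample(population, 400)) / 400
    for _ in range(1000)
]

# The polls cluster around the true value, but individual polls can miss badly
mean_estimate = sum(poll_estimates) / len(poll_estimates)
lowest, highest = min(poll_estimates), max(poll_estimates)
```

Students can see both halves of the lesson at once: the average of many polls is very close to 45%, yet any single poll (like Cantor's) can land several points away through sampling error alone.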

Monday, September 29, 2014

Slate & Rojas-LeBouef's "Presenting and Communicating Your Statistical Findings: Model Writeups"

Holy smokes. This e-book (distributed for free via OpenStax) contains sample results sections for multiple statistical tests, which is helpful but not particularly unique. There are other resources for creating APA results sections (I love U. Washington's resources), but I feel that this book is particularly useful in that:

1) It addresses how to include effect sizes (most of the results section examples I have been able to find neglect this increasingly important aspect of data analysis).
2) The writers translate SPSS output into results sections.
3) The writers aren't psychologists, but they are APA-compliant (and even point out instances when their figures and tables aren't APA-compliant).
4) It is gloriously free.

The only shortcoming is that they don't provide examples for more types of data analyses. The book does, however, cover chi-square, correlation, t-tests, and ANOVA: most of what is covered in introductory statistics courses.

I think this book provides good examples for those of us who teach statistics and are starting to integrate more of the "new statistics" into our teaching. It is accessible enough for intro students, but I bet that graduate students could make use of this book as well.


Thursday, September 25, 2014

Kristoffer Magnusson's "Understanding correlations, an interactive visualization"

Kristoffer Magnusson is a psychology graduate student with a background in web design, and he is using his talents to create succinct, beautiful visualizations of statistical concepts. Below is a screen shot of his interactive tool for better understanding correlation and how it relates to shared variance (users can change the n-size and r and watch the corresponding changes in shared variance and the scatter plot). Follow Magnusson's work and statistical visualizations via @rpsychologist.

Special thanks to Randy McCarthy for recommending this resource!
Using the "Slide me" bar at the top, you can adjust the correlation in order to visualize the scatter plot, slope, and shared variance.
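For instructors who want a non-interactive version of the same idea, here is a small Python sketch (the data are simulated with a toy model of my own invention, not taken from Magnusson's tool): generate two correlated variables, compute Pearson's r by hand, and show that the shared variance is simply r squared.

```python
import random

random.seed(42)

# Toy model: y = x + noise, which produces r near .71 (shared variance near .50)
x = [random.gauss(0, 1) for _ in range(5000)]
y = [xi + random.gauss(0, 1) for xi in x]

def pearson_r(a, b):
    """Pearson correlation computed from its definition."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

r = pearson_r(x, y)
shared_variance = r ** 2  # proportion of variance the two variables share
```

This mirrors what the slider does: as students change r, the shaded overlap in the visualization is just r² recomputed.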

Monday, September 22, 2014

Center for Open Science's FREE statistical & methodological consulting services

Center for Open Science (COS) is an organization that seeks "to increase openness, integrity, and reproducibility of scientific research". As a social psychologist, I am most familiar with COS as a repository for experimental data. However, COS also provides free consulting services to teach scientists how to make their own research processes more replication-friendly.



As scholars, we can certainly take advantage of these services. As instructors, the kind folks at COS are willing to provide workshops to our students (including, but not limited to, online workshops). Topics they can cover include: Reproducible Research Practices, Power Analyses, The 'New Statistics', Cumulative Meta-analyses, and Using R to create reproducible code (for more information on scheduling, see their availability calendar).

I once heard it said that the way you learn how to conduct research and statistics in graduate school will be the way you are inclined to conduct research and statistics for the rest of your professional life. As such, why not introduce our students (both graduate and undergraduate) to an aspect of data collection and analysis that both cultivates ethical behavior and is a growing expectation for publication in our top journals?  

Thursday, September 18, 2014

So I wrote a book: Shameless self-promotion 4

When I'm not busy thinking about statistics and research methods, I like to think about positive psychology. I like to think about it so much that I co-authored a positive psychology book with Rich Walker (Winston-Salem State University) and Cory Scherer (Penn State - Schuylkill). The book, called Pollyanna's Revenge, is published by Kendall-Hunt. It makes the case that (contrary to many pop-psych reports) there are many good side effects to being a Pollyanna, and that our minds engage in all manner of non-conscious processes that help us maintain positive affect (with special attention paid to the role of the Fading Affect Bias and memory in maintaining good moods).




Cross-promotion, y'all!

Monday, September 15, 2014

minimaxir's "Distribution of Yelp ratings for businesses, by business category"


Yelp distribution visualization, posted by redditor minimaxir

This data distribution example comes from the subreddit r/dataisbeautiful (more on what a reddit is here). This specific posting (started by minimaxir) features several histograms illustrating customer ratings for various Yelp (customer review website) business categories, as well as a lively reddit discussion in which users attempt to explain why different categories of services have such different distribution shapes and means.

At a basic level, you can use this data to illustrate skew, histograms, and normal distribution. As a more advanced critical thinking activity, you could challenge your students to think of reasons that some data, like auto repair, is skewed. From a psychometric or industrial/organizational psychology perspective, you could describe how customers use rating scales and whether or not people really understand what average is when providing customer feedback.
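To give students a concrete handle on skew before sending them to the Yelp histograms, you can build a made-up ratings distribution and show how a long low-rating tail drags the mean below the median. The ratings below are invented for illustration, not Yelp's actual data.

```python
import statistics

# Invented 1-5 star ratings: mostly happy customers, with a tail of angry ones
ratings = [5] * 50 + [4] * 30 + [3] * 10 + [2] * 6 + [1] * 4

mean_rating = statistics.mean(ratings)      # pulled down by the low-rating tail
median_rating = statistics.median(ratings)  # unaffected by how extreme the tail is
```

Negatively skewed data like this gives mean < median, which is a quick diagnostic students can apply when they eyeball the real histograms in the reddit thread.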

Thursday, September 11, 2014

Cory Turner's "A tale of two polls"

LA Johnson for NPR
Cory Turner, reporting for NPR, found that differences in survey word choice affected research participants' support for the Common Core in education. The story follows two polling organizations and the exact phrasing they used when they asked participants whether or not they support the Common Core. Support for the Core varied by 20 percentage points based upon the phrasing (highlighted below):

Education Next Question:
"As you may know, in the last few years states have been deciding whether or not to use the Common Core, which are standards for reading and math that are the same across the states. In the states that have these standards, they will be used to hold public schools accountable for their performance. Do you support or oppose the use of the Common Core standards in your state?" (53% support)

PDK/Gallup Question:

"Do you favor or oppose having the teachers in your community use the Common Core State Standards to guide what they teach?" (33% support)

Turner speculates that people love the idea of accountability and that the use of that word by Education Next (along with more contextual information) led to the difference in support.

I used this example in my statistics class in order to emphasize the importance of word choice when creating scales. 

Monday, September 8, 2014

University of Manchester's Academic Phrasebank

One consistent problem I find in undergraduate writing is a tendency toward flowery prose. I think it is one of the reasons that APA style can be so difficult to teach: The less-is-more approach to concise writing is not a lesson that they are necessarily getting from other classes. To further muddy the waters, students really don't have any experience writing about numbers/data/statistics/results in a way that a) doesn't convey too much certainty in the data or b) doesn't imply causality when that isn't appropriate.

That is why I love the Academic Phrasebank. It provides lists and lists and lists of concise, accurate ways to describe research findings. For example, how to write up statistical results:

http://www.phrasebank.manchester.ac.uk/reporting-results/
In addition to providing examples for wording in a results section, the site also clarifies the type of guarded language that should be used in a discussion:

http://www.phrasebank.manchester.ac.uk/using-cautious-language/
Another (non-statsy/researchy) undergraduate writing pet peeve of mine is when students do not connect their sentences and paragraphs into a cohesive narrative (full disclosure: this is something I struggled with in graduate school). This site has a whole section devoted to connecting paragraphs, sentences, and ideas together.

While this collection isn't explicitly APA-compliant, I would say that it embraces the spirit of APA style.

Monday, September 1, 2014

MathIsFun.com's linear equation Flash applet

When I teach regression, I usually introduce the regression line by reminding my students of the long-ago days of algebra class and graph paper and rulers.

MathIsFun.com has created an interactive applet that mimics the graph paper and allows users to adjust the y-intercept and the slope. This is a slightly fancier, more high-tech way to get your students thinking about the linear equation and then fitting that old knowledge into the new concept of regression.

Use the bars to adjust slope and y-intercept as a quick linear equation primer before teaching regression
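The slider exercise translates directly into code, which can be another bridge from the algebra-class linear equation to regression: generate points from y = slope·x + intercept, then show that least squares recovers exactly the slope and intercept you set. Here is a quick Python sketch (the slope and intercept values are arbitrary, like values a student might set with the applet's sliders).

```python
# The linear equation from algebra class: y = slope * x + intercept
def line(x, slope, intercept):
    return slope * x + intercept

# Generate points from a line with slope 2 and y-intercept 1
xs = list(range(5))
ys = [line(x, 2, 1) for x in xs]

# Least-squares regression recovers the line exactly, because these
# points fall perfectly on it (real data would scatter around it)
def least_squares(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = least_squares(xs, ys)
```

From here, adding noise to the y-values is a natural next step for showing students why the fitted line no longer passes through every point.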

Monday, August 25, 2014

Regina Nuzzo's "Scientific method: Statistical errors"


This article from Nature is an excellent primer on the concerns surrounding the use of p-values as the great gate keeper of statistical significance. The article includes historical perspective on how p-values came to be so widely used as well as some discussion on solutions and alternative measures of significance.

This article also provides good examples of failed attempts at replication (possible Type I errors) and a shout-out to the Open Science Framework folks.
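A simulation makes the core complaint about p-values easy to demonstrate in class: when there is truly no effect, about 5% of tests come out "significant" at alpha = .05 purely by chance. This is my own illustrative sketch (a crude two-sample z-test with known sigma = 1), not code from the Nature article.

```python
import random
from math import erf

random.seed(0)

def two_group_p(n=30):
    """Two-sided z-test p-value for two samples drawn from the SAME population."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5                              # known sigma = 1 in both groups
    z = abs(mean_diff) / se
    return 2 * (1 - 0.5 * (1 + erf(z / 2 ** 0.5)))   # normal-tail two-sided p

# Run 2,000 "studies" in which the null hypothesis is true by construction
p_values = [two_group_p() for _ in range(2000)]
false_positive_rate = sum(p < 0.05 for p in p_values) / len(p_values)
```

The false positive rate hovers near .05: a stream of "significant" findings with no real effect behind them, which is exactly the replication worry the article describes.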

Personally, I have revised my class for the fall to include more discussion and use of effect sizes. I think this article may be a bit above an undergraduate introduction to statistics class, but it could be useful for us as instructors as well as a good reading for advanced undergraduates and graduate students.

Wednesday, August 20, 2014

Patti Neighmond's "What is making us fat: Is it too much food or moving too little?"

This NPR story by Patti Neighmond is about determining the underlying cause of the U.S. obesity epidemic. As the name of the segment states, it seems to come down to food consumption and exercise, but which is the culprit? This is a good example for research methods because it describes the methodology for examining both sides of the question. The methodology also provides good examples of archival data usage.

Monday, August 18, 2014

Piktochart.com

If you are looking for an alternative to good ol' Excel and SPSS for creating graphs and charts, perhaps your students would like to create infographics via a free, online resource.

One such tool is Piktochart. It requires registration (via email, Facebook, or Google) and has many free templates as well as a "pro" pay-to-play package. Below are a few screen grabs of what it is like to personalize one of their templates with your own data; I input a bit of user data from this blog into a pre-existing template.

Piktochart template

User interface for entering your own data (if you can use Excel, you can use this)

End result, with data from this blog
It is pretty easy to use, and it offers many different kinds of figures (from good old pie charts and bar graphs to visualizations that stray far from the APA style manual but still do a good job of conveying data to an audience).

This coming semester, I am adding a service learning component to my statistics lab class. We are collecting mental health awareness data from students for our university's counseling center. I am thinking that my students might use piktochart in order to create visually appealing ways to share their findings around campus.

Monday, August 4, 2014

Five Labs' Big Five Personality Predictor


five.com's prediction via status update

It might be fun to have students use this app to measure their Big Five and then compare those findings to the youarewhatyoulike.com app (which I previously discussed on this blog), which predicts your scores on the Big Five based on what you "Like" on FB.

youarewhatyoulike.com's prediction via "Likes"



As you can see, my "Likes" indicate that I am calm and relaxed, but I am a neurotic status updater (crap...I'm that guy!). By contrasting the two, you could discuss reliability, validity, how such results are affected by social desirability, etc. Furthermore, you could have your students take the original scale and see how it stacks up against the two FB measures.

Note: If you ask your students to do this, they will have to give these apps access to a bunch of their personal information.

Monday, July 28, 2014

First day of class: Persuading students to treat statistics class as more than a necessary evil (with updates)


I am busy prepping my statistics class for the fall (as well as doing a bunch of stuff that I should have done in June, but I digress). Most of my students are required to take statistics and are afraid of mathematics, so I'm going to try to convince them to embrace statistics by showing them that more and more non-statsy jobs require data collection, data analysis, data-driven decisions, program assessment, etc.

I find that my students are increasingly aware of the current job market as well as their student loan debt. As such, I think that students are receptive to arguments that explain how even a little bit of statistical knowledge can make them more attractive to potential employers.

Here are some resources I have found to do just that. 


This article by Susan Adams for Forbes lists the top ten skills employers are looking for in employees. Included in the top ten:

"2. Ability to make decisions and solve problems

5. Ability to obtain and process information
6. Ability to analyze quantitative data
10. Ability to sell and influence others"

Erin Palmer at Business Insider provides a more direct endorsement of statistics by arguing that statistical skills are in demand, both within the context of explicitly statistical jobs but also in other career fields.


"Taking a statistics class in college is a good career move, even if your ultimate career goals have nothing to do with math. Before you roll your eyes, consider all the non-mathematical careers that use statistics. Executives, politicians, supply chain managers, entrepreneurs and marketers are among the many professionals who analyze data and statistics regularly."

Katie Bardaro's New York Times piece "STEM Skills Aren't Just for STEM Majors" argues that college students can be STEMy without having a STEM major:

"That being said, not all college students have the interest or ability to major in a STEM field. Another possibility is to major in a non-STEM field, but take some analytically focused courses like economics or statistics. Many jobs that previously didn't require analytic thought or data handling now do, and arming yourself with these skills is one way to get a leg up in the labor market."

12/27/14 Update: LinkedIn data analysis has determined that "Statistical analysis and data mining" is the #1 "Hottest Skill of 2014"

4/9/15 Update: USA Today - College Edition opinion piece from a doctor-in-training who argues that "statistics might be the most important class you take in college".

7/21/15 Update: This article describes how and why the gender gap that exists throughout most of mathematics isn't as enormous in the field of statistics. Includes women in the field describing why they love their careers in statistics. 

9/3/15 Update: This article was written by a journalist, Laura Miller. She reflects on the fact that she wishes she had taken a statistics course in college, because her job has shown her that data is used to persuade people as much as language is. She also does a good job of touching on how probability can be counterintuitive, and she recommends some pop statistics readings.

12/21/16 Update:

New National Survey By SHRM Shows Employers Struggling to Meet Growing Demand for Data Analysts


Monday, July 21, 2014

ed.ted.com: TED video + assessment + discussion board


The folks at TED have created ed.ted.com, a website that allows you to use their videos (or any video available via YouTube) to create a lesson around the video. You can create an assessment quiz (and save your students' grades on the assessment). You can also create discussion boards and post your own commentary/links related to the content of the video.

I know, right?

There are several lessons that relate to statistics and research methods. Here is a shorter video that teaches the viewer how to assess the quality of medical research, and here is a list of TED talks about Data Analysis and Probability. While the teaching of statistics and research methods is my jam, you can use any old video from YouTube/TED (like the many talks featuring psychology research) and create an online lesson and assessment about the talk. Pretty cool! I think these could be used for bonus points, a quick homework assignment, or as a way to reiterate the more conceptual ideas surrounding statistics.

From Not all scientific studies are created equal by David H. Schwartz

Also, if you are looking for more statsy videos to use with this tool, I do use a "video" label on this blog. Not all of the video links I provide are hosted by YouTube, but I bet that you could find most of these videos on YouTube with just a little bit of Googling.

Do note:
1) In order to have full use of this site, you and your students do need to register.
2) I don't see a way to automatically upload assessment data into Blackboard or other learning management systems.

Monday, July 14, 2014

Nate Silver and Allison McCann's "How to Tell Someone’s Age When All You Know Is Her Name"

Nate Silver and Allison McCann (reporting for Five Thirty Eight) created graphs displaying baby name popularity over time. The data and graphs can be used to illustrate bimodality, variability, medians, interquartile range, and percentiles.

For example, the pattern of popularity for the name Violet illustrates bimodality and illustrates why measures of central tendency are incomplete descriptors of data sets:

"Other names have unusual distributions. What if you know a woman — or a girl — named Violet? The median living Violet is 47 years old. However, you’d be mistaken in assuming that a given Violet is middle-aged. Instead, a quarter of Violets are older than 78, while another quarter are younger than 4. Only about 4 percent of Violets are within five years of 47."
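You can recreate the flavor of the Violet example with a toy data set. The ages below are invented to mimic the pattern (one young cluster, one elderly cluster), not FiveThirtyEight's actual data: the median lands between the two clusters, describing almost no actual Violet, while the interquartile range reveals the enormous spread.

```python
import statistics

# Invented "ages of living Violets": a young cluster and an elderly cluster
ages = [2, 3, 3, 4, 4, 5, 6, 47, 75, 78, 79, 80, 81, 82, 83]

median_age = statistics.median(ages)        # 47: between the clusters
q1, _, q3 = statistics.quantiles(ages, n=4) # first and third quartiles
iqr = q3 - q1                               # huge spread despite one "center"
```

The punchline for students: the median alone says "middle-aged Violet," but the quartiles tell the real story of toddlers and great-grandmothers.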



Relatedly, bimodality (resulting from the current trend of giving classic, old-lady names to baby girls) can result in massive variability for some names...



...versus trendy baby names that have smaller interquartile ranges...

Your blogger exceeds the 75th percentile for Jessica Age. She considers herself a trendsetter, not old for her name.

Monday, July 7, 2014

Every baby knows the scientific method

I am the mother of a boundary-testing two year old and my little guy likes to replicate his research findings with me all day long. We're currently trying to pull a sufficient n-size to test his hypothesis of whether or not I will ever let him eat dog food. I don't want to p-hack, but I'm pretty sure that that answer is no.


Monday, June 23, 2014

Public Religion Research Institute's "'I Know What You Did Last Sunday' Finds Americans Significantly Inflate Religious Participation"

A study performed by the Public Religion Research Institute used either a) a telephone survey or b) an anonymous web survey to question people about their religious beliefs and religious service habits. The researchers found that the telephone participants reported higher rates of religious behavior and greater theistic belief.

The figure below, from a New York Times summary of the study, visualizes the main findings. The NYT summary also provides figures illustrating the data broken down by religious denomination.

Property of the New York Times
Participants also vary in their reported religious beliefs based on how they are surveyed (below, the secular are more likely to report that they don't believe in God when completing an anonymous online survey).

Property of Public Religion Research Institute

This report could be used in class to discuss psychometrics, sampling, motivation to lie on surveys, social desirability, etc. Additionally, the source article provides a good literature review on various ways to "count" religious behavior, including going to churches and counting the people in the pews, as well as framing questions so as to take the focus away from religion (ironically, in hopes of prompting more honest answers).

It might also be interesting to ask students to generate a list of other sensitive topics about which people are inclined to lie, to consider ways in which telephone and online respondents may differ demographically, or to think of ways to encourage more honest responding.

Also, The Onion weighed in on the report:

Property of The Onion