

Showing posts with the label machine learning

Citizen Scientists, Unite! The Merlin App, Machine Learning, and Bird Calls

Every Spring and Summer, I become obsessed with the Merlin app. This app allows you to record bird songs using your phone and then uses machine learning to identify the bird call. The app can also do visual IDs, if your phone has a much better camera than mine. It is like PokemonGo: I have to catch them all. But no augmented reality, just reality reality. Here is my "life list" of all the birds I've identified in about a year of using the app. This app brings joy. It is also a quick example of how citizens can become scientists, how apps can generate data from citizen scientists, and how machine learning makes it all work. So, this isn't a lengthy example for class, but it is an accessible example that shows how apps and phones can be harnessed for the greater good. And science is super fun. But how does this app gather data from users? Via machine learning. Here is even more info on how their machine learning works. AND THEN, the data can be used for scientific research...

Mark Rober's 14-minute primer on machine learning

I'm a fan of former NASA engineer and current YouTuber/science comm pro Mark Rober. He hits the sweet spot of creating YouTube content that is safe for kids but also engaging for adults. You may know him for creating obstacle courses for squirrels in his backyard and holding the world record for the tallest elephant toothpaste explosion. Recently, I discovered that he made a stats-adjacent video explaining machine learning by studying baseball signals and creating a way to decode them. Anyway, if you touch on these topics in your classes, this is a great, quick explainer. It is well-edited, well-produced, and has captioning. You don't need to be a baseball fan to follow this example.

A fast, interactive example for explaining what we mean when we talk about "training" AI/ML

When I teach regression, I touch on AI/machine learning, because it is fancy regression and ties classroom lessons to real life. During discussions about AI/ML, we often talk about "training" computers to look for something by feeding them data. Which is slightly abstract. And a bit boring, if you are just talking about a ton of spreadsheets. As an alternative to boring, I propose you ask your students to help train Google's computers to recognize doodles. Visit this website, and a prompt flashes on your screen. You draw the prompt (I used my touchscreen), and Google tries to guess what you drew. Here is my half-done wine glass; Google guessed what it was. The website includes additional information on the data that has already been collected: for every prompt, you can click through and look at all the doodles created in response to it. SO MUCH INFORMATION. If you would like, you can also show your students this explainer video.
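If your students want to see what "training" looks like in code, here is a minimal sketch. It uses scikit-learn's built-in handwritten digits as a stand-in for doodles (an assumption for convenience; Google's real doodle-recognition system is a neural network trained on millions of drawings), but the idea is the same: feed the computer labeled examples, then ask it to guess labels for drawings it has never seen.

# A toy illustration of "training": show a model labeled examples,
# then ask it to guess labels for drawings it has never seen.
# (Stand-in data: scikit-learn's 8x8 handwritten digits, not actual doodles.)
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()                      # each image flattened to 64 pixel values
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)   # a simple classifier
model.fit(X_train, y_train)                 # "training" = learning from the labeled examples

print("Guess for a new drawing:", model.predict(X_test[:1])[0])
print("Accuracy on unseen drawings:", round(model.score(X_test, y_test), 2))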

AI and COVID: A quick example of garbage in, garbage out

Sometimes, I post whole class lessons. Sometimes, I post short little example nuggets. Today I share the latter. This one is a brief, easy-to-understand example of why AI only learns what we teach it and how even a smarty-pants computer can get a little confused about correlations and what they mean. It is a great way to introduce ML and AI, the problems with both, and even to discuss correlation, prediction, and regression. https://twitter.com/hoalycu/status/1507770891786096643 ...in my head, I imagine that the AI was just judging the Comic Sans font. The text in this tweet was from an MIT Technology Review article by Will Douglas Heaven: https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/ If you want to go deeper with this example, I strongly recommend reading Dr. Cat Hicks's thread about this post: https://twitter.com/grimalkina/status/1508095358693302275 .
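If you want a classroom-sized illustration of "garbage in, garbage out," here is a small synthetic sketch (entirely made-up data, not the actual COVID chest-scan models from the article): a model latches onto a meaningless artifact that happens to track the label in the training data, then falls apart when that artifact stops lining up.

# Toy "garbage in, garbage out" demo with purely synthetic data.
# In training, an artifact (say, which hospital's scanner produced the image)
# happens to match the diagnosis perfectly; the model learns the artifact,
# not the disease.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
disease = rng.integers(0, 2, n)                 # true label (0/1)
real_signal = disease + rng.normal(0, 2.0, n)   # weak, noisy medical signal
artifact = disease.copy()                       # artifact perfectly confounded with the label

X_train = np.column_stack([real_signal, artifact])
model = LogisticRegression(max_iter=1000).fit(X_train, disease)

# At deployment the artifact is random -- it never carried medical information.
artifact_new = rng.integers(0, 2, n)
X_new = np.column_stack([real_signal, artifact_new])
print("Training accuracy:", round(model.score(X_train, disease), 2))   # looks great
print("Deployment accuracy:", round(model.score(X_new, disease), 2))   # much lower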

The Pudding's Colorism

Malaika Handa, Amber Thomas, and Jan Diehn created a beautiful, interactive website, Colorism in High Fashion. It uses machine learning to investigate "colorism" at Vogue magazine. Specifically, it delves into the differences, over time, in cover models' skin color, but also into how lighting and photoshopping can change the color of the same woman's skin, depending on the photo. There are soooo many ways to use this in class, ranging from machine learning, to how machine learning can refine old psychology methodology, to variability and within/between-group differences. Read on: 1. I'm a social psychologist. Most of us who teach social psychology have encountered research that uses magazine cover models as a proxy for what our culture emphasizes and values (1, 2, 3). Here, Malaika Handa, Amber Thomas, and Jan Diehn apply this methodology to Vogue magazine covers. And they take this methodology into the age of machine learning by using k-means clustering and pixels to deter...
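For instructors who want to peek under the hood, here is a minimal sketch of the k-means-on-pixels idea. This is my own reconstruction under assumptions, not The Pudding's actual code: cluster the RGB values of a cropped skin region and treat the largest cluster's center as the representative tone.

# K-means on pixels (hypothetical reconstruction, not The Pudding's code):
# cluster a cropped region's RGB values and report the dominant cluster's color.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical input: a cropped region as a (height, width, 3) RGB array.
# Here we fake one; in practice you would load a magazine-cover crop with Pillow.
rng = np.random.default_rng(1)
region = rng.integers(60, 220, size=(50, 50, 3))

pixels = region.reshape(-1, 3).astype(float)           # one row per pixel
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)

# Pick the cluster with the most pixels and report its average color.
largest = np.bincount(km.labels_).argmax()
dominant_rgb = km.cluster_centers_[largest].round().astype(int)
print("Estimated dominant color (R, G, B):", dominant_rgb)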

Pew Research's "Gender and Jobs in Online Image Searches"

You know how every few months, someone tweets about the stock photos that come up when you Google "professor"? And those photos mainly depict white dudes? Say "hi" to former President and former law school professor Obama, coming in at #10, several slots after "novelty kid professor in lab coat". Well, Pew Research decided to quantify this perennial tweet and expand it far beyond academia. They used machine learning to search through over 10K images depicting 105 occupations and to test whether or not the images showed gender bias. How you can use this research in your RM class: 1. There are multiple ways to quantify and operationalize your variables. There are different ways to measure phenomena. If you read through the report, you will learn that Pew both a) compared actual gender ratios to the gender ratios they found in the pictures and b) counted how long it took until a search result returned the picture of a woman for a given j...
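If it helps to make those two measurement strategies concrete for students, here is a tiny hypothetical sketch (my own made-up numbers, not Pew's data) of how you might compute both from an ordered, hand-labeled list of image results.

# Two ways to quantify gender bias in image-search results (hypothetical data):
# (a) compare the share of women in the results to the occupation's actual share,
# (b) record how far down the results the first woman appears.
results = ["man", "man", "woman", "man", "woman", "man"]  # top image results, in order
actual_share_women = 0.47                                  # hypothetical workforce share

share_in_images = results.count("woman") / len(results)
first_woman_rank = next(i for i, g in enumerate(results, start=1) if g == "woman")

print(f"Share of women in images: {share_in_images:.0%} vs. actual {actual_share_women:.0%}")
print(f"First woman appears at result #{first_woman_rank}")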