When most people hear I’m teaching a new class called AI and Media Literacy, they assume it’s a sharp turn from chemistry. In some ways, they’re right. My TED Talk, “3 Rules to Spark Learning,” focused on curiosity and inquiry in science classrooms, and chemistry has long been my home base. But this new course grows from that same philosophy: it’s about giving students safe, hands-on ways to play, question, and create with the unknown. In this case, the “unknown” is artificial intelligence.
The class centers on two big ideas: Predictive AI and Generative AI. Rather than treating AI as a mysterious black box, students learn to work inside it: to build, test, and critique it. I want them to experience the useful side of AI right alongside the problematic side, building the kind of fluency that can only come from making things.

Project 1: Predictive AI

Our first project explores Predictive AI, which powers tools that classify, sort, and detect patterns in the data they were trained on. Students start by comparing how different AI models (ChatGPT, Gemini, Claude) define predictive versus generative AI, then move into hands-on modeling. Using Teachable Machine, they build simple but meaningful predictive models (image, sound, or pose classifiers) that serve a purpose in their community: maybe detecting hand signals for accessibility, identifying safe versus unsafe environmental conditions, or recognizing actions that can help others. The final challenge is to turn that model into a functioning web app, using a mix of Claude, ChatGPT, and Netlify. The results are surprisingly creative. You can browse their finished apps here on Padlet. If you’re curious about the full structure and rubric, you can view the Predictive AI Project.

Project 2: Generative AI

The second project flips the perspective. Instead of using AI to predict something about the world, students use Generative AI tools (Gemini, NotebookLM, Teachable Machine, and Google AI Studio) to analyze the world. Specifically, they build a web app that determines whether an image was created or altered by AI.

The process starts with research. Students prompt Gemini to curate ten recent YouTube videos explaining how to identify AI-generated imagery, then feed those into NotebookLM to digest and summarize the key ideas as a mind map and audio overview. From there, they design a rubric in NotebookLM listing 10–12 signs of AI manipulation.
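Both projects hinge on the same technical step: getting a trained Teachable Machine model out of the training tool and into a student-built web page. Below is a minimal sketch of what that step can look like, based on the snippet Teachable Machine’s “Export Model” dialog provides; the model URL here is a placeholder (each student’s export dialog supplies a real one), and the upload-and-classify wiring is one possible design, not the exact code my students produced.

```html
<!-- Minimal page: load a Teachable Machine image model and classify an uploaded photo.
     YOUR_MODEL_ID is a placeholder for the URL from the "Export Model" dialog. -->
<input type="file" id="upload" accept="image/*">
<img id="photo" width="224">
<div id="label"></div>

<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@latest/dist/tf.min.js"></script>
<script src="https://cdn.jsdelivr.net/npm/@teachablemachine/image@latest/dist/teachablemachine-image.min.js"></script>
<script>
  const MODEL_URL = "https://teachablemachine.withgoogle.com/models/YOUR_MODEL_ID/"; // placeholder
  let model;

  async function init() {
    // model.json holds the network; metadata.json holds the class labels
    model = await tmImage.load(MODEL_URL + "model.json", MODEL_URL + "metadata.json");
  }

  document.getElementById("upload").addEventListener("change", (event) => {
    const img = document.getElementById("photo");
    img.src = URL.createObjectURL(event.target.files[0]);
    img.onload = async () => {
      // predict() returns one {className, probability} entry per trained class
      const predictions = await model.predict(img);
      const best = predictions.reduce((a, b) => (a.probability > b.probability ? a : b));
      document.getElementById("label").textContent =
        `${best.className} (${(best.probability * 100).toFixed(1)}%)`;
    };
  });

  init();
</script>
```

A single file like this is also easy to publish: dragging it into Netlify’s drop-to-deploy page gives students a live URL to share, which is part of why that tool shows up in the project.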
They train a Teachable Machine model using real and AI-generated examples, then combine that model with their rubric inside Google AI Studio to produce a working app that analyzes uploaded images and explains why it thinks an image is or isn’t AI-made. The full project guide is available here: Generative AI Project.

The real goal of the course isn’t to turn students into coders. It’s to make them critical and confident participants in the age of AI. They learn how predictive systems make judgments, how generative systems can deceive or inform, and how both can be used for creativity and good.

For teachers interested in exploring AI in the classroom, I hope these projects strike a balance between creation and critique. They show students that AI isn’t magic; it’s math, data, and design choices made by humans. And like any good chemistry experiment, the best learning happens when they roll up their sleeves and see what reacts.