Pedagogy & Practice

AI in the Classroom

Practical guides for faculty integrating AI into their teaching: real-world strategies, lesson plans, and pedagogical insights.

Teaching Research Skills When Students Can Generate Citations in Seconds

The first time a student shows you a perfectly formatted bibliography for a paper they wrote in 45 minutes, don't panic, but do pause. What we're witnessing isn't the death of research instruction; it's a correction. For years, many of our students treated citations as a box-checking exercise: grab something from a database, format it in MLA or APA, and move on. AI tools have simply exposed how little that approach had to do with actual research thinking.

Here's what still matters, perhaps more than ever: the question. Teaching students to craft researchable questions, narrow enough to answer, broad enough to matter, has become our most valuable work. An AI can generate a bibliography on "climate change policy," but it cannot define why a student cares about climate change policy in their particular community, for their particular major, with their particular career in mind. When you build assignments around student-generated questions, you're asking for something AI cannot produce: genuine intellectual investment.

The second shift is equally important. Instead of treating citation generation as a skill to test, make the evaluation about what happens before and after the citation. Ask students to annotate their sources: Why did they choose this one over the ten others they found? What did they have to discard, and why? How does this source complicate or confirm their argument? These are the moves that separate researchers from content consumers, and no chatbot can do the choosing for them.

Finally, be honest with your students about what you're teaching. Tell them directly: "I'm not grading your ability to generate a Works Cited page. Your phones can do that. I'm grading whether you know why a source belongs in a paper, whether you can evaluate its credibility, and whether you can build an argument that uses evidence rather than just displays it." When students understand the real assignment, most of them want to meet that standard. The technology changes, but the intellectual work at the center of good research hasn't, and that's the piece only you can teach.

Written by Chuck Hampton

AI as Your Teaching Amplifier, Not Your Replacement

Here's something worth remembering as you navigate the AI conversation on your campus: the technology doesn't have to be the enemy of your pedagogy. In fact, it can become one of your strongest allies. The key shift is thinking of AI not as a content replacement, but as an amplifier of the teaching commitments you already hold dear.

Consider what this looks like in practice. If you value formative feedback, AI can help you provide more of it, drafting response suggestions that you refine and personalize. If you emphasize critical thinking, AI can generate provocation texts for students to analyze and push back against. If you care about accessibility, AI tools can help you convert your materials into multiple formats faster than ever before. The technology handles the time-intensive scaffolding; you maintain the intellectual authority.

The Mandela University scholars pushing for Africa-centered approaches to AI and digital humanities get this right. They're not rejecting the tools; they're insisting the tools serve their intellectual and cultural commitments, not the other way around. That's the posture worth adopting. Your content, your values, your classroom culture: these remain the center. AI becomes the instrument that extends your reach.

Start small. Pick one assignment or workflow where you feel stretched thin and explore whether AI could lighten that load without compromising your standards. You'll likely find the technology far more useful when you're using it to amplify what you already do well, rather than worrying about what it might replace.
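To make the formative-feedback example concrete, here is a minimal sketch of a draft-then-revise workflow, assuming the OpenAI Python SDK and an API key in the environment. The draft_feedback helper, the model name, and the prompt wording are illustrative assumptions, not tools named in this piece; any chat-capable model would serve the same purpose.

```python
# Minimal sketch: AI drafts formative feedback, the instructor revises it.
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set
# in the environment. Model name and prompts are placeholders, not endorsements.
from openai import OpenAI

client = OpenAI()

def draft_feedback(student_excerpt: str, rubric: str) -> str:
    """Return a draft comment for the instructor to edit; never auto-delivered."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat model works
        messages=[
            {"role": "system",
             "content": "You draft formative feedback for a professor to revise. "
                        "Be specific, reference the rubric, and end with one "
                        "guiding question for the student."},
            {"role": "user",
             "content": f"Rubric:\n{rubric}\n\nStudent excerpt:\n{student_excerpt}"},
        ],
    )
    return response.choices[0].message.content
```

The design point is that the output is addressed to you, not the student: the model drafts, you personalize, and the pedagogical judgment stays human.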

Written by Chuck Hampton

Fostering Ethical Discourse: Integrating AI into Critical Thinking Exercises

Incorporating AI into the classroom opens up rich opportunities for critical discourse, particularly surrounding the ethical implications of AI across various fields. A practical approach is to have students first explore a specific discipline, say medicine, without the lens of AI. They can write essays discussing the ethical boundaries of medical practices, patient privacy, and the decision-making process in healthcare. By grounding their arguments in established ethical frameworks, students can develop a solid understanding of the complexities involved in medical ethics.

Once students have articulated their perspectives, they can engage with AI to craft a counterargument to their initial essays. This interaction not only allows students to see their arguments challenged but also encourages them to critically assess the AI-generated responses. By evaluating the AI's points, students can confront biases, inaccuracies, and ethical dilemmas that might arise from AI-generated content. This exercise promotes a deeper understanding of how AI can both inform and complicate discussions on ethics, pushing students to think critically about the implications of AI in their field.

Moreover, this method nurtures an environment of open dialogue and critical thinking. Students learn to appreciate multiple viewpoints while sharpening their analytical skills. As educators, it is essential to guide these discussions, ensuring that students recognize the limitations and potential biases inherent in AI systems. This awareness will serve them well in their future careers, where ethical considerations will be paramount in their decision-making processes.

Ultimately, integrating AI into classroom discussions about ethics not only enriches the learning experience but also prepares students for a future where AI's role will be increasingly significant. By fostering critical discussions around these topics, we empower students to navigate the complexities of their respective fields with a well-rounded and ethically informed perspective.
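If you want students to run the counterargument step consistently, a shared prompt template helps. The wording below is a hypothetical illustration of the exercise described above, written as a Python string so it can be pasted into any chatbot or script; it is not a prescribed template from any curriculum.

```python
# Hypothetical counterargument prompt for the ethics exercise; illustrative only.
# The {discipline} and {essay_text} placeholders are filled in by each student.
COUNTERARGUMENT_PROMPT = (
    "Below is my essay on the ethics of {discipline}. Write the strongest "
    "counterargument you can, grounded in a named ethical framework different "
    "from the one I used. Then list two claims in your counterargument that I "
    "should fact-check, and one assumption you made that could reflect bias.\n\n"
    "Essay:\n{essay_text}"
)
```

Asking the model to flag its own checkable claims and possible biases builds the evaluation step directly into the exchange, rather than leaving it as an afterthought.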

Written by Chuck Hampton

UNC Professor Bridges Humanities and AI in Research and Teaching

A Romance studies professor at UNC-Chapel Hill is combining her expertise in language and literature with digital technologies, demonstrating how humanities perspectives can inform artificial intelligence development. Her work illustrates an emerging model for universities seeking to integrate AI across disciplines while maintaining the critical lens that humanistic inquiry provides.

Written by Chuck Hampton

Empowering Educators with AI Skills

This comprehensive guide offers educators strategies to effectively integrate the Claude AI model into their teaching practices. By equipping faculty with the necessary skills, university leaders can enhance the learning experience and drive innovation in educational methodologies.

Written by Chuck Hampton

The Applied AI Classroom Parallax View

"The concrete includes the abstract and exceeds it in value." Nancy Frankenberry wrote this about Jonathan Z. Smith's cartography of religion, but it applies equally well to my undergraduate religious studies course-turned-venture-studio. The concrete act of building a company must exceed the abstract theory of entrepreneurship. The religious act cannot be mistaken for the study of the religious act. When the balance reverses, the map eats the territory. This is what happened in my classroom. The Assignment That Seemed Clever At the start of the semester, I gave my students what I thought was an elegant pedagogical hack. They would audit their own LinkedIn profiles and resumes using an LLM, identifying three "hiring stoppers": specific gaps that recruiters cited as blocking their candidacy for target roles. Then, working in cross-functional teams, they would reverse-engineer a venture concept that, through the simple act of building it, would generate the bullet points they lacked. A finance student needing "evidence of commercial ownership" might co-found a revenue-generating service. An engineering student lacking "shipped artifacts" might own the technical build. The venture would be real enough to sell, small enough to ship by May, and theoretically engaged enough to satisfy the course's "academic" requirements around projection theories of religion. I gave them a detailed prompt template, access to LLMs, and a single class session to generate their first concept statements. The results would tell me whether they understood the syllabus and whether I had designed something that could actually work. The Confidence Trap What returned was, on first scan, impressive. Six teams produced concept statements with "Aha Moments," metaphysical throughlines citing Feuerbach, and evidence trails mapping each hiring stopper to specific product features. But as I read more closely, a pattern emerged that I hadn't anticipated: every single venture was a platform for documenting the kind of work the students were currently doing. Group 1 proposed "Judgment Ledger," where students logged business decisions to prove their judgment. Group 5 offered "Veritas Trail," a "dashcam for your brain" that captured decision trails. Group 6 created "Reality Ledger," measuring how AI affects perception of truth. The ventures weren't pointing outward at strangers with problems; they were curved mirrors reflecting the assignment back at itself. The students, or the LLMs, or all of us, had followed my prompt so literally that they had built a recursive loop where the solution to "I need resume lines" was "a product that generates resume lines." I had accidentally designed an assignment that trained them to confuse the map for the territory, and the LLM, eager to please and pattern-matching on startup discourse, had enthusiastically abetted the confusion. The prompt asked students to reverse-engineer a venture from their resume gaps. But something strange happened: the "echo chamber" effect that plagues AI-augmented learning. When students used LLMs to brainstorm, the models (trained on startup discourse about "solving problems") mirrored back exactly what the students were already doing in class. The result? Six variations of "a platform to help students prove they have skills," ventures that were technically about credentialing but were actually just the assignment itself, staring back at them. 
I have started calling this tendency the Applied AI Classroom Parallax View: when the tool meant to expand imagination instead collapses it into recursive self-reference. The students ended up ideating themselves to such an extent that they built mirrors. Judgment Ledger, Veritas Trail, Reality Ledger: all systems for documenting judgment that were, themselves, exercises in documenting judgment. The syllabus became the product. The hiring stopper became the solution. The map ate the territory. Ironically, or poetically, Feuerbach's projection theory came to life.

In the parallax view, the assignment seemed straightforward: reverse-engineer a venture from your resume gaps. The students were given a prompt, an LLM, and a deadline. What emerged was not six ventures, but six mirrors, each reflecting the assignment back at itself with increasing fidelity.

The Echo Chamber Effect

When Group 1 asked GPT-4 to "generate a venture concept that addresses gaps in the students' resumes," the model, trained on thousands of startup pitch decks, did what it was designed to do: it found the pattern. The pattern was that students struggle to prove their skills because they do not yet have experience. So it generated "Judgment Ledger," a platform for students to document their decision-making. The students didn't notice that the "customer" was themselves in three months. The LLM was suggesting they build a tool for the problem of "needing to build a tool."

Group 5 took this further. "Veritas Trail" was a "dashcam for your brain while you work," a meta-tool so recursive it threatened infinite regress, turtles in an infinite loop. If you used Veritas Trail to document your work on Veritas Trail, did you need a Veritas Trail for your Veritas Trail? The LLM, asked to solve "no evidence of end-to-end ownership," proposed a product that was only evidence of end-to-end ownership. The map had eaten the territory. Lol.

The Mirroring Problem

The syllabus required ventures to engage metaphysics: truth, authority, meaning. The students, forced by the prompt to look inward, found these categories in their own academic experience. Group 3's "ClearGround" treated dental consent as a "truth ledger" because they had just read about "inspectable truth" in the prompt. Group 6's "Reality Ledger" measured "how AI reshapes perceptions of truth," which was, conveniently, exactly what they were doing in class.

The exercise did not do what I had hoped: expand their imaginations. It collapsed them. It curtailed them. When asked for "metaphysics," it returned academic theology. When asked for "venture," it returned ed-tech. What I hoped would turn into a list of possible ventures for them to start became a list of theoretical frameworks for why ventures are hard. Lol.

The Parallax View

Slavoj Žižek defines parallax as "the apparent displacement of an object (the shift of its position against a background), caused by a change in observational position that provides a new line of sight." The philosophical twist is that the observed difference is not simply "subjective," due to the fact that the same object which exists "out there" is seen from two different stances. Rather, subject and object are inherently "mediated," so that an "epistemological" shift in the subject's point of view always reflects an "ontological" shift in the object itself. In the Applied AI classroom, parallax became the failure to recognize that the object and the background were the same.
The students stood at the intersection of two interpretive systems: the academic (what they were learning) and the entrepreneurial (what they were building). But the LLM, trained on text where these coordinates are often conflated, kept returning them to the intersection itself. My head is still spinning, and I love it.

When Group 2 proposed "Tactile Narrative Systems," haptic devices for invisible phenomena, they nearly escaped. The solar eclipse example was concrete, physical, not about credentialing. But even here, the "aha moment" revealed the parallax: "The real product wasn't the hardware. It was the translation layer between raw data and human meaning." They had built a metaphor for their own assignment. Or maybe they had built an assignment for the metaphor we are all experiencing when we interface with LLMs.

Why It Happened

The prompt was designed to make the personal universal: your gaps become the world's needs. But LLMs are symmetry machines. They find the shortest path between input and output, and the shortest path from "my resume is weak" is "build a tool that fixes resumes." The students, working quickly, accepted the first plausible output. They didn't iterate because the initial output felt right. It addressed the rubric, cited the readings, satisfied the constraints. And this is why I am writing this at all: we are all new at learning with AI. We do not know what to trust, or how to build trustworthy teaching protocols. We are learning as we go, as we grow.

What they missed was the indexicality of the venture: a real business points outward, to strangers with problems. Their ventures pointed inward, to students with rubrics. The "customer" was always another version of themselves. I could add 60 pages here about why the study of religion helps navigate these challenges with insiders and outsiders and all that. But I won't.

The Correction

Resume lines write themselves when the work is for someone else. My students (and I) are now back on track; next up is finalizing a product/service idea and doing customer research. The fix was hermeneutical: we are solving a customer's problem, not our own. This is the anti-parallax: the venture and the learning are perpendicular, not parallel. The student learns by building; the customer benefits by using. Neither is a mirror of the other.

The Lesson

AI-assisted education risks this collapse whenever the prompt allows the student's situation to become the content. As many of us work out how to teach in the age of AI, one solution may involve constraining the coordinate system: build for strangers, in physical space, with money changing hands. What would this look like in a non-entrepreneurial class? In a literature seminar, instead of asking students to "analyze the themes of this novel using AI," you might ask them to "generate a reading guide for a specific type of reader who is not yourself: a high school teacher in rural Montana, a prison book club, a translation app developer." The analysis still happens, but it points outward. In a biology lab, instead of "use AI to explain this cellular process," you might ask students to "create a troubleshooting guide for a community health worker in a region with intermittent electricity." The knowledge still gets demonstrated, but it must survive contact with constraints that are not academic.

Žižek describes the parallax gap as "the confrontation of two closely linked perspectives between which no neutral common ground is possible." This is what my students encountered: the academic perspective and the entrepreneurial perspective, linked by the shared vocabulary of "venture" but separated by irreconcilable demands. The academic wants reflection; the entrepreneur wants traction. The LLM, trained to please, offered a synthesis that was actually a collapse.

The parallax view is seductive because it feels like integration. But it is also disorienting, because it confuses and collapses point of view with priority and expectation. Real integration requires friction, the resistance of the world pushing back. Learning with AI is hard to build, hard to sell, and hard to explain. That difficulty is the point. It is the difference between a mirror and a window. I am here for it. So are my students.
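For readers who want to try the anti-parallax constraint in their own assignments, here is a hypothetical before-and-after prompt pair. Neither string is the actual template from the course; both are sketches of the inward-pointing pattern that produced the mirrors and the outward-pointing constraint that corrects it.

```python
# Hypothetical before/after prompt pair; illustrative only, not the course's
# actual template. Placeholders in braces are filled in by each student team.

# Inward-pointing: the student's situation becomes the content, inviting the mirror.
PARALLAX_PROMPT = (
    "Generate a venture concept that addresses the gaps in my resume: "
    "{hiring_stoppers}."
)

# Outward-pointing: the customer is a stranger, the setting is physical,
# and money changes hands, so the world can push back.
ANTI_PARALLAX_PROMPT = (
    "Generate a venture concept for {customer}, a stranger who is not a "
    "student, whose problem is {problem}. The product must be delivered in "
    "physical space, priced so that {customer} would pay for it this semester, "
    "and must not be a tool for documenting, credentialing, or proving "
    "anyone's skills."
)
```

The last clause is the guard against recursion: it rules out the Judgment Ledger pattern by construction, forcing the venture and the resume line to stay perpendicular.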

Written by Christopher Driscoll

Harnessing AI to Teach Digital Literacy: A Guide for Educators

As educators, it is crucial to equip our students with the skills necessary to navigate an increasingly digital world, where algorithms significantly influence the information they encounter. The recent AI Lesson Plan developed by CRAFT, an initiative by the Stanford Graduate School of Education, offers a valuable resource for teaching students about the biases embedded in search algorithms. By integrating this lesson plan into your curriculum, you can foster critical thinking and digital literacy among your students while addressing essential topics such as equity, representation, and the ethical use of technology.

To begin, consider setting the stage by discussing the concept of algorithms and their role in shaping the online experiences of users. Use real-world examples to illustrate how biases can affect search results, leading to skewed perceptions and potential misinformation. Encourage students to share their experiences with search engines and social media platforms, prompting a conversation about the trustworthiness of information. This collaborative dialogue not only builds rapport but also allows students to recognize the pervasive nature of algorithmic bias in their everyday lives.

Next, guide students through a hands-on activity where they can experiment with various search engines and queries. Challenge them to compare the results from different platforms, analyzing the differences in content, diversity, and representation. This exercise will help students understand that algorithms are not neutral; rather, they reflect the values and biases of their creators. Encourage students to reflect on their findings and discuss the implications of algorithmic bias in society, empowering them to become more discerning consumers of information.

Finally, wrap up the lesson by facilitating a discussion on how students can advocate for more equitable algorithms and practices in their own digital interactions. Encourage them to think critically about the tools they use and the information they consume. By incorporating the AI Lesson Plan from CRAFT into your teaching, you can not only enhance your students' understanding of digital literacy but also cultivate a generation of informed, responsible digital citizens who are equipped to challenge and change the status quo.

Written by Chuck Hampton