Edu Ex Machina
Maria Gomberg | Providence, Rhode Island | September 8, 2025
This spring, I took a class on the history of AI. I chose it not so much for its subject matter as for one of the instructors, an intellectual historian with a following of Europhile undergrads, whose entourage I had been hoping to infiltrate. In this class, marketed as “experimental” and “interdisciplinary” and team-taught by the historian and a heavy hitter from computer science, I found myself unusually far outside my wheelhouse of women’s history. We read science fiction and historiography; Kant, Valéry, Popper and Project 2025. A screening of Ex Machina broke up a week when we’d been assigned several unpublished CS working papers. No word in the semantic family of AI was left unparsed—“intelligence,” “machine,” “large”—you name it.
The classroom was explicitly AI-proofed. Instead of essays there were quizzes, and grading was based on attendance. In this, it was not unique. Every class I took that semester felt similarly experimental, if more insecure about this fact. A history lecture course with a reputation for writing instruction removed the option of a final paper from the grading scheme altogether. Instead, we took an exam proctored by TAs who mournfully watched us etch out chicken scratch. My religion course reduced the length of assignments, and every class I took had its own strict AI clause included in the syllabus. These policies ranged from tepid permissiveness to the threat of execution by firing squad if ChatGPT were so much as consulted. Technopessimism dominated syllabus week, and, for the first time since I started college, I felt like my professors were scared of me—willing to sacrifice core attributes of their courses and methodologies for the false security of allegedly preventing plagiarism.
This paranoia is not wholly unjustified. From a student’s perspective, AI has been just as prolific and perhaps as pernicious as professors have assumed. Sometime between freshman and senior year, it went from being a gadget for techies to a mainstream writing implement, akin to find-and-replace or highlighters. (You know it’s bad when the freaks and geeks are using DeepSeek.) A friend told me that during discussion for a class of just seven people, she watched a freshman with frequent nosebleeds read her AI-generated contribution verbatim. In response, I told a story about a kid in my AI class reciting a definition of the “digital scramble for Africa” from GPT Pro.
I cannot be accused of Luddite puritanism. I have, for instance, resorted to using ChatGPT for all communication with my landlord. Because I routinely have to reference my lease—a text of such modernist ambiguity that no human could parse it, let alone weaponize it, unassisted—I have decided that in this arena, AI is fair game. Less justifiably, any time I need a synonym or an analogy, I turn to old faithful.
The fact is, most students aren’t simply consulting AI to write; they are using it to offload all the preliminary thinking. A student can easily feed a PDF of the assigned book chapter to their AI application of choice and pass off the bot’s lukewarm analysis as their own; construct a study guide by uploading their notes to an AI tool marketed on LinkedIn by their peers; and even generate plausible rebuttals to arguments posed in discussion sections—all without arousing any suspicion from their overworked TAs.
●
At the end of the semester, our professor explained that as an intellectual historian, she was interested in artificial intelligence because it is a “thought problem”—as in a question about what thought is, and simultaneously a social issue concerning thinking. For people trying to figure out how to teach undergrads, the solution to this “problem” lies in the answer to the question: How do we make these punks think in the first place?
But, in my experience, most professors have no clue about how students are actually using AI, and much of the ambient anxiety across the humanities derives from the fact that they know they’re missing something. This mismatch of information between students and teachers evokes not so much a thought problem as a thought experiment—something out of the most cartoonified of game-theory textbooks. Citing historian Peter Galison’s “The Ontology of the Enemy,” my AI professors introduced game theory as a theoretical precursor to artificial intelligence—a wartime discipline, or “Manichean science,” designed specifically to destroy the enemy other. It seems that, across departments, professors conceive of undergrads as using AI with similarly militant intentions, and they respond accordingly.
But a fear of being outsmarted can lead to solutions that often counterproductively stifle learning. Some, willing to suspend disbelief, seem to have decided that if the most overt incidents of cheating are averted, they’ve won the game against the conniving enemy undergrad. On the other hand, the professors whom I admire most—the pedagogues concerned with their students’ thinking—are the ones most likely to shoot in the dark at the obscure threat, dissolving the components of their courses that made them stand out as teachers in the first place. Some professors are willing to totally upend the goals of their instruction. In a recent op-ed for the New York Times, Clay Shirky makes an increasingly common argument for what he confesses to be a “medieval” strategy for combating AI use in the lecture hall, calling for a “return to an older, more relational model of higher education” characterized by reading, dialogue and the blue-book exam. He notes that writing papers is a somewhat new phenomenon in university curricula and wonders why we ought to prioritize them when the real issue is thought.
My worry is that sacrificing writing is too big a pill to swallow. Ludwig Wittgenstein, whom we read for the AI class when discussing language, wrote in his own “blue book”—much different from the ones I am growing used to—that “it is misleading to talk of thinking as of a ‘mental activity.’” Rather, “we may say that thinking is essentially the activity of operating with signs”—a totally external process. However unscientific, I’m willing to entertain the claim if it helps us see writing as an essential tool for our own thinking. Without the opportunity to work on long-term projects that require a process of externalized thought, I can feel my “mental activity” becoming lamer and my arguments shallower—no matter how much “think, pair and share” we do during class.
Taking this class about AI in the midst of the growing tumult around its emergence only made me more conscious of how the products of my “AI-proofed” syllabi were beginning to resemble AI itself: I was, mostly, spitting out vaguely appropriate regurgitations of a set body of assigned texts. Because I was never asked to formulate a thought for longer than a couple of hours at a time, my in-class essays, discussion-section contributions and self-timed take-home finals became all the more superficial as I learned I could easily get away with significant lapses in reasoning and argumentation.
If the same people who believe that AI is still incapable of producing work at the level of an educated individual with some artistic inclination also curtail all opportunities for thoughtful writing instruction, they might be playing a part in a self-fulfilling prophecy. If we never teach students how to write in the first place, the next generation might in fact have no choice but to depend on machines.
Despite the intelligence and integrity of each member of my college’s AI committee, our progress (of which there has been little, if any) could never outpace the AI advancements with which we have been working to coexist. But keeping up with advances in AI technology is not the biggest challenge we face. To come up with a good AI policy for a university, a department or even a household, one first has to have an idea of which skills and formative experiences to sacrifice for the sake of AI use, and which ones to fight to retain. And it is here that the absence of consensus matters most.