Many fields will be transformed by AI. Education already has been, says Zuckerman.

What I’m teaching my students about AI

The academy has been rocked by the rollout of powerful artificial intelligence that can write an essay in seconds. The key is knowing that students will use it—and trusting them to disclose it
December 6, 2023

The rise of powerful artificial intelligence systems, exemplified by ChatGPT, is threatening an upheaval in the world of work. Unlike previous generations of automation, which replaced assembly-line workers with robots, AI now threatens white-collar roles: paralegals, bookkeepers, financial advisers and so on.

In many of these fields, the day-to-day effects of AI are in the future. But one field is in crisis right now: education. Within moments of the launch of ChatGPT in 2022, enterprising students found that the chatbot could be prompted to give plausible-sounding answers to exam questions and write impressive-looking essays. Many of my fellow professors freaked out. How can we evaluate our students’ performance if automated systems can instantly produce work that would receive top marks?

Some of my colleagues responded by instituting a blanket ban. Using ChatGPT to complete an assignment would be considered plagiarism, a cardinal sin in the academy. To enforce this rule, they turned to AI detectors, which use statistical patterns to identify texts produced by machines. These are so unreliable that OpenAI, the creator of ChatGPT, pulled its own detection product due to inaccuracy. This hasn’t prevented other firms from advertising their own, similarly flawed tools to universities as safeguarding “academic integrity”.

Other professors have gotten more creative. I recently spoke to an academic who teaches machine learning, and who revamped his take-home exam to be unsolvable using ChatGPT. This required him to feed a slew of possible questions to the chatbot and select only those that it consistently got wrong. He reports that it took him five times as long as preparing a normal exam: “Next time, I may go old school—pen-and-paper exams in a proctored exam room, no phones or computers.”

Before teaching an undergraduate class this semester I turned to my favourite pedagogical resource, the blog of Catherine Denial, a distinguished professor of American history. Denial writes insightfully about teaching from a position of kindness: helping academics understand what young people have been through during Covid-19, explaining the importance of respecting students’ pronouns and pronouncing their names correctly, and making the case for treating them as collaborators, not antagonists. Denial advised fellow educators that generative AI was a significant enough shift that it should be a point of discussion with students—not just about what AIs can and cannot do, but about the industry’s privacy standards and labour practices. I designed my class policies on AI around the idea that it should be a topic for discussion and around Denial’s injunction, “Do not default to distrust.”

The core of the instructions to my students this semester: you’re welcome to use AI, but be warned that contemporary AIs perform some tasks better than others. However, you must disclose whether you used AI, preferably sharing your prompts and inputs. I hadn’t realised it, but this set up a natural experiment. Two-thirds of the way through the semester, roughly 30 per cent of my students have disclosed their use of AI. Their work is not measurably better or worse than that of students who have not disclosed use of AI, which isn’t all that surprising. A new study from a team of business school professors working with Boston Consulting Group found that AI had a levelling effect on many tasks, with the bottom performers significantly enhancing their output, and top performers improving only a little.

I encouraged students to use AI to polish their writing, to help them rewrite awkward sentences or shorten sections. Writing-assistance AIs like the original Grammarly have been on the market for more than a decade, well before the generative AI revolution in which programs create original text of their own, and they have been a boon for people who are not native speakers of the language of instruction. I’m teaching about media and democracy, not English grammar, so if an AI removes barriers that prevent students from sharing their thinking, I’m all for it.

I’ve also encouraged experimentation with ChatGPT as a brainstorming partner. Most of my assignments require students to explain a problem in our contemporary public sphere and describe a solution—when stuck for an idea, you might ask ChatGPT for several possible solutions and develop one in detail. I’m unsurprised that fewer students have used AI for this than for polishing writing. In my experience, generative AIs give competent but uninspiring answers to such questions. The surprising and exciting ideas I’ve read in papers this semester appear to have come from the students themselves, something I’ve verified by spending much more time talking with them in my office hours than I was able to during the pandemic.

There’s a critical use I warned my students away from: generating whole paragraphs of text. The reason is simple. As I’ve written about in this column before, ChatGPT generates plausible-sounding text but hallucinates details, including academic references. In other words, if you ask a generative AI to write about Walter Lippmann’s theory of the “restless searchlight” and to include academic references, you may get footnotes that look believable but reference articles that don’t exist. When grading papers, I follow all unfamiliar references, which means an imaginary reference would send me down a rabbit hole searching for a book that ChatGPT invented. Fortunately, my students took this advice to heart and I’ve not encountered imaginary references in their writing.

When I talked about AI with my students, they expressed concerns about becoming too dependent on technology. “I learned to drive with a GPS,” one explained to me. “I can’t navigate my hometown the way my mother does, from landmarks and memory.” It’s possible that students will end up similarly dependent on assistive AI… and yet, I have no strong desire to return to navigation based on paper maps.

In some fields, students who learn how to use AI productively will likely have an advantage over those who choose not to become cyborgs. The machine learning professor who’s considering pen-and-paper exams told me that he encourages his research assistants—graduate students who are helping him write code for his research—to use generative AI as a “co-pilot”, helping automate the tedious parts of programming. These students are vastly more productive than those writing code entirely by hand, he notes, but only because they have good coding skills to start with and can detect when their AI co-pilot makes errors. The problem is that the AI co-pilots can ace most assignments for an introductory programming course, meaning that we might not be able to train competent programmers in the first place, as beginners will be able to avoid the frustrating and challenging work of learning to code that is a necessary part of developing expertise.

I am less worried about students gaming the system than I am about losing their trust. My students were most concerned about being falsely accused of using AI, particularly by professors relying on detectors. Several of them reported receiving failing grades in other classes for using AI on assignments they insisted they had written themselves. That’s plausible—a popular AI detector labels the US Constitution as “almost certainly written by AI”. I advise my students to write in a tool like Google Docs, which can maintain snapshots of a document as it is written, showing that it was not cut and pasted from an AI.

But more importantly, I am serving on my university’s task force on generative AI, making the case that not only should we discourage the use of AI detectors but that we should approach teaching from a stance of kindness rather than distrust. In the next few years, virtually every profession will need to figure out what it means to use AI ethically and effectively. Our job is preparing students for work in the new world that we’re figuring out together, not penalising them for the fact that our methods of academic evaluation need to change.