AI and the classroom
The News spoke with members of the University’s English and Computer Science departments, along with faculty associated with Poorvu and the Executive Committee, to recap the University’s evolving response to the rise of artificial intelligence.
Artificial intelligence has made its mark on Yale classrooms.
In the full academic year since the launch of ChatGPT, Yale administrators and instructors have altered their guidelines and teaching styles to accommodate the new technology. Though many remain concerned about AI-enabled plagiarism, some professors have embraced using AI in the classroom.
Ben Glaser, a professor of English, initially became interested in applying AI to the humanities after he took a summer course for faculty members on natural language processing in 2022. He recalled thinking that AI was going to “transform the writing landscape.”
“I thought, ‘I’ll teach a writing course, because then we’ll be thinking about how to write well, which I do all the time,’” Glaser said. “‘We’ll be thinking about these tools that might hinder or maybe help.’”
In the fall of 2023, Glaser taught an introductory English seminar titled “Writing Essays with AI,” in which he and his students discussed how artificial intelligence could be applied to writing. They also explored the relationship between authorship, creativity and AI.
His ultimate goal, he said, was to help his students become better writers.
“It’s easy to say, ‘Oh, the AI can never be creative,’” Glaser said. “And I’m like, ‘Can’t it? Let’s interrogate these distinctions.’”
Students in the course read AI-generated stories, analyzed the differences between poetry written by humans and poetry written by AI and even practiced using AI to plan essays. For their final projects, they researched how various industries are using AI.
In November 2022, shortly after Glaser designed and proposed the course and roughly a year before he first taught it, OpenAI launched ChatGPT, its popular generative AI program. According to Glaser, the chatbot’s release prompted students and teachers to pay more attention to how AI could transform learning.
Poorvu Center responds
For the Poorvu Center, the University’s center for teaching and learning support, the release of ChatGPT was a catalyst to develop academic AI guidelines, said Alfred Guy, the center’s deputy director and director of its writing and tutoring programs. Guy has helped conduct workshops on AI and facilitate professor education programs about the technology.
Guy learned of the imminent release of ChatGPT through Facebook, and he immediately felt that the Poorvu Center ought to respond. After the chatbot went public, the Poorvu Center released its first guidelines on AI usage in Yale classrooms in January 2023.
“The very first thing we said is, ‘These tools are powerful, and people are going to use them,’” Guy said. “Everything that comes after this should be thought about in terms of how people are going to use these tools.”
As AI software like ChatGPT became more mainstream, Guy noted that Yale instructors did not generally react with panic or fear. Still, they wondered how AI would affect them and whether they needed to take concrete action in their own classrooms. Guy said that the Poorvu Center’s teaching guidelines, which are now overseen by a five-person committee, are aimed at answering these questions.
The current guidelines include suggestions for how instructors can address AI in their syllabi and remind students to cite AI correctly, along with precautions for using AI technologies. Beyond providing links to dozens of articles and webinars on AI, the guidelines also encourage instructors to try AI tools for themselves and share their feedback with the center.
Guy said the Poorvu Center’s approach to AI has changed over the past year. Rather than striking a cautionary tone, the center now encourages hands-on exploration of AI technology.
“We have shifted very slightly toward AI in our tone and in our attitude,” Guy said. “Even in our specific advice to faculty, we are saying, ‘You really should engage.’”
Glaser also pointed out that Yale’s open approach to teaching and learning with AI does not necessarily mirror that of other colleges and universities. Because of the University’s broader institutional support and awareness of the technology, he said, AI is not as contentious at Yale as it may be elsewhere.
“If we get out of the Yale bubble, the writing landscape looks really different, and AI tools are gonna behave differently,” Glaser said.
AI in the classroom
For one of their class projects, Glaser required his students to revise the Poorvu Center’s suggested AI-use guidelines.
Jared Wyetzner ’27, a member of Glaser’s class, previously co-founded Myndful-AI, a machine learning chatbot that provides high school students with mental health resources.
Wyetzner said the class generally agreed that AI tools had a role in Yale education.
“What we moved toward is that AI can belong in your classroom,” he said. “There’s just a certain way that should be used to facilitate work.”
To Wyetzner, AI is best seen as a tool, one he compared to the “calculator” of writing.
“You learn how to do math, addition, multiplication, all that, by yourself, and then it gets to the point where you use a calculator and that becomes the standard,” he said. “How can we still learn from our writing and also have AI tools in the process?”
One Yale course quickly brought AI into the classroom: CPSC 100, Yale’s introductory programming class, which is taught jointly with Harvard’s CS50 course.
According to Ozan Erat, a Yale computer science professor who helps run CPSC 100, the course used two different AI technologies. The first, a chatbot called Duck Debugger, allowed students to ask questions about the course and helped them debug code. The second, CS50 Duck Bot, was integrated with the online forum Ed Discussion and replied to students’ questions.
Because of Duck Debugger, Erat said, attendance at CS50 office hours dropped by roughly 30 percent. He called this a positive development: students with easier questions could ask Duck Debugger at home, while those with more in-depth questions could receive more attention during office hours.
Though Erat was initially worried about academic dishonesty, he said that by the end of the semester there was not an “excessive use of cheating.” He noted that the percentage of students referred to the University’s Executive Committee did not drastically change, and the CS50 instructors plan to continue to use Duck Debugger and Duck Bot as part of the course.
Plagiarism worries
The News spoke with Mick Hunter, the Chair of the Yale College Executive Committee, about students using AI to commit plagiarism and academic dishonesty. He said the Committee began to see cases related to AI shortly after the launch of ChatGPT. In response, the University added a section on AI use to its academic integrity guidelines.
While the use of AI in these first cases was “clumsy,” as when students generated false citations for a paper, the cases now are “less blatant,” Hunter said.
“We’re still seeing students who are breaking the rules and using AI in ways that are not allowed, but it seems students are either learning to cover their tracks or use AI more responsibly,” Hunter added.
Still, Hunter estimated that the Executive Committee has received only around 10 AI-related cases since November 2022, placing them “in the minority” of academic dishonesty cases.
By virtue of his job in the Poorvu Center, Guy considers himself “just short of an expert” on pre-AI plagiarism.
Though Guy acknowledged that AI could contribute to plagiarism in writing, he expressed optimism about different ways instructors could help limit it.
Guy referred to multiple research studies showing that rates of plagiarism decrease when instructors require students to write low-stakes responses to class material, set interim deadlines for larger assignments throughout the semester and foster conversations in which students describe their ideas.
Changing assignments to include requirements beyond the capacity of a language-generating AI model, he proposed, could also reduce instances of plagiarism.
Glaser and his students also discovered that using chatbots for writing comes with its own challenges. Because chatbots can err and produce “idiosyncratic” responses, students need a type of “literacy” to interpret AI outputs and screen for errors.
“We quickly realized in the classes that to get anything good out of them, you actually have quite a bit of a dialogue,” he said.
By the time students finish writing a good prompt for ChatGPT, assessing the quality of its response and incorporating the response into their writing, not to mention citing it, the process might be more effort than it is worth, Glaser pointed out.
“By the end of that process, I’m not worried about plagiarism,” Glaser said. “I’m just wondering, ‘Was that actually efficient or useful? Did it make you a better writer?’ I think the answer is sometimes yes.”
Yale established the Poorvu Center in 2014.