- Two philosophy professors said they caught their students submitting essays written by ChatGPT.
- They said some red flags alerted them to the use of AI.
- If students don’t admit to using the program, professors say it can be hard to prove.
A few weeks after the launch of the AI chatbot ChatGPT, Darren Hick, a professor of philosophy at Furman University, said he caught a student submitting an AI-generated essay.
Hick said he became suspicious when the student turned in an essay that included well-written misinformation.
After running it through OpenAI’s ChatGPT detector, the results indicated it was 99% likely that the essay had been generated by the AI.
Antony Aumann, a professor of religious studies and philosophy at Fordham University, told Insider that he caught two students submitting essays written by ChatGPT.
After the writing style raised alarm bells, Aumann resubmitted the essays to the chatbot, asking how likely it was that they had been written by the program. When the chatbot said it was 99% sure the essays were written by ChatGPT, he forwarded the results to the students.
Both Hick and Aumann said they confronted their students, all of whom eventually confessed to the offence. Hick’s student failed the class, and Aumann had his students rewrite their essays from scratch.
“A really well-written mistake”
There were some red flags in the essays that alerted the professors to the use of AI. Hick said the essay he flagged referenced several facts not mentioned in class and made a nonsensical claim.
“Word for word, it was a well-written essay,” he said, but on closer inspection, a claim about the prolific philosopher David Hume “made no sense” and was “just plain fake”.
“A really well-written mistake was the biggest red flag,” he said.
For Aumann, the chatbot simply wrote too perfectly. “I think the chatbot writes better than 95% of my students ever could,” he said.
“Suddenly you have someone who doesn’t demonstrate the ability to think or write at that level, writing something that perfectly meets all the requirements, with sophisticated grammar and complicated thoughts that are directly related to the prompt,” he said.
Christopher Bartel, a philosophy professor at Appalachian State University, said that while the grammar of AI-generated essays is near perfect, the substance tends to lack detail.
He said, “They’re really fluffy. There’s no context, there’s no depth or insight.”
Plagiarism difficult to prove
If students don’t admit to using AI for essays, it can leave academics in a tough spot.
Bartel said the rules of some institutions have not evolved to combat this type of cheating. If a student decides to be stubborn and deny the use of AI, it can be difficult to prove.
Bartel said the AI detectors on offer were “good but not perfect”.
“They give a statistical analysis of how likely the text is to be AI-generated, which puts us in a difficult position if our policies are designed so that we must have definitive and demonstrable proof that the essay is a fake,” he said. “If it comes back with a 95% chance that the essay is AI-generated, there’s still a 5% chance it’s not.”
In Hick’s case, although the detection site said it was “99% certain” the essay was AI-generated, he said that was not enough for him without a confession.
“The confessions were important because everything else looks like circumstantial evidence,” he said. “With AI-generated content, there is no physical evidence, and physical evidence carries much more weight than circumstantial evidence.”
Aumann said that while he thought chatbot analysis would be sufficient evidence for disciplinary action, AI plagiarism was still a new challenge for colleges.
He said: “Unlike the old plagiarism cases where you can just say, ‘Hey, here’s the paragraph from Wikipedia,’ there’s no irrefutable evidence you can provide other than the chatbot saying it’s the statistical probability.”