Professor Falsely Accuses Students of ChatGPT Plagiarism

Last week, Rolling Stone reported on the tale of Dr. Jared Mumm, a professor at Texas A&M University-Commerce, who had seemingly failed a group of students over allegations that they had all used ChatGPT to create their assignments.

The news first broke when Reddit user DearKick posted an image of an email sent by Mumm to his students. In that email, Mumm accuses some of his students of using “Chat GPT” to write their papers and says that they will receive a grade of “X”, which is an incomplete.

He then offered a new assignment to students who were unhappy with their grade, complete with a new writing prompt for them to follow.

However, what made the story interesting, and the subject of national news, was how Mumm reached that determination. According to the email, he submitted each of the assignments to ChatGPT and asked it twice whether it had written the paper. On the assignments where ChatGPT twice claimed the writing as its own, he gave a zero.

This, however, is not how ChatGPT works. It simply cannot detect its own writing, at least not this way. ChatGPT keeps no record of what it has previously generated, so when asked, it simply produces a plausible-sounding answer, regardless of the truth. Though there are tools that strive to make detecting AI writing easier, ChatGPT itself is not such a tool.

This was highlighted by other Reddit users, who posted screenshots of ChatGPT claiming to have written both Mumm’s email and the abstract of his doctoral dissertation.

That said, it appears that some of the original reporting was wrong, as it was initially reported that Mumm had failed his entire class and that several students were unable to graduate due to the dispute.

However, in a comment from Texas A&M University-Commerce, the school clarified that no one had received a failing grade, just an incomplete while the matter was sorted out, and that no students were barred from graduating. It further said that Mumm is working with each student individually and that several students have already been cleared of academic dishonesty.

The school also said that it is working on developing policies related to AI.

Even though the situation seems to have ended fairly well, to call it unfortunate is an understatement. That said, Mumm is far from alone in being concerned about AI writing while lacking a fundamental understanding of how it works. As such, cases like this are going to become more common long before things start to get better.

The Problem with AI Writing

There’s no doubt that AI has been the major copyright and plagiarism story of both 2022 and 2023. Instructors, like Mumm, are right to be concerned about students generating papers or assignments using AI.

However, one of the big challenges is that there is no great way to detect AI writing. To be clear, there are tools that can detect AI writing, and some seem to do so fairly reliably, but there is no established baseline for what is normal, and little research has been done at this stage. This makes it difficult to base any kind of punishment on an accusation of AI cheating.

However, Mumm’s story takes things one step further. Not only did he accuse students of AI-based plagiarism inappropriately, he did so based on a faulty understanding of how AIs like ChatGPT work. Though it’s possible to build detectors on top of the large language models (LLMs) that power these systems, detecting AI writing is not something the AIs themselves are designed to do.

As a result, the answers Mumm got were faulty, a fact proven by other users who ran Mumm’s own work through the same process. If Mumm had tested his approach, namely by feeding it known human-written and AI-written content, he likely would have figured this out pretty quickly.

Though it’s easy to blame Mumm for his mistakes, his concern over AI writing and his lack of understanding of AI are both understandable and are traits shared by countless instructors. However, the fact that so many students came back flagged as cheating should have given Mumm pause.

That said, Mumm seems to share another trait with many instructors: a lack of trust in his students. The truth is that Mumm felt it was plausible that a large percentage of his students had used AI to cheat on that specific assignment. He was confident enough to move forward with his accusations without verifying his findings in any way.

But all this raises a question: What are teachers and students to do moving forward? 

What Instructors and Students Need to Do

With the school year coming to an end, now is a good time to focus on next school year and what both students and instructors should be doing when it comes to AI.

For students, this means being prepared to face false allegations of plagiarism. Though such allegations are still very rare, the minefield that is AI detection isn’t going away, and Mumm likely won’t be the last instructor to put themselves in this position.

This means writing your papers as if they could be challenged: use services that timestamp your writing and keep a good version history. Likewise, be sure to keep your notes close at hand and not delete them after you turn the assignment in.

For instructors, things get much more complicated. Right now, there is simply no definitive way to detect AI-based plagiarism, at least not with enough certainty to act on it.

Research is being done on various AI detection tools to find the ones that work best and, hopefully, to determine what baselines should be used when making decisions based on their findings. However, that work is likely a ways off and, even if it isn’t, there’s no guarantee that the answers will be certain enough to base enforcement on.

Right now, many cases of AI plagiarism are detected when the AI goes off the rails and makes mistakes that are difficult to ignore or dismiss, such as citing sources that don’t exist. However, that’s not something that can be counted on.

Where possible, instructors should focus on adjusting their assignments to make them more “Google resistant”. This can include adding more in-class elements, using topics that are difficult for an AI to write about, asking for handwritten first drafts and requiring mixed media as part of the assignment.

These adjustments have serious drawbacks, especially in larger classes. They typically take longer to grade and place a time burden on instructors, who are often overstretched as it is. However, they are the best tools for combating AI, just as they were the best tools for combating essay mills.

Bottom Line

Academia is overdue for a reevaluation of assessment. Though I don’t believe that the essay is dead, a chestnut that has been floated around for decades, changes in how students are assessed are inevitable.

This is equally true with or without generative AI. If it weren’t AI, it would be essay mills; if it weren’t essay mills, it would be other forms of contract cheating; and if it weren’t that, it would be something else.

Assessment does need to change and grow, but schools are famously slow to adapt as pressures change. There’s a strong desire to stick with traditional, “tried and tested” methods, both for familiarity and for consistency.

However, AI may not afford schools that luxury. If instructors want to assess a student’s human authorship, there are two choices: accept at least some uses of AI, or ensure that assignments can only be completed by humans.

Because, even if we do get to a point where we can detect AI writing with enough confidence to act on it, it will always be a cat-and-mouse game. There will always be a higher version number next to ChatGPT, and there will always be a new LLM on the horizon.

As such, while it’s easy to mock Mumm’s actions and his lack of understanding of ChatGPT, his story points to a serious issue that academia has to wrestle with. AI is here, it does pose real challenges, and technology alone isn’t going to make those issues go away.

For once, we need a human solution, even if it means shaking up assessment more broadly. 
