The Backlash Against AI Accusations


On January 17, Minnesota Public Radio shared the story of Haishan Yang, a 33-year-old former Ph.D. student at the University of Minnesota.

The school expelled Yang after ruling that he improperly used AI on a preliminary exam, which was a requirement for starting his dissertation.

The school cited several pieces of evidence for its conclusion, including the results of AI detection tools. It also examined the length, formatting and language of his answers and noted that his responses closely mirrored answers generated by ChatGPT.

Yang has vehemently denied wrongdoing. He says that while he does use AI for various tasks, he did not use it on the test. He has filed a lawsuit against the university, seeking over $1.3 million in damages.

Less than a week later, USA Today reported on Marley Stevens, a University of North Georgia student similarly accused of using AI in a paper. She denies the allegations and says she only used Grammarly, a tool the school itself recommended.

Though she was not expelled, she claims getting a zero for the paper hurt her GPA and cost her a scholarship.

Both Yang and Stevens say that the accusations hurt both their academic careers and their mental health.

These stories have reignited the debate about when and how to accuse students of using AI. Many are urging extreme caution, saying that the accusations can do more harm than good. But this leaves both teachers and students in an awkward place, and it isn’t likely to change any time soon.

The Problems with AI in Academia

When it comes to AI in academia, there are three significant challenges:

  1. The Newness of AI
  2. The Difficulty in Detecting AI Content
  3. The Need to Teach Students About AI

The newness of AI is relatively straightforward. Generative AI systems didn’t become widely available until November 2022. That means the widespread use of AI systems is barely two years old.

That not only makes the future of AI very uncertain, it creates more immediate problems. School policies have not caught up with the technology. According to the USA Today article, an EDUCAUSE study found that nearly half of all faculty and staff respondents disagreed that their institution has appropriate guidelines for AI use.

However, the problem is even more fundamental than that. As discussed in the Gradient of AI Usage post, we often lack a common language around AI usage in academia. Even something as simple as saying “the use of AI is barred” can have complicated meanings.

On the second point, detecting AI writing remains a significant challenge. Though the tools are improving, they all share one major problem: false positives. If a traditional plagiarism detection system makes a mistake, a human can review the matched text and make the final judgment. That’s not possible with AI detection.

That’s because AI detection is a black box, and any error rate, no matter how small, leaves its findings open to challenge.

Finally, schools want and need to teach students about AI. Generative AI is a reality that isn’t going away, and avoiding it altogether is a disservice to students. However, it’s challenging to teach something that is both new and difficult to detect, especially when many of its uses are academic integrity violations.

So, what should schools do?

Changing the Approach

My favorite take on this came from Dr. Tricia Bertram Gallant, an academic integrity and ethics expert at UC San Diego. In a LinkedIn post, she called for a change in how schools approach academic integrity.

To be clear, this isn’t a new refrain. Gallant, I and countless others in this space have been making similar calls for decades. When schools focus on punishment and an “us versus them” mentality, students come to fear what should be an opportunity to learn.

Still, I agree with Gallant. Though schools have an obligation to ensure that their degrees have integrity, that’s a goal that can be achieved through different means.

One of the significant problems with AI is that when a teacher suspects AI usage, there is only one potential response: punishment. Since it’s difficult to be certain about AI usage, this encourages instructors to either ignore their suspicions or make broad, sweeping accusations.

To be clear, I’m not speaking to any specific case here. Punitive measures will always be needed in some cases. But the current punishment-first approach makes stories like these more likely.

For example, in Yang’s case, he said the school had made several other accusations against him that didn’t result in disciplinary action. If the school had a non-punitive track for handling such cases, it might have helped him in the long run.

While we can’t say what would have happened, I think most would agree the story would read differently if the school had exhausted other options before taking the disciplinary route.

Advice for Teachers and Students

The current system is unlikely to change unless there is a tectonic shift in academia. To that end, I offer (or rather repeat) this advice to teachers and students who must navigate it.

For teachers, focus on crafting assignments that are both plagiarism- and AI-resistant. There are many ways to achieve this, and sometimes subtle changes make a big difference. For example, requiring early drafts to be handwritten or asking students questions about their projects can help identify those who didn’t do the work.

Similarly, you can ask questions that play to AI’s weaknesses. Ask students to connect the events in a story with something from their personal lives, or encourage them to use sources and materials that aren’t available online.

Finally, ensure you have ways to detect AI usage beyond AI detectors. Though AI detectors are valuable in some situations, they can’t carry the burden of proof alone. Ask students questions about a suspect paper, including how they found the sources they used. Engaging with students makes it difficult for them to hide AI usage.

My advice to students is unchanged: do not use AI in any way unless you are expressly allowed to. While I don’t think students need to abandon Grammarly or similar tools, they should be mindful of how they use them.

However, the most crucial step is to use a word processor that saves versions of your document. Version history is a simple way to show how a document was written. Likewise, keep all your notes and other evidence of your work for some time after a paper is submitted.
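For students who draft in plain files rather than a word processor with built-in version history, even a tiny script can serve the same purpose. The sketch below is a hypothetical illustration, not a recommendation of any specific tool; the file and folder names are placeholders you would adjust to your own setup. Each time it runs, it copies the current draft into a dated snapshot, leaving a paper trail of how the document grew.

```python
import shutil
from datetime import datetime
from pathlib import Path

# Placeholder paths for illustration: point these at your actual draft.
DRAFT = Path("essay-draft.docx")
VERSIONS = Path("essay-versions")

def snapshot(draft: Path = DRAFT, folder: Path = VERSIONS) -> Path:
    """Copy the current draft into a timestamped snapshot file."""
    folder.mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    target = folder / f"{draft.stem}_{stamp}{draft.suffix}"
    shutil.copy2(draft, target)  # copy2 also preserves the file's timestamps
    return target

if __name__ == "__main__":
    print(f"Saved snapshot: {snapshot()}")
```

Run after each writing session, this leaves a folder of dated copies that, together with notes and sources, helps demonstrate the work was yours.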

Finally, teachers and students alike need to communicate more. Teachers need to express their expectations, and students need to raise their concerns. Genuine dialogue may be the best way to prevent issues with AI.

Bottom Line

In the end, I don’t have any particular comment about the USA Today or MPR stories. I don’t know all the facts of these cases, so I am not comfortable commenting on whether the punishments were justified.

What I do find interesting, and can comment on, is their existence. The fact that these two stories came out and gained traction so close together says a lot: there is significant concern about overzealous schools harming the academic careers of innocent students over AI.

Schools need to be prepared for that, and likewise, they need to be prepared for lawsuits, as in Yang’s case.

One of the best things schools can do to guard against that risk is to have robust policies surrounding the use of AI. These policies should spell out when and how AI use is acceptable, how AI usage will be monitored and what the repercussions for misuse are.

Leaving these issues for faculty and administrators to decide on the spot is incredibly risky. They require careful consideration and debate, and no one should be improvising their way through this.

However, that is the reality at many schools. Instructors and administrators are broadly not confident in their schools’ policies, and having sound policies is step one for dealing with these issues.

Until that is handled, there’s not much that technology or human intervention can do to prevent controversies like these.
