5 Plagiarism Issues to Watch in 2025

Next year, Plagiarism Today will turn 20 years old. In that time, a great deal has changed regarding plagiarism, authorship and content misuse.

For one, the internet itself has changed. The Web of 2025 looks almost nothing like the Web of 2005. New technologies, the rise of modern social media and digital streaming have changed how we create, consume and engage with content.

But as much as things have changed over the past two decades, we are in particularly tumultuous times right now. Most notably, generative AI continues to dominate the headlines, directing much of the conversation around plagiarism and copyright.

However, we’re also seeing the increased weaponization of plagiarism, both politically and privately. In classrooms, more students are facing allegations of plagiarism than ever, and there is less certainty about what those allegations mean.

When it comes to plagiarism, we’re in a time of transition. While that transition will likely be many more years in the making, here are five things to watch out for, specifically in 2025.

1: AI and Authorship

When I sat down to write this post, I was tempted to reshare this January 2023 post about how AI would be the big copyright and plagiarism story of the upcoming year.

Everything in it still holds. We still have no legal certainty around copyright with AI and even less on how AI will change authorship. We may get some legal answers this year, but the authorship ones will likely be the thorniest.

The question is no longer “Is it acceptable to use AI?” but “How is it acceptable to use AI?” AI, for better or worse, is now baked into everything: Google Gemini, Apple Intelligence, Microsoft Copilot, Samsung Galaxy AI and more.

With AI built into nearly every device you use, the challenge is making it clear which uses of AI are acceptable in which environments. Likewise, those uses need to be communicated to the audience.

In November, I published a gradient of AI usage to help users discuss how AI was used in writing a piece. However, that conversation will need to happen across all media.

As an additional bonus prediction, it’s also likely that we’ll see creators promoting their work as “100% Human” or “AI-Free” even though, as the gradient highlighted, almost all creative work has some automated assistance.

2: Increased Weaponization of Plagiarism Internationally

2024 was a banner year for the weaponization of plagiarism. Between the DEI plagiarism allegations, the allegations against Kamala Harris and other stories, there was a rise in politically motivated plagiarism allegations in the United States.

However, with the election now in the rearview mirror in the United States, the string of stories predictably died down in November. While it will be interesting to see what happens come the 2026 midterm elections, I don’t expect the US to see many such stories this year.

But that won’t stop other countries from copying the formula. To be clear, there is a long history of plagiarism scandals rocking politics across the globe, particularly in Europe.

However, the political forces that prompted the recent US investigations are not unique to the United States. Though the impact of those stories was mixed, a plagiarism allegation is still an effective way to generate unwanted headlines about a political opponent. I expect to see the approach exported in 2025, especially as a way to target academia.

3: Focus on Controlled Writing Environments

Schools are in a difficult position when it comes to AI-based plagiarism. As discussed above, AI is ubiquitous, making it trivial for students to access and use. However, detecting AI writing is fraught. Even if the detectors are better than they were a year or two ago, false positives are still an issue.

Proving that a student used AI is a real challenge. Schools are dealing with this in a variety of ways. Some are just trusting AI detection and hoping for the best. Others are adding elements to assignments that AI can’t generate. Finally, some quiz students about their submissions to ensure they know what they wrote.

One solution that is gaining steam, however, is the use of authorship verification or controlled writing environments. We discussed the idea back in August, when Grammarly introduced such a tool, and schools will likely start mandating tools of their own as time goes on.

The idea is simple: use software to control and monitor how the student writes the paper to confirm if and how AI was used. It’s similar to anti-cheat technology for tests in virtual classrooms.
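
To make the mechanism concrete, here is a minimal, hypothetical sketch of what such a tool might record. None of this reflects any vendor’s actual product; the event structure, the “typed”/“pasted” labels and the paste threshold are all assumptions for illustration. The point is that the tool captures how the text arrived, not just the finished paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EditEvent:
    timestamp: float   # seconds since the writing session started
    chars_added: int   # size of the insertion
    source: str        # "typed" or "pasted", as reported by the editor

@dataclass
class WritingSession:
    events: List[EditEvent] = field(default_factory=list)

    def record(self, timestamp: float, chars_added: int, source: str) -> None:
        # Log every insertion as the student writes.
        self.events.append(EditEvent(timestamp, chars_added, source))

    def flag_for_review(self, paste_threshold: int = 200) -> List[EditEvent]:
        """Return events a human reviewer might want to examine:
        large blocks of text that appeared all at once via paste."""
        return [
            e for e in self.events
            if e.source == "pasted" and e.chars_added >= paste_threshold
        ]

# Example session: gradual typing, plus one large block pasted in at once.
session = WritingSession()
session.record(12.0, 35, "typed")
session.record(60.5, 48, "typed")
session.record(75.2, 950, "pasted")

for event in session.flag_for_review():
    print(f"Review: {event.chars_added} characters pasted at t={event.timestamp}s")
```

Any real system would be far more nuanced, but the design choice is the same: a record of the writing process, rather than an after-the-fact scan of the finished text, is what lets a reviewer confirm if and how AI was used.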

These tools do raise significant privacy and ethical concerns. But that doesn’t change the fact that one of the few ways to know if AI was used on a paper is to know how the paper was written. This approach does just that.

4: Many, Many More Fake Authors & Journalists

When it comes to AI, there’s been an enormous concern that humans will use it to generate work for which they take credit. However, that ignores a significant reality of AI.

The people most excited about AI aren’t journalists and authors who have been writing content for years. Instead, it’s the editors and owners who are excited about the possibility of generating content without having to pay humans to create it.

To be clear, this isn’t news. In 2017, The Washington Post announced that it used an AI reporter to generate 850 stories. That was well before the modern AI boom. However, more recently, more organizations have dipped their toes into these waters, including Sports Illustrated, CNET, BuzzFeed and many others.

However, those more recent examples ended in disaster. While major publications have been gun-shy about AI (at least publicly) since then, AI has become more normalized. I expect to see news publications make another run at public use of AI, likely with dubious disclosure.

To be clear, it’s not that AI has become significantly better at generating news stories. It’s that, for some, the anti-AI backlash has cooled off.

5: More Plagiarism Detection Services

2024 was notable in the plagiarism detection space because Turnitin shuttered PlagScan, a lower-cost alternative for those not needing a large volume of checks. While its closure is a major blow, especially for those who wanted access to PlagScan’s library of content, a slew of new services are coming online.

Right now, you can roughly divide the plagiarism detection market into two groups: plagiarism checkers that have added AI detection tools, and AI detection tools that have added plagiarism detectors. No matter which kind of service one launches, the other is a natural add-on.

Since AI is such a hot topic, many companies have launched AI detection tools. Many of those companies saw an easy lateral move and introduced their own plagiarism detection software.

To be clear, we don’t know the quality of these services. However, it seems likely that at least some will prove worthwhile in certain use cases. The challenge will be sorting the wheat from the chaff.

Bottom Line

Without a doubt, 2025 is going to be a significant year when it comes to plagiarism and authorship. While AI is the most important story to watch, it’s having secondary impacts that may do as much to shape the landscape.

Right now, we’re still very much in the “settling” phase of the process. Generative AI didn’t become a public phenomenon until November 2022, just over two full years ago.

Even on the internet, things do not move that quickly. It will take time for society to sort out the legal and ethical implications of generative AI.

After all, generative AI is here to stay, no matter how sick of it some are. That’s why it’s important to start drawing boundaries now and finding ways to enforce them.
