How AI Will Change Authorship and Plagiarism
The bots are coming for writers too...
Depending on whom you ask, AI is either a groundbreaking technology that is going to cause major upheavals in our society or it’s a fad that will burn itself out in due time.
Either way though, there’s not much doubt that the technology is advancing, becoming both more capable and more accessible.
But, while there has been broad exploration of what AI means for a variety of jobs, what hasn’t gotten nearly as much attention is what it means for writing and plagiarism, particularly where academic integrity is concerned.
Though the idea of robots writing school papers might seem like the realm of science fiction, the truth is that robots are already writing content. In September 2017, the Washington Post announced that its AI reporter, dubbed Heliograf, had penned some 850 stories over the prior year, including some 300 reports from the Rio Olympics.
The year before, the AP announced that it was going to use AI to produce some 3,000 quarterly earnings reports, up from 300 per quarter the prior year.
In short, you’ve likely already read things written by an AI without realizing it. While these dispatches are generally short and formulaic, the technology is moving forward and dipping its toes into more and more complicated tasks.
So what happens when AI reaches the classroom? That’s not a simple question, but it is one that we definitely need to start thinking about now.
The AI of Today
Currently, there are no commercially available AI tools that students, or any writers, can use as a substitute for original work. The tools that do exist are enterprise-level systems that aid in the writing of short, formulaic content that doesn’t really require a human author.
In short, as of right now, there’s just no substitute for the real thing when it comes to writing.
The closest things that do exist are so-called automated paraphrasing tools, which semi-intelligently replace words in a text with synonyms. Though they have been touted as a threat to academic integrity, they are best known for producing low-quality work that, while able to pass inspection by a plagiarism checker, is ultimately unreadable.
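To see why the output is so rough, consider a toy version of such a spinner. The sketch below (in Python) is not any particular commercial tool; it’s just a bare-bones illustration of the basic approach: swap words for synonyms from a lookup table, with no understanding of grammar, tone or context.

```python
import random

# Hypothetical mini-thesaurus; real spinners ship with far larger synonym lists.
THESAURUS = {
    "original": ["unique", "novel"],
    "writing": ["composing", "penning"],
    "important": ["crucial", "big"],
    "work": ["labor", "effort"],
    "students": ["learners", "pupils"],
}

def spin(text: str) -> str:
    """Swap each known word for a randomly chosen synonym,
    with no awareness of grammar or context."""
    out = []
    for word in text.split():
        core = word.rstrip(".,!?")   # peel off trailing punctuation
        tail = word[len(core):]
        synonyms = THESAURUS.get(core.lower())
        out.append((random.choice(synonyms) + tail) if synonyms else word)
    return " ".join(out)

print(spin("Original writing is important work for students."))
# Possible output: "novel penning is big labor for pupils."
```

Even at this tiny scale, the problem is obvious: the swaps can dodge a text-matching plagiarism checker, but the result rarely reads like something a human would actually write.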
But while it’s easy to make fun of these primitive tools, it’s important to remember that, just over a decade ago, they were the pinnacle of article-rewriting technology and were expensive systems that spammers paid thousands of dollars per year to use.
What was once a high-priced secret can, in 2019, be found by any student with a Google search and used by simply pasting text into a form. No money or skill is required.
While much of this decline was due to Google changes that made article spinning useless for its original purpose, the trend of technology becoming cheaper and more accessible is universal.
When it comes to writing, AI technology will improve and it will become more accessible.
What happens when everyday users are able to turn to bots to help with their writing?
The First (Likely) Bots
Just like we didn’t jump straight into self-driving cars, we’re not going to jump straight into bots that fully automate the writing process.
Instead, the first real push into AI will likely be automated editing tools. Much like how lane assist and automated braking are a natural extension of cruise control, automated editing tools are a natural extension of spelling and grammar checkers that we have now.
To some extent, we’re already seeing this. Grammarly, for example, offers a suite of writing assistance tools, many of them going beyond regular grammar/spell checking and replicating the function of a human editor. The Hemingway Editor does something similar using the famous author as its inspiration.
As time goes on, tools such as these will become better and better substitutes for human editors. Though it’s unlikely that they’ll fully replace human editors, they will take on more of their function and edit works in more significant ways.
However, at this point, such tools don’t raise any real ethical concerns. The changes to the work are decidedly human-driven: it is up to the author to approve or reject each proposed change. This leaves no question as to the authorship of the work, and it is certainly no more dangerous from a plagiarism perspective than a human editor.
The problem will likely begin with what could happen next.
The Automatic Rewriters
As the editing tools get smarter and smarter, it’s only a matter of time before they start making more and more significant changes. Right now, even the most advanced tools edit works at a granular level, but it’s not a huge leap for them to start editing paragraphs or entire sections.
From there, it stands to reason that the tools will become more and more automated. We’ll go from approving every little change that is suggested to tools that, by and large, edit a work at the push of a button.
These tools could, in theory, also fact check a work. For example, if a student lists the wrong year for a big battle in their essay, it could correct the statement.
From an authorship standpoint, this is when things start to get complicated. The first time students or reporters deal with authorship and AI likely won’t involve trying to take credit for what an AI wrote, but trying to blame the AI for the mistakes the program inevitably makes.
However, this raises an interesting authorship question. If an AI significantly edits a work and those changes are not expressly approved by the original author, who is really writing the piece? Is it all the responsibility of the original author? Is it a joint authorship? Or is the AI responsible for its mistakes?
We see this with vehicles today, specifically with Tesla’s Autopilot. Though far from being a true self-driving car, it combines existing technologies such as cruise control, lane assist and automatic braking and automates them. The mode has been described as a “legal nightmare” though, in general, the human driver is still considered responsible for the operation of the vehicle.
Though the authorship implications of an essay or an article are not as significant as the life-and-death ones on a roadway, they are important, and it stands to reason they will follow the same pattern. In short, authors will be responsible, for good and for bad, for what AI does to their work.
At this point though, technology may be ready to make its final leap and bring with it a whole new set of questions.
The Final Phase
The endgame for AI and writing is, obviously, push-button writing: the ability to feed a bot a topic and some parameters and have it spit out a fully formed work.
That, obviously, is a long way off, especially on a commercial level. Though the AI bots used by news publications are impressive in their own right, they’re clearly not a replacement for human authors, as evidenced by the fact that both the Washington Post and the AP used AI to supplement, not replace, human reporters.
Still, it’s almost certain that such tools are coming; it’s only a matter of when, not if. As bots get smarter at editing and honing our writing, they’ll get better at writing their own material.
When this happens, we’re looking at some pretty serious authorship questions: What does it mean to be a writer? What does it mean to be an author? Does auto-generating an assignment using AI mean completing it or is it plagiarism? What if AI writing just becomes the norm like word processing?
For users of AI, these questions are a two-pronged problem, pertaining not just to whether they wrote the work, but also to what responsibility they bear for it. After all, what happens when an AI bot commits plagiarism, libel or some other literary crime? Who takes responsibility?
Historically, it’s been as cut and dried as saying, “If your name appears on it, you’re the author and you’re responsible for it.” We’ve even adopted this approach when ghostwriters are involved, holding the named author to blame. Does that work with AI?
Most of these questions are likely distant future ones but the technology still has the ability to creep up on us. If we aren’t prepared to discuss the practical and philosophical questions AI writing will raise, then we may be caught flat-footed when the technological realities catch up to us.
That’s not to say that the anti-plagiarism side is just kicking back and waiting. They’ve been working on tools of their own.
Anti-Plagiarists Strike Back
Though anti-plagiarists don’t have to worry about armies of AI bots generating papers just yet, they do have to worry about a problem that is similar from a technological perspective: contract cheating.
From a pure user-experience standpoint, the problems are remarkably similar. With an essay mill, a student (or other would-be author) can input a topic plus a few parameters and get a fully written work. The only difference is that, on the back end, it’s a human generating it, not an AI.
As such, the challenge of spotting AI-generated papers is very similar to the challenge of spotting human-ghostwritten ones. That’s why there has been a huge push in recent years to go beyond traditional plagiarism detection, which looks for copied text, and focus on authorship detection.
Turnitin has teamed with seven different universities to develop an authorship investigation tool. Though the tool is still in development, it aims to be able to spot when a student’s writing has changed significantly and flag such works for further review.
Disclosure: I am a paid consultant/blogger for Turnitin though I do not work on this project.
However, they’re not the only ones working on it. Emma Identity (Emma) is an AI authorship-detection tool you can play around with today. Simply provide it with a (large) sample of your writing and it will attempt to guess what is and is not your work.
In my test, it was less than perfect, but it’s still an interesting tool and it will undoubtedly get smarter as it learns.
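For those curious how authorship detection can work even in principle, below is a deliberately simplified sketch. It is not how Emma or Turnitin’s tool operate internally; it only illustrates the general stylometric idea of comparing a questioned paper’s writing style, rather than its text, against a known sample of the supposed author’s work. The function-word list, example texts and threshold are all made up for the illustration.

```python
from collections import Counter
from math import sqrt

# A handful of common "function" words used as style markers;
# real systems use far richer features and much more data.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that",
                  "is", "was", "it", "for", "on", "with", "as", "but"]

def style_profile(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two style profiles (1.0 = identical style)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical example texts; a real comparison would need far longer samples.
known_sample = ("The battle was important because it marked the end of the war "
                "and it showed that the army was ready for a long fight.")
questioned = ("In conclusion, one may observe that the aforementioned conflict "
              "constituted a pivotal juncture within the broader campaign.")

score = similarity(style_profile(known_sample), style_profile(questioned))
if score < 0.85:   # arbitrary threshold, purely for illustration
    print(f"Style differs from past work (similarity {score:.2f}); flag for review.")
else:
    print(f"Style looks consistent with past work (similarity {score:.2f}).")
```

Real systems lean on far richer features and far more data, but the underlying question is the same: does this paper read like the person whose name is on it?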
Interestingly, since the same tools that are being developed to combat contract cheating would also likely work on AI writing, it may be one of the first times in history that the anti-plagiarism side is ahead on technology.
Still, that doesn’t mean that the battle has been won, just that it might be delayed if this authorship-detection technology is able to truly work.
Bottom Line
For most of history, there wasn’t much of a technology race between plagiarists and anti-plagiarists. However, with the invention of copy/paste and the rise of the internet, a tech race was on.
I referred to this previously as Plagiarism 3.0, the third major iteration of plagiarism. It’s already begun in the form of contract cheating but, as AI technology improves, automated ghostwriters will begin to take the place of human ones.
This raises not just technological challenges, but philosophical and human ones as well. What it means to be an author or a writer may ultimately shift and we might have to confront the idea of AI authors being tools like word processors or spell checking software.
While we’re a long way off from having to confront those issues face-to-face, the time to start thinking about them is now.
Given the rate at which technology moves, five or ten years may be too late to start the conversations.
The last thing we want is for this technology to creep up on us, and we are already seeing the warning signs that it will.