In academia, the rise of artificial intelligence (AI) writing tools has sent shockwaves through nearly all levels of education. It is the topic on the mind of every instructor, administrator and student in the world's classrooms.
However, at this stage, much of the panic is excessive. There are multiple approaches to detecting AI writing and, as a test by Digital Trends showed, many of the services available perform well at that task.
While assessment processes do need to be updated and changed, we’re not at a point yet where AI makes it easy for students to plagiarize without fear of detection.
Better still, the tools for detecting AI writing are improving. Turnitin, for example, is actively working on its AI-writing detection suite.
But none of this means that change is not coming. AI has been on the horizon for at least a decade and, though it arrived more suddenly than most expected, there are still difficult questions that instructors and administrators need to answer.
To that end, here are five questions that every educator should be asking both of themselves and their colleagues.
1: What is the Boundary of Cheating with AI?
When it comes to ChatGPT and other AI systems, the focus has been on using them as a plagiarism engine. The idea is that students would prompt the AI with their assignment and get an essay ready to turn in.
However, as we’re seeing with the Bing/ChatGPT partnership, that is just one potential use of an AI. Students could, for example, use an AI to speed up internet-based research and find sources they might not have found through a regular search.
Students could also use an AI to help edit or correct their papers without ever presenting the AI's words as their own.
There is a whole spectrum of AI use, from a tool for improving search queries all the way to verbatim plagiarism.
To that end, some schools have drawn a hard line and outright banned the use of ChatGPT for students. Others, however, are taking more of a “wait and see” approach.
Finding this line is going to be crucial, not just for explaining the ethical use of AI to students, but for determining the point at which disciplinary action should be taken.
2: How Can We Change/Improve Assessment?
AI challenges assessment broadly because it can, in theory, answer almost any question an instructor would ask a student. This includes essays, short answers, multiple choice and so forth.
This creates a serious problem. How does an instructor ensure that they are assessing the student’s understanding and not an AI’s?
Creating plagiarism-resistant assignments is a good first step. Anything that is difficult to search for would likely cause an AI to struggle. However, the concerns go beyond that.
There are many possible approaches, such as requiring students to do more in-class work, demanding additional drafts of papers and so forth.
However, many of these solutions create extra work for instructors and many of them do not work well in a distance learning environment.
Assessment is going to change because of AI, and now is the time to start thinking about how.
3: How Do You Educate About AI?
It’s a simple truth that AI is not going to go away, and it will change our lives. Though there are open questions about the extent of those changes, it’s not something that can be simply ignored.
To that end, students are going to need to learn about AI. Though such instruction is already somewhat common in computer science classes from the perspective of creating an AI, this is a technology that will, eventually, touch the lives of all students.
This means, at some point, schools will have to teach students about how to use AI tools. While some of this calls back to the first question, discussions about how and when to safely and ethically incorporate AI into lesson plans need to start happening.
After all, students are going to learn about AI one way or another. If they learn it from their educators, they can also learn the boundaries of ethical use.
4: Should AIs Train on Student Works?
One of the thornier questions regarding AI outside of academia is the legality and ethics of training AI on copyright-protected works without the creators’ permission.
Some liken AI training to a theft of their work and an attempt to automate the creation of derivative works. Others say it’s not significantly different from the way humans learn a craft through representative examples.
Either way, an AI needs data to train on, and how it obtains that information is one of the more divisive issues.
That raises the question: should AIs train on student work, and if so, how? There would be a great deal of benefit to training AIs on such work. Those AIs could better detect human authorship or, as with some authorship products, learn a particular student's writing style and spot changes in it.
Turnitin has been doing such training for quite some time. However, its use of student papers, even in its traditional plagiarism detection tools, has been controversial, even resulting in a 2007 lawsuit, which Turnitin won.
AIs have an insatiable appetite for content, and students produce a lot of the most valuable content. It’s only a matter of time before those issues collide in a major way.
5: How Do We Deal with the Students Who Do Use AI to Cheat?
Finally, as has already begun happening, students are getting caught using ChatGPT and other AI tools to cheat in the classroom. The question becomes: How do we handle these cases?
The obvious answer to many is to treat them the same as if they used a human author, such as an essay mill. If that is the case, then there is significant work to be done.
The rules students follow need to be updated for AI, and honor codes need to be updated and clarified as well. Both the rules and the punishments have to be spelled out clearly.
Schools, however, are often slow to update these codes. This can leave students in the lurch and create uncertainty when issues arise. The time to decide what should happen to such students is now, and those decisions should be put in writing.
There’s no doubt that AI creates a large number of challenges for schools. However, much of the discussion right now is phrased around AI simply being a tool for cheating and plagiarism.
While that is definitely a danger and a risk, the potential issues and changes run much deeper.
It was admittedly difficult to discuss these issues when the threat of AI was more theoretical. Now that it is very real, discussion and decision-making cannot wait.
Hopefully, schools can see past the immediate threats and discuss the challenges AI brings in a more holistic manner. Otherwise, schools are in danger of being left behind by both the technology and their own students.
In the end, this is as much about keeping up with a changing world as it is about addressing a new threat. That is something educators need to keep in mind.