Can We Detect AI Writing?

On November 30, 2022, OpenAI launched ChatGPT to the public. That launch significantly accelerated the use (and misuse) of generative AI writing, making it a prominent issue.

To be clear, AI writing had existed for some time before that launch. In 2016, the Washington Post announced its AI reporter, Heliograf, which it used to supplement human reporting. However, before November 2022, generative AI wasn't broadly available to the public, and its use was severely limited.

But the launch of ChatGPT has put AI in a strange place. We know that it is more common than it once was and that we are actively reading and engaging with AI-generated text. But we have little idea exactly how much of it is out there.

This has placed a heavy focus on AI detection. Not only have dozens of companies sprung up claiming to offer AI detection services, but many existing companies, such as Turnitin, have begun offering it as an additional service.

However, the performance of those services has been, at best, a mixed bag.

All of this raises the question of whether we can detect AI writing. If so, what does it take to do so? The answer is uncertain, but what we know isn’t promising.

What Do We Mean By Detecting AI Writing?

Before we answer the titular question, we must examine what detecting AI writing means.

The reason is that there are three different ways to detect AI writing:

  1. Detecting AI writing automatically.
  2. Detecting AI writing as human readers.
  3. Detecting AI writing through a hybrid approach.

Each suits different scenarios in which one might want to separate AI-generated content from human-written content.

For example, if you’re running a search engine, you must fully automate AI detection, as no level of human involvement will scale enough. However, if you’re a social media user, you must focus on human detection of AI, as you can’t trust the algorithms to protect you.

Finally, if you’re an instructor in the classroom, you must focus on all three. How much automation is possible or required depends on the size of the class, the types of assignments and other issues.

As such, we will look at these three areas separately, as each has its challenges and opportunities.

Can We Automatically Detect AI Writing?

Of the three, this is probably the most challenging question to answer. The reason is that the goalposts are constantly moving. As detectors improve and new AI models come online, the effectiveness of automated AI detectors changes.

However, a recent study (which is still in pre-publication and has not been peer-reviewed) paints a fairly dire picture of the current state of AI detection. The study tracks how effective 12 detectors were against text from 11 separate generative models and how that effectiveness changed with different variables.

The findings were not promising. To quote the conclusion:

Detectors are not yet robust enough for widespread deployment or high-stakes use: many detectors we tested are nearly inoperable at low false positive rates, fail to generalize to alternative decoding strategies or repetition penalties, show clear bias towards certain models and domains, and quickly degrade with simple black-box adversarial attacks.

RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors
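To make "nearly inoperable at low false positive rates" concrete: benchmarks like RAID evaluate detectors by fixing a false positive rate on human-written text, then measuring how much AI text is caught at the resulting threshold. Here is a minimal sketch of that procedure in Python, using made-up detector scores rather than any real detector's output:

```python
import numpy as np

def recall_at_fpr(human_scores, ai_scores, target_fpr):
    """Share of AI documents caught by the threshold that flags
    only ~target_fpr of human documents as AI."""
    # The detection threshold is the (1 - target_fpr) quantile of scores
    # on human text, so only ~target_fpr of human docs score above it.
    threshold = np.quantile(human_scores, 1.0 - target_fpr)
    return float(np.mean(ai_scores > threshold))

# Made-up detector scores (higher = "more likely AI"), for illustration only.
rng = np.random.default_rng(0)
human_scores = rng.normal(0.3, 0.15, 10_000)
ai_scores = rng.normal(0.7, 0.15, 10_000)

print(f"recall at 5% FPR:   {recall_at_fpr(human_scores, ai_scores, 0.05):.1%}")
print(f"recall at 0.1% FPR: {recall_at_fpr(human_scores, ai_scores, 0.001):.1%}")
```

The gap between those two numbers is the failure mode the authors describe: the same detector can look strong at a forgiving false positive rate and catch far less once the threshold is tightened to a level fit for high-stakes use.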

That said, the news wasn’t all bad. The researchers did find that some detectors, including Originality.ai, performed well under some circumstances. In a low-stakes environment, this performance might be good enough.

However, there aren't many actual low-stakes environments. Getting AI detection right is essential in everything from search engines to classroom assessments. Even 99% accuracy is too low in those cases, because at scale a 1% error rate still means an enormous number of misclassified works.
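A quick back-of-the-envelope calculation shows why. Assuming, purely for illustration, a detector that wrongly flags 1% of human-written works (99% accuracy on human text):

```python
# What a 1% false positive rate means at different scales.
# The volume figures are illustrative assumptions, not measurements.
false_positive_rate = 0.01

scenarios = [
    ("One class of essays", 150),
    ("A university's submissions per term", 100_000),
    ("A search engine's pages per day", 1_000_000_000),
]

for label, human_docs in scenarios:
    wrongly_flagged = human_docs * false_positive_rate
    print(f"{label:38s} ~{wrongly_flagged:>13,.0f} human works flagged as AI")
```

At classroom scale, that is a student or two facing a baseless accusation; at web scale, it is ten million legitimate pages misjudged every single day.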

So, can humans do better? Probably not.

Can Humans Detect AI Writing?

The research on whether humans can detect AI writing is mixed. One study from June 2023 found that humans could generally detect AI writing; the researchers even used the cues human readers relied on to build an automated detection system.

However, in September 2023, another study found that even linguistic experts struggled to spot AI writing. None of the 72 experts could correctly identify all four writing samples.

It is clear that when humans successfully detect AI writing, it's based on a close examination of the work. This includes looking for specific patterns in the language, checking the facts and information in the piece, and generally reading the writing very carefully.
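As a toy illustration of the pattern side of that close reading, here is the sort of check a careful reader performs informally, automated over a handful of phrases often cited as stylistic tells. The phrase list is an assumption for illustration only, not a validated signal:

```python
import re

# Phrases readers commonly cite as tells; illustrative only. Any such
# list is easy to evade and will also match ordinary human prose.
TELL_PHRASES = [
    "as an ai language model",
    "it is important to note",
    "in today's fast-paced world",
    "delve into",
    "in conclusion",
]

def count_tells(text: str) -> dict:
    """Count case-insensitive occurrences of each tell phrase."""
    lowered = text.lower()
    return {p: len(re.findall(re.escape(p), lowered)) for p in TELL_PHRASES}

sample = "In conclusion, it is important to note that we must delve into the data."
print(count_tells(sample))
```

The weakness is obvious: keyword spotting is trivial to defeat and misfires on ordinary human prose, which is why the successful human detections in the research rested on fact-checking and careful whole-text reading rather than any fixed checklist.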

Unfortunately, most people don’t read most writing that closely. Every day, we are bombarded with countless written works that we skim, barely read, or even scroll past. With that level of engagement, it’s nearly impossible to separate AI writing from human writing.

The simple truth is that even when humans can detect AI writing, they can't do so without intense engagement. While that might not matter for a lot of content, it may matter greatly when it comes to writing meant to inform or influence people.

Even if humans wanted to detect every piece of AI writing they saw, they would likely lack the time or energy to do so.

What About Hybrid Approaches?

A study published in the International Journal of Educational Integrity found that humans and automated tools did reasonably well at detecting AI writing when evaluating articles submitted for journal publication.

However, both also had weaknesses. For example, human reviewers successfully identified 96% of AI-rephrased articles but also misclassified 12% of human-written articles as AI. Students, by contrast, correctly identified only 76% of AI-generated articles.

Though the automated systems did well overall, with Originality.ai scoring 100%, the researchers note that the test articles were generated with GPT-3.5, an already outdated model. As the research above showed, detectors struggle more against newer models, especially when other attack vectors are added.

However, the main takeaway from the article was that the combination of automated and human detection can be reasonably effective. While the system isn't perfect, it produced the highest detection rate and the lowest false positive rate.
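One common way to structure such a combination is a triage pipeline: an automated detector scores everything, and only the ambiguous middle band is routed to human reviewers. Here is a minimal sketch, assuming a detector that returns a 0-to-1 score; the thresholds are illustrative and not taken from the study:

```python
def triage(detector_score: float,
           human_cutoff: float = 0.2,  # below this: accept as human
           ai_cutoff: float = 0.8) -> str:  # above this: flag as likely AI
    """Route a document based on its detector score. Only the
    uncertain middle band consumes human reviewer time."""
    if detector_score < human_cutoff:
        return "accept as human-written"
    if detector_score > ai_cutoff:
        return "flag as likely AI"
    return "queue for human review"

for score in (0.05, 0.45, 0.95):
    print(f"score={score:.2f} -> {triage(score)}")
```

Reserving human judgment for the ambiguous band is what drives both the high detection rate and the low false positive rate.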

But that comes with a caveat: this is also the most resource-intensive approach, requiring both paying for automated systems and investing careful human reading time. While that is fine for individual works, it's not a system that scales well.

And that, in turn, is the biggest problem with AI detection.

Bottom Line

So, can we detect AI writing? The answer ultimately is: It depends.

If we apply all of our resources to a single work, we can likely determine with confidence whether it was AI-generated. However, the less time and resources we have, the less sure we can be.

When we scale up to trying to detect AI writing in thousands or millions of works, certainty becomes nearly impossible. Purely automated approaches don’t work reliably enough, especially on current AI models and attack vectors.

But that points to another problem. This space is constantly evolving, and all of this information could change tomorrow, or may already have changed. New models come online, new detection systems launch, and new approaches to spotting AI content emerge even as you read this.

So, while the current situation is pretty dire, especially if you need a scalable solution, things could improve or worsen tomorrow.

That’s the nature of this space right now. It is evolving rapidly, and the research is struggling to keep up.
