How Do I Prove That I’m Human?

I have a quick question: How do you know that a human wrote this post?

To be clear, I wrote this. For some, the slew of typos and grammar mistakes will be a tip-off. Others will look at my history and see that I’ve been doing this for 20 years, long before the advent of generative artificial intelligence.

But how can you be certain? AI detectors may be improving, but they still make mistakes. As a reader, there’s no perfect system for you to know that this is a human-written post. Even if this article passes every check, AI “humanizers” are growing in popularity. How do I prove I didn’t use one of those?

I can tell you how I wrote it: I used the WordPress editor to write it and the Grammarly extension to catch grammar and spelling mistakes. I did not use any of Grammarly’s generative AI features. But how do I prove that? More importantly, how do I prove it in the time you will likely spend on this post?

This represents one of the core problems with AI. Most AI usage is not disclosed. Even when it is, that disclosure can get lost as a work is shared online. It’s almost certain that you’ve read, viewed, or interacted with AI-generated content without realizing it.

So, how do you know that this article isn’t? Simple. You don’t. There is no easy way for me to prove it. That is a significant problem.

The Question of Authorship

The problem is simple: Most who use AI don’t disclose it. Often, this is done to deceive others. The recent University of Zurich/Reddit scandal highlights this problem.

Other times, it’s just the nature of the internet. Attribution erodes online. Even properly disclosed AI works may lose that disclosure as they travel the internet.

Since the bots aren’t disclosing themselves, it’s up to the humans to prove that they are human. Both Grammarly and Turnitin have developed systems to track how a paper is written, allowing teachers to see the student’s writing process.

To clarify, these systems can indeed work. However, they require human evaluation. Even if this information were widely available, readers are unlikely to put in that effort for every article, social media post, or piece of writing they come across.
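
As an illustration of what such process tracking might capture, here is a minimal sketch in Python. It is purely hypothetical: the event structure and the paste heuristic are my assumptions, not how Grammarly or Turnitin actually implement their tools.

```python
# A hypothetical sketch of writing-process tracking: log keystroke-level
# edits with timestamps so a reviewer can replay how a document was written.
from dataclasses import dataclass, field
from time import time


@dataclass
class EditEvent:
    """One change to the document: where it happened and what changed."""
    timestamp: float
    position: int          # character offset in the document
    inserted: str = ""     # text typed (empty for a deletion)
    deleted: str = ""      # text removed (empty for an insertion)


@dataclass
class WritingSession:
    """Accumulates edit events so a human reviewer can replay the session."""
    events: list[EditEvent] = field(default_factory=list)

    def record_insert(self, position: int, text: str) -> None:
        self.events.append(EditEvent(time(), position, inserted=text))

    def record_delete(self, position: int, text: str) -> None:
        self.events.append(EditEvent(time(), position, deleted=text))

    def looks_pasted(self, threshold: int = 200) -> bool:
        """Flag single insertions large enough to suggest pasted-in text."""
        return any(len(e.inserted) >= threshold for e in self.events)


session = WritingSession()
session.record_insert(0, "How Do I Prove That I'm Human?")
print(session.looks_pasted())  # False: text arrived in small increments
```

Even a simple log like this shows why human evaluation is still needed: a single large insertion might be AI output, or it might just be a quote pasted in from the writer’s own notes.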

Our digital lives are such a blur of content that we often have no idea where it came from, even pre-AI. From organized disinformation campaigns to social media scams, dubious content was a problem long before 2022.

Simply put, we’ve never been certain that people are who they say they are on the internet.

However, AI takes this to the next level. Now, it’s not just other humans; it’s a wave of bots.

Why This Matters

Some will argue that this doesn’t matter. They argue that generative AI is merely another form of writing, representing a new type of authorship. Even if that is true, there are times when it is essential to know confidently that the human claiming the work is the author.

The first is academia. Many have compared AI to calculators. Even setting aside the flaws in that analogy, there are situations where it is still essential for students to solve problems without a calculator. The calculator didn’t make it less necessary for students to understand how math works. Calculators are great tools for improving speed and accuracy, but only after the basics are understood.

The second is any situation where the words carry weight because of who wrote them, whether the author is an expert, a trusted friend, a member of a particular group, or simply someone perceived as trustworthy.

AI isn’t necessarily inappropriate in these cases, especially if the author takes responsibility for what they publish, but it needs to be disclosed. Readers have a legitimate interest in knowing how much of the author is in the work they’re reading.

However, the biggest issue is that many people value human creations more. They value the extra effort, skill, and care that goes into human creativity. That might change over time, but it is true today. If there’s no way to prove a human is human, that extra value can be lost quickly, and we end up with a very different internet.

Dead Poets and Dead Internets

According to the Dead Internet Theory, the internet is now almost entirely bot activity, with few humans remaining. To support this argument, its proponents point to the sharp rise in bot traffic and the advent of generative AI.

However, while the theory raises legitimate points about bot traffic and other issues, no evidence supports it. The internet is not dead. At least not yet.

However, as AI-generated content becomes more prevalent, we may reach a tipping point where people stop assuming content is posted by humans and start assuming it comes from bots. Even if humans still make up most of the internet, the default assumption may become that everything is AI until proven otherwise.

If everything is assumed to be a bot, what is the point of being a human? Why would I put in that extra effort when it won’t be recognized?

Without a way for the humans to prove that they are human, it may not matter.

What Can Be Done?

There’s no simple solution here. However, there is one way that things could improve. Since the internet is so heavily siloed right now, the major content sites, such as Facebook, Instagram, YouTube, Medium, and Substack, could build checks into their upload processes to determine whether the content was human-created.

These checks would not be perfect and would require humans to work in a controlled environment, similar to what Grammarly and Turnitin do now. That would be a massive problem for YouTube, Instagram, and similar services, where creators upload rather than create on the platform. However, plugins and extensions that work with popular editing apps could likely solve much of this.
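
As a rough sketch of how such an upload-time check could work, the Python below has the editing tool sign a provenance manifest that the platform verifies on upload. Everything here is an assumption for illustration: real systems, such as C2PA-style content credentials, use public-key signatures rather than a shared secret, and the manifest fields are invented.

```python
# A hypothetical upload-time provenance check. The editing tool signs a
# manifest binding a "human-created" claim to the exact content bytes;
# the platform verifies it before accepting the upload.
import hashlib
import hmac
import json

SHARED_SECRET = b"demo-only-secret"  # stand-in for real key management


def sign_manifest(content: bytes, tool: str) -> dict:
    """Editor side: bind a provenance claim to the content's hash."""
    manifest = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "tool": tool,
        "claim": "human-created",
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SHARED_SECRET, payload,
                                     hashlib.sha256).hexdigest()
    return manifest


def verify_on_upload(content: bytes, manifest: dict) -> bool:
    """Platform side: reject if the content or the claim was altered."""
    claimed = dict(manifest)
    signature = claimed.pop("signature", "")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # manifest was tampered with or not issued by the tool
    return claimed["content_sha256"] == hashlib.sha256(content).hexdigest()


article = b"I wrote this post myself."
manifest = sign_manifest(article, tool="wordpress-editor")
print(verify_on_upload(article, manifest))              # True
print(verify_on_upload(b"AI rewrote this.", manifest))  # False
```

The catch is the one noted above: the signature only proves the manifest arrived unaltered from the tool, so the claim is only as trustworthy as the controlled environment that issued it.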

But then there’s the bigger problem: Facebook and Google control so much of this market that, without their cooperation, the effort would likely fail. Yet both are among the most prominent AI companies, with billions invested in the technology.

Promoting human authorship invariably means discouraging AI usage. As such, they aren’t likely to do it, even if that refusal makes for a worse experience for end users.

In short, the people who are in the best position to proactively identify and mark AI content are also the ones who are most invested in AI. That makes any progress on this front unlikely at best.

Bottom Line

As someone who has been on the internet for 30 years, I’ve found it difficult to watch it grow and change the way it has. Rather than becoming a place where everyone can have a digital homestead, it’s become a handful of content silos.

But even with that, I’ve generally remained optimistic. Complaints about the false promise of the internet go back over a decade, but, in general, it’s been a powerful tool to connect humans with each other.

But that may be changing. While bots are not a new problem on the internet, AI represents a significant escalation.

AI may very well be the future. But humans should decide whether it is. Currently, humans are not getting that choice because there is no clear way to separate AI work from non-AI work.

Some might say that this is unimportant. If AI creations are indistinguishable from human creations, why does it matter? However, who authored a work is a critical piece of information about it. This is why plagiarism matters.

Though some approaches could take back some of that control, those best positioned to implement them have no motivation to do so.

If AI “wins,” it won’t be because it was a better form of authorship. It will win because it has overwhelmed human authorship.
