Is There an Ethical Way to Use AI Writing?

One year ago today, OpenAI announced the public launch of ChatGPT, giving everyone free and open access to a high-end generative AI system.

The impact was felt almost immediately. Despite the launch happening with barely a month to go in the year, AI quickly became the dominant copyright and plagiarism story of 2022.

2023 would be similarly dominated by AI headlines, especially as tech giants such as Google, Microsoft, Adobe, Canva, Nvidia and more launched their own AI systems.  

But, as the conversation around AI grew louder, questions about both the legality and ethics of AI began to arise. Several lawsuits have been filed against AI firms over how their systems were trained, schools have raced to craft policies around the use of AI and, as we discussed recently, news organizations have been working to create guidelines for their reporters on the matter.

In recent weeks, those conversations have reached something of a boiling point. The recent Sports Illustrated AI debacle, in which the magazine published AI-written articles attributed to AI-generated (but seemingly human) reporters, along with an SEO expert bragging about using AI to “steal” millions of visits from a competitor, has put a spotlight on some of the worst abuses of AI writing.

But, despite these significant ethical issues, AI’s supporters have continued to tout its promise to eliminate mundane tasks, open up new forms of creativity and generally make our lives easier. Yet, time and again, that’s not how AI is being used, at least not when creating public-facing work.

This raises a simple question: Is there an ethical way to use AI writing, especially when creating content for the public to view?

The answer is complicated and, frankly, doesn’t bode well for generative AI in its current form.

Some Recent History

To be clear, generative AI didn’t magically come into existence on November 30, 2022. That date was simply the launch of ChatGPT, which gave the public its first taste of an advanced AI system and kicked off the current land rush in that space.

AI, or “automated storytelling technology,” had been around for years prior. In 2017, The Washington Post announced it was expanding its use of Heliograf, an in-house generative AI tool that covered high school football by looking at box scores.

Heliograf was not controversial at the time. In fact, the story went largely unnoticed, especially since the paper wasn’t cutting reporters and was, instead, using the tool to cover games that it couldn’t send reporters to. The use was also well-disclosed and, in general, seemed like a boon to reader, reporter and paper alike.

However, the public launch of ChatGPT changed that game. Where the introduction of AI had, up to that point, been gradual, OpenAI threw a boulder into the pond that is the internet, and the ripples still haven’t settled.

But, where Heliograf was disclosed and extremely limited in both use and functionality, much of the conversation about AI writing today is about replacing or reducing the use of human authors and then deceiving the audience into thinking that the work was written by a human being.

And that is deeply problematic, especially when one considers how AI systems are trained.

How the Sausage is Made

The vast majority of AI systems share a massive legal and ethical issue: They were trained on the writing of human authors without the permission of the creators or the rightsholders.

Ed Newton-Rex, a now-former executive at Stability AI, famously quit over this precise issue. Though a supporter of generative AI, he said he can “only support generative AI that doesn’t exploit creators by training models — which may replace them — on their work without permission.”

The issue is also playing out in the courtroom. Several creators, including multiple groups of authors, have filed lawsuits against AI companies over the unauthorized use of their work to train AI systems.  

The law and ethics here are clearly not settled, and it will likely take many more years before anything resembling a consensus is reached on this issue (both inside and outside the courtroom). However, it is clear that, for many, there is no such thing as ethical use of these AI systems.

This poses a serious problem when it comes to ethical AI writing. Though some systems, such as Adobe Firefly, claim to be trained solely on licensed and public domain content, none of the major AI writing tools can make that claim.

That said, it’s only a matter of time before someone does release a generative writing AI that is only trained on licensed and public domain work, whether it’s an open-source framework or a private one targeting greater legal and ethical certainty.

What happens then?

Human Replacement = Human Standards

When it comes to using AI writing, I have a simple standard: If it’s going to be used in the same capacity as a human author (or coauthor), then it needs to be held to the same standards as a human author.

This means several things, including:

  1. Clearly Disclosing the Use of AI: If AI significantly contributes to the creation of a work, that contribution needs to be clearly disclosed. To that end, I ask the question: If the contribution were done by a human, what would the human be granted? Would they be a coauthor? Would there be a byline? Would there be a special thanks? Whatever would be done for a human contributor should be done clearly and plainly for an AI one.
  2. Fact Checking and Editing: Second, any work contributed by an AI needs to be held to the same quality and ethical standards. It should be fact-checked, proofread, plagiarism-checked and polished to the same standard as a human author’s work. It’s wise, at this stage, to treat AI like an author who is new or isn’t broadly trusted. Clearly, that places a much higher burden on editors and the editing process with AI works.
  3. Revised Copyright/Use Statements: Finally, in the United States, AI-generated work does not qualify for copyright protection. Though that viewpoint isn’t universal (a court in China recently found the opposite), currently and in most jurisdictions it’s assumed that AI-generated works do not qualify for copyright protection. Copyright statements and licensing pages should reflect this, indicating that AI-generated content is free for others to use. Doing otherwise is claiming ownership of something in the public domain.

Obviously, taking these steps would defeat much of the purpose of using AI in the first place. For many, the goal of using AI is to quickly and cheaply generate content they can exploit, deceiving both search engines and human readers into believing it was created by humans. Transparency, editorial standards and clear licensing language work against those goals.

If you are trying to use AI writing ethically and find that any of these standards are troublesome, it is important to ask why. Why is transparency in how a work was written a bad thing? Why is holding that work to the same standards as a human problematic? Why is accurately explaining the copyright situation around the work bad?

None of this holds AI writing to a different standard than human authors have been held to for centuries. It’s just inconvenient for many of those who want to use AI writing.

Bottom Line

For myself and this site, I’ve made the (rather obvious) decision that I will not be using AI writing outside clearly defined places where I am highlighting or criticizing AI writing as part of an article. 

But I am also not someone who is fully against AI. I have played with ChatGPT and other systems as part of brainstorming sessions for ideas, to refine headlines or just get a different viewpoint. I haven’t used any of the ideas or headlines, but I was genuinely impressed with the suggestions that I got and found them helpful in coming up with improved ones. 

AI, as a tool, does have a great deal of promise. However, like any tool, it can be built in an unethical or illegal way, and it can be used in an unethical or illegal way.

What we are seeing right now is a parade of the most unethical ways that these tools can be built and used. Whether it’s the widespread use of “shadow libraries” in training AI systems, the impact of AI on academic integrity, the Sports Illustrated story involving fake AI reporters or, as that one SEO expert bragged, the use of AI to generate garbage articles to tank a competitor, AI has brought out the worst in people on both sides of the screen.

While these stories mask the countless users who take advantage of ChatGPT and similar systems to make small but impactful improvements in their lives, they raise very serious ethical and legal issues that, unfortunately, won’t be answered fully for quite some time.

As such, if you’re seeking an ethical way to use AI writing, the good news is that there are certainly things you can do to at least improve the ethics of it. However, there is simply no certainty here and, as we’ve repeatedly seen, those who have leaned into its use clearly don’t care about the ethics.

In fact, in most cases, they were drawn to AI precisely because of a lack of ethics and a clear comfort with lying and misrepresenting. So, even if you could find an ethical way to use AI writing, I doubt that many would believe it. 

It’s a frustrating truth, but one that was created organically by how the internet, to date, has used AI writing.
