The article focused on OpenAI’s February announcement that it had been training a new AI language model dubbed GPT-2. However, citing concerns about “malicious applications of the technology,” OpenAI is not releasing the full trained model and is instead offering a much smaller version for researchers to examine.
Patrick House, a neuroscientist and author, is one of the few who have gotten the chance to use the trained model. He was hired by OpenAI to write short stories alongside the bot and recently detailed his experiences in a piece for the Los Angeles Review of Books.
There, some of his worst fears were allayed: he and the bot never wrote the same story. Instead, he found himself using the AI as a writing tool for improving his work and making his stories stronger.
Naughton, in his article, said of the AI’s writing, “At this point, the reader gets the eerie uncanny valley feeling: This is almost, but not quite, authentic. But the technology is getting there.”
Indeed, the samples of the AI’s writing are perfectly readable and intelligible but feel “off” or “wrong” in difficult-to-define ways. Still, it would be easy to dismiss the strangeness as the work of an author who isn’t a native English speaker or simply isn’t a very good writer.
While it may be comforting that AI isn’t quite ready to replace human authors, the story also points to how AI can be used ethically in the writing process. House, perhaps unintentionally, may have found a way that AI can be a useful writing tool that empowers humans rather than replaces them.
The Ethical Quandary
As we discussed back in February, the publicly available AI writing tools of today (and likely tomorrow) are either rudimentary in nature or focused more on improving or altering writing than on writing from scratch.
This comes in a variety of formats, from automatic “paraphrasing” tools that attempt to rewrite work to grammar and spell checking tools such as Grammarly and the Hemingway Editor that use algorithms to parse writing and suggest improvements.
These, by and large, don’t generate any significant ethical concerns by themselves. Though automatic paraphrasing tools are often used by students in a bid to avoid detection by plagiarism checking tools, they produce low-quality writing that’s of little value. More importantly, the human is still in charge and making the choices; using an automatic paraphrasing tool is no different than changing words around yourself.
With true AI writing, the human author is not in control. The human feeds a prompt, the AI does the rest. The end-user has no say in the process other than the prompt they provide and what they do with the results.
This raises a slew of ethical and legal complications. We already looked at the complications AI could cause for copyright, but any area of law that writing can touch has potential issues with AI. Will AI write our contracts? What happens when an AI commits libel? What happens when an AI creates material that’s outright illegal?
While the easy answer is to hold the end-user accountable, many of these issues involve tests of intent, and an AI doesn’t have intent the way a human does.
But the ethics are a thornier and likely more immediate concern than the legal issues. If I feed a prompt into an AI, am I the writer? Most would argue not, even if I edit or improve the work by hand.
When and if we make the shift from being authors helped by automatic proofreaders to being proofreaders that help automated authors, there will be changes we have to make in how we view creativity.
Fortunately, House’s story paints us a picture of how it could work, in viewing the AI not as an editor or as the writer, but as a cohort.
The AI Friend and Cohort
Right now, AI editors are fairly limited. They can read your text and make grammar/spelling suggestions, but they don’t comprehend what you’re writing and can’t make suggestions on topics, the flow of the story, etc. They know the words that you’re writing, not what you’re saying.
The reason is that, to do this, a prospective AI needs to both comprehend what it’s reading and do its own writing. Otherwise, it isn’t in a position to make viable suggestions in those areas.
Still, if we agree that using an AI to write something you take credit for is a form of plagiarism, then the question has to be asked: Are there ways that human beings can use such an AI to ethically improve their writing?
To that end, House may have at least part of the answer. To hear House describe it, the AI became a stand-in for a writing buddy, someone who could make suggestions and bring new ideas to the table but didn’t really write the final story.
That’s a powerful idea. Many writers struggle to get that kind of feedback: to find places where they can present their work and get pushback, suggestions and ideas for improvement. They also can’t easily see how others would approach the same prompt. The internet has made this kind of collaboration easier, but it can still be a challenge, especially for those who are timid about their work.
AI could be a quick and low-risk way to get outside input on your work. There’s no need to drag a friend into it or join a writing group; you can get instant suggestions from a bot.
Is it ideal or a substitute for human editors and interaction? Of course not. Just as grammar checkers can’t replace human copyeditors, there’s no substitute for human help here either.
But, as House found out, it can provide a quick new take on a project and perhaps help writers get past immediate obstacles they are facing.
While that may not justify the potential for misuse such an AI would have, it at least gives it a valid reason to be in a human author’s toolbox.
There’s not much doubt that AI is going to play an increasingly important role in our lives and that includes our writing. That said, finding a balance in how to use it is going to be crucial.
While we all want AI to help reduce some of the tedium in our lives, it’s an uncomfortable thought that AI could, at least theoretically, replace us. However, that is a long way off, especially with creative fields like writing.
That doesn’t mean others won’t use AI unethically. In a moment of what can only be described as extreme meta-ness, it appears Siraj Raval may have used an AI rewriting tool (similar to an automated paraphrasing tool) to plagiarize a paper about AI.
Other misuses will be less funny and more difficult to detect. This is why companies are investing in authorship detection tools that go beyond traditional plagiarism detection.
In the end, it’s important to not get so lost in the unethical uses of AI that we don’t talk about the potential ethical ones. Who hasn’t wanted a 24/7 writing friend they can bounce ideas off of or have them help with a thorny problem? An AI, someday, may be able to provide that and do so ethically.
While that is likely still some time off, I know a lot of writers who would be very interested in it.