Medium Sets New Policies on AI-Generated Writing
Yesterday, in an email sent to members of its Partner Program, Medium announced upcoming changes to its policy regarding AI writing.
The new policy, which takes effect on May 1, bans AI-generated writing behind its paywall, whether it is disclosed or not.
Medium is also updating its restrictions on AI writing outside its paywall. Under the new policy, disclosed AI writing will only be eligible for “General Distribution”, not “Boost Distribution”, where curators highlight selected stories. Undisclosed AI writing will only be given “Network Only” distribution, meaning it will only be available to existing followers of that author or publication.
The policy also states that AI images “must include captioning identifying them as such.”
The new policy also impacts the use of AI-assistive technology. According to Medium, “AI-assist is often a way to insert small snippets of AI-generated content that often suffers from the same problems as fully AI-generated stories.”
As such, AI-assisted text must be disclosed at the beginning of the story, within the first two paragraphs. While it is unclear whether disclosed AI-assisted text is allowed in the Partner Program, undisclosed AI-assisted text will be given “Network Only” distribution, the same as undisclosed fully AI-generated text.
The policy does not require disclosure of AI-assisted grammar or spell checking. It applies only to AI-generated text and images.
According to Medium, this change is part of its policy that states, “Medium is for human storytelling, not AI-generated writing.” That said, its policy regarding AI writing has shifted a great deal over the past 15 months.
In January 2023, when Medium first announced its AI policy, it broadly allowed the use of AI as long as it was disclosed. At the time, undisclosed AI usage would be given “Network Only” distribution, the same as under the latest policy.
In July 2023, Medium shifted from “welcoming” AI writing to merely “allowing” it, stating that all AI writing, whether disclosed or not, would only be distributed within the author’s network. Neither announcement included any information about the Partner Program.
The new policy change is a major restriction on AI writing, but it also relaxes some of the previous restrictions. It’s a great example of how companies that host user-generated content, like Medium, are struggling to create AI policies and find a balance that satisfies everyone.
A Moving Target
In many ways, Medium has been on the front lines of the discussion about AI. Medium, as a site that monetizes user-generated content, has been an attractive target for AI “authors” from the beginning.
Wisely, Medium has acknowledged that this is a shifting landscape and that their policies will have to change and adapt over time.
To that end, the most recent policy makes two major changes to the handling of AI writing on the site:
- Creating a clear policy that AI writing is not permitted in the Partner Program, whether disclosed or not.
- Allowing disclosed, non-paywalled AI writing to be available for “General Distribution” instead of just “Network Only” distribution.
On one hand, it’s easy to see this as Medium being less restrictive about AI. After previously limiting disclosed AI writing to “Network Only” distribution, Medium now makes it available to a broader audience. However, it’s unclear how big of an impact that will have.
The reason is that “General Distribution” is algorithm-driven. If the algorithm doesn’t determine that AI-generated content is worth promoting, then it might as well still be network-only. As anyone who has published content to algorithm-driven sites can attest, being available for distribution does not mean it will be distributed.
The bigger news is the strict ban on AI-generated and AI-assisted content in the Partner Program. Since that program is both how Medium monetizes its site and how authors are paid for using the platform, this effectively cuts off all AI writing from monetization.
It’s unclear how much, if any, AI writing was being monetized on Medium before. But having a clear, hard rule at least sets expectations for the future.
A Question of Enforcement
In its announcements, Medium dodges one critical question: how it plans to enforce the rules.
In an FAQ about the policy, Medium addresses enforcement by saying, “We use a wide variety of tools and technologies to detect and identify AI-writing and other AI content, combined with human review of any positive results.”
However, the company doesn’t outline which tools it is using or how effective those tools have proven to be.
The current landscape of AI detection is rife with challenges. A large number of products are available, and they vary wildly in effectiveness. However, even the best systems are imperfect and require human verification of findings, something that can be difficult to do at scale.
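To see why that human verification matters, consider a quick back-of-the-envelope calculation. The numbers below are illustrative assumptions, not figures from Medium or any particular detector:

```python
# Illustrative base-rate math: even a seemingly accurate AI detector
# produces many false positives when most content is human-written.
# All numbers below are assumptions for the sake of the example.

base_rate = 0.05            # assume 5% of submitted stories are AI-generated
true_positive_rate = 0.90   # detector flags 90% of actual AI stories
false_positive_rate = 0.05  # detector wrongly flags 5% of human stories

# Probability that a flagged story is actually AI-generated (Bayes' rule)
flagged_ai = base_rate * true_positive_rate
flagged_human = (1 - base_rate) * false_positive_rate
precision = flagged_ai / (flagged_ai + flagged_human)

print(f"Share of flags that are real AI content: {precision:.0%}")
# With these assumptions, only about 49% of flagged stories are actually
# AI-generated; the rest are false alarms, which is why every positive
# result needs a human to look at it before action is taken.
```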
To that end, Medium does appear to have a policy in place for such checks, but only for positive results. There are no spot checks of negative results, at least none mentioned. In cases where AI content is found behind the paywall, Medium asks users to report the post using the three-dot menu.
However, the onus of catching mistakes shouldn’t lie solely or primarily on readers, especially paying ones. That said, there may not be much of an alternative here. Medium simply doesn’t have an effective way to catch what its automated tools miss.
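To make that gap concrete, here is a minimal sketch of the enforcement flow as the FAQ and the reporting option describe it. The function, score and threshold here are hypothetical stand-ins, not anything Medium has disclosed:

```python
# Minimal sketch of the enforcement flow described in Medium's FAQ.
# The detector score, threshold and function names are hypothetical
# illustrations, not Medium's actual implementation.

def triage(detector_score: float, reader_reported: bool,
           threshold: float = 0.8) -> str:
    """Route a story based on automated detection and reader reports."""
    if detector_score >= threshold:
        # Positive result: a human reviews it before any action is taken.
        return "human_review"
    if reader_reported:
        # Readers can flag paywalled AI content via the three-dot menu.
        return "human_review"
    # Negative, unreported result: no spot checks are mentioned,
    # so anything the detector misses simply stays up.
    return "no_action"

# Example: a story the detector misses and no reader reports
print(triage(detector_score=0.3, reader_reported=False))  # -> no_action
```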
Medium, much like Amazon, YouTube, Facebook and other large online platforms, is “too big to police.” Platforms at that scale have to rely heavily on automated tools and algorithms, no matter how imperfect those systems are.
Bottom Line
Medium is in a difficult position when it comes to AI. They have to strike a balance between various groups, including authors who mistrust AI, authors who are excited about what AI can do for them, those who want to use AI for spammy purposes and readers who simply want the best content.
While trying to balance those interests, they are also confronted by the fact that AI detection is, at best, highly imperfect. There is simply no definitive way to know what is and is not generated by AI.
Still, having guidelines is crucial. AI detection will likely improve, but it will always be a cat-and-mouse game with AI systems and will likely never approach 100% accuracy.
In the end, I think that Medium largely has the right idea. Not necessarily with this particular policy, but with the general approach of constantly evaluating and shifting the policy as necessary.
Personally, I would have appreciated an even tougher stance on AI content, especially given their stated focus on human storytelling. However, I also recognize the complexity of the situation and the difficult position that they are in, especially with AI detection.
Admitting that the situation is fluid and revising the AI policy regularly is crucial, and that is one thing they have done well.