The Wave of AI Lawsuits Has Begun
Earlier this month, I noted that artificial intelligence (AI) was the big story of 2022 and would likely be the big story of 2023. That is true regardless of whether you are looking at it from a copyright (legal) perspective or from a plagiarism (ethical) one.
Clearly, that prediction was a safe one, and it is already coming true.
Things began in November of last year, when a lawsuit was filed against Microsoft-owned GitHub over allegations that the company violated the rights of programmers by training its new Copilot AI on public repositories hosted on the site.
While that was the first salvo, it seems that the floodgates are opening up. This week, a law firm launched a class action against DeviantArt, Stability AI and Midjourney, three companies that built AI systems using the Stable Diffusion platform.
Getty Images, the popular stock photography site, also filed a lawsuit against Stability AI, alleging copyright violations in the scraping of Getty Images content from its site to seed the Stable Diffusion AI. Getty even pointed to its watermark appearing in some Stable Diffusion-created images.
Finally, even though such a lawsuit has not been filed against OpenAI, the maker of the popular ChatGPT bot, the company is clearly anticipating such a case: it has doubled the size of its in-house legal team and has already retained additional outside counsel.
In short, the floodgates have either opened or are in the process of opening. AI will get its day in court, and these cases will likely determine not only the rules around AI, but also the direction that AI will take in the future.
The Issues at Play
As I pointed out in my earlier post, there are two issues that litigation around AI is focusing on:
- The Rights of Human Artists: Basically, what rights do human creators have when their work is used by companies to train an AI system?
- Ownership of AI Works: Can the works created by an AI be protected by copyright?
With these lawsuits, it’s the first issue that is being focused on as, in every case, the allegation is that the companies that created the AI used the plaintiffs’ works without permission.
According to the class action lawsuit against the three Stable Diffusion companies, this results in direct copyright infringement, violations of the Digital Millennium Copyright Act and a breach of DeviantArt’s terms of service.
However, the artists involved may face an uphill battle. The reason is that they have to prove not only that their work was copied, but that the copying was not a fair use.
This will likely vary wildly from case to case and user to user. For example, in the GitHub case, there are allegations that the company’s AI could “launder” code by “writing” new code based on open-source software and releasing it under a closed-source license.
That, for example, would be a very different scenario from that of an artist whose work was used to train an AI but was never reproduced by it in any way. In short, there are as many potential questions as there are works used.
Most experts seem to think that using copyright-protected works to train an AI is likely a fair use. This is further backed by recent rulings, such as the Google Book Search case, where courts gave tech companies a great deal of leeway when ingesting protected works, as long as the display of those works was limited.
That said, one of the key reasons Google won that case was because Google Book Search was, in the eyes of the court, a positive for the authors and not an attempt to compete with them. With AI, the fear is that the technology could replace human creators, leaving questions open even in this space.
However, even if we assume that the training is not an issue, the output almost certainly can be infringing, especially if it too closely mimics the work the system was trained upon. And that brings us to a major problem with AI: even the creators of the system can’t tell you where your generated image, text or code came from.
It could be a relatively new work that uses the training material in a wholly legal way. On the other hand, it could be a very close derivative work or a direct copy of one of those source works. There’s simply no way to know for certain.
And that uncertainty is precisely what AI is facing at this time.
A Matter of Law, Not a Matter of Fact
That uncertainty is driven home by another issue: it’s likely that these questions will not be decided by a jury.
The reason is fairly simple. In recent decades, courts have largely stopped seeing fair use as an issue of fact for a jury and, instead, have treated it as a matter of law. This means that judges often make the final decision in this space.
As a result, the boundaries of AI’s use of the works of others will likely be decided by a handful of judges and the appeals courts that hear the challenges to those decisions.
While both judges and juries are capable of making questionable decisions, this is a very new legal area and every decision is going to have major repercussions across the tech world and for all the creators who publish online.
The hope is that the judges who render these decisions will be cognizant of that. While the Google Book Search case was certainly impactful for authors of print books, these cases have the potential to impact every type of creative and every tech company.
Early decisions, even if they are eventually overturned, can set the stage for decades to come. This space is moving so fast that it didn’t wait for the legal dust to get kicked up, let alone settle.
To that end, no one knows what happens next. All we can do is watch and wait.
Bottom Line
I wish I had better answers when it comes to copyright and AI. The truth is that this is an unexplored space. It will likely take the law several years to catch up and, when it does, there will likely be some other new thing that creates just as much heartburn.
Technology is always years ahead of the law, and that gap is only widening.
Whether you’re an artist worried about the use of your work in AI systems or a developer building your own AI platform, you’re entering a space where the law is almost completely unsettled.
We have very limited case law to draw on, there are no industry standards for opting out (though DeviantArt is trying to create one) and there’s no predicting where this is heading.
In short, we have no legal or social norms, and the development of both of those will likely take years. In the meantime, AI is here, and it doesn’t appear to be going away.
That should worry just about everyone involved, regardless of what side of the legal fence they’re on.