Why Copyright Won’t Kill AI
The number of lawsuits filed against AI companies is, in a word, staggering.
There are lawsuits for every type of creator. Journalists, novelists, visual artists, musicians, programmers, and actors have all filed suits. Yet, at their core, all of these lawsuits ask one fundamental question: When, if ever, is it acceptable to train an AI on copyright-protected work without permission?
This seems like an existential threat to AI, and in many ways, it is. Sam Altman, the head of OpenAI, famously said that creating useful AI models is “impossible” without copyrighted material.
However, that is not entirely true. As we discussed last year, both Adobe and Nvidia have launched AI systems trained on licensed content. Though Adobe has faced controversy over underpaying contributors and over apparent rights grabs in its terms of service, its system proves that such an approach is possible.
As concern grows over AI, many are looking to use copyright to limit or even stop its growth. However, that is not going to work. Though a copyright defeat would significantly impact AI, it wouldn’t kill it.
Generative AI, in some form or another, is here to stay. It’s just a matter of what role it takes.
Surviving Copyright Armageddon
We likely won't know the rules around copyright and AI for some time. Though we may see some helpful rulings shortly, they won't mean much until they've been appealed and the full legal process has played out.
But let's assume a worst-case scenario for AI companies: the rulings impose a blanket ban on training AI systems on copyright-protected work without permission. What happens next?
Most of today's popular AI models would likely go offline. ChatGPT, DALL-E and others all rely on such data and would probably shutter pending a retraining. However, they would likely relaunch later, built on licensed and public domain data.
AI companies have already begun hedging their bets in that direction. OpenAI has reached deals with The Atlantic, Axel Springer, Time and the Associated Press.
Though OpenAI contends such licenses aren’t necessary, their existence highlights the uncertainty.
However, the truth is that the internet is siloed enough that a few strategic deals could give AI companies access to all the content they need. For example, Reddit signed an AI content licensing deal with Google, giving the search giant broad access to its millions of pages.
A copyright defeat would make AI more expensive and less useful, but it wouldn't kill it.
However, this only applies to AI systems developed in the United States. China is already a global leader in generative AI, and other countries are following suit. Many of those countries are difficult environments for enforcing copyright law.
As such, AI development may not end so much as relocate. That is, if other issues don't bring it down first.
Bigger Threats on the Horizon
Though the threat of copyright still looms over AI, there are more pressing issues to consider.
First, AI is woefully unprofitable. Microsoft charges $10 per month for GitHub Copilot, yet it reportedly loses an estimated $20 per month per user. Some smaller AI companies have folded or held layoffs, and those that remain are still struggling to find a viable business model.
This has led to a debate about whether AI is another dot-com bubble. Even AI's biggest supporters admit that the hype is likely to cool. Others, such as Goldman Sachs, doubt that the money invested in AI will ever pay off.
The reason is simple: AI is extremely computation-heavy. Even a single AI query consumes a significant amount of energy. Sam Altman said in January that the cost is so high that an energy breakthrough, such as nuclear fusion, would be needed to power an AI-driven future.
In the meantime, the focus has been on creating more efficient AI systems. However, those systems are much less effective and generally target fewer tasks.
But even if AI can find a path to financial and ecological stability, it faces other technical challenges. For one, it may run out of human-written training data. That could limit further improvements.
To make matters worse, AI systems are now being trained on AI-generated content, both intentionally and unintentionally. That risks degrading the models: AI might not only stop improving but could start backsliding.
Ultimately, these concerns are much more pressing for AI companies than copyright. Copyright could be a speed bump, but these issues could be a brick wall.
Bottom Line
Copyright isn’t going to kill AI. Most likely, nothing is.
Simply put, there has been too much investment for AI to just disappear. There will likely be an adjustment period during which we find the spaces where AI genuinely helps and then build business models around them.
However, that period probably won't begin until this gold rush phase has passed.
But when we look back on these heady days for AI, copyright won't be what killed the excitement. The technology's inherent costs and limitations will ultimately decide its fate.
Fully licensed AI systems are possible. They will be more expensive and of lower quality, but they are possible. What may not be possible is overcoming the myriad other challenges AI technology faces.