Can Adobe and Nvidia Fix AI’s Copyright Woes?
Yesterday, tech giants Adobe and Nvidia made dueling announcements about new AI tools that they were unveiling to the public.
Adobe announced a new AI image generation service named Firefly that will be built into its established software, including Adobe Photoshop and Adobe Illustrator.
Nvidia, for its part, announced Picasso, an AI image generation service that can also generate videos and 3D applications from text descriptions.
On the surface, these are fairly standard announcements and just two of dozens of similar AI announcements that have been made over the past year.
However, there was something notable about both announcements: Both discussed how their images were licensed and how that made them useful for commercial applications.
Adobe’s Firefly, for example, is trained on a mixture of Adobe’s own image library, open-licensed works and public domain works. Nvidia, meanwhile, announced partnerships with Adobe, Getty Images and Shutterstock to help fill out its training collection.
This stands in stark contrast to how other image AIs launched and how they were greeted afterward. There is an ongoing wave of AI-related lawsuits, including the class action case targeting DeviantArt, Stability AI and Midjourney, and Getty Images’ separate case against Stability AI. That says nothing of the user backlash DeviantArt faced after it made its announcement.
Both Nvidia and Adobe are trying to get ahead of these issues by training their respective AI systems only on images that are licensed for such use, and both indicated that they would pay royalties to those whose images are used in AI-created works.
Furthermore, Adobe also announced that it is pushing for a universal “Do Not Train” tag for content to ensure that artists who don’t want AI training on their images can opt out.
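No technical specification for the tag has been published yet, but the concept resembles existing opt-out signals like robots.txt. As a purely illustrative sketch, assuming a hypothetical metadata field named do_not_train (not Adobe’s actual tag), a training pipeline honoring the flag might filter its corpus like this:

```python
# Illustrative sketch only: Adobe has not published a spec for the tag,
# so the "do_not_train" metadata field used here is a hypothetical stand-in.

def is_trainable(metadata: dict) -> bool:
    """Return False if the asset's metadata carries an explicit opt-out."""
    # Policy choice for this sketch: only assets that explicitly opt out
    # are excluded; assets with no tag are treated as trainable.
    return not metadata.get("do_not_train", False)

def filter_corpus(assets: list[dict]) -> list[dict]:
    """Keep only assets whose creators have not opted out of AI training."""
    return [asset for asset in assets if is_trainable(asset.get("metadata", {}))]

corpus = [
    {"id": "img-001", "metadata": {"do_not_train": True}},  # opted out
    {"id": "img-002", "metadata": {}},                      # no tag present
]
print([asset["id"] for asset in filter_corpus(corpus)])  # ['img-002']
```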
All in all, these are very refreshing announcements, but do they actually solve the copyright issues AI is facing?
The Problem with AI
As we looked at when discussing the wave of AI lawsuits, AI has two separate copyright issues:
- The Rights of Human Creators: What rights do human creators have when their work is used to train an AI without their permission?
- Ownership of AI Works: Can the output of an AI be protected by copyright and, if so, who owns that copyright?
Nvidia and Adobe are clearly trying to mitigate or eliminate the first issue. If they train their AIs only on licensed content, there should be no legal issue for Adobe or Nvidia, nor for those who use their products.
While this might seem like an obvious and easy solution, it’s important to remember that an image AI needs millions, if not billions, of images, and those images need high-quality metadata to be truly useful. Stability AI, for example, used over 2.3 billion images to train its system.
Very few places have image libraries that are both large enough and well-enough cataloged to be useful for an AI. Getty, Adobe and Shutterstock are three such places.
To that end, it is a logical move for both Nvidia and Adobe to leverage their own libraries and those of their partners to fill out their models. Couple that with a promise to pay royalties for use, and it looks like a system that serves both the AI developers and their users.
The big question remaining is: How will the artists feel?
How Things Can Still Go Wrong
If we assume that Adobe, Nvidia and all of their partners are correct about their licensing, then we can also assume that the images used in their respective AIs are fully licensed. That eliminates any copyright issues between artists and the AIs’ developers, and greatly mitigates any issues between artists and those using the AIs.
However, that’s not to say that everything is all clear. As we’ve seen with GitHub’s Copilot coding AI, it may be possible for a user to generate content that is extremely similar to a work that isn’t licensed in the AI’s database. Though it would be much more difficult here, it is likely still possible, especially for a determined enough user.
Still, limiting the training library to only licensed images is a good buffer against those potential issues, especially against unwitting infringement.
The bigger issue may be more practical than legal: What will the photographers and artists think?
Even if artists and photographers did sign away these rights in their contracts, that doesn’t mean they approve of this use or direction. Though they might not have legal recourse, they may still consider removing their work from the libraries or declining to work with those services in the future.
The agreement to pay royalties can, at least theoretically, go a long way to soothing potential tensions in this space. But the details are both important and unknown. We don’t know how Adobe or Nvidia will pay royalties, and we don’t know how much those royalties will be worth.
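To see why the details matter, consider a back-of-the-envelope calculation under a purely hypothetical pro-rata scheme; every number below is invented, as neither company has disclosed pool sizes, rates or formulas:

```python
# Hypothetical royalty math: neither Adobe nor Nvidia has disclosed how
# payouts will work, so every figure here is an invented assumption.

royalty_pool = 1_000_000.00   # assumed annual pool, in dollars
library_size = 200_000_000    # assumed number of licensed images
contributor_images = 500      # assumed images from one contributor

# A naive pro-rata split by library share, one of many possible schemes:
payout = royalty_pool * (contributor_images / library_size)
print(f"Annual payout: ${payout:.2f}")  # Annual payout: $2.50
```

Under those invented numbers, a flat split pays a contributor just a few dollars a year, which is why both the size of the pool and the payout formula will shape how artists receive these programs.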
This is going to be a space to watch moving forward: whether the photographers and artists in those libraries feel they are getting a fair deal.
Bottom Line
Both Nvidia and Adobe’s announcements are different from every other AI announcement we’ve seen so far. They began their respective projects from a position of licensing all the content they used and, if the participating artists are largely comfortable with that arrangement, the result could be a very different relationship between an AI and the humans whose work it is trained on.
However, this is almost certainly a bid to create a different kind of AI, one most likely not meant for the broader public. Having to pay for the training libraries means the service will likely be more expensive to use and targeted at commercial users, for whom the copyright concerns are much greater.
Still, it’s very disappointing that this isn’t how AI has worked from day one. There are enough ethical and legal questions around the use of AI that adding another over the source of the training content seems very unwise.
To that end, it’s worth noting that, at the very least, Nvidia and Adobe haven’t been met with nearly the same backlash DeviantArt was. Some of that is likely because they are fairly late in introducing their AIs, but a lot of it is likely due to the ethical and legal sourcing of their training content.