Mitigating Copyright Dangers of Using AI Content


Disclaimer: I am not a lawyer, and nothing in this post is intended to function as legal advice. If you have specific questions about your situation, please consult an attorney.

For businesses, generative AI represents an intriguing opportunity. The ability to generate code, text and images has real potential to increase efficiency and help companies do more with less.

However, for all the reasons that companies have to be excited, they also have many more reasons to be cautious.

Even if we set aside the ethical and societal issues raised by heavy use of AI, many early corporate dabbles with AI have not gone well. 

For example, the National Eating Disorders Association (NEDA) fired its human helpline workers following a vote to unionize. NEDA replaced them with a chatbot named Tessa, which ended up giving harmful advice to those suffering from eating disorders.

Similarly, CNET used an AI reporter and, after it was caught plagiarizing, put the program on hiatus as readers complained not just about the quality of the AI reporting, but also about the lack of transparency.

However, on top of those concerns, there are legal risks that come with using AI. An AI is, fundamentally, a black box. Though we know what it was trained on and what it puts out, even the developers of AI systems don’t fully understand how the systems go from A to B.

This is compounded by the fact that most AI systems are trained on copyright-protected works that were used without the permission of their creators. This means that there is a perpetual danger of the AI using that material in an infringing manner.

However, companies are now stepping up to try and help mitigate that specific risk and, in doing so, have crafted two very different approaches.

Adobe and Indemnification

Back in March, Adobe announced that it was launching a new AI image generation tool named Firefly. What made the service unique was that it was trained solely on Adobe’s own image library, along with openly licensed and public domain works.

The system is currently in beta testing, though Adobe announced last week that it will offer indemnification to enterprise customers of the service when it launches later this year.

According to Adobe, this indemnification is similar to the one it already offers for its stock photo and video products. It essentially provides an assurance that, as long as the customer doesn’t use the product in a way that violates their license with Adobe, Adobe will compensate them for any intellectual property issues that arise from the use of the images.

As we discussed before, indemnification is crucial for companies using stock photos. Generally, it is part of the package when a large company licenses images. It provides legal certainty when using the images and shows that the provider is confident the images are free of legal problems.

Most AIs, however, can’t offer that. In fact, with OpenAI, the indemnification goes the other direction, with users agreeing to indemnify OpenAI. Its terms of service further make it clear that the services are provided “as is” and without any warranties at all.

In short, if you use ChatGPT text that turns out to be infringing, you are very much on your own. However, OpenAI has to do it this way because, as we discussed earlier, AIs are black boxes: though the company doesn’t know how ChatGPT created a specific work, it does know the system was trained on unlicensed copyright-protected works.

To that end, if an AI is going to offer indemnification, it has to be built like Adobe’s (and Nvidia’s), trained solely on licensed and public domain materials. That is something the vast majority of AI systems are not.

Advanced Detection of Issues 

Last week, the plagiarism detection service Copyleaks announced a new product, Generative AI Governance, Risk and Compliance (GRC).

The product works similarly to other plagiarism detection tools, but with two key differences. According to Eric Bogard, Copyleaks’ VP of Marketing, the tool is designed specifically to be used on AI-generated text, and it isn’t meant simply to detect copying, but specifically to look for unlicensed content that could create legal issues.

The tool works on both AI-generated code and text and marks content as “Secure & Risk Free,” “Protected and Licensed,” or “Usage with Permission.”

The tool also provides AI detection for content that was supposedly written by a human and, according to their marketing material, can help companies learn about how their work is being used elsewhere.

However, the big idea is still fairly simple. Using software to check behind an AI, similar to how one might check behind a human employee, can help identify legal and ethical issues in the material. 

That, in turn, gives companies the opportunity to remove those materials before they are released to the public.
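For companies building that kind of workflow, the check can be a simple gate in the publishing pipeline: scan each AI-generated draft before release and hold anything flagged for human review. The Python sketch below is only an illustration of the idea; the scan_for_unlicensed_content function and its risk labels are hypothetical placeholders, not Copyleaks’ actual API, and would be replaced by a call to whatever detection service a company actually uses.

```python
# Minimal sketch of a pre-publication "check behind the AI" gate.
# The detection call and risk labels below are hypothetical stand-ins
# for a real third-party scanning service.

from dataclasses import dataclass


@dataclass
class ScanResult:
    label: str      # e.g. "risk_free", "licensed", "needs_permission"
    matches: list   # snippets flagged as potentially unlicensed


def scan_for_unlicensed_content(text: str) -> ScanResult:
    """Placeholder for a request to a detection vendor's API."""
    # In a real pipeline this would send `text` to the vendor and
    # parse the response; here everything passes through unflagged.
    return ScanResult(label="risk_free", matches=[])


def review_before_publishing(drafts: list[str]) -> list[str]:
    """Return drafts that cleared the scan; hold the rest for human review."""
    cleared = []
    for draft in drafts:
        result = scan_for_unlicensed_content(draft)
        if result.label == "risk_free" and not result.matches:
            cleared.append(draft)
        else:
            print(f"Held for review ({result.label}): {draft[:40]}...")
    return cleared


if __name__ == "__main__":
    ai_drafts = ["An AI-generated product description...", "Another generated blurb..."]
    publishable = review_before_publishing(ai_drafts)
    print(f"{len(publishable)} of {len(ai_drafts)} drafts cleared for release.")
```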

That said, it is important to note that we are still early in the creation and testing of these kinds of tools. It is difficult to know exactly how effective they are, and that makes putting too much trust in them potentially risky.

However, if a company is going to use AI-generated work, performing due diligence on that work makes sense, and the fact that such tools already exist may provide a path for mitigating both legal and ethical issues.

Bottom Line

In the end, generative AIs are still very new and there are a large number of unanswered legal questions surrounding both how they are created and how they are used. 

Those legal issues will likely take years (if not decades) to resolve, but many companies are making a push into AI now, feeling that the rewards outweigh the risks.

Still, that doesn’t mean that the risks should not be mitigated. If a company is going to use AI content, it makes sense to either get a legal guarantee, such as what Adobe is offering, or at least check behind the AI’s work, which is what Copyleaks is offering.

However, the big drawback of these mitigation efforts is that they either increase the cost and time required to use AI works or reduce the efficacy of the AI. Mitigating the legal issues around AI means sacrificing some of the benefits that companies seek from AI in the first place.

But that is what companies using AI need to understand: it’s going to be a matter of finding a balance. This means finding the tasks that AI can actually do, and then finding ways to mitigate the risks around that use.

That is not a simple challenge, but it’s clear that companies are stepping up to provide at least some solutions. 
