President Biden Signs Executive Order Calling for AI Watermarking

Earlier today, United States President Joe Biden signed an executive order that directs various government agencies to work with artificial intelligence (AI) companies to ensure that AI is used both safely and ethically.

Though the order sets forth over a dozen initiatives, the focus is on the safety and security of AI systems. It calls for companies developing AI systems that pose a risk to national security, economic security, or public health and safety to share their safety test results with the government.

The order also calls on the National Institute of Standards and Technology to set standards for “red-team testing”, where testers would simulate bad actors attempting to misuse AI systems. Those standards would have to be met before the public launch of an AI system in a bid to ensure that AI systems are “safe, secure and trustworthy”.

Another crucial element is that the Department of Commerce will be required to develop guidance for content authentication and watermarking to make it clear which content is human and which content is AI-generated.

The stated reason for this is to ensure that official government communications cannot be easily impersonated by AI systems, though the order also speaks more broadly of protecting Americans from “fraud and deception” committed by humans using AI systems.

According to the President, the government moved too slowly on the dangers of social media. The hope is that, by moving more swiftly on AI, the country can enjoy the benefits AI may bring while minimizing the harms that could come with it.

All of the regulations are slated to come into effect over the next 90–365 days, with the security-oriented regulations having the quickest turnaround times.

The executive order comes as Congress has been debating taking up AI legislation, and it further calls on Congress to pass legislation targeting data privacy. Other countries have been weighing rules in this space as well, with the UK scheduled to hold an AI safety summit later this week.

All in all, it’s difficult to say what, if any, impact this will have on the AI landscape. But for those who have been worried about the use and misuse of AI, there are definitely reasons to be optimistic.

Security and Watermarking

The focus of the executive order is, broadly, on security. This includes both national security, such as preventing AI from being used to make bioweapons, and personal security, such as preventing AI from being used to scam or trick people.

In that regard, much of this executive order centers around establishing frameworks, calling upon different government branches to develop testing rules and other guidelines that AI companies would have to follow down the road.

As such, there are not many details, as those have to be created by the various government agencies put in charge of them. However, the call for the Department of Commerce to develop guidance for content authentication and watermarking to “clearly label AI-generated content” is an interesting exception.

Within the executive order, this is put in the context of security, focusing on bad actors using AI to impersonate the government, scam individuals or otherwise harm others.

However, such a watermarking system would have major implications in the copyright and plagiarism spaces.

Right now, one of the biggest challenges with AI works is that there is no effective way to detect what is and is not AI-generated. For example, though AI detectors are becoming more common in the classroom, they don’t provide enough accuracy to punish suspected plagiarists without some other verification.

That would change if all (legal) AI content were watermarked in a way that let machines reliably determine what is and is not produced by an AI. The implications this could have for both authorship and copyright are difficult to overstate.

That said, my optimism is pretty tempered at this stage. Not because I believe the intent of this executive order isn’t genuine, but because any such effort has a slew of hurdles to overcome.

Technological and Practical Hurdles

From a technological standpoint, watermarking visual and audio content is fairly straightforward. There are myriad approaches that embed invisible or inaudible watermarks in film, images and audio content.
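To see why the image side is considered well-trodden territory, here is a minimal sketch of least-significant-bit (LSB) embedding, one of the oldest invisible-watermark techniques. The payload and function names are my own, purely for illustration; real systems use far more robust, tamper-resistant schemes.

```python
# Minimal LSB watermark sketch: hide a byte string in the low bits of pixels.
# Illustrative only; production watermarks must survive compression and editing.
import numpy as np

MARK = b"AI-GENERATED"  # hypothetical payload, not a real standard

def embed_watermark(pixels: np.ndarray, mark: bytes) -> np.ndarray:
    """Hide `mark` in the least significant bit of the first pixels."""
    flat = pixels.flatten().astype(np.uint8)  # copy; input stays untouched
    bits = np.unpackbits(np.frombuffer(mark, dtype=np.uint8))
    if bits.size > flat.size:
        raise ValueError("image too small to hold the watermark")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, length: int) -> bytes:
    """Read `length` bytes back out of the LSBs."""
    bits = pixels.flatten().astype(np.uint8)[:length * 8] & 1
    return np.packbits(bits).tobytes()

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
marked = embed_watermark(image, MARK)
assert extract_watermark(marked, len(MARK)) == MARK
```

The changed bits are visually imperceptible, which is exactly what makes this family of techniques workable for images and audio.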

The problem is that many of AI’s most popular uses have centered around text generation.

Though there have been some ways to invisibly watermark text, such as the apostrophe pattern Genius used to identify lyrics copied from its site, there is no widely adopted system that imperceptibly marks text with metadata.
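For instance, one simple (and fragile) approach hides a bit pattern in zero-width Unicode characters, conceptually similar to Genius’s trick of alternating straight and curly apostrophes to spell out a hidden Morse-code message. A minimal sketch, with made-up function names; note that pasting the text into a plain-text field or normalizing it strips the mark entirely:

```python
# Zero-width-character text watermark sketch. Illustrative only: trivial
# text normalization removes the mark, which is part of the problem.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def mark_text(text: str, tag: bytes) -> str:
    """Append `tag`, bit by bit, as invisible characters."""
    bits = "".join(f"{byte:08b}" for byte in tag)
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def read_mark(text: str) -> bytes:
    """Recover the tag from any zero-width characters in the text."""
    bits = "".join("1" if c == ZW1 else "0" for c in text if c in (ZW0, ZW1))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

marked = mark_text("This paragraph was machine-written.", b"AI")
print(read_mark(marked))  # b'AI'
```

Newer research approaches instead bias the model’s word choices statistically, but those are not yet standardized or widely deployed either.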

However, even if the technical hurdles of adding watermarks can be quickly solved, there may be a bigger issue on the horizon: Getting anyone to use them.

If these watermarks are to be effective, the reader/viewer/listener needs to be notified of the mark in real time as they are accessing the content. Doing that will be a challenge.

The reason is that AI-generated content can be delivered in a myriad of ways. We would have to ensure that, no matter how the content is being accessed, the user is aware they’re engaging with generated content.

Some of this would be relatively simple, such as requiring browsers to highlight AI content, but we would need similar systems on phones to detect scam AI calls, in TVs to detect AI video and so forth.
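As a toy example of what such a system might do, a browser extension could scan incoming text for hidden markers and prepend a visible disclosure. Everything here, including the marker set and the label format, is a hypothetical illustration rather than any real standard:

```python
# Hypothetical client-side check, e.g. run by a browser extension before
# rendering text. The markers are carried over from the zero-width sketch
# above; the disclosure label is likewise an assumption.
AI_MARKERS = {"\u200b", "\u200c"}  # zero-width space / zero-width non-joiner

def label_if_ai_generated(content: str) -> str:
    """Prepend a visible disclosure when hidden watermark characters appear."""
    if any(ch in AI_MARKERS for ch in content):
        return "[AI-GENERATED] " + content
    return content

print(label_if_ai_generated("Ordinary human prose."))    # unchanged
print(label_if_ai_generated("Marked text.\u200c\u200b"))  # gets the label
```

Writing that check is trivial; mandating and maintaining equivalents of it across browsers, phone dialers and televisions is the hard part.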

This says nothing about attacks that combine AI generation with physical world distribution. Watermarks can’t help much in cases of letters being mailed or content posted to public notice boards. Though these might be fringe cases, physical world vectors are commonly used to scam the most vulnerable, including the elderly.

However, the biggest practical hurdle may simply be enforcement. Right now, AI development is heavily concentrated in a handful of large companies, which are disproportionately based in the United States. That likely won’t stay true for long.

AI, like all technology before it, is only going to become more accessible. That will mean smaller companies getting on board, including ones in nations with weaker regulations.

As we’ve seen with copyright, rules and regulations that only apply to US companies are of limited usefulness on a global internet. While such rules may impact how Google, Microsoft, OpenAI and other companies operate, they may not have any impact on the next generation of AI companies and services.

That, in turn, may be the greatest challenge to regulating AI.

Bottom Line

To be clear, though the executive order is definitely sparse on details, it’s good to see the government taking action on safety and authenticity when it comes to AI.

Watermarking AI content would be a positive step, as there is a real need to separate what is human-created from what is AI-created. That need may shrink if AI becomes more accepted as a tool of authorship rather than a substitute for it, but there will always be potential security and safety issues that need to be addressed.

In short, there will always be times when we need to know the actual source of a work, including whether it was generated by an AI or not. 

Though there are a lot of practical and technological challenges to making such a system work, it’s still a positive sign to see the government thinking about these issues and setting into motion the machinery needed to address them.

Hopefully, through quick action, regulators can find a way to enable the benefits that AI can bring while minimizing the risks and harms. It’s a tall order, and the track record here is not good, but at least there is some reason to have hope.
