Protecting Your Photos from AI Manipulation with Photoguard

For photographers, AI poses not one, but two threats.

The first is the broader one. Various AI image generators use millions of online images to train their systems to create “new” images based on text prompts. This has prompted multiple lawsuits, including a prominent one by Getty Images.

Then there is the more direct threat. Various AI systems exist to take a single image and manipulate it to create a deepfake. This threat is significantly older than the image generation one; such tools have been used to create pornographic deepfakes for years, with Reddit and Twitter handing down bans for that content as far back as 2018.

But while bans have been put into place, lawsuits have been elusive, in large part due to the murky legal territory such content occupies. As such, there has been little that users could do to defend against it, until now.

A team of researchers led by Aleksander Madry at the Massachusetts Institute of Technology (MIT) has developed a program named Photoguard, which they say can make it impossible for an AI to accurately and believably manipulate an image, according to an article on PetaPixel.

Best of all, the software itself is free, released under the MIT License, making it immediately available to anyone who wants to use it.

The idea is both simple and ingenious. However, it has several serious limitations that users need to be aware of before investing resources in it.

How Photoguard Works

The idea behind Photoguard is surprisingly simple. The tool does what its creators describe as “data poisoning”: it introduces invisible noise into an image that hinders an AI from accurately reading the image’s content.

The change is imperceptible to human viewers. However, an AI can no longer manipulate the image convincingly; its attempts produce obvious visual errors.

[Image from the original post]
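To make the idea concrete, below is a minimal sketch of that kind of perturbation, written in PyTorch. This is not the team’s actual code: `encoder` is a stand-in for whatever image encoder a manipulation model relies on, and the names and parameters are illustrative. The loop is a basic projected-gradient-descent routine that nudges the image’s encoding toward a meaningless target while keeping the pixel changes within an invisibly small budget.

```python
import torch

def poison_image(image, encoder, eps=8/255, step=1/255, iters=100):
    """Sketch of 'data poisoning' an image: return a visually near-identical
    copy whose encoded representation has been pushed toward a meaningless
    target, so an editing model works from garbage. Not the official
    Photoguard implementation; `encoder` is a placeholder."""
    image = image.clone().detach()
    # Target representation: whatever the encoder produces for a flat gray image.
    target = encoder(torch.full_like(image, 0.5)).detach()
    # The "invisible noise" we will optimize.
    delta = torch.zeros_like(image, requires_grad=True)

    for _ in range(iters):
        # How far is the perturbed image's encoding from the junk target?
        loss = torch.nn.functional.mse_loss(encoder(image + delta), target)
        loss.backward()
        with torch.no_grad():
            # Step toward the junk target, then clamp the noise so it stays
            # within an imperceptible budget (an L-infinity ball of radius eps).
            delta -= step * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()

    return (image + delta).clamp(0, 1).detach()
```

Note that the cost is not in the noise itself but in the optimization loop, which runs a large model’s encoder many times per image; that is where the scaling concerns discussed below come from.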

Long-time readers of this site might recognize this technique. The idea itself has been around for decades, mostly used to watermark and fingerprint images in order to prove ownership and aid in finding the image online.

For example, SignMyImage used an invisible watermark to embed a code in the pixels of the image itself, making it searchable online. A similar approach has also been used in movies to help catch leakers. 

Photoguard does something similar, but for the opposite reason. Rather than embedding more machine-readable data, it tries to make the image more difficult for machines to read.

So far, the results look promising. In the examples, the noise was applied to the images everywhere but the faces. As a result, the AIs tested struggled to cut out the bodies, and the images they produced were unconvincing, even at a glance.

However, this isn’t likely to be a panacea that prevents AI abuse of images. As the developers acknowledge, it may just be the next stage in a game of cat and mouse.

The Limitations of Photoguard

The biggest limitation of Photoguard is one the developers openly acknowledge: it’s a static defense against an adaptive attack.

While the system is effective against today’s AIs, those AIs could, theoretically, be trained to work around Photoguard. This is an especially serious threat if the protection becomes common enough to be worth circumventing.

That said, there may be ways to use an AI to constantly vary Photoguard’s output and continue frustrating malicious users.

Because of this, the developers aren’t positioning this as a tool for individuals. Instead, they are hoping for institutional adoption and for AI providers to offer APIs that allow Photoguard to keep protecting images.

In short, this isn’t meant for the average photographer or artist to use. It’s meant to be used by sites like Facebook, Instagram, DeviantArt and so forth. If paired with cooperation from AI companies, this could prevent all the images on those sites from being misused in this way.

But then comes another problem. Though the developers told Gizmodo in an interview that it only takes “seconds” to apply the noise to an image, the approach is very difficult to scale. The process requires a great deal of computing power and is hard to apply to thousands, let alone millions or billions, of images.

As such, the developers emphasize that this is a proof of concept rather than a finished product.

Where This Leaves Us

All this leaves us in a strange position. While the technology is promising and the provided examples show how powerful this approach can be, there are still major obstacles to it becoming a real solution.

First, the process has to scale. While protecting a small number of images might be useful in a niche case, a solution like this only really makes a difference when a significant percentage of images are protected. 

For example, TinEye, an image search engine, tracks 58.8 billion images online. However, even that only represents a fraction of the photos available on the internet. Getting even 2% of those photos protected would be a herculean task.

Second, even if it can scale, there’s no indication of how long-lasting the protection would be without cooperation from the AI developers. AI, by its very nature, is adaptive and would likely find ways around this, or any other, poisoning technique.

As such, the current iteration of this approach probably isn’t the answer. However, like a lot of great research, it may point us in a direction worth exploring. Others may be able to find ways to solve both of these problems and truly make image protection practical.

The research shows that this approach could work, just not in this particular form.

Bottom Line

To be clear, this system is targeted at preventing deepfakes, but it should frustrate any AI that attempts to ingest or manipulate a protected image. However, it won’t and can’t help those whose images have already been swept up by AI models without their consent.

But that may not be what matters here. 

Human creators have been stunned to learn how their work is used by AIs without their consent. Whether it’s to create malicious deepfakes or to feed an image-generation model, they are eager to set boundaries between their work and AI.

However, that’s proving to be a vexing challenge. While opt-out tags and flags for AI art may help with ethical users of the technology, there are almost no tools to stop less ethical ones.

To that end, data poisoning may be part of the answer. However, the approach needs a lot of improvement and will most likely be most useful when combined with other systems.

There’s simply not going to be a “silver bullet” here. Rather, it’s going to require a combination of approaches, including technologies, legal efforts and new social norms, for any kind of peace to be made between artists and AI.

However, all those things take time to develop and, much to many an artist’s frustration, AI is here today. 
