What Facebook Gets Wrong About Revenge Porn
Some problems can't be solved by tech alone...
Earlier this month, Facebook became the subject of controversy over its new pilot to fight revenge pornography.
The pilot, which is launching in Australia, aims to block revenge pornography images before they are published by having users submit their own nude images, which Facebook would then hash (creating a digital fingerprint) and block from being uploaded.
The initiative has been very thoroughly mocked, with many noting the irony that, to prevent revenge porn, Facebook is asking you to submit your nude images.
The mocking didn’t subside after Facebook revealed that human beings would be evaluating the nude images to confirm that they violate Facebook’s terms of service, and that the photos would be stored for a certain period of time.
This all raises the question: Is this really the best or only way to fight revenge pornography?
How Facebook’s Revenge Porn System Works
Sadly, Facebook has become ground zero in the fight against revenge pornography.
As the largest and most popular social network, it has become a destination for those seeking to share nonconsensual pornographic images.
Though the site broadly bans pornography, it still can be, and regularly is, shared in private messages and closed groups, where Facebook’s ability to enforce its terms of service is more limited.
However, since so much of the sharing takes place off of public Facebook, victims may not be aware of how their images are being shared. Even if they are, they may not have a URL that they can point Facebook admins to in order to get the images removed or blocked.
This can be a real challenge for victims, who, though they wish to stop further sharing of their images, have no easy way of doing so. Facebook’s new system aims to combat that exact problem.
The idea is fairly simple: if a Facebook user is alerted to nonconsensual pornography being shared, then, rather than pointing to a link or other version, they upload a copy of the image(s) privately to Facebook via Facebook Messenger (after filing a report with the Australian eSafety Commissioner’s office).
That photo is then blurred (using automated technology) and reviewed by a “specially-trained team” that determines whether or not the image violates the terms of service. This human check is meant to prevent abuse of the system and to keep innocuous images from being blocked.
Once the team determines that it is nonconsensual pornography, Facebook then creates a hash of the image, which it uses to detect and block other uploads of it.
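At a high level, the hash-and-block mechanism works like a blocklist lookup: each confirmed image is reduced to a fingerprint, and every new upload is fingerprinted and checked against the stored set. The sketch below is a toy illustration, not Facebook’s actual system; it uses a cryptographic hash (SHA-256) for simplicity, whereas production systems reportedly use perceptual hashing, which can also match re-encoded or resized copies of an image.

```python
import hashlib


class UploadFilter:
    """Toy model of a hash-based blocklist for uploaded images."""

    def __init__(self):
        self.blocked_hashes = set()

    def fingerprint(self, image_bytes: bytes) -> str:
        # SHA-256 stands in for a perceptual hash here; it only
        # matches byte-identical files, unlike perceptual hashes,
        # which tolerate resizing and re-compression.
        return hashlib.sha256(image_bytes).hexdigest()

    def report(self, image_bytes: bytes) -> None:
        # Called after the review team confirms a violation.
        self.blocked_hashes.add(self.fingerprint(image_bytes))

    def allow_upload(self, image_bytes: bytes) -> bool:
        # Every new upload is checked against the blocklist.
        return self.fingerprint(image_bytes) not in self.blocked_hashes
```

Note that only the fingerprints need to be retained long-term for this scheme to work; nothing about the matching step requires keeping the original image.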
The result, in theory, is that the image(s) are effectively blocked from being uploaded on Facebook.
But is this really the best approach? That’s a much more difficult question to answer.
Unique Problems and Bad Optics
Back in April, Facebook announced a new tool to fight against revenge pornography. The idea was that, when Facebook received a report of revenge porn, the image(s) involved would be hashed and blocked.
The idea is straightforward but, as we discussed above, not well-suited for how Facebook is used to distribute nonconsensual pornography.
Reporting revenge porn is very difficult when it takes place in private messages or closed groups. A victim may simply not have a URL to share. In those cases, providing a reference image is one of the easiest ways to block it.
But while Facebook’s approach is unique and well-targeted to their specific situation, it’s also horribly out of touch with the needs of revenge pornography victims.
As someone who works regularly with victims of revenge pornography (helping them file notices to get their images and videos removed), I know how significant and personal an invasion of privacy this is. The mix of emotions a victim feels upon learning about this violation varies from person to person, but the important thing to remember, as someone trying to help them, is to do no further harm.
That can be a challenge because working to remove the images and videos means viewing them. Even though I’m there to remove the images, I’m still contributing to the invasion of privacy.
Though I work to minimize this, both by gathering only the information needed and by having two female assistants available for clients uncomfortable with a man working the case, there’s little that can change that reality.
Facebook’s approach does not treat these emotions with any delicacy. Asking victims to submit their photos via Facebook Messenger, without being fully transparent about what happens to the photos afterward, isn’t just bad optics; it contributes to the harm that nonconsensual pornography does.
The situation is even worse for those who are simply worried about nonconsensual pornography and are trying to prevent it. For them, this uploading and viewing of their private photos may be the only invasion of privacy they experience.
In short, while it’s often necessary to view nonconsensual pornography to fight it, one should work to minimize the harm done, and that is where Facebook is failing.
An Alternate Approach
The question I and others have is simple: Why is it necessary to upload the images themselves in order to generate the hash?
Creating a hash from an image or video is fairly straightforward and there is open source code that enables you to do it on any machine offline.
The solution is fairly straightforward: after the user files the report with the Australian eSafety Commissioner’s office, have them download software that creates the hashes locally, and then upload those hashes, not the images, to Facebook.
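Local hashing of this sort needs nothing beyond a standard library. Below is a minimal, hypothetical sketch of what such client-side software could do, using SHA-256 for illustration; a real deployment would likely use a perceptual hash instead, so that re-encoded or resized copies of an image still match.

```python
import hashlib
import sys


def hash_file(path: str, chunk_size: int = 65536) -> str:
    """Compute a SHA-256 fingerprint of a file without uploading it.

    The file is read in chunks so even large videos can be
    hashed without loading them fully into memory.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    # The user would submit only these hex strings, never the images.
    for path in sys.argv[1:]:
        print(f"{hash_file(path)}  {path}")
```

The key property is that the fingerprint is a one-way function: Facebook can match future uploads against it, but cannot reconstruct the image from the hash, and the image itself never leaves the victim’s machine.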
Requiring an official report already makes abuse of this system unlikely, but if abuse remains a concern, a human check can be triggered once a match is detected. That way, those who were proactive and whose images were never uploaded never have them viewed, and those whose images were uploaded have their exposure minimized.
Best of all, this solution eliminates the question of whether the images are stored and for how long. Women and men already concerned about whether their images will fall into the wrong hands don’t have to worry about another potential vector for them to appear online.
While this approach isn’t perfect (hashing is still an imperfect technology and there is a potential for abuse) it’s far better than the approach Facebook is trying to take.
Bottom Line
With their current approach, Facebook is asking users to trust them with their most sensitive and personal images.
However, not only have nonconsensual pornography victims already had their trust broken in a very significant way, but Facebook’s history on privacy makes them a company not to be trusted.
Even under the best of circumstances, it would be a major ask to tell a victim of nonconsensual pornography to upload the images involved online. Facebook’s history and its approach to this issue make it outright horrifying.
Still, in some ways Facebook is on the right path. They are acknowledging that their battle against revenge porn is very different from that of other, more public sites. The walled garden nature of Facebook calls for a different approach.
But this is not the answer. It not only risks further harming current victims of revenge pornography, but also harming those who have not yet been victimized and may never be.
Facebook’s solution to this problem may be fine on a technical level, but it is tone deaf and harmful on a human level.
Facebook can and should do better. It’s the least that can be done for the victims of revenge pornography.