On February 6, Los Angeles-based activist Sennett Devermont walked into the Beverly Hills Police Department (BHPD) to file a Freedom of Information Act Request Form.
When he approached the counter, he began filming the officer working the desk. As his Instagram handle, @alwaysfilmthepolice, suggests, this is routine for him.
BHPD Sgt. Billy Fair responded by taking out his phone and playing the Sublime song "Santeria." He ignored Devermont's question for several seconds and turned up the volume.
In a later encounter between the two outside, Sgt. Fair tried the same trick with a different song.
Devermont speculated that Fair played the music hoping it would trip Instagram's copyright filters, causing the livestream to be halted or the uploaded video to be removed. Neither happened.
But all of this raises a simple question: Could the police, or anyone else, use copyright-protected music to sabotage a livestream? The answer depends more on the tech than the law.
Why There Are Copyright Filters and How They Work
Every major livestreaming and video hosting platform, including YouTube, Instagram, Facebook, and Twitch, uses copyright filters of some kind. However, U.S. law doesn't mandate the use of such filters.
Instead, the filters we currently have are largely in place out of convenience and expediency. YouTube's Content ID system currently handles more than 98% of the site's copyright disputes and, without it, YouTube would have to spend far more time and energy dealing with copyright matters.
These filters also help foster relationships with copyright holders, enabling YouTube (and other platforms) to forge cooperative relationships with film and music studios.
The way they work is fairly straightforward: as new content comes in, whether livestreamed or uploaded, the filter compares it to a database of copyright-protected works and, if a match crosses a threshold, takes whatever action is prescribed.
To be clear, that action could be almost anything from simply tracking the use to taking down the video. The response is typically determined by a combination of what the copyright holder wants and what the system is capable of.
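The matching process described above can be sketched in miniature. This is a toy illustration, not how any real system works: actual platforms like YouTube's Content ID use audio fingerprinting over spectral features, while here a "fingerprint" is just a set of hashed features, and all names, scores, and the threshold value are invented for the example.

```python
# Toy sketch of a copyright-filter pipeline: compare an incoming
# upload's "fingerprint" against a database of registered works and,
# if the best match crosses a threshold, return the action the
# rightsholder prescribed. All values here are illustrative.

def similarity(upload_fp: set, reference_fp: set) -> float:
    """Jaccard similarity between two fingerprint sets (0.0 to 1.0)."""
    if not upload_fp or not reference_fp:
        return 0.0
    return len(upload_fp & reference_fp) / len(upload_fp | reference_fp)

def check_upload(upload_fp: set, reference_db: dict, threshold: float = 0.3) -> dict:
    """Return the prescribed action for the strongest match over threshold."""
    best = None
    for work, (ref_fp, action) in reference_db.items():
        score = similarity(upload_fp, ref_fp)
        if score >= threshold and (best is None or score > best[1]):
            best = (work, score, action)
    if best:
        return {"matched": best[0], "score": best[1], "action": best[2]}
    return {"matched": None, "action": "allow"}

# Rightsholders register a fingerprint plus the action they want taken.
reference_db = {
    "Santeria": ({1, 2, 3, 4, 5, 6}, "monetize"),
    "Lets Go Crazy": ({10, 11, 12, 13}, "block"),
}

print(check_upload({1, 2, 3, 4, 99}, reference_db))  # strong match: monetize
print(check_upload({50, 60, 70}, reference_db))      # no match: allow
```

Note that the action lives with the reference entry, not in the matching code; that mirrors how rightsholders, not the platform's detector, decide whether a match means tracking, monetization, or a takedown.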
That, in turn, brings us back to the original question: Could you weaponize copyright to stop a livestream? The answer is theoretically yes, but the details are especially important.
The Weaponization of Copyright
If someone is being filmed and doesn't want the video uploaded to a public site or livestreamed, the thinking goes, they can play copyright-protected music so that the filters catch the video and take it down.
However, as this case showed, that’s not how it usually happens.
First, there’s the question of whether the filter detects the music at all. Given that the music is being played in the background of a noisy environment, it’s unclear how reliably it will be detected.
That said, most filters will likely be able to pick it up. As we saw in the Lenz v. Universal case, an automated detection tool was able to spot Prince's "Let's Go Crazy" playing quietly in the background of a 29-second video of a dancing baby. As such, a minute-long video of "Santeria" would likely be detected.
However, detection does not mean that action is taken. All filters have thresholds that must be crossed. Playing a second or two of a popular song isn't likely to get a response, but uploading the entire track is. The filters also weigh how clear the matching content is and how prominent it is in the video.
A lot of would-be pirates exploit these thresholds to circumvent filtering tools. This frustrates rightsholders, but it also makes it harder to interrupt a livestream just by playing Sublime on a phone.
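The threshold idea above can be made concrete with a small sketch. The two criteria (match duration and prominence in the mix) come from the discussion here, but the cutoff numbers are invented for illustration; platforms don't publish their actual rules.

```python
# Illustrative threshold check a filter might run after detecting a
# match, before taking any action. The cutoffs below are assumptions
# for the example, not real platform policy.

def should_act(match_seconds: float, prominence: float,
               min_seconds: float = 7.0, min_prominence: float = 0.5) -> bool:
    """Act only if the matched audio is both long enough and prominent
    enough (prominence: 0.0 = inaudible, 1.0 = dominates the audio)."""
    return match_seconds >= min_seconds and prominence >= min_prominence

# A couple of seconds of a song in a noisy lobby: below both cutoffs.
print(should_act(match_seconds=2.0, prominence=0.3))    # False
# The full track playing clearly: crosses both cutoffs.
print(should_act(match_seconds=180.0, prominence=0.9))  # True
```

This is why playing a song on a phone in a noisy room is such an unreliable sabotage tool: the stunt has to clear both cutoffs, and neither is under the saboteur's control.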
But even if the use is detected and does cross the threshold for action, that doesn't mean the video is going to be taken down. Rightsholders have worked out myriad agreements with platforms and, often, a match just means the rightsholder collects any royalties from the video or receives attribution.
That said, Instagram, Devermont's platform of choice, does not have such policies and will remove livestreams and videos if its filters detect infringing music. However, Instagram will warn users before taking action on a livestream and give them a chance to address the issue.
To be clear, this is something that CAN work and likely has worked before. However, making it work reliably is a separate matter. The filtering and licensing systems are simply too complex for this to be a predictable weapon.
If this is a particular concern for you, there are things that you can do.
If livestreaming, the most important thing is to pay attention to any warnings you get in the app. If the music (or video) is prominent enough and goes on long enough, it may trip the filters even if you don’t intend it. However, with most systems, you get warnings before action is taken.
If you’re uploading recorded video, consider editing the content out if possible or uploading it to a platform like YouTube that has arrangements with rightsholders. Though the video may get monetized by the rightsholder or blocked in certain countries, it’s much less likely to be blocked completely.
Though weaponizing copyright to stop a livestream can theoretically work, there are a lot of layers between the livestream and the copyright bot that make it an unreliable tool.
For those seeking to sabotage livestreams, there are likely more effective tools to hinder the streamer than simply playing Sublime on your phone.