Why Impersonation is Difficult to Stop
Many years ago, I had a minor issue with impersonation. Someone on Twitter, likely a bot, was using my profile picture and a similar-ish name to sell “essay writing” services.
I caught it quickly thanks to their use of hashtags I track. The account had only posted a few tweets and had almost no followers. I’m pretty sure no one was actually fooled.
I didn’t mention it at the time; it seemed like a minor thing that webmasters with any sizable audience sometimes have to deal with (like the endless offers for a guest post).
However, when Twitter started offering verification, I applied for it. I didn’t actually expect to get it, but I did, and I think it was largely due to that incident. Since then, no one that I’ve seen has tried a similar approach, but I’ve kept an eye out nonetheless.
Still, over the past few weeks, that minor saga has been on my mind due to the ongoing debacle over verification at Twitter.
First, Twitter announced that they would be offering verification as part of a $20 Twitter Blue package, with those who had obtained it previously, including myself, obligated to pay or lose the mark.
After a back and forth with author Stephen King, Twitter’s new owner and CEO, Elon Musk, lowered the price to $8. The checkmarks debuted at that price point a few days later, and the system was almost immediately beset by trolls who used the verification to impersonate other accounts.
Though much of the trolling was fairly harmless, the pharmaceutical company Eli Lilly saw its stock dip following a fake announcement that it was offering insulin for free. In the world of sports, several mainstream media accounts picked up on a fake story about the firing of the Las Vegas Raiders’ coach after a 19-year-old managed to impersonate sports journalist Adam Schefter.
Twitter briefly experimented with using gray checkmarks to indicate celebrity and corporate accounts, but that plan was scrapped within a day. Less than a week after launch, Twitter shuttered the paid verification system entirely.
The results of this experiment were wholly predictable. Countless people warned that such a system would become a haven for trolls.
However, whether you’re rooting for or against Musk, the story does highlight one of the key challenges online: verifying that the person you’re seeing or reading is who they say they are.
It’s an ancient problem on the internet and, unfortunately, it has only been getting worse…
An Ancient Problem
Back in 1991, Phil Zimmermann released an encryption program named Pretty Good Privacy, or PGP. The goal of PGP was simple: to make it easy for individuals to both encrypt and verify emails.
The system worked by having users publish their public keys through a trusted channel. Others could then use a sender’s public key to verify the authorship of a signed email, or use a recipient’s public key to encrypt a message that only that recipient could decrypt.
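For the technically curious, here is a minimal sketch of the sign-and-verify idea at PGP’s core. It uses the third-party Python cryptography package and an Ed25519 key rather than PGP itself, and the key handling is deliberately simplified; it is an illustration of the concept, not PGP’s actual key format or workflow:

```python
# A minimal sketch of the sign/verify idea behind PGP, using the
# third-party "cryptography" package (pip install cryptography).
# Simplified illustration only; PGP's real key formats and
# distribution mechanisms are far more involved.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The author generates a key pair and shares the public key
# through some trusted channel.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"This email really is from me."

# The author signs the message with the private key.
signature = private_key.sign(message)

# Anyone holding the public key can check the signature.
try:
    public_key.verify(signature, message)
    print("Signature valid: the key holder wrote this message.")
except InvalidSignature:
    print("Signature invalid: the message was altered or forged.")
```

The hard part, as PGP’s history shows, was never the math. It was getting the public key into the verifier’s hands through a channel they could trust.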
However, the system never really caught on outside of enthusiasts. Between the complexity of use and the fact that it was of limited value without a critical mass of users, it remained a niche tool.
Though email would get other verification tools, most notably DKIM, SPF and DMARC, they all operate on the server/domain side and require nothing from the individual user. Furthermore, they only verify the sending domain, not the person.
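For those curious where these checks live, all three are published as ordinary DNS TXT records that anyone can query. The sketch below assumes the third-party dnspython package; example.com and the selector1 DKIM selector are placeholders, since real selectors vary by mail provider:

```python
# A sketch of where SPF, DKIM and DMARC records live in DNS,
# using the third-party "dnspython" package (pip install dnspython).
# "example.com" and the "selector1" DKIM selector are placeholders.
import dns.resolver

domain = "example.com"
queries = {
    "SPF":   domain,                            # TXT record on the domain itself
    "DKIM":  f"selector1._domainkey.{domain}",  # TXT record under _domainkey
    "DMARC": f"_dmarc.{domain}",                # TXT record on the _dmarc label
}

for name, qname in queries.items():
    try:
        answers = dns.resolver.resolve(qname, "TXT")
        for record in answers:
            print(f"{name}: {record.to_text()}")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        print(f"{name}: no record published at {qname}")
```

Note that all three records describe a domain’s mail setup; nothing in them identifies the human who actually hit send.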
Over time, email became just one of the ways people communicated online. Online chats, social media, forums and other means of communication rose in popularity and all dealt with impersonation in various ways.
However, one thing that has changed is that impersonation, over time, has impacted more and more people. What was once a problem mainly for celebrities and highly visible individuals can now impact anyone.
For example, a common scam is to impersonate regular people on Facebook and use that trust to trick their friends. To be clear, this isn’t a new problem, news reports about it go back to at least 2011.
As such, it isn’t just celebrities, journalists and officials that need a way to fight impersonation. Yes, they are still the biggest targets, but everyday people are targets as well.
This puts us in a difficult position. On an internet rightly worried about privacy, how do we verify who is who? Is it even possible?
The problem can be split neatly into two parts.
Part 1: Verification Costs Money and Privacy
Verification, simply put, isn’t free. Someone has to take the time and effort to look at the documentation, confirm that it’s authentic, and approve it.
Even if that time is relatively short for a single person, when you try to verify millions or even billions of people, it quickly becomes onerous.
Bots may be able to take on some of the heavy lifting, but, as we’ve seen with Content ID, bots aren’t always great at making nuanced decisions, and even a small percentage of errors will erode trust in the system.
However, the person being verified also makes sacrifices: they are giving up a piece of their privacy. While we all sacrifice a certain amount of privacy just by being online, that ramps up to a new level when you’re handing over official documents to be verified.
This cost, both to the system and the user, is why Twitter initially restricted verification to celebrities, officials and so forth. It was simply too costly to verify everyone and too big an ask for the average user anyway.
Musk’s solution, verifying everyone who sought it for $8, solved the cost issue for Twitter, but it also invited the trolls and pranksters. Simply put, it wasn’t an actual verification, just a blue checkmark that trolls could pass off as one.
Part 2: People Are Sometimes Dumb
Even if part one could be solved and a perfect verification system could be created, it likely still wouldn’t be ubiquitous. Simply put, requiring verification for a platform, especially a public one, has serious dystopian implications. This means that there will always, most likely, need to be room for unverified accounts.
To that end, people are sometimes very dumb. They ignore clear warning signs that something is a scam, and studies show that people are biased against verifying information that aligns with their existing beliefs.
However, even if 99.9% of people don’t fall for the impersonation, a scammer only needs that 0.1% to make it worthwhile.
As humans, we are simply not reliable enough to avoid falling for it. That goes for everyone. We come to the internet with our biases, we come here tired and unfocused, we come here with our emotions high and our logic low. Even the best and brightest among us can (and likely will) get caught on a bad day.
That’s what scammers, trolls and others count on, and creating a verification system that prevents that is nearly impossible.
While that doesn’t mean that verification shouldn’t be tried, especially in a limited capacity, it does mean that even a perfect verification system wouldn’t make the problem go away.
Bottom Line
The last few weeks on Twitter have shone a bright light on one of the internet’s longest-running problems. Though the flameout of the $8 checkmark has been amusing, it points to a bigger issue.
There are plenty of reasons why regular users might want, and legitimately need, verification. However, opening the floodgates to anyone with $8 was clearly not the right move.
However, there really is no simple solution here. Between the cost of verifying people, the privacy issues that come with verification, and the fact that even a complete verification system wouldn’t make the problem go away, this is more a balancing act than a problem with a fix.
Though we all love broad strokes and simple answers to complex problems, this isn’t a case where such answers are going to arise. This has been an ongoing battle for more than three decades, and we seem to be no closer to a solution today than we were then.
But if there is a positive from the past few weeks, it’s that people are talking and thinking about this issue again. That is something that is long overdue.