The "blue check" (a silly colloquialism for an icon that isn't actually blue for the at least 50% of users who browse in dark mode) has become a core aspect of the Twitter experience. It's caught on in other places too; YouTube and Twitch have both borrowed elements from it. It seems like it should be simple. It's a binary badge; some users have it and others don't. And the users who have it are designated as… something.
In reality it's massively confused. The first problem is that "something": it's fundamentally unclear what the significance of verification is. What does it mean? What are the criteria for getting it? It's totally opaque who actually makes the decision and what that process looks like. And what does "the algorithm" think about it? What effects does it actually have on your account's discoverability?
This mess is due to a number of fundamental issues, but the biggest one is Twitter's overloading the symbol with many conflicting meanings, resulting in a complete failure to convey anything useful.
History of Twitter verification
Twitter first introduced verification in 2009, when baseball man Tony La Russa sued Twitter for letting someone set up a parody account using his name. It was a frivolous lawsuit by a frivolous man who has since decided he's happy using Twitter to market himself, but Twitter used the attention to announce its own approach to combating impersonation on the platform: Verified accounts.