blogs by Gio

Fake Twitter Accounts

  • Posted in 🖱 cyber

Remember when Elon Musk was trying to weasel out of overpaying for Twitter? During that very specific May–July 2022 window, an entirely artificial discourse was manufactured around the problem of “fake accounts” on Twitter.

The reason it was being brought up was very stupid, but the topic stuck with me, because it’s deeply interesting in a way that the conversation at the time never really addressed.

So this is a ramble on it. I think this is all really worth thinking about, just don’t get your hopes up that it’s building to a carefully-constructed conclusion. ;)

Argument is stupid

First, to be clear, what was actually being argued at the time was exceedingly stupid. I’m not giving that any credit.

After committing to significantly overpay for Twitter with no requirement that he do any due diligence (yes, really!), Elon Musk tried to call off the deal.

This was a pretty transparent attempt to get out of the purchase agreement after manipulating the price, and it was correctly and widely reported as such.

Scott Nover, “Inside Elon Musk’s legal strategy for ditching his Twitter deal”

Elon Musk has buyer’s remorse. On April 25, the billionaire Tesla and SpaceX CEO agreed to buy Twitter for $44 billion, but since then the stock market has tanked. Twitter agreed to sell to Musk at $54.20 per share, a 38% premium at the time; today it’s trading around $40.

That’s probably the real reason Musk is spending so much time talking about bots.

I don’t want to get too bogged down in the details of why Elon was using this tactic, but fortunately other people wrote pages and pages about it, so I don’t have to.

Twitter v. Musk Complaint, July 12 2022 In April 2022, Elon Musk entered into a binding merger agreement with Twitter, promising to use his best efforts to get the deal done. Now, less than three months later, Musk refuses to honor his obligations to Twitter and its stockholders because the deal he signed no longer serves his personal interests. Having mounted a public spectacle to put Twitter in play, and having proposed and then signed a seller-friendly merger agreement, Musk apparently believes that he — unlike every other party subject to Delaware contract law — is free to change his mind, trash the company, disrupt its operations, destroy stockholder value, and walk away.

Musk’s exit strategy is a model of hypocrisy. One of the chief reasons Musk cited on March 31, 2022 for wanting to buy Twitter was to rid it of the “[c]rypto spam” he viewed as a “major blight on the user experience.” Musk said he needed to take the company private because, according to him, purging spam would otherwise be commercially impractical. In his press release announcing the deal on April 25, 2022, Musk raised a clarion call to “defeat the spam bots.” But when the market declined and the fixed-price deal became less attractive, Musk shifted his narrative, suddenly demanding “verification” that spam was not a serious problem on Twitter’s platform, and claiming a burning need to conduct “diligence” he had expressly forsworn.

…But Musk exhibited little interest in understanding Twitter’s process for estimating spam accounts that went into the company’s disclosures. Indeed, in a June 30 conversation with Segal, Musk acknowledged he had not read the detailed summary of Twitter’s sampling process provided back in May. Once again, Segal offered to spend time with Musk and review the detailed summary of Twitter’s sampling process as the Twitter team had done with Musk’s advisors. That meeting never occurred despite multiple attempts by Twitter.

Mike Masnick, “Musk’s Attempt To Get Out Of The Twitter Deal Proceeding Exactly As Predicted; What Happens Next?” There is no actual escape hatch… Musk made a legal agreement to pay $44 billion for the company and can’t just walk away.

As we noted back in June, [Elon] appeared to have hired some very expensive lawyers to come up with some sort of pretext for walking away, and it’s playing out exactly in the manner described. Musk had specifically waived his rights to due diligence prior to the deal, but the merger agreement did include a promise to provide Musk with necessary data to conclude the deal.

…his second attempt to come up with an excuse was to claim that Twitter publicly lied to the SEC in its filings regarding how much spam was counted among its monetizable daily active users. This also seemed ridiculous, as Twitter had been publicly reporting those numbers for quite some time, and Musk could have explored those prior to the deal itself but, again, deliberately chose to waive those rights. You can’t do a deal in which you agree not to explore the data, and then complain that you hadn’t seen the data.

On the whole, it seems fairly blatantly obvious that all of Musk’s excuses here are pretextual, and plotted out by his lawyers to try to get him out of a deal that didn’t actually have an escape hatch. The question before the court, really, is whether or not it matters that he’s obviously trying to escape a deal that he agreed to.

Only convinced the worst people in the world

Suffice it to say, this was all bullshit and transparently so. Elon’s argument in 2022 that Twitter’s value hinged on its ability to block crypto spam was never seriously believable, by anyone, then or now. It was the thinnest possible pretense for arguing he shouldn’t have to pay his bills.

So the worst people in the world all got behind it…

AG Paxton Launches Investigation Against Twitter for Potentially Deceiving Texas Consumers, Texas Businesses Over Fake Bot Accounts
On Twitter, “bots” are automated, non-human accounts that can do virtually the same things as real people: send tweets, follow other users, and like and retweet others’ posts. Spam accounts like these inflate followers and reach, and often push deceptive and annoying activity. Bot accounts can not only reduce the quality of users’ experience on the platform but may also inflate the value of the company and the costs of doing business with it, thus directly harming Texas consumers and businesses.

“Texans rely on Twitter’s public statements that nearly all its users are real people. It matters not only for regular Twitter users, but also Texas businesses and advertisers who use Twitter for their livelihoods,” said Attorney General Paxton. “If Twitter is misrepresenting how many accounts are fake to drive up their revenue, I have a duty to protect Texans.”

…but not really anybody else.

Elon Musk tells Twitter he wants out of his deal to buy it (CNN, Jul 9 2022) Musk has for weeks expressed concerns, without any apparent evidence, that there are a greater number of bots and spam accounts on the platform than Twitter has said publicly. Analysts have speculated that the concerns may be an attempt to create a pretext to get out of a deal he may now see as overpriced, after Twitter shares and the broader tech market have declined in recent weeks.

Twitter’s stock is trading around $36, down nearly 30% since its price the day Musk and Twitter announced the acquisition and well below the $54.20 per share Musk offered, suggesting deep skepticism among investors about the deal going through at the agreed upon price. The declining value may also be among the reasons Musk is no longer interested in the deal, analysts have said.

“The way these things usually work is that if there’s a billion-dollar breakup fee and you’re the one trying to acquire, then that is enforced against you,” Tobias said, “unless there’s some kind of material breach or some kind of reason that can be offered up that persuades a court that Twitter, for example, is not making good on the deal.”

Argument is interestingly stupid

OK, but that’s all the boring, first-level stupid context. Under all that, there’s a question being begged that’s actually interesting to contemplate: what are these “fake accounts” that are being complained about?

“Fakeness” is a weird idea

Fake means a few things: something false or misleading, or something that is inauthentic or counterfeit. So if I present something as being something it isn’t, it’s a fake [one of those].

“Fakeness” is an ontological parasite, like a hole: it only exists in relation to something else. Fakeness is only meaningful in relation to some criteria. There is no fakeness attribute. Fool’s gold isn’t somehow intrinsically fake, it’s a real substance that exists. (In fact, people are getting excited about it in its own right.) But we can call it fake gold, or fake in relation to gold, because someone might think it’s gold even though it doesn’t meet the criteria for what it means to be gold. A picture of a frog is a real image but a fake frog. The treachery of images comes from the viewer.

Ceci n'est pas une pipe.

That “fakeness” relationship is defined by the context in which a thing is presented. A fish is not a Twitter account, but it’s not a fake Twitter account unless someone claims it’s a real one.

So in order to ascribe “fakeness” to something, we need criteria to compare it against. Simple enough. So, before we know if an account is fake or not, we need the criteria for a Twitter account to count as “real”.

“Account” is a weird idea

Serious question: what are Twitter accounts even supposed to represent?

People? No, accounts like @bbc represent companies, and are even labelled as “official organizations”.

Legal entities? No, there are lots of subdivisions and topic accounts, like @verified, that correspond to an idea that doesn’t have its own legal representation.

There are a whole lot of things a Twitter account can represent. Here are a few I can think of off the top of my head:

  • A person
    • Legal name
    • Pseudonymous
  • A person’s thoughts on a specific topic
    • Fandom accounts
    • Personal/private accounts
    • Art accounts
  • A person acting in a specific role
    • Government officials often have multiple accounts, one personal and one for the role
  • A corporation
  • A brand owned by a corporation
  • A brand or project not owned by a corporation
  • A specific event or effort
  • A news topic
  • An update log
    • @elonjet
  • An interface to a program or automation
    • Automated support chatbots
    • Media downloaders

What coherent definition is there that cleanly encapsulates all these different use cases? The answer is to make the definition the same way people choose to use accounts: by functionality. An account is an organizational unit that can be discretely labelled, followed, blocked, and interacted with via any other platform features. It’s appropriate to have a separate account any time you want those functions.
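That functional definition is basically an interface: anything that supports the platform’s operations counts as an account, regardless of what it represents. Here’s a minimal sketch of that idea as a Python protocol — the method names are my own illustrative labels, not Twitter’s actual API:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class Account(Protocol):
    """An account is defined by what you can do with it,
    not by what it represents.
    (Hypothetical method names, not a real Twitter API.)"""

    def follow(self) -> None: ...
    def block(self) -> None: ...
    def mention(self) -> None: ...


class UpdateBot:
    """An automated update log satisfies the interface
    just as well as a person's personal account does."""

    def follow(self) -> None:
        pass

    def block(self) -> None:
        pass

    def mention(self) -> None:
        pass
```

The point of the sketch: nothing in the interface says anything about *who or what* is behind the account, which is exactly why so many different use cases fit.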

Why bother?

If we put those two conclusions together naively, we end up with the statement that it’s not possible for something to be a “fake account”. Our criteria say an account is an account if it usefully fulfills the functions of an account, and every account exists because someone created it, so every account is serving its function.

That doesn’t seem to be useful, though, because there are plenty of accounts I can pull up that nearly everyone will agree are “fake”.

But we can again find that there’s no easy definition that captures this.

  • It’s not automated behavior: Ars posted this story automatically, on behalf of an organization, and it’s not fake.
  • It’s not being “real people”: organizations and other kinds of entities have not-fake accounts.
  • It’s not coordinated political behavior: most sincerely political action comes in the form of people coordinating.
  • It’s not impersonation: parody accounts aren’t “fake”, and most “fake” accounts aren’t even attempting it.
  • It’s not just lying.
  • It’s not just scamming.

Is it one person running alternate accounts under the same name, the kind Elon Musk said he didn’t have until his court deposition proved that was a lie? That feels closer, but it’s still not quite right.

If there’s no concrete definition for what a “true” account should be, is there any limiting principle on it?

Sure there is. We just have to go back to the point that showed fake accounts exist in the first place: there’s a “fake” label that’s useful.

That’s an answer. The determining factor for whether it’s a “valid” account is whether it’s worthwhile to have something that provides the functionality of an account. It’s not objective, and it’s not an easy criterion to adjudicate at scale, but it is a perfectly coherent one.

Note how we’ve wrapped back around to how “fakeness” has to be in relation to something else. Here, it’s in contrast to “genuine”, worthwhile social behavior.

There’s a good thread from ex-Reddit-man Yishan that makes this point about moderation: moderation is usually not about mechanical functions but about utility, what he describes as socially desirable behavior.

Why is it necessary to police this?

That’s a question it’s always good to ask before going in guns blazing to regulate behavior. Is that appropriate to do in the first place?

User agency

Whether an account is “fake” is poorly defined and hard, if not impossible, to measure correctly at scale. But the “fake account” topic has a more fatal problem than that: all accounts are agents of people. For every account, fake or not, someone set the account up intentionally and is using it for their own ends.

This is always true. Digital person-like entities are user agents, agents of people that perform tasks. I have a lot to write about this subject, but I’ll touch on it briefly here. Whether someone makes a tweet from their personal account, or a side account, or if they set up some automation to do it, the tweet exists because of the will of a person.

When we’re already talking about digital entities, drawing a distinction between “automated” actions and “real” actions is not only impossible, it’s fundamentally wrongheaded.

Every action represents a human directing a computer to perform an action, and trying to distinguish which modes of input are legitimate always excludes people from participation. This is the API problem. Your system is defined by its interaction with human-prompted entities. Whether it’s automatic or semi-automatic, a human is willing each interaction to happen.1

So we have to be very hesitant to cripple any actor, because that’s crippling a person.

So let’s take another step back. Why is it important to be able to label accounts as “fake” or not? Creating the theoretical tool that identifies and blocks fake accounts means creating a new power that can cripple people: a tool which, even used in good faith, will exclude people due to false positives. What reasons are there to make this, and do they justify the danger?


The reason most salient to Elon’s original objection is the completely pragmatic one: advertising.

Advertisers don’t want to buy clicks, they want to buy attention. And that attention needs to come from “real” people, people who might actually act on the advertisement and buy the product. Being able to show that real people are looking at advertisements is what gives them value, and is ultimately what gives Twitter as a whole most of its monetary value.

This really only concerns automated bots, though. And we know we want some of those!


Twitter’s real power isn’t in its advertisements, but its ability to shape public discourse. If people talk on a platform, the design of that platform is going to affect what and how they communicate, about everything.

If you use Twitter like a social space, you’re going to pick up information about the general disposition of people, just like any other social space. What ideas are liked? What ideas are hated? What politicians are supported and how strong is their mandate?

Bots can contribute to this problem, but they’re not the only problem. Actors as banal as individual trolls or as powerful as nation-states can set up alternate accounts and flood specific topics with specific ideas to create the effect of public support where none exists. Fake accounts can misrepresent public perception — or what ideas come from what demographics.

What actually makes sense

So, how does all that inform how we relate to fake accounts?

Well, it turns out that moderation is hard. That much shouldn’t be a surprise.

And yes, if you have to reduce what people currently mean by fake accounts to a definition, you can.

An account is fake if it’s used by an automated system to send unwanted messages OR if it reports falsified advertising metrics that don’t correspond to human attention OR if it presents itself as a specific person or role that it is not for the purpose of deceiving users OR if it ascribes a role to a person that they do not have OR if it persistently posts false information.

Good luck measuring that.
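Spelled out mechanically, that definition is just a disjunction of clauses. The logic is trivial; the catch is that every input is a judgment about intent, attention, and truth, not an observable property of a profile. A sketch, with field names that are my own labels for the clauses above:

```python
from dataclasses import dataclass


@dataclass
class AccountJudgment:
    """One boolean per clause of the definition above.
    None of these are observable properties of a profile;
    each requires adjudicating human intent or truthfulness."""

    sends_unwanted_automated_messages: bool
    falsifies_ad_metrics: bool
    impersonates_to_deceive: bool
    ascribes_false_role: bool
    persistently_posts_falsehoods: bool


def is_fake(a: AccountJudgment) -> bool:
    # The definition itself is a plain OR; producing the inputs
    # at scale is the part that's intractable.
    return (
        a.sends_unwanted_automated_messages
        or a.falsifies_ad_metrics
        or a.impersonates_to_deceive
        or a.ascribes_false_role
        or a.persistently_posts_falsehoods
    )
```

Evaluating the OR takes nanoseconds; filling in any one of those booleans for hundreds of millions of accounts is the whole problem.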

But that’s not to say fake accounts aren’t a problem. There are some behaviors we do want to block, and while the human behind them is responsible, their existence in no way makes the behavior acceptable. Spam is bad. Political manipulation is a problem. Scams, especially automated scams, can be extremely harmful. So what is it about the “account” that makes these human behaviors so much worse?

I think the most obvious problem with fake accounts is they represent a power imbalance. Spam bots exert power over your viewing experience to force you into interactions you don’t want, and they do it at a scale a human could not. Botnets and astroturf campaigns aren’t dangerous because of the input method, they’re dangerous because they unduly multiply someone’s political power.

The thing you need to combat isn’t the agent, it isn’t the input method, it isn’t a measurable quality of any given profile: it’s power.

It seems like what’s really objectionable is the case where someone’s power over your life is proportional to their resources instead of your consent.

But it’s really hard to admit that.

  1. Unless you have a user agent (browser, app, etc.) that isn’t fulfilling its user-agent responsibility and is performing tasks the user doesn’t want to happen. I’m counting bad design in this too.

Howdy! If you found my writing worthwhile, the best thing you can do to support me is to share an article you found interesting somewhere you think people will appreciate it. Thanks as always for reading!