Tagged: security

Identity Verification is as Bad as It Can Be

  • Posted in cyber

This is an addendum to OS-Level Age Attestation is the Good One, where I talk about the potential of legal standards for age attestation as an alternative to age verification. Not already convinced of the dangers of age verification? The extent of the evil waiting behind identification systems and deanonymization is unspeakably vast, and fortunately it’s getting extensive coverage. Here’s a quick look to get you up to speed.

Direct digital censorship

A lot of the energy behind age verification comes from authoritarians eager to censor political dissent, promote propaganda and retaliate against critics. This is a power grab, with bills designed to seize control over specific content the government objects to:

Governments are, of course, trying to claim control over “public discourse”. Like all seizures of arbitrary power, the risks associated with this are volatile and unbounded, because they depend on who holds power at any given moment in a political system where power is expected to rotate.

Discord

As a case study, let’s take a look at one of the latest major services to attempt age verification: Discord. At the time of writing, Discord is in the process of trying to switch to a “Teen Default” system, where every user is assumed to be a minor unless they can prove their age to Discord. Discord is a communications platform used widely by adults; during COVID, Discord very intentionally expanded its market beyond gaming to focus on being a global platform, so the assumption that all spaces are for kids is clearly incorrect.1 But Discord is sometimes used by children, and since it’s a communications platform, people can use it to communicate horrible things. Boomers have learned they can be insane about this, so Discord is under significant pressure to balance its goal of being a universal communications platform with child safety.

OS-Level Age Attestation is the Good One

  • Posted in cyber

There’s a coordinated effort to use the “child safety” euphemism to cripple the internet with identity verification mandates. That’s bad. But buried in the mix there’s a genuinely good idea with enough political capital that it might stick around and do some good.

Every time I’ve tried to write an article on the topic of child internet safety my energy has fizzled into depression, because as one researches the topic it becomes obvious that everyone with any relevant power is refusing to solve the problem on purpose. It’s demoralizing and it’s been mostly useless for me to do any thought work in this area.

But California’s age attestation bill might be an exception to this. Because it’s age attestation, not age verification, it looks like a significant political step in the right direction, and with the right focus it could do a lot of good. A lot of people have (fairly!) assumed attestation was age verification or at least lays the groundwork for it, but I think this isn’t the case. There is always the danger of future bad legislation, but OS attestation doesn’t pave the way for it; it provides a strong defense against it. We need a good idea to win the child safety war, not because we’re in dire need of more online child safety, but because addressing the real concerns correctly blocks a whole slew of impossibly dangerous policies.

My ideal age filtering tool is a system of client attestation with trust rooted in the adult administrator, exposed through an OS-level API as preemptive attestation, and enforced by compliant browsers and application stores. And we’re shockingly close to that.
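As a rough sketch of what that architecture could look like — every name here (the header, the API shape, the functions) is my own assumption for illustration, not a real OS API or standard:

```typescript
// Hypothetical sketch of OS-level age attestation. The OS holds a single
// flag set once by the adult administrator (a parent); compliant browsers
// read it and attach a signal to outgoing requests, so sites can filter
// preemptively without ever learning who the user is.

type DeviceProfile = {
  // Set by the adult administrator during device setup.
  minorAccount: boolean;
};

// What the OS might expose to compliant browsers and app stores:
// a single yes/no attestation — no identity, no birthdate.
function attestationHeaders(profile: DeviceProfile): Record<string, string> {
  // Only minor accounts assert anything; adults send nothing and stay anonymous.
  return profile.minorAccount ? { "Sec-Age-Attestation": "minor" } : {};
}

// The site's side: serve full content by default, restrict only
// when the client attests minor status.
function shouldRestrict(headers: Record<string, string>): boolean {
  return headers["Sec-Age-Attestation"] === "minor";
}

console.log(shouldRestrict(attestationHeaders({ minorAccount: true })));  // → true
console.log(shouldRestrict(attestationHeaders({ minorAccount: false }))); // → false
```

Note the asymmetry this sketch is built around: the signal only ever flows from a flagged minor device outward, so adult devices are indistinguishable from devices that simply don’t implement the API.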

There is room for improvement

People on the privacy side of the age verification war — my side — will argue that parents already have everything they need for comprehensive web filtering if they want to use it. I think this isn’t quite true; there’s one notable architectural gap that a technical solution could meaningfully fill.

There are many existing content filtering tools geared toward child safety, but their weakness is that they’re reactive. Traffic filters can identify and block traffic from known websites, and on-device content filters can try to detect and block specific content. But this requires the user to react to and defend against every possible source and behavior. It’s the same cat-and-mouse game as adblockers. And like adblockers, the more closed down the system is — like iOS or gaming consoles — the harder it is for developers to make exactly the right product.

The internet sometimes assumes minors are supervised — since they have parental consent to have the device in the first place — but this often isn’t the case. It’s very common for minors to have their own phones or tablets with unsupervised access. When they’re online or downloading apps, they’re not sitting with a parent, they’re unsupervised, roaming children. Parents are dropping their kids off in the city.

This isn’t inherently bad; it seems like parents and children both want children to be able to exist independently without granular supervision, and so there’s a desire to make that situation safer. That shouldn’t come at the cost of any adult liberty or even the liberty of children with parental consent; it just means we want an ecosystem that allows for unsupervised children to exist within it.

Right now the burden is on parents to act as active defenders, protecting their children from a vast ecosystem of companies investing research and capital into extracting money and data from everyone in the world as efficiently as possible. It would be a meaningful improvement if there were a safe way to prevent some of this exploitation by putting reasonable requirements on providers, so long as this can be done in a way that doesn’t cause more problems.

Political pressure for “child safety” is exploitable

But the lack of a perfect parental control system isn’t the main problem here. The real danger is the push for online identity verification using child safety as a justification.

Smart and privacy conscious people demand “No age verification” (quite reasonably!), but that doesn’t offer the quick fix people are looking for. More importantly, it doesn’t relieve the political pressure and so doesn’t take away the excuses of tyrants.

Normally “do nothing” would be the safest option here, but the danger of uninformed and reactionary voters means there is a great deal to gain by satisfying the concerns safely instead of letting the solution be evil. A technical standard for parents to somehow identify their children as children is the relief valve for dangerous political pressure. This doesn’t appease the fascists and censors. This doesn’t cede them any ground and it’d be wrong to try to; there’s no satisfying that hunger and it’s a dangerous mistake to feed it. What it does is actually improve the material conditions for the people they’re trying to trick.

A proactive system that puts some of the burden for protecting children on those companies would provide real relief here, so long as it can be built without causing bigger problems.

Taxonomy

There are three basic categories of age filtering: nothing, client attestation, and client verification. These give services varying levels of confidence in their knowledge of users. (It’s tempting to simplify confidence to labels like “strong” or “weak”, but it’s important to think about what’s actually being secured, and from whom.) Different people call these different things, but here’s my taxonomy with the labels I’ll use.
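The three categories named above can be captured in a sketch like this (the labels come from the text; the one-line glosses are my own reading, not the author’s definitions):

```typescript
// The three basic categories of age filtering, per the taxonomy above.
enum AgeFiltering {
  Nothing,            // the service knows nothing about the user's age
  ClientAttestation,  // the client asserts a status; no identity is revealed
  ClientVerification, // the service checks identity or age documents directly
}

// Each step up gives the service more confidence — and more knowledge
// about the user, which is exactly the trade-off being secured against.
const ordered: AgeFiltering[] = [
  AgeFiltering.Nothing,
  AgeFiltering.ClientAttestation,
  AgeFiltering.ClientVerification,
];
```

The point of the taxonomy is that these are not interchangeable points on a “strength” dial: each one changes who learns what about whom.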

Anthropic and The Authoritarian Ethic

This has been a wild weekend for the fields of tech policy and AI safety. As a writer I am not normally a news guy, but this moment has felt like kind of a perfect microcosm of both the AI industry and the Trump administration’s flavor of petulant authoritarianism.

The AI company Anthropic — known for their engineering-focused chatbot Claude — was founded by former OpenAI employees who left to form their own company because they weren’t satisfied with OpenAI’s safety standards. Anthropic’s prioritization of ethics and care has not been a handicap for them; it has led to Claude, the best LLM product on the market today. In July 2025 Anthropic was awarded a two-year $200 million contract with the Department of Defense to support AI for use in classified government environments, mirroring similar contracts the government made with other companies. Despite the competition with ChatGPT and Llama, Claude was the highest-quality product and the only one approved for use in classified military systems.

But Anthropic’s culture of (relative) corporate responsibility set it up to be the target of a frenzy the Trump people had already worked themselves into: the specter of “woke AI.” The July 2025 executive order “Preventing Woke AI in the Federal Government” was an ideological rant, typical of Trump’s executive orders, filled with false and foolish assertions to justify banning LLMs used in federal workflows from “incorporating concepts” like “DEI”, “intersectionality”, and “transgenderism”.

Client CSAM scanning: a disaster already

  • Posted in cyber

On August 5, 2021, Apple presented their grand new Child Safety plan. They promised “expanded protections for children” by way of a new system of global phone surveillance, where every iPhone would constantly scan all your photos and forward them to local law enforcement if it identified any as containing contraband. Yes, really.

August 5 was a Thursday. This wasn’t dumped on a Friday night in order to avoid scrutiny, this was published with fanfare. Apple really thought they had a great idea here and expected to be applauded for it. They really, really didn’t. There are almost too many reasons this is a terrible idea to count. But people still try things like this, so as much as I wish it were, my work is not done. God has cursed me for my hubris, et cetera. Let’s go all the way through this, yet again.

The architectural problem this is trying to solve

Believe it or not, Apple actually does address a real architectural issue here. Half-heartedly addressing one architectural problem of many doesn’t mean your product is good, or even remotely okay, but they do at least do it. Apple published a 14-page summary of the problem model (starting on page 5). It’s a good read if you’re interested in that kind of thing, but I’ll summarize it here.

5G's standard patents wound it

I remember seeing a whole kerfuffle about 5G around this time last year. Not the mind-control vaccine, the actual wireless technology. People (senators, mostly) were worried about national security, because Huawei (the state-controlled Chinese tech company, who is a threat, actually) was getting its 5G patents through and making its claim on the next-gen tech IP landscape. Maybe Trump even needed to seize the technology and nationalize 5G? Everybody sure had a lot to say about it, but I didn’t see a single person address the core conflict.

Format Wars

Before we get to 5G, let’s go way back to VHS for a minute.

The basic idea of the “format war” is this: one company invents a format (VHS, SD cards, etc.) and makes a push to make their format the standard way of doing things. Everybody gets a VHS player instead of BetaMax, so there’s a market for the former but not for the latter. Now everyone uses VHS. If you’re selling video, you sell VHS tapes, and if you’re buying video, you’re buying VHS. If you invented VHS, this is great for you, because you own the concept of VHS and get to charge everyone whatever you want at every step in the process. And, since everyone uses VHS now, you’ve achieved lock-in.

Now, this creates an obvious perverse incentive. Companies like Sony are famous for writing and patenting enormous quantities of formats that never needed to exist in the first place, because owning the de facto standard means you can collect rent from the entire market. That’s a powerful lure.

And that’s just talking about de facto standards. This gets even worse when you mix in formal standards-setting bodies, which get together and formally declare which formats should be considered “standard” for professional and international use. If you can get your IP written into those standards, it turns a one-time development effort into a reliable cash stream.

Enter SEPs

“5G” is one of these standards set by standards-setting bodies, and it’s a standard packed with proprietary technology. The most important slice of those is called SEPs, or “Standard Essential Patents.” These are the Patents that are Essential to (implementing) the Standard. In other words, these technologies are core and inextricable to 5G itself. This figure represents only the SEPs: