Tagged: tech-culture

Identity Verification is as Bad as It Can Be

  • Posted in cyber

This is an addendum to OS-Level Age Attestation is the Good One, where I talk about the potential of legal standards for age attestation as an alternative to age verification. Not already convinced of the dangers of age verification? The extent of the evil waiting behind identification systems and deanonymization is unspeakably vast, and fortunately it’s getting extensive coverage. Here’s a quick look to get you up to speed.

Direct digital censorship

A lot of the energy behind age verification comes from authoritarians eager to censor political dissent, promote propaganda, and retaliate against critics. This is a power grab, with bills designed to seize power over specific content the government objects to.

Governments are, of course, trying to claim control over “public discourse”. Like all seizures of arbitrary power, the risks here are volatile and unbounded, because they depend on who holds power at any given moment in a political system where power is expected to rotate.

Discord

As a case study, let’s take a look at one of the latest major services to attempt age verification: Discord. At time of writing, Discord is in the process of switching to a “Teen Default” system, where every user is assumed to be a minor unless they can prove their age to Discord. Discord is a communications platform used widely by adults, and during COVID it very intentionally expanded beyond gaming to position itself as a general-purpose global platform, so the assumption that all spaces are for kids is clearly incorrect.1 But Discord is sometimes used by children, and since it’s a communications platform people can use it to communicate horrible things. Boomers have learned they can be insane about this, so Discord is under significant pressure to balance its goal of being a universal communications platform with child safety.

OS-Level Age Attestation is the Good One

  • Posted in cyber

There’s a coordinated effort to use the “child safety” euphemism to cripple the internet with identity verification mandates. That’s bad. But buried in the mix there’s a genuinely good idea with enough political capital that it might stick around and do some good.

Every time I’ve tried to write an article on the topic of child internet safety my energy has fizzled into depression, because as one researches the topic it becomes obvious that everyone with any relevant power is refusing to solve the problem on purpose. It’s demoralizing and it’s been mostly useless for me to do any thought work in this area.

But California’s age attestation bill might be an exception to this. Because it’s age attestation, not age verification, it looks like a significant political step in the right direction, and with the right focus it could do a lot of good. A lot of people have (fairly!) assumed attestation was age verification, or at least laid the groundwork for it, but I think this isn’t the case. There is always the danger of future bad legislation, but OS attestation doesn’t pave the way for it; it provides a strong defense against it. We need a good idea to win the child safety war, not because we’re in dire need of more online child safety, but because addressing the real concerns correctly blocks a whole slew of impossibly dangerous policies.

My ideal age filtering tool is a system of client attestation with trust rooted in the adult administrator, exposed through an OS-level API as a preemptive signal, and enforced by compliant browsers and application stores. And we’re shockingly close to that.
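As a sketch of what that architecture could look like in code — all names here are hypothetical (no OS ships a `get_age_attestation` API or a `Sec-Age-Attestation` header today); this is an illustration of the design, not a real interface:

```python
from dataclasses import dataclass

# Hypothetical OS-level attestation API (illustrative only). The adult
# administrator sets a device-wide flag during setup; apps and browsers
# can read the flag, but no identity information is ever involved.

@dataclass(frozen=True)
class AgeAttestation:
    is_minor: bool  # the only bit ever disclosed to services


class DeviceProfile:
    """Models the admin-controlled device configuration."""

    def __init__(self, minor_mode: bool):
        self._minor_mode = minor_mode  # set once by the administrator

    def get_age_attestation(self) -> AgeAttestation:
        # No name, birthdate, or ID document: the OS attests a single
        # boolean rooted in the administrator's setup choice.
        return AgeAttestation(is_minor=self._minor_mode)


def request_headers(profile: DeviceProfile) -> dict:
    """What a compliant browser might attach to outgoing requests."""
    att = profile.get_age_attestation()
    # Services only see the flag when it is set; adult devices look
    # identical to devices with no attestation support at all.
    return {"Sec-Age-Attestation": "minor"} if att.is_minor else {}
```

The key property is in that last branch: the system only ever volunteers “this is a minor’s device”, so adults reveal nothing at all.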

There is room for improvement

People on the privacy side of the age verification war — my side — will argue that parents already have everything they need for comprehensive web filtering if they want to use it. I think this isn’t quite true; there’s one notable architectural gap that a technical solution could meaningfully fill.

There are many existing content filtering tools geared toward child safety but their weakness is that they’re reactive. Traffic filters can identify and block traffic from known websites and on-device content filters can try to detect and block specific content. But this requires the user reacting and defending against every possible source and behavior. It’s the same cat-and-mouse game as adblockers. And like adblockers, the more closed down the system is — like iOS or gaming consoles — the harder it is for developers to make exactly the right product.
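To see why list-based filtering is structurally reactive, here’s a minimal sketch (all domains made up): the filter only blocks what it already knows about, so every new host wins the first round of the cat-and-mouse game.

```python
# A traffic filter's core check is membership in a known-bad list.
# The list is maintained by a human or a vendor, after the fact.

BLOCKLIST = {"known-bad.example", "mirror-from-last-week.example"}

def is_blocked(host: str) -> bool:
    """Reactive filtering: block only previously catalogued hosts."""
    return host in BLOCKLIST

# A catalogued site is stopped...
print(is_blocked("known-bad.example"))    # True
# ...but a mirror registered yesterday sails through until someone
# notices it and updates the list.
print(is_blocked("fresh-mirror.example"))  # False
```

Attestation inverts this: instead of the defender cataloguing every bad source, compliant providers check one signal up front.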

The internet sometimes assumes minors are supervised — since they have parental consent to have the device in the first place — but this often isn’t the case. It’s very common for minors to have their own phones or tablets with unsupervised access. When they’re online or downloading apps, they’re not sitting with a parent, they’re unsupervised, roaming children. Parents are dropping their kids off in the city.

This isn’t inherently bad; it seems like parents and children both want children to be able to exist independently without granular supervision, and so there’s a desire to make that situation safer. That shouldn’t come at the cost of any adult liberty or even the liberty of children with parental consent; it just means we want an ecosystem that allows for unsupervised children to exist within it.

Right now the burden is on parents to be active defenders protecting their children from a vast ecosystem of companies investing research and capital into optimizing how efficiently they can exploit money and data out of everyone in the world. It would be a meaningful improvement if there were a safe way to prevent some of this exploitation by putting reasonable requirements on providers, so long as this can be done in a way that doesn’t cause more problems.

Political pressure for “child safety” is exploitable

But the lack of a perfect parental control system isn’t the main problem here. The real danger is the push for online identity verification using child safety as a justification.

Smart and privacy conscious people demand “No age verification” (quite reasonably!), but that doesn’t offer the quick fix people are looking for. More importantly, it doesn’t relieve the political pressure and so doesn’t take away the excuses of tyrants.

Normally “do nothing” would be the safest option here, but the danger of uninformed and reactionary voters means there is a great deal to gain by satisfying the concerns safely instead of letting the solution be evil. A technical standard for parents to somehow identify their children as children is the relief valve for dangerous political pressure. This doesn’t appease the fascists and censors. This doesn’t cede them any ground and it’d be wrong to try to; there’s no satisfying that hunger and it’s a dangerous mistake to feed it. What it does is actually improve the material conditions for the people they’re trying to trick.

A proactive system that puts some of the burden for protecting children on those companies would be a real relief valve, if it can be built without causing bigger problems.

Taxonomy

There are three basic categories of age filtering: nothing, client attestation, and client verification. These give services varying levels of confidence in their knowledge of users. (It’s tempting to simplify confidence to labels like “strong” or “weak”, but it’s important to think about what’s actually being secured, and from whom.) Different people call these different things, but here’s my taxonomy with the labels I’ll use.
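For quick reference, the three labels can be written down as an enum — the one-line glosses are my reading of the trade-off each level makes, not a formal spec:

```python
from enum import Enum

# The three categories of age filtering, in increasing order of how
# much a service learns about the user.

class AgeFiltering(Enum):
    NOTHING = "nothing"                  # service learns nothing about age
    ATTESTATION = "client attestation"   # device asserts a flag; anonymous
    VERIFICATION = "client verification" # identity checked; deanonymizing

for level in AgeFiltering:
    print(level.name, "->", level.value)
```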

Anthropic and The Authoritarian Ethic

This has been a wild weekend for the fields of tech policy and AI safety. As a writer I am not normally a news guy, but this moment has felt like kind of a perfect microcosm of both the AI industry and the Trump administration’s flavor of petulant authoritarianism.

The AI company Anthropic — known for their engineering-focused chatbot Claude — was founded by former OpenAI employees who left to form their own company because they weren’t satisfied with OpenAI’s safety standards. Anthropic’s prioritization of ethics and care has not been a handicap; it has led to Claude, the best LLM product on the market today. In July 2025 Anthropic was awarded a two-year $200 million contract with the Department of Defense to support AI for use in classified government environments, mirroring similar contracts the government made with other companies. Despite competition from ChatGPT and Llama, Claude was the highest-quality product and the only one approved for use in classified military systems.

But Anthropic’s culture of (relative) corporate responsibility set it up to be the target of a frenzy the Trump people had already worked themselves into: the specter of “woke AI.” The executive order “Preventing Woke AI in the Federal Government” (July 2025) was an ideological rant typical of Trump’s executive orders, filled with false and foolish assertions to justify banning LLMs used in federal workflows from “incorporating concepts” like “DEI”, “intersectionality”, and “transgenderism”.

A Hack is Not Enough

  • Posted in cyber

Recently we’ve seen sweeping attempts to censor the internet. The UK’s “Online Safety Act” imposes broad restrictions on speech and expression. It’s disguised as a child safety measure, but its true purpose is (avowedly!) deliberate control over “services that have a significant influence over public discourse”. And similar trends threaten the US, especially as lawmakers race to categorize more and more speech as broadly harmful.

A common response to these restrictions has been to dismiss them as unenforceable: that’s not how the internet works, governments are foolish for thinking they can do this, and you can just use a VPN to get around crude attempts at content blocking.

But this “just use a workaround” dismissal is a dangerous, reductive mistake. Even if you can easily defeat an attempt to impose a restriction right now, you can’t take that for granted.

Dismissing technical restrictions as unenforceable

There is a tendency, especially among technically competent people, to use the ability to work around a requirement as an excuse to avoid dealing with it. When there is a political push to enforce a particular pattern of behavior — discourage or ban something, or make something socially unacceptable — there is an instinct for clever people with workarounds to respond with “you can just use my workaround”.

I see this a lot, in a lot of different forms:

  • “Geographic restrictions don’t matter, just use a VPN.”
  • “Media preservation by the industry doesn’t matter, just use pirated copies.”
  • “The application removing this feature doesn’t matter, just use this tool to do it for you.”
  • “Don’t pay for this feature, you can just do it yourself for free.1”
  • “It’s ‘inevitable’ that people will use their technology as they please regardless of the EULA.”
  • “Issues with digital ownership? Doesn’t affect me, I just pirate.”

You can Google it

  • Posted in cyber

The other day I had a quick medical question (“if I don’t rinse my mouth out enough at night will I die”), so I googled the topic as I was going to bed. Google showed a couple search results, but it also showed Answers in a little dedicated capsule. This was right on the heels of the Yahoo Answers shutdown, so I poked around to see what Google’s answers were like. And those… went in an unexpected direction.

Should I rinse my mouth after using mouthwash? Why is it bad to swallow blood? Can a fly live in your body? What do vampires hate? Can you become a vampire? How do you kill a vampire?

So, Google went down a little rabbit trail. Obviously these answers were scraped from the web, and included sources like exemplore.com/paranormal/ which is, apparently, a Wiccan resource for information that is “astrological, metaphysical, or paranormal in nature.” So possibly not the best place to go for medical advice. (If you missed it, the context clue for that one was the guide on vampire killing.)

There are lots of funny little stories like this where some AI misunderstood a question. Like this case where a porn parody got mixed into the bio for a fictional character, or that time novelist John Boyne used Google and accidentally wrote a video game recipe into his book. (And yes, it was a Google snippet.) These are always good for a laugh.

Wait, what’s that? That last one wasn’t funny, you say? Did we just run face-first toward the cold brick wall of reality, where bad information means people die?

Well, sorry. Because it’s not the first time Google gave out fatal advice, nor the last. Nor is there any end in sight. Whoops!

Is (git) master a dirty word?

  • Posted in cyber

Git is changing. GitHub, GitLab, and the core git team have made a series of changes to phase out the use of the word “master” in the development tool, after a few years of heated (heated) discussion. Proponents of the change argue “slavery is bad”, while opponents inevitably end up complaining about the question itself being “overly political”. Mostly. And, with the tendency of people in the computer science demographic to… let’s call it “conservatism”, this is an issue that gets very heated, very quickly. I have… thoughts on this, in both directions.

Formal concerns about problematic terminology in computing (master, slave, blacklist) go back to at least 2003; this is not a new conversation. The push for this in git specifically started circa 2020. There was a long thread on the git mailing list that went back and forth for several months with no clear resolution. It cited Python’s choice to move away from master/slave terminology, which was formally decided on as a principle in 2018. In June of 2020, the Software Freedom Conservancy issued an open letter decrying the term “master” as “offensive to some people.” In July 2020 GitHub began constructing guidance to change the default branch name, and in 2021 GitLab announced it would do the same.


First, what role did master/slave terminology have in git, anyway? Also, real quick, what’s git? Put very simply, git is change tracking software. Repositories are folders of stuff, and branches are versions of those folders. If you want to make a change, you copy a version, modify it, and slot it back in. Git helps you do that and also does some witchery to allow multiple people to make changes at the same time without breaking things, but that’s not super relevant here.

The master version that changes are based on is called the master branch, and it’s just a branch named master. Changes are made on new branches (which start as copies of the master branch) and can be named anything. When a change is final, it’s merged back into the master branch. Branches are often deleted after they’re merged.
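To make that copy-modify-merge cycle concrete, here’s a toy model in plain Python — not real git, just the branching concept: a repository as a mapping of branch names to snapshots.

```python
# Toy model of git branching (not real git): a repo maps branch names
# to snapshots, and a snapshot maps file names to contents.

repo = {"master": {"readme.txt": "v1"}}

def branch(repo, source, name):
    """New branches start as copies of an existing branch."""
    repo[name] = dict(repo[source])

def merge(repo, source, target):
    """Merging folds a branch's changes back into the target."""
    repo[target].update(repo[source])
    del repo[source]  # branches are often deleted after merging

branch(repo, "master", "fix-typo")       # copy master
repo["fix-typo"]["readme.txt"] = "v2"    # work happens on the new branch
merge(repo, "fix-typo", "master")        # the change lands in master
```

After the merge, master holds the new version and the working branch is gone — which is also why the master branch’s name matters: it’s the one branch everything else is copied from and merged back into.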

Tabs or Spaces?

  • Posted in dev

“Tabs or spaces” is one of those age-old computer science preference issues (like whether you use Vim or Emacs1) that gives people a binary choice they get to pick and then grow very attached to, due more to sunk costs and personal identity than anything else. (Good thing that only happens with unimportant stuff.)

I was thinking about this the other day, and I realized that I have an opinion about this, but it’s actually the opposite of what I do. And it’s not because of filesize, or encoding, or anything like that.