There’s a coordinated effort to use the “child safety” euphemism to cripple the internet with identity verification mandates. That’s bad. But buried in the mix there’s a genuinely good idea with enough political capital that it might stick around and do some good.
Every time I’ve tried to write an article on the topic of child internet safety my energy has fizzled into depression, because as one researches the topic it becomes obvious that everyone with any relevant power is refusing to solve the problem on purpose. It’s demoralizing and it’s been mostly useless for me to do any thought work in this area.
But California’s age attestation bill might be an exception to this. Because it’s age attestation, not age verification, it looks like a significant political step in the right direction, and with the right focus it could do a lot of good. A lot of people have (fairly!) assumed attestation was age verification or at least lays the groundwork, but I think this isn’t the case. There is always the danger of future bad legislation, but OS attestation doesn’t pave the way for it, it provides a strong defense against it. We need a good idea to win the child safety war, not because we’re in dire need of more online child safety, but because addressing the real concerns correctly blocks a whole slew of impossibly dangerous policies.
My ideal age filtering tool is a system of client attestation with trust rooted in the adult administrator, provided by an OS-level API as preemptive verification, and enforced by compliant browsers and application stores. And we’re shockingly close to that.
There is room for improvement
People on the privacy side of the age verification war — my side — will argue that parents already have everything they need for comprehensive web filtering if they want to use it. I think this isn’t quite true; there’s one notable architectural gap that a technical solution could meaningfully fill.
There are many existing content filtering tools geared toward child safety but their weakness is that they’re reactive. Traffic filters can identify and block traffic from known websites and on-device content filters can try to detect and block specific content. But this requires the user reacting and defending against every possible source and behavior. It’s the same cat-and-mouse game as adblockers. And like adblockers, the more closed down the system is — like iOS or gaming consoles — the harder it is for developers to make exactly the right product.
The internet sometimes assumes minors are supervised — since they have parental consent to have the device in the first place — but this often isn’t the case. It’s very common for minors to have their own phones or tablets with unsupervised access. When they’re online or downloading apps, they’re not sitting with a parent, they’re unsupervised, roaming children. Parents are dropping their kids off in the city.
This isn’t inherently bad; it seems like parents and children both want children to be able to exist independently without granular supervision, and so there’s a desire to make that situation safer. That shouldn’t come at the cost of any adult liberty or even the liberty of children with parental consent; it just means we want an ecosystem that allows for unsupervised children to exist within it.
Right now the burden is on parents to be active defenders protecting their children from a vast ecosystem of companies investing research and capital into optimizing how efficiently they can exploit money and data out of everyone in the world. It would be a meaningful improvement if there were a safe way to prevent some of this exploitation by putting reasonable requirements on providers, so long as this can be done in a way that doesn’t cause more problems.
Political pressure for “child safety” is exploitable
But the lack of a perfect parental control system isn’t the main problem here. The real danger is the push for online identity verification using child safety as a justification.
Smart and privacy conscious people demand “No age verification” (quite reasonably!), but that doesn’t offer the quick fix people are looking for. More importantly, it doesn’t relieve the political pressure and so doesn’t take away the excuses of tyrants.
Normally “do nothing” would be the safest option here, but the danger of uninformed and reactionary voters means there is a great deal to gain by satisfying the concerns safely instead of letting the solution be evil. A technical standard for parents to somehow identify their children as children is the relief valve for dangerous political pressure. This doesn’t appease the fascists and censors. This doesn’t cede them any ground and it’d be wrong to try to; there’s no satisfying that hunger and it’s a dangerous mistake to feed it. What it does is actually improve the material conditions for the people they’re trying to trick.
A proactive system that puts some of the burden for protecting children on those companies is a real relief to this, and it would be a meaningful improvement if something could address this without causing bigger problems.
Taxonomy
There are three basic categories of age filtering: nothing, client attestation, and client verification. These provide services with varying levels of confidence in their knowledge of users. (It’s tempting to simplify confidence to labels like “strong” or “weak,” but it’s important to think about what’s actually being secured, and from whom.) Different people call these different things, but here’s my taxonomy with the labels I’ll use.
The simplest case is nothing. One often doesn’t need age filtering at all. There may not be any system of authentication, or the same basic product may be available to authenticated and anonymous users alike. This is the public internet: all the same information is available to everyone. This provides the lowest level of confidence, but it’s usually not necessary for websites to have any information about their users in the first place. This includes sites like Wikipedia: anyone can look up sexual reproduction without an account, and in keeping with its own principles of non-censorship there are no mechanisms to restrict information. It’s an encyclopedia!
Age Attestation
The next case is client attestation. In attestation systems the client asserts their age in some sort of persistent way, and this is usually required for use. This is a trust-based system: the client attests or declares their own age and the service respects their statement. Depending on the mechanism this can provide widely varying levels of confidence. This includes any site that requires you to provide your age during account setup and only shows certain material to appropriately configured accounts. This has become an extremely common design pattern, especially for social media sites with a mix of child-friendly and adult content.
With attestation, services start with no knowledge about the user until they get an age signal. The service decides how it wants to handle this; they can provide anonymous users a limited view (requiring an affirmative signal before showing adult content, for instance) or simply require age information to use the service at all. When attestation is required for use it’s a form of preventative control rather than reactive content filtering.
Client attestation also includes any account system with a parental control feature where an account can be registered as a child managed by an adult administrator. This is omnipresent in the operating system space: it’s supported by Apple and Microsoft for their general purpose computers, and even Linux distributions like Ubuntu. It’s also supported on gaming systems including Xbox, PlayStation, Nintendo, Steam, etc.
This is ubiquitous and when people talk about existing controls being available and sufficient, this is why. It’s a myth that any of this is missing, or that “the off button doesn’t exist” for operating systems.
Where this is missing is within services: many social media platforms are a binary in-or-out without integrated parental control systems. The parent has control over whether any given app is installed, but within the app ecosystem parents are often not in control. This is intentional, not because the platforms are discriminating against parents, but because they’re aggressively against user configurability in the first place. They want to control how their platform works and they want to collect data and serve ads. The tech companies are bad, actually. They are trying to create addictive products without regard to psychological harms. That part’s true.
I will count the “confirmation box” design pattern as falling into the “nothing” category, not client attestation. The “are you 18, click yes or no” dialog, the Steam screen that requires you to enter your birth date every time you open a mature page, etc. The user is supplying their age here, but not in any meaningful way. There is no semi-permanent configuration and no tracking of user information. It’s essentially just giving the user an option to opt-out.
At the OS level though, while this usually provides sufficient control to parents, there is an asterisk here from the perspective of the service providers: this all still trusts the client. A user can enter a false date during account setup, a parent can allow their child to register as an adult without setting up parental controls, and an intentionally devious child could even register themselves under a parental account they control. It’s not obvious to the service whether the confirmation is being done by a third-party (the parent) or a first-party (an adult user). The service provider doesn’t have full confidence or “actual knowledge” of the age of the user, and they don’t know if the child is supervised or not.
Age Verification
Age verification pokes its ugly nose in when you refuse to trust the client: when implicit parental consent isn’t enough, when you absolutely, positively need to know the real age of the user. Age verification requires some sort of third-party or technical verification of the user’s actual characteristics. This means not trusting the user and instead trusting… something else, usually either technical guesswork or confirmation with a third-party identity issuer. This is the most dangerous and most invasive form of age filtering, providing a level of confidence that should rarely be required, if ever.
Because this can’t trust the user and can’t fail open, it means identifying every user who uses the service at all. And since age verification has to do new research on people, it introduces questions of accuracy, false positives, false negatives, and so on. This is categorically distinct from age attestation, where the decision has already been made by an authority and only needs to be communicated to a service.
Technical approaches are things like biometrics (face scanning), looking at account age, or analytic-based categorization. Think Discord, which began requiring biometric identification by confirming a “video selfie.”
Third party verification includes scanning and verifying government identification, but is also sometimes done by confirming a separate account like a credit card. Remember Sam Altman’s World organization and their terrible identification Orb? They want to be a third-party identity provider and license out their “World ID” service for identity verification. They do the biometrics, then you log in through them. The scheme here doesn’t actually look like data harvesting from individuals; they’re trying to become critical infrastructure so they can charge every website a license.
Identity verification is as bad as it can possibly be
Not already convinced of the dangers of age verification? The extent of the evil waiting behind identification systems and deanonymization is unspeakably vast, and fortunately it’s getting extensive coverage. If you want my quick summary, I’ve written a brief addendum about this. The stories are wild and scary, and I’ve summarized a few of them.
What can and cannot be allowed in a solution
So our two real players are client attestation and client verification. Let’s take those frameworks, shove in the desire for better content filtering tools, and see what happens. How do we think about the problem?
Anything universal needs to have the maximization of liberty as its top priority. Whatever the global solution is, concerned parents can build additional systems on top of it according to their preference. But any universal system everyone has to deal with must have minimal or no impact on other lawful behavior, or you have de facto government censorship of speech.
A responsible approach should have a minimal impact on adults if any, or else it infringes on the right to speech, expression, and free association by attaching pressures and risk to legal behavior. Regulation of speech must be “fail-open”, not “fail-closed”. If something goes wrong it’s imperative that most conduct be allowed by default, not banned by default.
Age filtering also can’t create a new data privacy risk, especially one that specifically endangers the identity of children. That means we can’t collect and store government ID or biometric information. Can we use identity documents or biometric verification ephemerally, so that it never leaves the device and we delete any information as soon as some on-device algorithm finishes processing it? Usually, no. All systems which the government audits for compliance have to store and report that data somehow in order for the company to actually demonstrate compliance.
We can’t require deanonymizing general online conduct, and we can’t require storing any unnecessary data that could be abused later. Whatever we build has to be intensely structurally resistant to potential future abuse, because we can already see people eager to exploit these systems.
OS-level age attestation is the good one
Put this all together and you get the outline of a system I’ve been envisioning since I was a child.
My ideal age filtering tool is a system of client attestation with trust rooted in the adult administrator, provided by an OS-level API as preemptive verification, and enforced by compliant browsers and application stores.
The device owner and administrator, the parent, can configure child-facing devices (phones, PCs, gaming consoles, etc.) as child accounts at the operating system level. The root of trust for this is the device owner. It’s not verifying a government ID or biometrics or registering with any kind of third party, it’s just a configuration option a non-administrative user can’t change.
These child accounts send age signals in the appropriate contexts (web browsing, app stores, etc.) that give service providers the necessary information to handle the request as appropriate. This may mean locking options, leaving out algorithmic feed sources, and handling the traffic in ways that don’t collect unnecessary user data.
This gives parents a simple setting, moves some responsibility for data handling to the companies, and doesn’t affect adults. With a minimal one-step setup parents can let their children on the internet unsupervised, and tech companies — who now have actual knowledge of children’s ages — have the responsibility to keep those particular users safe. But this only needs to ever affect children. Any adult with their own device can tick the “adult user” box, never identify themselves with any third party, and be treated as an adult on the internet.
Parental supervision makes self identification sufficient
At first anything falling under the category of “self-identification” seems like a mistake. You can’t rely on minors to accurately identify themselves (especially when it restricts them). I already said the user just confirming their age is equivalent to no security at all. The temptation here is to treat everyone as a minor by default and positively identify adults. This is when security people reach for facial recognition, AI, government ID validation, and cryptographic protocols, and when identity verification services start getting rolled out.
But for this specific use case — children with parental supervision — none of this is required. For the parental control case, the assumption about “self-identification” is incorrect. Minor-owned devices self-identify on the authority of the parent without any sensitive data ever moving anywhere. You couldn’t use this for something like voting, but you absolutely can use it for child protection.
For this subject — parental controls for minors — the parent owns the device, not the child. When a minor has a personal smartphone it’s because a parent bought it and is letting their child use it. The child didn’t pay for it and doesn’t own it; their guardian can make using it conditional on whatever controls and restrictions they choose to require. As long as the child doesn’t have administrative access on a device, how it behaves can ultimately be supervised by management policy. (And if parents want to give their children administrative access, that’s their prerogative!) This means a simple OS-level solution is enough to handle the entire problem. All that’s required to distinguish adults and minors are the user account systems that already exist.
Preemptive verification, not filtering
The internet is a fetch medium. Whenever you visit a web page your computer sends a request to a web server. The server responds to the request, sends the information back, and your computer shows it. You don’t get any data unless the remote machine explicitly chooses to send it to you. This means if child-owned devices proactively identify themselves as such, services can make the relevant processing decisions and curate their responses as appropriate.
With the user age information provided to the servers, each website can respond with whatever modifications are appropriate for that age range. Any requests without the header should be assumed to come from legal adults or otherwise intentionally unlocked devices. Maybe that means changing very little, maybe that means blocking an entire site. The service itself regulates this. It’s highly dependent on the content, and the services themselves are in the best place to understand what compliance is required for their particular business.
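As a sketch of what server-side handling could look like: the header name `Sec-Age-Attestation` and its values below are invented for illustration (no such header is standardized), but the fail-open logic is the point: no signal means the request is treated as an adult’s.

```python
# Hypothetical server-side handling of an attested age signal.
# "Sec-Age-Attestation" and its values are invented for illustration.

def response_mode(headers: dict) -> str:
    """Decide how to curate a response based on an attested age signal.

    Absence of the header fails open: the request is treated as coming
    from an adult or an intentionally unlocked device.
    """
    signal = headers.get("Sec-Age-Attestation")
    if signal is None:
        return "full"        # no signal: assume adult (fail-open)
    if signal == "under-13":
        return "child-safe"  # e.g. no ads, no algorithmic feed
    if signal == "under-18":
        return "restricted"  # hide adult-only material
    return "full"            # explicit adult attestation
```

Each service would map these modes onto whatever compliance behavior fits its content; the dispatch itself stays trivial.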
OS level
“OS-level” sounds scary to tech people, and it should. This is the realm of Secure Boot, the Trusted Platform Module, attacks on user ownership of computing devices, ring-zero control, etc. OS-level age verification would be disastrous: you don’t want the operating system to have an obligation to proactively identify users correctly, or to gatekeep basic computing functionality behind age checks. You never want a device defending itself against its owner. But age attestation is not that. It isn’t secure boot or hardware vendor-deployed, user-facing anti-tamper, just normal user-space permissions.
Doing this at the OS level is the right move for two reasons: that’s the best way to expose an interface for applications to use, and that’s where the enforcement power already is.
You don’t need individual services collecting and maintaining identifying information. That’s been proven to be a recipe for disaster, and it is completely, utterly avoidable. It’s better to do age attestation once per device than once per service. A legal standard for OS-level attestation removes the need for services to collect and store sensitive profiling data in the name of verification.
But it’s also structurally correct for the age identification part of this to be the responsibility of the parent. We should use the existing agency of parents over devices and send a signal that proactively informs sites not to serve devices that are voluntarily excluded. Not only is it much harder for services to verify a user’s age than it is for their parents to, that responsibility already lies with the parent.
App store and browser enforcement
The OS is where meaningful enforcement can be implemented between the owner and non-owner users. This is already incredibly normal in the computer world. Multi-user computers where different users have different sets of permissions are completely standard in home and enterprise environments and have been for decades.
All that’s required is some system — any system — to have an administrator identify a child user. Windows has had this since at least XP with administrative and plain user accounts, and while recent Microsoft account shenanigans have made that system more complex, all the systems for permission management are still there.
Normally I would say the hardest part of this problem is the OS communicating the relevant information to applications and web services, but infrastructure for this actually exists already too. Requirements from the business world have ensured operating systems already have in-depth systems for external permission management.
The Chrome Managed Browser system — widely used in corporate environments and on school Chromebooks — allows system-level control over browser policy. Managed browsers on Windows can read settings from Group Policy, Microsoft’s system for letting remote administrators manage operating system settings. I’m mostly familiar with the Google and Microsoft ecosystems here, but any comparable product is going to have an equivalent system.
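To make this concrete: on Linux, Chrome really does read managed policies from JSON files under `/etc/opt/chrome/policies/managed/`, which administrators control and ordinary users can’t edit. An age signal could be deployed through exactly this kind of channel; the policy keys below are invented for illustration and are not real Chrome policies.

```json
{
  "AgeAttestationBucket": "13-18",
  "AgeAttestationEnforced": true
}
```

The mechanism matters more than the names: the setting lives in administrator-owned configuration, so a non-administrative user can’t simply flip it off.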
Platforms like iOS — where each user is bound to an online identity — make it easy to identify an account as a child managed by a parent, and the operating system can do the rest. This is also trivial to implement on non-cloud systems with an administrator/user account scheme, or traditional “parental lock” mechanisms.
You could even manufacture tablets for children like they do now and hard-wire the setting into the system, throwing out any need for an administrator at all.
Having this permission set in a way the user can’t directly control solves another major problem with age attestation. Currently, because signals are optional, sites have to provide a lowest-common-denominator experience to anonymous users. If you have a child on an art site, for example, it’s not enough to only hide adult works from logged-in, self-identified users; anyone can log out, or switch to incognito mode and browse the site anonymously. Controlling age signals with a permission system removes this escape hatch. This allows sites to add meaningful restrictions for self-attested minors (like reducing data collection) without having to bring the lowest-common-denominator experience down to match. You don’t need “teen by default” anymore.
Preventing problems
As I’ve said, age verification tech is extremely dangerous, especially in the current reactionary political climate. It’s not enough to just have good intentions, you also have to realistically understand the environment this will all exist in.
The political age verification movements are a tug-of-war between value systems. There are factions in the government trying to seize policing power, factions in tech looking to use their legal weight to secure permanent monopolies over social life, factions trying to capture identifying information, and worse.
It is of the utmost importance that any age filtering system not only be designed in a way that’s strongly opinionated towards liberty and privacy, but also be designed in a way that strongly resists future abuse or co-option of that intent. And I think this approach does that.
Providing flat, low-entropy category information
A great danger of any content filtering or censorship system is inappropriate violations of freedom that come from expanding it to additional topics and scopes. Even outside a specific political context where it’s already evident different factions want to do this, there’s a natural risk baked into the technology itself.
As long as adults control their own devices (which they need to!) it’s difficult or impossible to compel them to misidentify themselves as children. Child privacy laws like COPPA already strictly regulate how companies are allowed to sell and advertise to children, so companies are already incentivized to make adults identify as adults whenever possible. This same dynamic makes it much less prone to being used in domestic abuse or as a way for one person to cut another off from support and resources. This provides a way to enforce existing law governing how businesses serve children; it doesn’t provide a stalkerware mechanism. This only works for actual age checks.
This is why we want to send what’s called low entropy information: we want to reveal the minimal amount of information required without providing any additional detail that could be used to identify or fingerprint a user. We would not want to broadcast users’ birthdates since that’s personally identifiable information — and often used as a credential!
The ideal solution for this is using age categorization buckets. For example, if users are divided into categories like “0-13”, “13-18”, and “18+”, that reveals the information required for most age-based enforcement in US jurisdictions.
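A minimal sketch of the bucketing idea, assuming the US-centric boundaries above (13 and 18), which are not part of any real standard: the full birthdate reduces to one of three low-entropy labels before anything leaves the device.

```python
# Sketch: reduce a birthdate to a low-entropy age bucket.
# The boundaries (13 and 18) mirror current US law; they are
# illustrative, not a standard.
from datetime import date

def age_bucket(birthdate: date, today: date) -> str:
    # Compute completed years, accounting for whether this
    # year's birthday has happened yet.
    years = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day))
    if years < 13:
        return "0-13"
    if years < 18:
        return "13-18"
    return "18+"
```

Note how little survives the reduction: two bits of information at most, with nothing usable for fingerprinting.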
Unfortunately this isn’t a perfectly universal system to bake into some kind of technical standard, since the age ranges are designed around current US law.
A better solution would be an API like this: the operating system knows the user’s birthdate, but that information never leaves the device. The device administrator can change this value; a child user can’t.
An app store or website can craft a request for the specific bucket information it needs, a question like “Are you under 13?” or “Are you over 21?”, and send that question to the operating system via an API. The OS would use the actual date information it has to craft a yes/no answer, which the user may or may not be allowed to edit based on their permissions. The OS would then present a permission prompt, as it already does for other identifiable information like geographical location. The device would show the user a pop-up, “Service {NAME} wants to confirm your age as >13. Provide this information?”, before replying, which would alert the user to the privacy impact and prevent sites from maliciously sending multiple requests to fingerprint an exact age.
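A toy sketch of that query API, with hypothetical names: the birthdate stays on-device and only boolean answers to threshold questions ever leave it. A real implementation would also gate each answer behind the user-consent prompt, which is omitted here.

```python
# Sketch of an on-device age attestation service. Class and method
# names are hypothetical; consent prompting is omitted for brevity.
from datetime import date

class AgeAttestationService:
    def __init__(self, birthdate: date):
        self._birthdate = birthdate  # never leaves the device

    def _age(self, today: date) -> int:
        # Completed years as of `today`.
        b = self._birthdate
        return today.year - b.year - (
            (today.month, today.day) < (b.month, b.day))

    def is_over(self, years: int, today: date) -> bool:
        """Answer 'is the user at least N?' without revealing the date."""
        return self._age(today) >= years

    def is_under(self, years: int, today: date) -> bool:
        """Answer 'is the user under N?' without revealing the date."""
        return self._age(today) < years
```

A caller only ever learns one bit per (consented) question, which is what makes rate-limiting and prompting effective against fingerprinting.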
Regardless of the exact approach, flattening the age parameter as much as possible makes it difficult to abuse it for universal censorship of other topics. While there are groups that argue for things like global eradication of pornography and anti-government sentiment, a flat “age” parameter makes it prohibitively difficult to try to force age restrictions on adult populations.
Keeping the device owner as the root of trust
I know I keep pointing this out, but it’s a critical distinction that the device owner is the root of trust with attestation. It’s not tracking an identity persistently, it’s not even verifying an identity up front, it’s only providing a mechanism for owners to exercise control over their own property. The minimalism of the age parameter intentionally and proactively cripples attempts to misuse it.
It’s far better and far safer than a mandatory universal identification, or every service with adult material anywhere on their platform transferring or maintaining a separate copy of personally identifying information.
Legal impact
So where’s the regulation part of this? If we’re not requiring people prove their identity to a third party, where’s the action needed to make this work?
This necessitates a new requirement, but it’s not a requirement for adults to provide identifying documents to their OS. It’s a requirement on two relatively constrained categories of tech companies.
The law would need to ensure availability of this system with the devices and services children actually use. Major operating system providers who provide operating systems used by children — Microsoft, Apple, Google, Nintendo, Sony — would need to implement this system in order to make it available to parents.
As I discussed earlier the basic framework for this already exists in all these major operating systems, even game consoles and some Linux distributions. The main missing piece is expanding these beyond those companies’ internal systems applications into an API that other programs (mostly web browsers) can query.
Enterprise and industrial equipment shouldn’t have this requirement, only devices that are actually provided to and used by children. The requirement shouldn’t apply to calculators, Linux distributions, etc., only to the systems children actually use. This shouldn’t prevent adults from using old software that doesn’t support this particular feature, or require calculators and machinery to be reflashed, or anything like that. The goal is for mainstream providers to offer the feature so it’s available for parents to use.
The other change is on compliance requirements for web companies, which would constrict in some ways and loosen in others. Specifically, getting age information from the OS needs to count as having what’s called actual knowledge of the user’s age. In other words, if a parent tells you someone is a minor, you have to believe them. This is the most drastic change because this is where the liability is added: if services are explicitly told a child is connecting they are required to treat the request appropriately and not ignore the signal.
For COPPA compliance purposes many websites are currently encouraged not to obtain real knowledge of users’ ages — sites can explicitly confirm the age of underage users and continue to allow them to use their services, but they aren’t able to collect all the same data and serve the same advertisements to those users. This encourages a “don’t ask, don’t tell” attitude where, if children don’t provide the information or are allowed to easily lie, the companies can exploit them more and the children can access more features. Age information provided by a standardized system would need to qualify as the site gaining actual knowledge of age. That would require them to comply with existing data protection law in ways they currently often don’t, claiming ignorance.
Since this counts as real knowledge of a user’s age status, this means sites would need to use this knowledge to comply with the existing law. If you’re a porn site, why would you choose to check for an age header and adjust your response accordingly? Because otherwise you’d be serving porn to self-identified minors, which is already very illegal.
Any service that intentionally operates in violation of the law is irrelevant to the policy discussion, since they’d ignore more restrictive policies that require affirmative adult verification too. This is a conversation about, y’know, Facebook, not Ukrainian pirate sites. For these cases, there’s filtering.
But what has me excited is the way this cuts in the other direction. Once this parental control protocol was ubiquitous, low-entropy age affirmation would also supply sites with actual knowledge of adulthood. If sites comply by collecting and acting on age affirmation data, that creates a safe haven for them to treat self-affirmed adults as adults.
I want services to be able to use age-specific attestation from the user to confirm users’ adulthood, and for the availability of attestation to make this a legally sound age confirmation method. This could make it much easier for adults to anonymously self-identify online and bypass obscenity regulation designed for minors. Platforms could more safely run adult spaces, and it would become far easier and safer for adults to access adult spaces online. Throw YouTube for Kids in the trash; replace it all with this.
This preempts much worse verification systems: if there’s a safe, built-in adulthood check, services can use it to confirm adult users and serve adult content without requiring invasive identification checks. The information they need, that someone isn’t an unsupervised child, would be provided without the data privacy risks of the current age verification technologies. Actual knowledge provides legal confidence to services that might otherwise be pressured into requiring worse and more invasive forms of identification, or into purging adult material from their sites completely.
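As a sketch of what acting on such a signal could look like, here is how a service might gate content on a hypothetical age-bracket header. The header name and bracket values are invented for illustration; neither bill specifies a wire format:

```python
# Sketch: gating adult content on a hypothetical age-bracket request
# header. "Sec-Age-Bracket" and its values are invented for
# illustration; the bills define brackets but no wire format.

ADULT_BRACKET = "18+"

def legacy_age_gate(headers: dict) -> bool:
    # Placeholder for whatever a site does today, e.g. a
    # "click to confirm you are 18" interstitial. Default deny here.
    return False

def can_serve_adult_content(headers: dict) -> bool:
    bracket = headers.get("Sec-Age-Bracket")
    if bracket is None:
        # No signal sent: fall back to the site's existing flow
        # rather than blocking outright (the system fails open).
        return legacy_age_gate(headers)
    # A received signal counts as actual knowledge: a minor bracket
    # must be honored, and an adult bracket acts as a safe harbor.
    return bracket == ADULT_BRACKET
```

The key property is the asymmetry: a present signal is authoritative, while an absent one leaves the service exactly where it is today.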
You wouldn’t want to ban sweeping categories of existing services under age attestation law though. Companies can keep using any existing services or replace them entirely, so long as they’re also respecting the header. But if they require more than the age bucket, that’s evidence to the government and to the customer that they’re demanding private information that isn’t legally necessary. As with filtering, this just means any parent worried about child safety will have a push-button solution in addition to all the existing parental control products.
The requirement for downstream services to respect this data is the most important piece. There might need to be a regulatory requirement for OS providers to make this interface available, but once app stores, websites, and social media platforms are required to respect the age signals, mainstream OS providers are already incentivized to support this without needing any regulatory prodding. The operating systems don’t need to be forced; they’ll want to support the feature and they’ll want to lead the charge to design the industry-standard technical specifications.
And here’s the cool thing: we are astonishingly close to having good law on the books to do this already.
California AB 1043, Age verification signals: software applications and online services.
AB 1043 is an age attestation bill introduced and passed in the 2025-2026 session. Here’s how it’s described in comments:
Buffy Wicks California has enacted or proposed several laws to better protect minors in digital spaces, but enforcement and implementation remain stymied by a basic infrastructure gap: there is no standardized, privacy-preserving method for determining whether a user is a child. AB 1043, the Digital Age Assurance Act, seeks to fill that gap by establishing a secure signaling framework at the device and app store level. This framework allows developers to receive a tamper-resistant digital signal reflecting a user’s age bracket—without requiring the collection of personal data or documents—and to treat that signal as the authoritative indicator of a user’s age for compliance purposes under California law.
Striking a balance between parental control and children’s privacy. In protecting children from the potential harms on the internet, like those discussed previously, there must be a careful balance between appropriate parental control and the rights of older teens to access certain platforms. At the core of this bill is a conceptually elegant solution for establishing the age of the user. Sending an age assurance signal that developers are required to rely on for having actual knowledge of the age of the user provides a number of significant benefits:
It alleviates concerns from privacy advocates that age verification would necessarily require everyone to provide developers and platforms with even more sensitive personal information by having to upload official identification documents in order to prove that they are old enough to access the application or the content. It potentially removes the argument from the technology industry that they have no definitive way of knowing the age of their users, thus allowing them to avoid responsibility for allowing children to access harmful content. As an example, applications that are restricted to adults generally simply ask the user to attest to whether or not they are old enough to access the site. With an age assurance signal, the platforms would be provided with actual knowledge of the age or age range of the user that they could then rely on to grant or deny access.
And here’s the relevant text of the bill:
1.81.9. Digital Age Assurance Act
For the purposes of this title:
(a)(1) Account holder means an individual who is at least 18 years of age or a parent or legal guardian of a user who is under 18 years of age in the state.
…
(b) Age bracket data means nonpersonally identifiable data derived from a user’s birth date or age for the purpose of sharing with developers of applications that indicates the user’s age range, including, at a minimum, the following:
(1) Whether a user is under 13 years of age.
(2) Whether the user is at least 13 years of age and under 16 years of age.
(3) Whether the user is at least 16 years of age and under 18 years of age.
(4) Whether the user is at least 18 years of age.
(c) Application means a software application that may be run or directed by a user on a computer, a mobile device, or any other general purpose computing device that can access a covered application store or download an application.
(d) Child means a natural person who is under 18 years of age.
(e)(1) Covered application store means a publicly available internet website, software application, online service, or platform that distributes and facilitates the download of applications from third-party developers to users of a computer, a mobile device, or any other general purpose computing device that can access a covered application store or can download an application.
…
(f) Developer means a person that owns, maintains, or controls an application.
(g) Operating system provider means a person or entity that develops, licenses, or controls the operating system software on a computer, mobile device, or any other general purpose computing device.
(h) Signal means age bracket data sent by a real-time secure application programming interface or operating system to an application.
(i) User means a child that is the primary user of the device.
1798.501.
(a) An operating system provider shall do all of the following:
(1) Provide an accessible interface at account setup that requires an account holder to indicate the birth date, age, or both, of the user of that device for the purpose of providing a signal regarding the user’s age bracket to applications available in a covered application store.
(2) Provide a developer who has requested a signal with respect to a particular user with a digital signal via a reasonably consistent real-time application programming interface that identifies, at a minimum, which of the following categories pertains to the user:
…
(3) Send only the minimum amount of information necessary to comply with this title and shall not share the digital signal information with a third party for a purpose not required by this title.
(b)(1) A developer shall request a signal with respect to a particular user from an operating system provider or a covered application store when the application is downloaded and launched.
(2)(A) A developer that receives a signal pursuant to this title shall be deemed to have actual knowledge of the age range of the user to whom that signal pertains across all platforms of the application and points of access of the application even if the developer willfully disregards the signal.
…
(3)(A) Except as provided in subparagraph (B), a developer shall treat a signal received pursuant to this title as the primary indicator of a user’s age range for purposes of determining the user’s age.
…
(4) A developer that receives a signal pursuant to this title shall use that signal to comply with applicable law but shall not do either of the following:
(A) Request more information from an operating system provider or a covered application store than the minimum amount of information necessary to comply with this title.
(B) Share the signal with a third party for a purpose not required by this title.
…
(c) An operating system provider or a covered application store shall comply with this title in a nondiscriminatory manner, including, but not limited to, by complying with both of the following:
(1) An operating system provider or a covered application store shall impose at least the same restrictions and obligations on its own applications and application distribution as it does on those from third-party applications or application distributors.
…
(g) This title does not impose liability on an operating system provider, a covered application store, or a developer that arises from the use of a device or application by a person who is not the user to whom a signal pertains.
1798.505. This title shall become operative on January 1, 2027.
This is honestly very good. I am never going to say “this law is safe, it creates a system that can only do good” because bad prosecutors and bad courts can always destroy these things. This is enforced by the Attorney General, and an AG could abuse this language to target political enemies and enforce beyond the intended design. California is not good about this — see the age appropriate design code saga — but this particular piece is good. Good law is hard to find so we should make the most of what we get.
It ticks most of my boxes. The bracket system means the data provided is non-identifying, low-entropy information. Users are required to identify themselves to the device, not a third party. It’s not based on biometrics or identity verification, only self-identification. There are privacy protections attached, and even a requirement for neutrality between first-party and third-party applications. Hugely important is that the age bracket signal constitutes “actual knowledge” of a user’s age, for the reasons discussed earlier. This makes it an authoritative signal. It’s both legally authoritative and minimal, without identity verification, and with only “indication” as the requirement.
I’ve seen this thoughtful concurrence from David Chisnall, focusing on the unix-y side of things:
david_chisnall@infosec.exchange So, I have actually read the text of California law CA AB1043 and, honestly, I don’t hate it. It requires operating systems to let you enter a date when you create a user account and requires a way for software to get a coarse-grained approximation of this that says either ‘over 18’ or one of three age ranges of under-18s. Importantly, it doesn’t require:
- Remote attestation.
- Tamper-proof storage of the age.
- Any validation of the age.
In short, it’s a tool for parents: it allows you to set the age of a child’s account so that apps (including web browsers, which can then expose via JavaScript or whatever) can ask questions about what features they should expose.
In a UNIX-like system, this is easy to do, with a tiny amount of new userspace things:
- Define four groups for the four age ranges (ideally, standardise their names!).
- Add a /etc/user_birthdays file (or whatever name it is) that stores pairs of username (or uid) and birthdays.
- Add a daily cron job that checks the above file and updates group membership.
- Modify user-add scripts / GUIs to create an entry in the above file.
- Add a tool to create an entry in the above file for existing user accounts.
This doesn’t require any kernel changes. Any process can query the set of groups that the user is in already.
If a parent wants to give their child root, they can update the file and bypass the check. And that’s fine, that’s a parent’s choice. And that’s what I want.
I like this approach far more than things that require users to provide scans of passports and other toxically personal information to be able to use services. If we had this feature, then the Online Safety Act could simply require that web browsers provide a JavaScript API to query the age bracket and didn’t work unless it returned ‘over 18’.
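Chisnall’s sketch is concrete enough to render in a few lines. Here is my own rough Python rendering of the daily job he describes; the file name, the record format, and the group names are all placeholders, as his post itself notes:

```python
from datetime import date

# Rough rendering of Chisnall's sketch: a daily job reads a birthday
# file and works out which age group each account belongs in. The file
# name (/etc/user_birthdays), the "user:YYYY-MM-DD" record format, and
# the group names are all assumptions ("standardise their names!").

AGE_GROUPS = ["age-under13", "age-13to15", "age-16to17", "age-18plus"]

def bracket_group(birth: date, today: date) -> str:
    # Compute completed years, accounting for whether the birthday
    # has happened yet this year.
    years = today.year - birth.year - (
        (today.month, today.day) < (birth.month, birth.day)
    )
    if years < 13:
        return AGE_GROUPS[0]
    if years < 16:
        return AGE_GROUPS[1]
    if years < 18:
        return AGE_GROUPS[2]
    return AGE_GROUPS[3]

def plan_group_updates(birthday_lines, today: date):
    """Yield (username, group) pairs for the cron job to apply."""
    for line in birthday_lines:
        user, iso_date = line.strip().split(":")
        yield user, bracket_group(date.fromisoformat(iso_date), today)
```

A real cron job would apply each pair with something like usermod (and drop stale group memberships); any process can then query the user’s groups with no kernel changes, exactly as he describes.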
There are some remaining problems with the intent and implementation though.
The way “operating system provider” is defined is too broad. Only the major providers whose products are used by children need to be compelled to provide this feature. The current definition covers enterprise and industrial applications, like enterprise Linux servers and specialty computers, where there is no need or demand for parental control systems. As written, this could include a calculator or a smart thermostat. That’s silly. This could be addressed by an intentional reading of the law. Does installing Linux make you the operating system provider because you “control” the operating system software on a computer? An Attorney General could argue the point either way.
Similarly overbroad is the requirement that users identify themselves in a bucket to use a computer. See: “…requires an account holder to indicate the birth date, age, or both, of the user of that device for the purpose of providing a signal regarding the user’s age bracket to applications available in a covered application store.” You really only need a requirement on the operating system to provide this functionality. A better, more conservative system would treat users as adults implicitly unless they choose to configure parental controls, as they are now. For most special-purpose or industrial uses, a clause during setup indicating all accounts must be adults should be sufficient as identification. But again, you need a good Attorney General with the will to enforce this conservatively and not to weaponize it.
We need a protocol for everything, not just “apps”
The biggest problem is the focus on the modern Apple-style “app store” model. Unlike the other two issues this isn’t the law causing a problem, it’s a failure in scope. App stores are the least relevant domain because they already have parental systems policing exactly this kind of behavior, as described in the overview earlier. Where this functionality is really missing is within apps and online, something the language here doesn’t touch. On general purpose computers, “app” is a useful fiction for describing and packaging behavior, and plenty of behavior falls outside that taxonomy.
What I see as the crucial missing piece here is a bridge between the signal the OS provides to app stores and the public internet. It’s not meaningful to just lock off certain apps from being installed at all. Programs and browsers need to be able to query this signal to tailor specific behavior within their programs. Are web browsers “application stores” under this language? Again, it’s not clear, and the lack of clarity makes it difficult for websites and browsers to treat this as authoritative.
What we need now are protocols to apply this same information to web services so the entire internet can use device-reported configuration as actual knowledge age verification. Attestation law is a step in the right direction (especially compared to others), but this doesn’t yet provide everything it needs to.
Colorado SB26-051, Age Attestation on Computing Devices
Colorado has SB26-051, which is almost word for word the same as California’s bill.
There are a few minor categorical distinctions. California categorizes it as consumer privacy, Colorado categorizes it as consumer protection. California’s takes effect in 2027, Colorado’s in 2028. The technical architecture is the same in both bills.
But Colorado includes this noteworthy exception:
(6) Notwithstanding any provision of this article 30 to the contrary, this article 30 does not apply to a developer if the predominant or exclusive function of the application that the developer writes, creates, maintains, or controls is:
(a) facilitating communication within a business or an enterprise among employees or affiliates of the business or enterprise, so long as access to the application is restricted to employees or affiliates of the business or enterprise;
(b) selling enterprise software to businesses, governments, or nonprofit organizations; or
(c) providing or obtaining technical support for a software platform, product, or service.
This is language designed to narrow the domain of “operating system provider”, which is good. This shouldn’t apply to enterprise software, internal systems, etc.
Arguments against
California and Colorado’s bills have “made the rounds” and gathered some criticism already. A lot of this is criticism of the age verification movement, in my opinion misdirected when aimed at CA/CO. There are a few arguments I want to respond to, though.
Confusing attestation with verification
Elorm Daniel A law that says Linux must perform age verification during account setup sounds reasonable at first… until you realize it completely misunderstands what Linux actually is.
Because who exactly is supposed to verify their age?
The server?
The router?
The fridge?
…
There’s nothing to verify against.
People very commonly confuse attestation with verification like this. The key point is that it’s not verification. Parents attest to the age of the child, and adults attest to their own age.
Guardrails to make it clear that this is attestation and not validation would be helpful, but the CA/CO bills don’t prompt an open-ended question of “who validates it?”. The answer is — and should be — the device owner.
“Operating System Provider” is overbroad
The db48x calculator firmware project is why I have “calculator” in my head as an example of an operating system that shouldn’t be covered by this legislation.
A few weeks ago, the maintainer c3d added this notice to the project:
LEGAL-NOTICE.md …I, as the primary author of the software, do not have the legal resources to clarify what these laws mean by “operating system”, “mobile device”, “programming interface”, or any other weakly defined terminology in the legal text. I am clearly not alone having trouble with these texts. …
Consequently, any user who decides to install and run the software will need to consider that they became, in application of the license, the local distributor of the software and will need to bear any legal consequences, however unlikely, that would derive from this exercise of their freedom. If there are consequences and they don’t like them, I invite them to enter a fight to improve and fix the local laws.
Linux-oriented electronics company System76 also mentioned this in their press release about age verification laws:
System76 on Age Verification Laws In a bizarre twist, under its current wording, a Linux distribution downloaded from the internet could technically make the downloader the “device manufacturer”. They are the entity responsible for providing a freely distributed operating system to the device. In practice, this type of language is rarely enforced. Nonetheless, it highlights how laws written for centralized platforms like iOS and Android struggle to define who is responsible in open computing ecosystems where anyone can install or distribute the operating system.
In both cases, the criticism is the one I’ve already made: the design of the law only considers the app store model. It doesn’t handle the much more common (and much more important) case of general-purpose computers administered by adults that have nothing to do with children or social media. It feels like an obvious cognitive error; the text of the bill was written by people who are primarily exposed to the surface-level consumerist tech environment. They’re failing to grapple with the technical implications because the underlying systems don’t match their mental model.
Of course, I agree with this objection. The California bill is over-inclusive with its definition of “operating system provider”, and it’s unclear how it handles normal software and general-purpose computing projects outside a walled-garden app store environment. It would be an improvement to see the definition of “operating system provider” narrowed to only capture the relevant, mainstream operating systems. Or widened, to clarify that users who install non-covered operating systems act as their own administrator and provider.
Colorado’s bill is stronger since it has carve-outs for systems that aren’t expected to have (or care about) child users. I would want to see that exception to be even larger: since this is a parental control system, any adult should have a mechanism to self-exempt themselves or their own children from the requirement, so long as that doesn’t deny protections to others. Ideally OS providers would sort themselves into two categories: mainstream providers who want a widely-applicable product they can sell to adults and children, and industrial or hobbyist systems that aren’t expected to be used by children at all.
Ambiguity in law is always dangerous. This is why I emphasize the outsized importance of the Attorney General, who is the enforcement agent for this. They shouldn’t be able to stretch the law to criminalize behavior the law didn’t intend to regulate. If edge cases can be argued to be in violation of this bill, that gives the government ammunition to pressure and prosecute arbitrarily. So yes, this should have been refined, and hopefully it still will be.
The California Age-Appropriate Design Code Act
Prior to AB 1043, California attempted to enforce The California Age-Appropriate Design Code Act, a highly restrictive bill modeled after UK regulation. This was enjoined by federal court, repeatedly, for being an overbroad, unconstitutional attempt to restrict speech. There’s a strong argument that — in context — AB 1043 is an attempt to try to achieve some of the same aims through a different, more constitutional mechanism.
Aakash Gupta …
AB-1043 shifts the burden from app developers to operating system providers. Instead of every app asking your age, your OS sends a “signal” to apps telling them whether you’re under 13, 13-16, 16-18, or 18+. Four age brackets, transmitted via API every time you launch an app.

The theory is clever. Courts struck down CAADCA because requiring every business to assess content harm to children was a content-based speech regulation that couldn’t survive strict scrutiny. AB-1043 sidesteps this by saying “we’re not regulating content, we’re just making the OS collect a birthday.”
…
The same trade group (NetChoice) that killed CAADCA will almost certainly challenge AB-1043. The First Amendment problem didn’t disappear because you moved the compliance obligation from the app layer to the kernel layer. You just added a step.
I think the answer to this is simple: It’s the content regulation that’s the problem, and AB-1043 doesn’t regulate content yet. It’s the part of the system that’s inoffensive. CAADCA was an unconstitutional attempt to put an obligation for content-based speech regulation on businesses. AB 1043 is not. It doesn’t do the same things and it doesn’t make the same people happy, and it doesn’t provide a strong foothold to build bad law on top of it. Maybe California will try to build more unconstitutional speech law on top of AB 1043, maybe it won’t. If so, that’s a separate offense.
Can’t enforce a global dragnet
A lot of criticism has come from people who see that this doesn’t form a “perfect seal”, and conclude it’s either unenforceable or a huge regulatory apparatus will be required to create a seal.
But this misses the basic picture of the thing: what’s enforced and in what direction. Age attestation should not be a secure, perfectly-enforced dragnet. It’s a feature tech companies should be required to provide to users, and web services should be required to respect, but it’s not designed to use the full weight of the law to compel individual parents or children.
This objection was also raised on the assembly floor by Samantha Corbin, on the basis that the bill doesn’t create a comprehensive, tamper-proof dragnet:
Samantha Corbin The greatest risk to children online comes not from the existence of platforms, but from millions of unregulated app developers, many of whom push unsafe, exploitative and sometimes predatory content. …
Children can and often do also misrepresent their ages during device setup. Devices are often shared across users or passed down, and burner phones are easily accessible. Nothing in 1043 prevents circumvention. It risks creating a false sense of security without actually reducing harm. And in fact, as written, 1043 undermines California’s privacy leadership and child protection laws.
The idea that this removes liability from developers is factually incorrect; services would remain completely liable for properly handling user data and serving minors appropriately. They’re still completely liable for misconduct; they just don’t have a responsibility to positively identify every user using personal information.
1043 does undermine some of California’s other privacy initiatives. But the policies it preempts are bad, and that’s why 1043 is so good. The state of California simply doesn’t have the authority to police speech on the basis of “reducing harm”, as if the state is the party with a primary interest in how people’s children are raised. It gives parents access to easy, accessible, legally-required harm reduction but it doesn’t put the state in charge of policing the parenting of every child.
This focus on the possibility of preventing “circumvention” is a fundamental objection to the purpose of the effort. Rejecting this on the basis that it isn’t automatically flawless enforcement is like objecting to the existence of gun safes just because if a parent gives their child a loaded gun instead of locking it in the safe, it can still fire. The perfect becomes the enemy of the good: providing material safety improvements isn’t a “false sense of security”. It addresses the vast majority of cases effectively, even if it doesn’t create a magical, impossible layer of enforcement that ignores mechanical reality and only allows good things to ever happen.
The fact that speech is allowed if this system is intentionally not applied is a feature, not a bug. Age attestation needs to be a fail-open system. If the direction of this flips — if this becomes a system for absolute top-down control over which people can use which software, or use what communication platforms — that’s not age attestation. That’s a different system with different goals serving different people. There’s no central enforcement point. You can’t require this to be present in all software that exists, nor should you. The space we care about is commercial products for children, not all possible computing.
This backwards understanding also comes from users who object to enforceability. Quoting this one Reddit post that was quoted and amplified by PC Gamer’s coverage,
CatoDomine What really scares me is that we have lawmakers stupid enough to propose a law like this. This is basically impossible for California to enforce. Worst case, they are too stupid to know that. Best case, it is performative. Even if Linux Mint decides to add some kind of age verification, to comply with CA law, there’s no reason anyone would choose that version. There are hundreds of other jurisdictions in which Mint operates that don’t require this kind of stupidity. It’s more likely that they will put a disclaimer on their website “not for use in California”.
Again, enforcement. But this misses who age attestation is for and what it does. It provides universal parental controls to the parent. If Linux Mint added age attestation, it would add an annoyance for this user, and they’d be right to pick a distribution without that annoyance. But the feature provided here isn’t for the benefit of the state of California, it’s for parents. A parent — or school — might very well choose to use an attestation-compliant version to regulate their children. Or they might not! California has an interest in ensuring the availability of the protocol, but the owner must always have the right to choose.
System76 on Age Verification Laws …There is no actual age verification. Whoever installed the operating system or created the account simply says what age they are. They can lie. They will lie. They’re being encouraged to lie for fear of being restricted to a nerfed internet.
A parent that creates a non-admin account on a computer, sets the age for a child account they create, and hands the computer over is in no different state. The child can install a virtual machine, create an account on the virtual machine and set the age to 18 or over. It’s a similar technique to installing a VPN to get around the Great Firewall of China (just consider that for a moment). Or the child can simply re-install the OS and not tell their parents.
I think these contrived examples misunderstand enforcement. Yes, a dedicated enough child can find a way around an onerous requirement if you give them a general-purpose computer that’s allowed to download and install a virtual machine. (Which means the parent already chose not to enforce strict security.) But this requirement isn’t onerous, not yet. Maybe a parent can be onerous about it and generate conflict that way. But the goal here isn’t to create a perfect seal with perfect enforcement in the first place.
OS-level age verification is a hand-out to Meta, etc
some bluecheck hustler account on x.com Mark Zuckerberg keeps telling lawmakers and jurors that Apple and Google should verify everyone’s age at the operating system level.
➡️ He said it under oath last month in Los Angeles.
➡️ Meta, X, and Snap sent a joint letter to South Dakota legislators saying the same thing.
➡️ Meta’s youth safety policy director has testified in multiple state hearings pushing this approach.

The framing is always about protecting kids. But look at what OS-level age verification actually builds.
First, it moves legal liability off Meta. Zuckerberg is facing 1,600+ lawsuits alleging Instagram harmed minors. If Apple and Google own age enforcement, Meta’s lawyers get to point at Cupertino and Mountain View when enforcement fails.
…
Right now Meta relies on self-reported birthdates for age data. Their own internal documents showed millions of underage users slipping through.

An OS-verified age signal, potentially backed by government ID or biometrics, gives Meta a high-confidence demographic data point for every user, on every device, delivered via API, at zero implementation cost to Meta.
They don’t build the system. They don’t store the IDs. They don’t take the PR hit. They just read the signal and feed it into the ad targeting machine that generates $130B+ in annual revenue.
Meta gets identity infrastructure without the surveillance optics.
…
So when Zuckerberg says age verification at the phone level is “just a lot cleaner,” he’s right. It’s very clean. For him.
First off, I don’t think this is a huge win for Meta. Having age signals which count as actual knowledge severely limits their ability to harvest data on minors. They don’t get the identity infrastructure because they don’t get the identities. Those “millions of underage users slipping through” are profits that this would cut off.
It’s true that this removes the responsibility for age verification from Meta. That’s a good thing. We don’t want Meta to be in charge of that. Meta shouldn’t “store the IDs”. Parents are the ones who should be authoritatively identifying minors. Meta shouldn’t be guessing and it certainly shouldn’t have access to the information that would allow it to make conclusive decisions. The age signal “does the work for them”.
Meta may want this for the wrong reason, but that doesn’t make it the wrong move. Objecting to anything that benefits Meta may be a decent rule of thumb but that’s not a perfect metric. This benefits them compared to other policies, but that’s because it doesn’t require them to do work they shouldn’t be doing.
Privacy
But ultimately this does expose new information.
Some additional data will be exposed to some new parties. The goal here is, necessarily, to identify users by age and discriminate against specific age categories, so this is unavoidable. There are privacy concerns inherent to any discrimination, but the current proposals and industry standards already carry extreme privacy risks and tend to expose much more personal information than simple age categories.
There’s a lot of technical detail I’m skimming over here. For privacy purposes, this can be treated like a high-entropy client hint: it’s only sent if the server requests it and the client is configured to send it, and the relevant header security settings can keep it from leaking or feeding into fingerprinting.
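To make the shape concrete, here’s a minimal sketch of what an opt-in, Client-Hints-style exchange could look like. The header name `Sec-CH-Age-Bracket` and the bracket values are my inventions for illustration; no such header exists in any spec or proposal:

```python
# Illustrative sketch only: an age-bracket signal modeled on HTTP Client
# Hints. "Sec-CH-Age-Bracket" is a hypothetical header name, not a real spec.

def server_response_headers() -> dict:
    """Server opts in to receiving the bracket hint, Client-Hints style."""
    return {
        "Accept-CH": "Sec-CH-Age-Bracket",  # ask the client for the hint
        "Vary": "Sec-CH-Age-Bracket",       # keep caches from leaking it
        "Cache-Control": "private",         # responses are per-user
    }

def client_request_headers(server_opted_in: bool,
                           admin_enabled: bool,
                           bracket: str) -> dict:
    """Client sends the low-entropy bracket only if the server requested it
    AND the device administrator (e.g. a parent) enabled sending it."""
    headers: dict = {}
    if server_opted_in and admin_enabled:
        headers["Sec-CH-Age-Bracket"] = bracket  # e.g. "under-13", "13-17", "18+"
    return headers
```

The key property the sketch shows is double opt-in: absent either the server’s request or the administrator’s configuration, no age data crosses the wire at all.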
But theoretically this could be abused by malicious querying to give slightly more granular data:
Tue Mar 03 18:57:19 +0000 2026
If yesterday I queried your age and it said bracket < 17, and today I queried and it says >= 18.
CONGRATULATIONS! You've leaked the user's date of birth. Instead of protecting the user (specially children!), you've harmed them.
Malicious apps *will* query age every day.
There are ways to avoid this. Earlier I suggested some techniques for implementing low-entropy age bracket information, and subject-matter experts can design something even better, I’m sure.
But at a high level, this complaint doesn’t make sense. Introducing age bracket signals doesn’t introduce new harm because every alternative to this reveals much more information. If a malicious service abused a poorly-designed API for this, maybe they could get a more precise date of birth out of you. But the alternative — the system we have today — is an account setup process that requires people to enter their date of birth up front. Bracket information gives up far less data than this.
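One illustrative mitigation, sketched under my own assumptions rather than any proposed standard: apply a stable, per-device random offset to the bracket transition date. A malicious service polling daily then learns, at best, that the birthday falls somewhere in a multi-week window, never the exact date. The `device_secret` below stands in for a secret the OS would hold and never expose:

```python
import hashlib
from datetime import date, timedelta

# Illustrative sketch: blur the bracket-transition date with a deterministic
# per-device offset so daily polling can't recover an exact date of birth.
# "device_secret" is a hypothetical OS-held secret, never visible to services.

def fuzzed_bracket(birthdate: date, today: date, device_secret: bytes) -> str:
    # Stable offset in [0, 90) days, derived from the device secret, so the
    # same device always reports a consistent (but shifted) transition.
    digest = hashlib.sha256(device_secret + birthdate.isoformat().encode()).digest()
    offset_days = digest[0] % 90
    effective = birthdate + timedelta(days=offset_days)
    age = (today - effective).days // 365
    if age < 13:
        return "under-13"
    if age < 18:
        return "13-17"
    return "18+"
```

A real design would come from subject-matter experts and would need to handle clock skew, secret rotation, and cross-device consistency; the point here is only that the birthday-inference attack has straightforward countermeasures.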
The bad ones (are worse)
Have you ever been in a room and realized you were trying to solve a completely different problem than everyone else? That the disconnect wasn’t just a disagreement about implementation, but that the others were operating from a fundamentally different set of values and priorities than the problem called for?
Almost everywhere in the “age filter” world, people are working on the wrong problem. Everyone at every level of authority is incentivized against building systems that adequately protect privacy, even when that’s explicitly their job. Age attestation’s private, parent-centered approach is a rare gem in a mire of bad ideas.
I won’t be diving deep here, but here’s a quick overview of some of the other age filtering proposals. There’s a lot more wrong with these, but I’ll be focusing on the age filtering parts here.
New York’s Stop Addictive Feeds Exploitation (SAFE) for Kids Act
SAFE does a few things that are sort of normal, like categorizing personalized feeds as a prohibited form of collecting data from minors, COPPA style. But the primary function of the law is to require age verification based on identification:
Proposal
…The term Age Verification means to use generally accepted identification, including government-provided identification, or validation against an official records source, to confirm an individual’s age or age status.
The proposal hinges on requiring this high level of verification. It makes it unlawful for service providers to communicate with minors without age verification, meaning they’re required to verify everyone’s age authoritatively. Verification is judged on accuracy alone; the requirements never treat the guardian’s authority and consent as the root of trust. The law suggests using government-provided identification and later encourages biometric or AI identification so long as companies can prove accuracy.
There are steep penalties for failing to accurately verify this data and there is no remedy for mishandling or exposure of data. This incentivizes over-aggressive data collection and storage without doing anything to prevent or remedy data breaches.
It’s got more nastiness bundled in it too. It encodes the “social media addiction” myth into law. It has some truly absurd ideas, like time of day restrictions policing when social media is allowed to operate. Obvious freedom of speech issues aside, “compliant” services are exempted from this, so it’s just an obvious attempt at coercing sites into implementing age verification.
TX SB2420 App Store Accountability Act
Texas’s App Store Accountability Act is another mandatory identification law requiring verification, not attestation. It doesn’t just require services to correctly react to information provided, it imposes a duty on app stores to actively collect and verify identities.
It’s mandatory identification for the internet. It’s not narrowly tailored, it’s not content-neutral, and the liability it imposes forces invasive data collection.
The App Association — a global trade association for tech companies — released a press release debunking this effort, which summarizes the problems well:
The App Store Accountability Act: Myths vs. Facts …
…because ASAA would impose a mandate, the app stores would be compelled to show compliance—and that means maintaining records of all of it. Moreover, ASAA doesn’t just ask app stores to verify age and parental consent status—it mandates that they share flags signifying that information with every single app developer, whether the app in question is TikTok or a weather widget built by a solo developer in Ohio.
It also replaces the existing parental consent mechanism with a new version with additional failure points. Currently (without ASAA), if a parent declines a download by their child, the download is stopped at the operating system level. But ASAA would require a flag indicating parental consent status to be sent to the developer, which may or may not be received or properly adhered to, depending on whether the developer has updated their app. This leaves developers holding the bag and robs parents of a consent mechanism that actually works.
…the bill’s strict liability standards make invasive ID collection a practical inevitability. ASAA demands that app stores verify users into four highly granular age categories—”young child” (under 13), “child” (13-15), “teenager” (16-17), and “adult” (18+)—using methods “reasonably designed to ensure accuracy.” Platforms face FTC enforcement and crushing financial penalties if they misclassify a 12-year-old as a 13-year-old or a 17-year-old as an 18-year-old.
Distinguishing between adjacent age groups is technically impossible using privacy preserving age assurance methods. As Graham Dufault, General Counsel at ACT | The App Association, noted in a recent FTC workshop on age verification, the accuracy demands of the ASAA proposals push platforms toward “direct evidence” of age and identity—government IDs, birth certificates—to avoid liability. Even the majority’s example of Apple Pay requires users to verify their identity with hard credentials before Apple Pay can be used for downstream verification. …
Experts know that age assurance is on a spectrum, with the most accurate (age verification) also posing the highest risks because it requires “direct evidence” of age and identity (government-issued IDs). Because verification is the highest-risk form of assurance, it is used only sparingly—in order to block access to goods or services that themselves pose especially severe age-related risks—and IDs that are checked in real life are usually not collected and stored. Creation of a credential, however—especially if doing so is required by law—necessitates the retention of ID information. ASAA demands more than just a quick check of an ID at the door. By requiring absolute “verification” instead of encouraging innovation in privacy-preserving age assurance sensitive to the risks it presents (and the risks it must address), ASAA fails the risk-based approach test age assurance requires.
The courts have already thrown out the ASAA as not narrowly tailored, more restrictive than existing alternatives, built on assertions with no evidentiary support, and ultimately an unlawful regulation of speech.
SCREEN Act
The SCREEN (Shielding Children’s Retinas from Egregious Exposure on the Net) Act is federal obscenity law dressed up as a child protection effort.
It’s designed to globally lock minors out of “obscene” or “pornographic” content. It’s not parental empowerment: it ignores a parent’s ability to decide what they consider “obscene” and acts as a blanket ban on specific categories of information.
This has identity verification requirements bundled in:
(a) Covered platform requirements.—Beginning on the date that is 1 year after the date of enactment of this Act, a covered platform shall adopt and utilize technology verification measures on the platform to ensure that—
(1) users of the covered platform are not minors; and
(2) minors are prevented from accessing any content on the covered platform that is harmful to minors.
(b) Requirements for age verification measures.—In order to comply with the requirement of subsection (a), the technology verification measures adopted and utilized by a covered platform shall do the following:
(1) Use a technology verification measure in order to verify a user’s age.
(2) Provide that requiring a user to confirm that the user is not a minor shall not be sufficient to satisfy the requirement of subsection (a).
(3) Make publicly available the verification process that the covered platform is employing to comply with the requirements under this Act.
Not only is this third-party identity and age verification, it explicitly preempts user attestation. It’s looking for technical verification measures to prevent any minor — regardless of context or parental consent — from interacting with broad categories of information the government finds objectionable. Note the root-of-trust shift: now it’s the companies that are obligated to verify identity, not the client.
As the SCREEN act says in its own preamble,
…the Supreme Court of the United States has struck down the previous efforts of Congress to shield children from pornographic content, finding that such legislation constituted a “compelling government interest” but that it was not the least restrictive means to achieve such interest.
And they’re correct! This isn’t legal, they know it’s not legal, we’re done here.
KIDS Act
The KIDS Act is another age bill introduced at the federal level. With the full name “Kids Internet and Digital Safety Act”, it has the best backronym so far. But it’s another federal age verification law based on “obscenity” control very similar to SCREEN:
(4) Technology verification measure—
The term technology verification measure means technology that employs a system or process to determine whether it is more likely than not that a user of a covered platform is a minor.
(a) Covered platform requirements
Beginning on the date that is 1 year after the date of the enactment of this Act, a provider of a covered platform shall—
(1) adopt and utilize commercially available technology verification measures, reasonably designed to ensure accuracy, with respect to the covered platform of such provider to identify minors; and
(2) prevent minors from accessing any sexual material harmful to minors on the covered platform.
(b) Additional requirements for compliance In order to comply with subsection (a), a provider of a covered platform (or a third party contracted by a provider of a covered platform with respect to such covered platform) shall, with respect to a covered platform of the provider, carry out the following:
(1) Use a technology verification measure in order to verify the age of a user.
(2) Provide that a user confirming that the user is not a minor is not sufficient to verify age.
(3) Provide clear and conspicuous notice containing information on the technology verification measures and other policies and procedures related to the technology verification measure data used to comply with this title.
(4) Take reasonable measures to address circumvention of technology verification measures.
KIDS ticks the usual bad boxes: commercially available verification measures “reasonably designed to ensure accuracy,” inviting ID- and biometric-based identity management while ruling out parental age attestation as a vector.
KIDS also has a federal preemption clause:
No State, or political subdivision of a State, may prescribe, maintain, enforce, or continue in effect any law, rule, regulation, requirement, standard, or other provision having the force and effect of law to the extent that such law, rule, regulation, requirement, standard, or other provision requires a provider of a covered platform to use technology verification measures to prevent minors from accessing any sexual material harmful to minors on a covered platform of such provider.
Federal preemption is normal for interstate commerce regulation — it’s correct for this kind of thing to be regulated at the federal level rather than fragment into many unique compliance regimes — but KIDS would be a bad law preempting good ones. Better age assurance laws, like the attestation protocol established by California and Colorado, count as “any law… that requires… a provider of a covered platform to use technology verification measures to prevent minors from accessing [material]”, and would be struck down by this clause, replacing good law with bad.
KOSA
In reviewing KOSA, I made a note that it has “lots of normal stuff”, more than I was expecting. Among the normal stuff are some inoffensive clauses requiring parental control tools which essentially describe technology already widespread today:
(1) Tools A covered platform shall provide readily accessible and easy-to-use parental tools for parents to support a user that the platform knows is a minor with respect to the use of the platform by that user.
(2) Requirements The parental tools provided by a covered platform under paragraph (1) shall include—
(A) the ability to manage a minor’s privacy and account settings, including the safeguards and options established under subsection (a), in a manner that allows parents to—
(i) view the privacy and account settings; and
(ii) in the case of a user that the platform knows is a child, change and control the privacy and account settings;
(B) the ability to restrict purchases and financial transactions by the minor, where applicable; and
(C) the ability to view metrics of total time spent on the covered platform and restrict time spent on the covered platform by the minor.
The KOSA draft works very hard to keep the exact language of age verification out of the bill, but it’s still designed to encourage sites to implement identity verification, and it places the burden of accurately determining this data on services, not parents.
The real problem is the push to expand the definition of what constitutes “reasonable effort” under COPPA to include verification (not attestation), using accurate commercial tools rather than prioritizing privacy or freedom of speech:
(B)Reasonable effort
A covered platform shall be deemed to have satisfied the requirement described in subparagraph (A) if the covered platform is in compliance with the requirements of the Children’s Online Privacy Protection Act of 1998 (15 U.S.C. 6501 et seq.) to use reasonable efforts (taking into consideration available technology) to provide a parent with the information described in subparagraph (A) and to obtain verifiable consent as required.
From the EFF’s research summary,
Jason Kelley, The Kids Online Safety Act is Still A Huge Danger to Our Rights Online
…there is essentially no outcome where sites don’t implement age verification. There’s no way for platforms to block nebulous categories of content for minors without explicitly requiring age verification. If a 16-year-old user truthfully identifies herself, the law will hold platforms liable, unless they filter and block content. If a 16-year-old user identifies herself as an adult, and the platform does not use age verification, then it will still be held liable, because it should have “reasonably known” the user’s age.
A platform could, alternatively, skip age verification and simply institute blocking and filtering of certain types of content for all users regardless of age—which would be a terrible blow for speech online for everyone. So despite these bandaids on the bill, it still leaves platforms with no choices except to institute heavy-handed censorship and age verification requirements. These impacts would affect not just young people, but every user of the platform.
I spoke about the danger of the Attorney General’s power to selectively enforce attestation law on edge cases. Obscenity law is categorically worse. The Attorney General doesn’t just have the power to choose who to enforce the law on, they have the power to argue for what content they believe is obscene or objectionable.
Quoting the same EFF article again:
Jason Kelley, The Kids Online Safety Act is Still A Huge Danger to Our Rights Online
KOSA’s co-author, Sen. Blackburn of Tennessee, has referred to education about race discrimination as “dangerous for kids.” Many states have agreed, and recently moved to limit public education about the history of race, gender, and sexuality discrimination. If KOSA passes, platforms are likely to preemptively block conversations that discuss these topics, as well as discussions about substance use, suicide, and eating disorders. As we’ve written in our previous commentary on the bill, KOSA could result in loss of access to information that a majority of people would agree is not dangerous. Again, issues like substance abuse, eating disorders, and depression are complex societal issues, and there is not clear agreement on their causes or their solutions. To pick just one example: in some communities, safe injection sites are seen as part of a solution to substance abuse; in others, they are seen as part of the problem. Under KOSA, could a platform be sued for displaying content about them—or about needle exchanges, naloxone, or other harm reduction techniques?
…
The same issue exists on both sides of the political spectrum. KOSA is ambiguous enough that an Attorney General who wanted to censor content regarding gun ownership, or Christianity, could argue that it has harmful effects on young people.
COPPA 2.0
S. 836: Children and Teens’ Online Privacy Protection Act or “COPPA 2.0” expands COPPA with new invasive requirements:
…(iii) to obtain verifiable consent from a parent of a child or from a teen before using or disclosing personal information of the child or teen for any purpose that is a material change from the original purposes and disclosure practices specified to the parent of the child or the teen under clause (i);
This “verifiable consent” is a massive new requirement. The “verification” required here applies to all users — not just children — to confirm their age. Everyone would have to prove their adulthood, not just for adult material but for any material change in how their data is used.
This is the alternative to digital identification, it’s great, let’s do it
Age verification is very, very, very bad. Age attestation is better.
What makes them different isn’t just the text of a bill or a degree of enforcement, it’s their radically different purposes. Age verification requires platforms and services to antagonistically profile and restrict their users. Age attestation provides the device owner a tool to assert their own identity and make demands of the services they use.
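The root-of-trust difference can be made concrete with a sketch (all names hypothetical, reusing the illustrative bracket signal from earlier): under attestation, the service merely reads a signal the device administrator configured and never sees an identity; under verification, the service itself must collect and judge identity documents, and therefore must hold them:

```python
# Illustrative contrast, not a real API. Under attestation the service trusts
# a bracket the device administrator set; under verification the service
# collects identity data and becomes responsible for it.

def attestation_gate(request_headers: dict) -> bool:
    """Service-side check: read the OS-provided signal if one is present.
    An absent signal means no claim is being made, so nothing is restricted."""
    bracket = request_headers.get("Sec-CH-Age-Bracket")  # hypothetical header
    return bracket != "under-13"  # the service never learns who the user is

def verification_gate(user_submitted_id: dict, current_year: int) -> bool:
    """Service-side check under a verification mandate: the service now holds
    a date of birth (and typically a name and ID scan) for every user."""
    return current_year - user_submitted_id["dob_year"] >= 18
```

The asymmetry is the whole argument: the first function handles a single low-entropy string; the second requires a database of identity records that can leak, be subpoenaed, or be repurposed.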
Age attestation at the operating system level is a powerful tool that doesn’t just help parents, it protects the internet from a dangerous, sweeping censorship movement. The current age attestation proposals have their problems but they’re still categorical improvements over everything else out there.
When you see the movement for “user accounts set up with age brackets at the operating system level”, that’s not a euphemism for digital identity or backdoored chips, that is a ray of sunshine to a world falling into darkness. So long as it stays limited we need to push for it. It’s the good one.
We cannot (and will not) win the “the internet should be an unregulated anarchy” war. We can (and must!) win the “no personal identification needed to speak” war.