---
Title: A Hack is Not Enough
Status: published
Date: 2025-10-14
Category: cyber
Tags:
- enforcement
- services
- archival
- piracy
- user-agent
- encryption
- publication
- tech-culture
- design-patterns
- software-architecture
- antitrust
ad: Twisted Sister's "there ain't no way we'll lose it" political theory has not held up
---

Recently we've seen broad attempts to censor the internet. The UK's "Online Safety Act" [imposes sweeping restrictions on speech and expression](https://www.usermag.co/p/the-uks-censorship-catastrophe-is). It's disguised as a child safety measure, but its true purpose is (avowedly!) control over ["services that have a significant influence over public discourse"](https://archive.ph/2025.08.13-190800/https://www.thetimes.com/comment/columnists/article/online-safety-act-botched-2xk8xwlps). And [similar trends threaten the US](https://www.rollingstone.com/culture/culture-features/age-verification-legislation-united-states-online-safety-1235419895/), especially as lawmakers race to [categorize ever more speech as broadly harmful](https://www.404media.co/wyoming-and-south-dakota-age-verification-laws/).

A common response to these restrictions has been to dismiss them as unenforceable: that's not how the internet works, governments are foolish for thinking they can do this, and you can just use a VPN to get around crude attempts at content blocking.

But this "just use a workaround" dismissal is a dangerous, reductive mistake. Even if you can easily defeat an attempt to impose a restriction right now, you can't take that for granted.

## Dismissing technical restrictions as unenforceable

There is a tendency, especially among technically competent people, to use the ability to work around a requirement as an excuse to avoid dealing with it.

When there is a political push to enforce a particular pattern of behavior -- to discourage or ban something, or make something socially unacceptable -- there is an instinct for clever people with workarounds to respond with "you can just use my workaround". I see this a *lot*, in a lot of different forms:

- "Geographic restrictions don't matter, just use a VPN."
- "Media preservation by the industry doesn't matter, just use pirated copies."
- "The application removing this feature doesn't matter, just use this tool to do it for you."
- "Don't pay for this feature, you can just do it yourself for free.[^free]"
- "It's '[inevitable](https://www.vox.com/the-goods/22387601/smart-fridge-car-personal-ownership-internet-things)' that people will use their technology as they please regardless of the EULA."
- "Issues with digital ownership? [Doesn't affect me, I just pirate](https://www.reddit.com/r/Piracy/comments/suosam/i_havent_paid_for_a_3ds_game_since/)."

[^free]: The "don't pay" "just"s are the most dangerous ones, for obvious reasons.

::: aside tangent
You see this used as a framing device to introduce a "whatabout" even where it makes no sense. Someone makes a restoration of a discontinued Apple device, and somehow the headline is "[If you're upset that Apple has dashed your hopes of a 27-inch iMac with Apple Silicon, just remember: there's always modding.](https://x.com/arstechnica/status/1505049949448331264)" How is this a counterpoint? They're arguing you can always safely rely on the ability to modify Apple hardware? Consumers who want a missing product should be expected to fabricate their own? What are you talking about?!
:::

All of these, to one degree or another, trivialize some sort of imposition based on the presumed ability to circumvent it. Sometimes this is because it seems genuinely impossible for the imposition to stick, but sometimes it's a deflection used to avoid feeling helpless in the face of an issue that one can't directly control.

It's very tempting to dismiss restrictions as mechanically unenforceable. It feels great. It feels safe, feels powerful. How exciting, that the individual is technologically empowered to the point where skill and savvy can overcome unjust institutions! [I don't have to worry about atomic war if I just buy the right products.](https://flashbak.com/you-can-survive-atomic-fallout-a-mid-century-survival-catalog-17019/) This story isn't about a world that's hostile and terrifying, it's about me and my merit.

## Policy is often out of line with technical realities

I fully understand this temptation. I'm willing to be particularly harsh in condemning this mistake because it's one I tend toward myself. It's often the case that policy is out of line with technical realities; people often attempt to toothlessly impose restrictions that far exceed what they can actually enforce.

### Web document controls

You see this a lot on the internet. Someone decides their website needs to restrict the way visitors can use the page, and they implement this by politely forwarding the demand to your web browser, [except that's you](https://blog.giovanh.com/tag/user-agent/).

The architecture of the internet was designed to opinionatedly prioritize the end user.[^tracking] Browsers interpret web documents, but exactly what content a web browser displays and how it does so is up to the browser, not the server. Browsers offer websites many ways to give hints about how their controls should work, because the site is expected to be cooperating to create a good user experience. When sites try to abuse this communication channel to do something obnoxious, the user can (rightfully!) opt not to take the suggestion.

[^tracking]: Tracking is a notable exception to this, something I chalk up as having a lot to do with user touch points. People are immediately invested in how pages display and how they can interact with them, but tracking is usually invisible, which makes it harder to configure and control.
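
To make this concrete: a site's "no right-click" rule is typically just a script asking the browser to cancel the `contextmenu` event, and software running on the user's behalf is free to refuse the request. Here's a hypothetical toy sketch of that approach -- not the actual code of any particular extension -- runnable outside a browser because Node ships `EventTarget`:

```javascript
// Hypothetical userscript-style countermeasure: patch addEventListener
// (before the page's scripts run) so the site can no longer register
// handlers for the "contextmenu" event at all.
const realAdd = EventTarget.prototype.addEventListener;
EventTarget.prototype.addEventListener = function (type, listener, opts) {
  if (type === "contextmenu") return; // silently drop the site's hook
  return realAdd.call(this, type, listener, opts);
};

// A stand-in for the page: it *asks* for right-clicks to be cancelled.
const page = new EventTarget();
page.addEventListener("contextmenu", (event) => event.preventDefault());

// The user right-clicks. The event is cancelable, but the site's
// handler never got registered, so nothing prevents the default action.
const rightClick = new Event("contextmenu", { cancelable: true });
page.dispatchEvent(rightClick);
console.log(rightClick.defaultPrevented); // false: the menu opens normally
```

The same structure applies to paste and selection blocking: the site can only ask, and the user agent can decline on the user's behalf.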

Sometimes these wars have already been fought, and sensible mitigations for abuse are already built into most browsers. Pop-up blocking is an easy example here: all web browsers give users extensive control over when and why sites are allowed to open new windows because of the way this "feature" was historically abused.

But there are also many web nuisance behaviors that don't yet have common remedies built into the browser, but can still be addressed with extensions. Some websites abuse semantic hinting to prevent people from right-clicking at all in an attempt to deny users the ability to interact with the page, but extensions like [Enable right click](https://chrome.google.com/webstore/detail/enable-right-click/hhojmcideegachlhfgfdhailpfhgknjm?hl=en) easily circumvent this. Likewise [DontFuckWithPaste](https://github.com/jswanner/DontFuckWithPaste) tells the browser to ignore a site's restrictions and enable pasting text, and [Right Click Borescope](https://chromewebstore.google.com/detail/right-click-borescope/mmdokamaalplkfiddbkhpfjmkhecbcnh) lets you find and open images even if the option isn't in the context menu. And other generic programmable extensions like [Greasemonkey](https://www.greasespot.net) and [stylus](https://github.com/openstyles/stylus) make it easy for people to write their own tweaks to fix behaviors on specific websites.

On the web there are a lot of restrictions you can just hack around, and you will probably still be able to until the internet becomes something fundamentally different.

### Internet archival

Unenforceable restrictive demands are something I run into a lot in archival. Not just in a copyright "I control my content, come to me and see my advertisements" way, either; it's often about controlling how speech can be recorded and cited in the public record.

Quoting from the comments in Quora's `robots.txt` file:

> {: .cite}
> People share a lot of sensitive material on Quora - controversial political
> views, workplace gossip and compensation, and negative opinions held of
> companies. Over many years, as they change jobs or change their views, it is
> important that they can delete or anonymize their previously-written answers.
>
> We opt out of the wayback machine because inclusion would allow people to
> discover the identity of authors who had written sensitive answers publicly and
> later had made them anonymous, and because it would prevent authors from being
> able to remove their content from the internet if they change their mind about
> publishing it. ...
>
> Meanwhile, if you are looking for an older version of any content on Quora, we
> have full edit history tracked and accessible in product (with the exception of
> content that has been removed by the author). ...

Of course it's not true that omitting a rule in a `robots.txt` file "allows" people to reference old material. Everyone is able to do that already. It's understandable why both Quora and its users would want this, especially given Quora's culture of signing your full name and employment history to all your posts like it's LinkedIn. But sending people a `robots.txt` file isn't something magic you can use to control their behavior, it's a way of politely indicating how people ought to navigate. It's a system we use in a civilized society to be nice to each other.

And it's a good thing there's no effective restriction here, because what Quora wants to do is wrong. A historical archive of published material separate from a live "product" serves a specific purpose, but that purpose isn't to make Quora.com more money today. For Quora, that's apparently reason enough to make these sweeping, toothless demands to control the way people cite work published on their site. Thankfully, text file or no, there's nothing to stop anyone from making records of Quora posts.

[Here's one now.](./quora.html)

Archivists have long understood that the consent of the companies involved is not the deciding factor in whether material should be archived, and haven't been shy in expressing this.[^robots]

[^robots]: See [Mark Graham, "Robots.txt meant for search engines don't work well for web archives"](https://blog.archive.org/2017/04/17/robots-txt-meant-for-search-engines-dont-work-well-for-web-archives/), [Archiveteam, "Robots.txt"](https://wiki.archiveteam.org/index.php/Robots.txt)

## Social acceptability determines technical abilities

Absurd demands are often unenforceable. It's therefore very tempting to generalize this to the rule "*all* absurd demands are unenforceable."

There is a hacker mythology that the righteousness of the libertarian cause serves as an inevitable structural defense against overreach of institutional power. Moreover, the myth implies that technology creates a self-enforcing meritocracy of *systems*: that overreach will fail because it deserves to fail; that the ease of copying digital files means both that copying files is intrinsically right and that it will remain possible forever as the result of a natural law.

But this is false. While this is a noble cause, it can't be taken for granted as a conclusion, or else we fall into complacency. **Just because something is morally right doesn't mean that institutional power will never be able to kill it, and just because enforcing regulation would be technically difficult doesn't mean it won't be done.**

We sometimes like to think computing is an exception to this: since its structure has been so individualistic, it can feel as if it has a hard defense against regulation. But it is not the case that technical realities flow directly from the divine order, or that the demands of would-be tyrants are futile against an ineffable good. The moral arc of the universe does not bend towards righteousness, even when righteousness is cheap and easy.

Even when -- as with computing -- there is a "natural order of things", you can't rely on this as a hard rule; the structure the technology tends toward is not necessarily the same structure it gets shaped into. The user-empowering internet is unfortunately an outlier.

Relying on hacks and workarounds is a kind of "normalization of deviance": a reliance on an unrecommended or unsafe practice that becomes standard operating procedure. The deviation here isn't the workaround, it's the reliance on the workaround. Trusting in the ability to hack around a requirement introduces risk: the potential for the workaround to be prevented without any alternative. When we succeed in avoiding trouble we feel prideful and superior instead of understanding that we're still in constant danger.

It's true that you shouldn't *have* to fight a forever-war against corporations orders of magnitude more powerful than you are just to maintain your way of life. But even if you wanted to, you can't. As hard and narrow as the path may be, there's nothing keeping it from being completely destroyed.

In reality, technical abilities are very much determined by social acceptability and political structures, even when the underlying technical structure suggests they shouldn't be.[^code] While technical implementations carry tendencies[^tendencies] towards political structures, these can be overcome. The inviolable laws of physics encourage man not to fly, yet we do. But the laws of biology tell us there are no racial hierarchies, yet we construct them. It takes effort and resources to work against the structure technology tends towards, but when doing so allows powerful groups to consolidate and expand their power, they're often willing to expend those resources.

[^tendencies]: Artifacts have politics, etc.

[^code]: This is one of the main conclusions of [Code 2.0](https://lessig.org/product/codev2/): that computers can be **regulable**. And I argue that they indeed now are.

> **Lawrence Lessig, "Code 2.0"**{: .cite}
> The claim for cyberspace was not just that government would not regulate cyberspace—it was that government *could not* regulate cyberspace. Cyberspace was, by nature, unavoidably free. Governments could threaten, but behavior could not be controlled; laws could be passed, but they would have no real effect. ...
>
> But what was never made clear in the midst of this celebration was why. Why was cyberspace incapable of regulation? What made it so?
> ...
>
> The original architecture of the Internet made regulation extremely difficult. But that original architecture can change. And there is all the evidence in the world that it is changing. Indeed, under the architecture that I believe will emerge, cyberspace will be the most regulable space humans have ever known. The "nature" of the Net might once have been its unregulability; that "nature" is about to flip.

But I didn't realize how closely connected Code 2.0 and this article were until it was already almost finished. Whoops!

## Hard restrictions that already exist

People point out that trying to ban specific technologies (like encryption) is as impossible as regulating math. But people who care more about their constructed imposition than anything else keep trying to do it anyway. The question is how much people *want* to enforce it, and what all they'll do in order to try. Unfortunately, it sometimes works.

Things *can* be banned outright using technical measures, even if the history of the technology suggests people "shouldn't be able to." And this technical enforcement can be much stronger than traditional attempts at regulating behavior. The policeman is as strong as meat; encryption is as strong as math.

Sometimes this is obvious because the restriction is already present and effective.

### HDCP

Let's start with cables. Video cables exist to pipe video from one place to another.

Whether analog or digital, the video information is *necessarily* sent in a standard format from a source to a destination, which has to be able to decode the image in order to display it. The information needed to display video is, by definition, the same information you need to create a copy.

This is how VCRs worked. The recorder sits between the source and destination, forwarding the signal on like the cable does, but also making a copy of it. Modern-day capture cards work the same way: they act like a cable in that they receive and transmit a video signal, but they also process the signal while they have it.

Obviously media companies don't like that people have this ability (and *really* didn't like being surprised by VHS), but this seems like a hard limitation. If you're trusting users to pipe data around, they're going to have access to that data. Like the "[analog hole](https://en.wikipedia.org/wiki/Analog_hole)", if you're sending people video, they have the video. It doesn't matter how much money people throw at the problem; it's just a technical reality that you can't control the signal.

But then we got High-bandwidth Digital Content Protection, which is designed to do just that. HDCP is a content protection protocol that encrypts video signals in order to prevent man-in-the-middle processing like capture cards. A device (or individual applications within a device) can choose to output HDCP-protected encrypted video instead of a freely decodable signal. Since HDCP signals are encrypted at the source, any receiver needs the decryption key in order to handle the signal.

The only way to legally decrypt HDCP is to license the right from Intel.[^intel-license] In addition to a fee (which Intel reserves the right to set and raise), the HDCP license requires that devices meet standards designed to protect copyright holders from the device's users.
Compliant devices must be designed so that they cannot copy signals, must always re-encrypt any HDCP signals they output, and must generally "effectively frustrate attempts to defeat the content protection requirements of the HDCP Specification"[^hdcp-frustrate]. Devices like capture cards obviously can't obtain these licenses, and so [fail to capture HDCP signals](https://help.elgato.com/hc/en-us/articles/360040482032-HDCP-and-Elgato-Game-Capture-devices).

[^intel-license]: More specifically from Digital-CP, an Intel subsidiary. See

[^hdcp-frustrate]:
    > [HDCP License Agreement (2007 revision)](https://web.archive.org/web/20090419204233/http://www.digital-cp.com/files/static_page_files/D6724AFD-9B02-A253-D8D2FE5B1A10F7F7/HDCP_License_Agreement_082207.pdf){: .cite}
    > **ROBUSTNESS RULES**
    >
    > 1 **Construction.** Licensed Products as shipped shall comply with the Compliance Rules and shall be designed and manufactured in a manner that is clearly designed to effectively frustrate attempts to modify such Licensed Products to defeat the content protection requirements of the HDCP Specification and the Compliance Rules.
    >
    > 1.1 **Functions Defeating the HDCP Specification.** Licensed Products shall not include:
    > (a) switches, buttons, jumpers, or software equivalents thereof;
    > (b) specific traces that can be cut; or
    > (c) functions (including service menus and remote-control functions);
    > in each case, by which the content protection requirements of the HDCP Specification or the Compliance Rules can be defeated or by which Decrypted HDCP Content can be exposed to unauthorized interception, re-distribution or copying.
    >
    > 1.2 **Keep Secrets.** Licensed Products shall be designed and manufactured in a manner that is clearly intended to effectively frustrate attempts to discover or reveal Device Keys or other Highly Confidential Information.

If you try to output video through a non-HDCP connection (like analog video), the source can choose to intentionally downgrade the video or refuse to play content outright. Even HDCP devices that read antiquated and broken DRM formats like CSS are required to output the signal using HDCP.

Having to go through this system is demonstrably worse than the alternative. No one would make the informed choice to use it, and it would be outcompeted in a heartbeat. Except you use it anyway. The HDMI[^hdmi] specification is [designed around HDCP support.](https://ia903403.us.archive.org/8/items/manualzz-id-1203779/1203779.pdf) The wires themselves are designed to police the user. As ludicrous as it sounds, that's the only way to do it, so that's what [they](https://hdmiforum.org/members/) made happen. The most widely used video cable is designed to prioritize maximizing profit for media companies first, and only reliably communicates data when that doesn't get in the way. And it's not just copyright enforcement, because copyright comes with exceptions that the cables won't honor.

[^hdmi]: And DisplayPort too, yes. Also DVI.

Corporations have been trying to keep video from being video since it was first digitized. If any of this is news to you, it's probably because of the main thing HDCP has going for it: it mostly works. The big danger of any DRM system is the risk of denying authorized users access to material they're entitled to, a trap HDCP mostly avoids. Applications are usually conservative about using HDCP at all, so unless you're trying to record something you shouldn't, it usually "just works" in the background.[^hdcpps]

[^hdcpps]: With some notable exceptions. I'm reminded of PlayStation applying HDCP as a blanket measure to all video output. See [PS4 HDCP must be off to record gameplay but on to watch video apps](https://www.kitguru.net/gaming/console-desktop-pc/matthew-wilson/ps4-hdcp-must-be-off-to-record-gameplay-but-on-to-watch-video-apps/), etc.

It takes a *lot* for this kind of system to exist. It needs strong encryption, needs widespread international industry buy-in, and needs full use of an aggressive legal infrastructure to enforce the license. But it has all of that, and so here we are, "technical reality" be damned.

### iOS

Meaningful restrictions can also be sprung on people without a generational technology upgrade or standards consortium. As a long-time jailbroken iPhone user, I have seen and felt this crunch happen with iOS, which has all the same problems as any locked-down platform.

Up through ~2012, the ability to jailbreak your iPhone could be taken for granted. It took a little planning, but you could reliably load your own OS software and even downgrade a device. Apple's security -- security to protect their device against the user -- became much better in the following years. Jailbreaks quickly became rare, and when they worked, they were worse. True untethered jailbreaks fully died out in 2016 with iOS 9. Looking at [the table](https://theapplewiki.com/wiki/Jailbreak), it looks like no device released after 2018 can be reliably jailbroken.

The window used to be open. If you needed something Apple refused to allow, you could just jailbreak your device. Then that window closed, and you couldn't anymore. This was never a *dependable* institution, as its failure shows. Apple never honored a right; jailbreaking was just a reality you could rely on until you couldn't anymore.

Workarounds fail, especially when people care to attack them. Jailbreaking wasn't done with a switch, not done with any right, not done with any piece of hardware guaranteed to exist. It was done with exploits. When people do manage to mod modern consoles or dump games, they're not using a mechanism necessarily available to them or anything they're legally entitled to. They're exploiting hardware and software vulnerabilities: errors the vendors missed.

And these exploitable errors are a rare commodity, made all the rarer by manufacturers willing to pay to get them first. For most locked-down devices, security is getting better every generation. Software and hardware updates both close security holes. Many updates don't fix bugs or add features, but instead close "vulnerabilities" in order to actively keep functionality out of the hands of users. The existence and health of any "modding" scene cannot be taken as a given, especially when the manufacturer is working to prevent it. It's playing pool against all the smarts money can buy, and they go first. You're not guaranteed a turn at all.

### Console Modding

Video game consoles are, of course, computers. Unlike personal computers, game consoles are specifically designed to treat their user as a threat. Like iPhones, they're locked down. They're designed to be more concerned with being cops than with being computers. Computer security is about protecting users from potentially unwanted software behavior. Console security is about protecting software from potentially unwanted user behavior.

This is a hobbyhorse of mine. I cannot stand how quickly people accept the ecosystem of locked-down machines just because one of the effects of that security is discouraging piracy. I've talked about my disdain for the alienation of "homebrew" before:

> [How Nintendo Misuses Copyright](https://blog.giovanh.com/blog/2023/11/21/how-nintendo-misuses-copyright/){: .cite}
> "Homebrew" is a weird category. The word "homebrew" is used to box off normal software that isn't provided by the manufacturer themselves into its own special category, instead of treating it as the default state that it is. It's like if "cooking" only ever meant restaurant meals, and "home cooking" was treated as a frowned-upon edge case.
>
> The only reason someone who wanted to develop homebrew games for a computer they own would care about console *security* is if something had gone very wrong already.
>
> Which, of course, it had: Nintendo wants to lock developers into partnering with them contractually in order to be able to develop (crippling the hobby development scene), and locks down all the general-purpose computing functionality of their consoles so they only run Nintendo-approved code.
>
> Using the "homebrew" metaphor, being unable to run "homebrew" without authorization from the manufacturer is like being mandated to buy bottled tea and being physically prevented from taking tea leaves and brewing your own tea at home.
>
> It's letting manufacturers wield an *unconscionable* level of control, especially for a category of tech that's not just an entertainment product, but the means of production for an entire entertainment economy.

I won't get into the history of game console hacking here, but here's the short version. Computers used to be simpler, but the rollout of cheap, strong encryption has been very effective at locking things down, even on hardware you've purchased and have complete access to. The more powerful computers get, the more resources they have available to spend policing the user instead of working as desired.

::: aside tangent
The Steam Deck is a game console that doesn't hide the fact that it's a computer. You can just run software on it without having to defeat any encryption, without having to license any special right to use your own device. [People treat this like something special](https://www.gamingonlinux.com/2022/03/valve-open-sources-steamos-devkit-client-for-steam-deck/) -- which it is -- but I can't get over the fact that people generally accept the restrictions in the first place.
:::

Because of the incredible importance designers place on preventing piracy, every facet of a modern console is encrypted and tamper-resistant, from the operating system to the bootloader to the graphics card.
Every step of the way is designed to tie the functionality of the device to its ability to cryptographically verify the identity and entitlements of the user. Since identities are centrally managed, policy enforcement can easily cross scopes, because companies are free to set up any logic they want. Cheat detection from one game can ban you from others, a console suspected of piracy can be bricked entirely, and licenses for other games you purchased separately could be revoked too.

And piracy itself serves as a kludge to work around larger problems. Piracy currently acts as a relief valve[^piracy-relief] for the [game industry's utter failure at responsible preservation.](https://gamehistory.org/dmca-2024-statement/) But advocacy around the issue of preservation and historical accessibility is hindered by the fact that the success of game piracy[^nin-emu] has resulted in extensive preservation libraries despite the industry's attempts to prevent this.

[^piracy-relief]: You can generalize this: piracy acts as a relief valve for a lot. For instance, the ability to pirate Adobe software, limited as it is, is the only thing keeping Adobe from effectively trying to do the harm it wants to do to the entire creative industry and thus being the actual devil.

[^nin-emu]: The ease of Nintendo Switch emulation in particular is a true aberration: it's extremely rare that effective emulation software exists concurrently with official support for hardware.

Game preservation -- something we depend on as a matter of cultural and historical record -- currently depends on piracy, an underground institution facing meaningful attacks. The harm that policy failures -- both corporate and governmental -- do to preservation is covered up by their own inefficacy. But at the same time, game companies are aggressively working to stamp out that ability, with both technical and legal controls.

Shrugging off the preservation issue now creates the risk that game companies will get their way, effectively mitigate piracy, and doom preservation in the process. This is not in any way limited to game consoles. The assumption that piracy will always exist as a matter of natural law is wishful thinking at best and delusion at worst.

This is frustrating within the context of gaming, but it has a much larger implication: there is no technical limitation preventing any other system from being secured in the same way. Mobile devices and personal computers can be designed to be incapable of running software that isn't authorized by corporate or governmental authorities. Devices can be designed so the most basic functions -- including the ability to start up at all -- require identity verification. This is the promise of "zero-trust" architecture: every individual action can be linked to authentication and authorization, so central management systems are able to police behavior at an extremely granular level. The technical ability is already there, and every power structure is incentivized to use it to seize as much power as it can get away with.

## Friction and marginalization

Unfortunately, we can see the same problems of effective restriction even when workarounds *aren't* effectively prevented. There's a shortcut: marginalization via design friction. Design -- even "soft" design -- matters.

There's an infamous story: when Drew Houston pitched Dropbox as a startup in 2007, a commenter replied skeptically:

> [BrandonM:](https://news.ycombinator.com/item?id=8863){: .cite}
> ...For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.

This is a classic example of the structured way engineers think about the world: analyze something by deconstructing it into its functional components. But here it clearly misses the point. You can understand a system in terms of strict functional components, but that perspective doesn't capture the totality of the system itself. The reality of the thing is not only the function, it is also the construction. Dropbox is worth $2.5 billion not because it invented the idea of network storage, but because of its design and usability. "What it does" isn't just connecting to a fileserver, it's enabling **patterns of human behavior.**

### Soft Friction

But just as "soft" design can serve as enablement, it can also serve as discouragement. This makes it an important tool in our conversation about controlling behavior: before behavior is banned, it can be made non-trivial and discouraged via design. And even though it's "soft" enforcement, **this friction matters**.

Interfaces can be designed to encourage and discourage specific behavior. This can be done through structure and organization -- adding steps, hiding menus, etc. -- but it can also be done through pure visual language. In design we call this "affordance": the visual language used in the design of a thing to suggest how it can and should be used.

::: aside tangent
Prioritizing and deprioritizing actions is not inherently sinister or manipulative; interface design is meant to be a collaborative affair. Not all restrictions are an imposition; you want a UI that lets you do tasks well. If there's something you don't care about, don't want to do, and never want to do, [including the controls takes space and draws attention away from your actual end.](https://designdevelopdiscuss.wordpress.com/2014/02/25/flexibility-usability-tradeoff/) The problem is when a design choice is made in order to impose a restriction. It's the eternal question: "What does it do and who is it for?"
:::
There are already whole fields where what *should* be trivial is made legitimately difficult, just by platform-controlled UIs. Alternate clients can do it easily. Minor modifications make it simple. There's no technical (or even legal) reason it shouldn't be trivial. But it's been successfully made non-trivial in practice. #### Copying Text Let's start by looking at one of the most basic ways people interact with computers: copying text. This is kind of a toy problem: no big clash of political interests yet, just a demonstration of how effective design can be at controlling behavior. Copying text is one of the most basic ways people are expected to interact with web documents, and websites display text by sending it to your computer, where it should be completely available to the user. But simple design choices mean it's often not. I already listed some ways websites can try to prevent a user from interacting with documents -- clicking, selecting, pasting, opening menus, etc. -- and extensions that easily bypass them. But of course most people don't know those extensions exist at all. Of the people that do, many won't bother installing them. And even then, that only works because browsers are programmable: people can code behavior and share it as an extension. But this is often not the case! Extension support barely exists on mobile platforms, even for third-party browsers. And people often use computers that are *not* programmable -- organization-managed or kiosk-style devices (like school Chromebooks) are usually locked down to prevent modification, including installing extensions. The net result of this is that websites can, in fact, often block text copying. Even though workarounds exist (and, as long as the internet works the way it does now, always will), websites still try to get in the user's way *and the websites usually win.* The removal of an affordance to discourage specific behavior isn't necessarily exploitative.
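For a sense of how cheap that copy-blocking friction is to create, here's a minimal sketch (hypothetical markup, not any particular site's code) of both halves of the situation: the "protection" is a line of CSS and an inline handler, and the text it guards is still sitting in plain form in the response the server already sent.

```javascript
// Hypothetical page markup: one CSS rule blocks selecting the text, and one
// inline handler blocks the copy event. That's the entire "protection".
const page = `
  <style>p.protected { user-select: none; }</style>
  <p class="protected" oncopy="return false">The protected article text.</p>
`;

// But the browser can only render what the server sent, and the server sent
// the text in plain form. Stripping the markup back out recovers everything,
// which is essentially what reader modes and copy-unblocking extensions do.
const recovered = page
  .replace(/<style>[\s\S]*?<\/style>/, "") // drop the embedded stylesheet
  .replace(/<[^>]+>/g, "")                 // drop the remaining tags
  .trim();

console.log(recovered); // "The protected article text."
```

The asymmetry is the point: the block costs the site two lines, while undoing it requires the user to know such tooling exists at all.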
Consider password fields: input elements that don't show what you type into them. Here's one now: /// html | input[type="password"] /// [^browser-design]: This does seem to be a browser design choice. The [html specification](https://html.spec.whatwg.org/multipage/input.html) says "The user agent should obscure the value so that people other than the user cannot see it", which describes the asterisk behavior but doesn't speak directly to the user's ability to exfiltrate data. For further research: I wonder if there are Chromium discussions about this design choice? Copying text out of password fields is almost always disabled not by the website but as a design choice made by the browser.[^browser-design] The value in the field is text the *user* entered and is completely available for programmatic use, but desktop[^mobile-password] browsers usually prevent actual humans from doing anything but entering text. This choice is primarily meant to prevent attackers with physical access from extracting plain-text passwords from a browser's autofill feature, which puts the text directly into the field. If you want to recover a password *that's already in the field*, you can't. Well, you can: open developer tools. You can grab the `value` from the console. You can change the input type from `password` and the text is readily available. Once the information has been sent to the page, it's available to the client. Getting the text is completely technically possible, but the design discourages people from doing it and so it usually doesn't happen. Because of the way the interface is designed, most people don't consider that to be a tool available to them at all. [^mobile-password]: It's in style for *mobile* browsers to have a "show password" button. This is *arguably* off-spec, actually. You can extend this beyond web browsers and look at UI design.
There's text everywhere, and built into all of it are assumptions about what text should be copied and what shouldn't *which govern how people use it.* You can't copy text from buttons and labels not because of any technical limitation, but because of authorial intent. There's a lack of affordances there. You can "just work around this" with a tool like [PowerToys Text Extractor](https://learn.microsoft.com/en-us/windows/powertoys/text-extractor), a specialized screenshot OCR tool that lets you copy any text on the screen. ([It's kind of magic to watch.](https://www.youtube.com/watch?v=_7sbzMcwCiI)) The design pushes you one way, but with effort and authority you can go another. #### Removing video affordances A lack of user affordances when it comes to audio and video has become a consistent norm online. Right-click an image and by default there's an option to save it. Right-click a video and by default there isn't. Web videos normally prevent people from downloading them, even though browsers can do it natively just like they do images. And there's even less of an expectation for a video download action on mobile platforms. There's a whole story in how expectations for computer usage changed between the rollout of image transmission and the rollout of video transmission. Unlike images, by the time people had the internet speed and storage capacity to download video it was already considered suspicious to do so. It's not hard to see how this norm arises from a sort of mythical thinking about piracy. The lack of intentionally designed affordances for this has opened the door to various workarounds. The hacks most commonly take the form of downloader bots that live directly on sites like [Twitter](https://github.com/shalvah/DownloadThisVideo) and [Reddit](https://www.reddit.com/user/savevideobot/), locked in a constant churn of dodging bans and updating to handle new security measures. More on video in a bit.
#### Twitter blocking In October 2024 Twitter made a design change to *remove* a soft design restriction by changing the behavior of the block function. Prior to this change, blocking a user meant not only that they couldn't interact with your tweets, but that the blocked user wouldn't see you organically at all. No tweets in the feed, no profiles in search, nothing. Unless your Twitter account was fully private, this was a soft restriction. You wouldn't see their posts in your feed, and if you were linked directly to a tweet or profile Twitter would only show a placeholder. But only if you were logged in as someone that user had blocked. Profiles and tweets are still public, so if someone actually wanted to hear from you they could just look at your profile while logged out and see everything, blocked or not. Twitter, now "blaze your glory" 𝕏, reversed this: ::: thread ![XEng: Soon we'll be launching a change to how the block function works. - If your posts are set to public, accounts you have blocked will be able to view them, but they will not be able to engage (like, reply, repost, etc.).](https://twitter.com/XEng/status/1846605254864888180) ![XEng: Today, block can be used by users to share and hide harmful or private information about those they've blocked. Users will be able to see if such behavior occurs with this update, allowing for greater transparency.](https://twitter.com/XEng/status/1846605255926010030) ![elonmusk: @nima_owji High time this happened. - The block function will block that account from engaging with, but not block seeing, public post.](https://twitter.com/elonmusk/status/1838285460440862861) For me it's easy to look at this as a security problem, do some quick mental math, and say "this is nothing." This just makes it slightly easier to do something you could already do. If you're publishing posts publicly anyone is able to see them just like before. If you're blocked by a user you can see their posts, but that was already the case. 
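That quick mental math can be written out as a toy model (hypothetical accounts, purely to illustrate the set logic):

```javascript
// Toy model of who can ultimately read a public account's posts.
// "mallory" has been blocked by the author; the profile itself is public.
const accounts = ["author", "friend", "mallory"];
const blockedByAuthor = new Set(["mallory"]);

// Old behavior: a blocked viewer saw a placeholder while logged in...
const visibleWhileLoggedIn = accounts.filter((a) => !blockedByAuthor.has(a));
// ...but could read everything anyway by logging out or switching accounts.
const couldReadBefore = new Set(accounts);

// New behavior: blocked viewers read directly; they still can't engage.
const couldReadAfter = new Set(accounts);

// The two access sets are identical -- only the friction changed.
const unchanged =
  couldReadBefore.size === couldReadAfter.size &&
  [...couldReadBefore].every((a) => couldReadAfter.has(a));
console.log(unchanged); // true
```
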
The sets are the same; there's no change in who is able to see what. Blocking people was never a way to hide yourself from someone, and arguably it gave people an unfounded sense of security. As Twitter staff noted in justifying the change, ["preventing an account from seeing your posts does not work in practice. Anyone with any intent can find out what you post by simply creating another account or logging out"](https://twitter.com/AqueelMiq/status/1692594263618396225). If anything, I'd tend to assume a change like this was a good thing. To someone like me whose guiding light is accessibility and ease-of-access, a change that makes it easier for people to do what they were already doing is categorically good. By default, trying to manipulate people by intentionally making it unnecessarily difficult for them to do what they're trying to do is wicked. Without extenuating circumstances, making it more onerous for people to do what they're doing is inflicting harm on them. But thinking about what sets of people are able to see what material misses the importance of design in influencing real behavior patterns. Blocking someone had never been hard-and-fast security that the blocked person couldn't see your posts, but it did affect how people actually acted by guiding organic interactions. The friction blocking added didn't matter to set-theory security, but it did *matter*. It mattered to the site's culture, its norms, its society. The design didn't add a meaningful layer of security, but it did directly impact how people actually interacted with each other in the normal course of business. Blocking went from encouraging disengagement to encouraging engagement. Instead of trying to defuse conflict by separating people from people they objected to, Twitter is baiting negative engagement by chumming the waters. This also denies people a way to set a soft boundary.
Now reading posts from an account which had blocked you isn't a workaround, it's not devious or sneaky behavior, it's using the platform as intended. There was a behavior -- in this case, often a negative behavior -- being effectively discouraged through design, even though that design didn't provide any meaningful security. ### Marginalizing the Holdouts The Twitter blocking example shows friction (formerly) being used toward largely prosocial ends. But this same kind of friction is often deployed in service of censorship and control. In the "friction" scenario the workaround exists, definitively; you can still fight the design. But the danger isn't just having to fight the design: **friction manipulates the Overton window.** If you're trying to impose a restriction, you need a social environment that will allow you to do that without blowback. But once you've already gotten people in the habit of acting within a restriction, it's easy to portray protesters as cheats and criminals once you ramp up the demands. The removal of design affordances is a form of systematic disenfranchisement. Design affects how people use the product and what motions they usually perform. Even if the product technically supports a feature, if the user doesn't use it they don't consider it part of their toolset. Once people aren't in the habit of making the motion -- once they don't consider something to be an ability of theirs that can be taken away -- that's nearly the whole game won. Sometimes bans are sweeping and broadly effective from the start. Going back to our original censorship/VPN example, [Michigan H.B.
4938](https://www.documentcloud.org/documents/26098568-2025-hib-4938/) takes a shotgun approach by simultaneously attacking "pornographic material" while also preemptively demanding that "An internet service provider providing internet service in this state shall actively monitor and block known circumvention tools," "including virtual private networks, proxy servers, and encrypted tunneling methods to evade content restrictions." They are indeed already going after the workarounds. It sometimes seems like people trying to impose restrictions are playing catch-up, like someone one move behind in chess, trying to create sufficient enforcement structure only after making the threats. I think this diffusion of impact can actually be more dangerous, not just because of the Overton window shifting norms, but because **having workarounds makes the most important people in the conversation complacent.** The techies with functional workarounds are less likely to push back against a new requirement if it seems it can't be enforced against them. But once the requirement is in place, attacking the workarounds is mere enforcement of existing policy. Oppressive technical systems start with the marginalized: people who don't see the extent of the danger or who don't have the political standing to push back. Once the practice takes hold among the marginalized it metastasizes. Changing the norms and expectations weakens the position of the hold-outs who were able to ignore and dismiss the restriction before. Discouraging something means fewer people are in the habit of doing it, and eventually the people being inconvenienced are a small enough population that they don't have the political power to fight an effective ban. By the time it's being inflicted on everyone, you've lost the standing to complain. It's a form of **policy drift**, or the **ratchet effect**: the scope and effect of a policy changing over time with no formal expansion.
The proliferation of workarounds meant that competent people were excluded from the effects of the policy. But on paper the policy always applied to them too, even if it wasn't enforced. This allows an imposition to be rolled out in stages. After the written policy is normalized, increase security and enforcement until it includes everyone. Even though it's a complete reversal of the *effect* for some people, the action passes itself off as a change in scale, not a change in kind. ::: aside tangent I once bought an Amazon Fire tablet at a cheaper price because it came preloaded with ads. I figured I'd buy the cheap version for the hardware and replace the software with some generic OS. No dice! It's sealed hard. I get no sympathy for this, not because Amazon ought to control the hardware it sold me, but because Amazon manufactured an expectation that it would behave one way. And that expectation is enough that my inability to work around something I should be able to work around no longer registers as an argument that holds water. The Smartbrains are not exempt from society. Once a policy is being enforced on the 80% of people who can't fight or don't care, it's easy to categorize the deviants as cheats and criminals. All of a sudden it will be the case that you can't adjust, and by then there will be systems to prevent any complaining from changing that fact. Best-case scenario, the hackers are the last ones to fold, which means by the time it affects them everyone else has already given up. This is why I'm making the complaint I'm making. This is why it's so dangerous for the technically enfranchised to be falsely confident. The people who care enough and understand the thing won't raise as much of a fuss if they are given a way to work around the restriction. But this lack of pushback from experts is exactly what lets the shift happen.
The more normalized something is, the harder it is to take action against it, and the more discouraged something is, the easier it is to ban it and write off people who complain as fussy outliers. You can't let a workaround be the last bastion standing between you and harm, because it will fall. #### Sideloading Earlier I talked about iOS jailbreaking and the way Apple has deployed increasingly strict security campaigns against their users. The counterargument to the need for jailbreaking has always been competition: if you want an open phone ecosystem, use an Android. There can't be a problem because there are competing options. This has always been mealy-mouthed -- why should the existence of an alternative mean I can't use *my* phone? If both companies are competing to maximize profit, why should I expect a different one to treat me any better? Apple's locked-down approach to iOS was a problem for which Android acted as a workaround. The workaround acted like a pressure-release valve: the demand was onerous, but the people who objected the most had an outlet they could take without Apple actually accommodating their legitimate needs. This is the "clean your room" fallacy: the idea that the ability for a person to take some self-soothing action obligates them to do that instead of anyone ever addressing the root problem. It's masturbatory. But my whole point is that workarounds can't be trusted even when they do exist. And now the Android workaround is dead too, as [Google is going to block sideloaded apps too.](https://arstechnica.com/gadgets/2025/08/google-will-block-sideloading-of-unverified-android-apps-starting-next-year/) This is all extremely predictable behavior, exactly the kind antitrust law anticipates. A duopoly is no safeguard when both members pick the policy that gives them the most power. People always needed the right to own their own phones.
The fact that you could switch to a phone with [a different kind of malware](https://rewterz.com/threat-advisory/privacy-concerns-over-israeli-appcloud-on-galaxy-devices) never addressed the underlying problem; it only diffused the outrage. But people getting used to the restrictions of iOS shifted expectations enough that Google believes it's able to squeeze its customers in exactly the same way. ## Conflicts There are also places where an imposed restriction *hasn't* fully bitten in yet, where wars are actively being fought in a meaningful way now. ### YouTube forever war Most content platforms have this conflict: they're in the business of sending information to people so they can see it, but at the same time they desperately want to be in control of how people interact with them. Let's specifically look at YouTube, a video hosting service that badly wants to prevent people from downloading its videos. There are a number of reasons for this -- YouTube wants to keep users on its platform, track their behavior, serve them recommendations -- but the most obvious is advertising. If you've downloaded a real video file you can play it on any device and with any software you want. That means not going through their proprietary video player and not letting it serve you advertisements. Of course the user doesn't want advertisements and wouldn't choose to watch them, so YouTube's video hosting business depends on restricting users' access to video. That means blocking downloading and third-party clients to make sure their interface is a chokepoint. There are good reasons to download videos, just like there are good reasons to tape shows. But YouTube isn't in the business of making sure you're able to do things you have good reason to do. YouTube only wants to send video if it trusts the client to act in YouTube's best interest over the user's.
That means playing ads YouTube wants to play, collecting analytics YouTube wants to collect, pushing recommendations YouTube wants pushed. Anything to keep you hooked into a platform instead of treating YouTube as the infrastructure it is. This looks like another impossible problem for YouTube; sending users playable video means giving them everything they need to record it. YouTube is already sending you a copy of the video for you to play; it's the video cable problem all over again. But that hasn't stopped YouTube from really, really trying. There is a constant back-and-forth struggle as YouTube does what it can to break alternate clients and downloading tools. Cobalt, a web-based downloading tool, has faced [particularly harsh pushback](https://github.com/imputnet/cobalt/issues/1356) from YouTube, as the requests aren't made by individual clients, which has allowed YouTube to deny access to Cobalt's servers. Paid, professional downloading software that used to be able to easily download video [now has to piggyback off your personal YouTube account to work](https://dvdvideosoft.zendesk.com/hc/en-us/articles/21807947153693-Why-do-I-need-to-sign-in-to-YouTube-to-download), putting you at risk if Google ever decides to retaliate against unauthorized use. There was a significant escalation discovered just last week: [YouTube is changing its delivery format to prevent tools from extracting data, and extraction tools now have to do more computationally expensive work to parse the information.](https://github.com/yt-dlp/yt-dlp/issues/14404#issuecomment-3330980464) Google is working on various user attestation techniques like [BotGuard](https://github.com/LuanRT/BgUtils) and [Proof of Origin tokens](https://github.com/yt-dlp/yt-dlp/wiki/PO-Token-Guide) that all exist to prevent user behavior YouTube dislikes.
You can't just download the file YouTube is supposed to be sending you; you have to run a whole VM to solve the challenges they're throwing out to prove you're really watching their advertisements. And they've tested the waters in taking this even further. Back in March, YouTube ran a test where they'd [wrap all videos in proprietary DRM](https://github.com/yt-dlp/yt-dlp/issues/12563#issuecomment-2710353823), *including Creative Commons licensed videos where this DRM wrapping was explicitly prohibited.* YouTube can secure its content against users by requiring security upgrades on both the client and server (like HDCP's locks on both ends of the cable), but this comes at the cost of compatibility. They have to choose between maintaining compatibility with older insecure-but-functional protocols and making marginal security improvements. Unfortunately for everyone they often choose the latter, deeming the services they break "acceptable losses". If you have an old smart TV or DVD player that shipped with a YouTube application, it probably doesn't work anymore. When I first wrote that I was thinking about embedded systems, like old smart TVs that aren't getting updates. Then I remembered YouTube doesn't work *on my phone.* I have an iPhone 6s, a hardware model first released in 2015. It's my perfect device: it has a headphone jack and supports Touch ID and 3D Touch (the best feature). It's the newest device [Apple deems "obsolete."](https://support.apple.com/en-us/102772) I purchased it in 2020-something and it's running the latest version of iOS available for it: iOS 15, first released in 2021. I have the latest version of the YouTube app available, and it doesn't work because YouTube killed it. YouTube not only stopped supporting my version of the app, it fully cut off access and prevents it from functioning at all. ![](./youtube-versiongate.png) ![](./youtube-appstore.png) {: .side-by-side .align-top .size-s} Right now I'm sharing a YouTube Red account.
I am a paying customer of their service. Does that mean I get to actually use it? No. So when I talk about compatibility, that doesn't just mean the weird old embedded systems I was first picturing, it means anything short of the cutting edge. If you're living anything short of the disastrous yearly-phone-refresh hyperconsumer lifestyle, you're subject to anything breaking at any time. But it's needless! The problem is not that YouTube has completely changed the service it provides and extensive changes to the client would be required to support many new features. YouTube's service hasn't changed; they're just refusing to continue service to players that don't do enough to defend YouTube against undesired user behavior. The nuclear option is for YouTube to block you from watching (not downloading, just watching) videos at all, even through their maximally-supported channels. Despite this completely breaking the premise of their service, this is something they do regularly. If YouTube decides there's "suspicious" behavior coming from any device on your network (or if it thinks you're using a VPN), it can trip a flag that blocks public access to publicly posted videos. If your connection is flagged like this, any attempt to watch a video without being explicitly logged into your YouTube account will fail with the error "[Sign in to confirm you're not a bot](https://www.reddit.com/r/youtube/comments/1drdgyg/any_time_i_try_to_play_a_youtube_video_on_youtube/)", which breaks video embeds in third-party programs. This lasts as long as they feel like keeping you blocked, with no way for you to even *request* they restore normal service. This is another reality it seems impossible to get around: when push comes to shove, YouTube doesn't have to send you any video at all. There is no negotiation here; YouTube assigns itself unilateral authority. You've got no service-level agreement, no nothin'.
::: aside tangent YouTube downloading tools themselves have also drawn legal fire, although so far the tools have held out. The main example of this was [the fraudulent 2020 DMCA takedown request filed by the RIAA against the public repository for the "youtube-dl" downloading tool](https://www.eff.org/deeplinks/2020/11/github-reinstates-youtube-dl-after-riaas-abuse-dmca). GitHub initially complied with the demands and [only reversed course after significant public outcry](https://github.blog/news-insights/policy-news-and-insights/standing-up-for-developers-youtube-dl-is-back/). It's easy to see how -- depending on how the law and public sentiment swing -- downloading video from YouTube as a practice could be entirely stamped out someday. ### Microsoft accounts on Windows 11 Here's an example that just happened very recently: Microsoft removing "workarounds" that allow you to install Windows without a Microsoft account and an internet connection. This puts together everything I've been talking about: the introduction of a new requirement, manufactured consent, manipulation via design, and systematically increasing pressure against objection. First, don't be confused by the nomenclature here: this is *not* about software licensing, this is about social media. On new installations of Windows, Microsoft has been gradually rolling out a requirement for already-licensed Windows installations that new users link their Windows operating system user account to their web account. This is part of a larger shift of the Windows ecosystem: tying your identity to every facet of the user experience. Every action you take can be governed not just by the hardware and software you own, but by your immediate relationship with Microsoft and the entitlements they choose to allow you. The exact name and function of this account have evolved over time. In 2001 it was your MSN account if you used MSN Messenger. In 2005 it became your Windows Live account if you used Hotmail.
In 2012 it was renamed "Microsoft account", used for a bundle of services called Office 365, later renamed Microsoft 365 in 2022, and currently called... "[The Microsoft 365 Copilot app (formerly Office)](https://support.microsoft.com/en-us/office/the-microsoft-365-app-transition-to-the-microsoft-365-copilot-app-22eac811-08d6-4df3-92dd-77f193e354a5)". At some point Bing was involved? In parallel to all this it also serves as your Xbox Live account. It's the account you use for any Microsoft software-as-a-service products, and now you need an active account in good standing with Microsoft in order to use Windows at all. Whether Microsoft lets you use your computer or not hinges on the account you use to log into Minecraft. The technology to do this exists. In fact, one of the new hardware requirements for Windows 11 is the "Trusted Platform Module": a physical cryptography chip that enables even stronger device control and tamper-prevention. This is the [ring minus one](https://pluralistic.net/2022/01/30/ring-minus-one/) chip designed to give manufacturers -- rather than users -- control over what behavior is allowed on people's devices, in as granular a way as they care to enforce. There's a lot more to say about Microsoft shifting Windows from a software product you purchase to an ephemeral service, most of it bad. The entire campaign of digital tenancy is bad. Requiring an internet connection to install an operating system is bad. The Windows user experience turning into a vertically integrated nightmare is bad. Microsoft having the ability to deny you even the most basic access to a product you've purchased for policy reasons is bad. The policy reason itself -- extending corporate control over the entire operating environment -- is bad. Deanonymization and tying intimate usage information to data harvesting is bad. Digital feudalism is bad.
[Replacing real software with software-as-a-service is bad in general.](https://blog.giovanh.com/blog/2023/02/27/lies-damned-lies-and-subscriptions/) It's bad for software use to be defined as a forever-relationship users have with Microsoft tied to a web account Microsoft unilaterally controls. Depending on a Microsoft account to use Windows means being obligated to meet any new requirements Microsoft might add to their terms at any time for any reason. Add that to Microsoft storing your files in the cloud by default, an environment with automatic content scanning and faulty detection of criminality[^detection]… there's so, so much danger here. (In fact, [Microsoft is rolling out even more scanning right now, and they're already limiting users' ability to control it.](https://hardware.slashdot.org/story/25/10/11/0238213/microsofts-onedrive-begins-testing-face-recognizing-ai-for-photos-for-some-preview-users)) They've already proven they don't need this kind of invasive control to be wildly, wildly profitable. [Microsoft assigns itself virtually unlimited authority over Microsoft accounts](https://www.microsoft.com/en-us/servicesagreement/). This is the "not actually buying anything" case [I've complained about before](https://blog.giovanh.com/blog/2023/05/20/netflixs-big-double-dip/#casino-capitalism); no matter what you purchase, Microsoft insists it doesn't owe you anything. It's just providing whatever services it chooses to provide, which it does regardless of whether you pay them money or not. There are no products, no services, no software, no licenses. There are only casinos. [^detection]: Many such cases.
See past reporting: - [Joe Mullen, "The EARN IT Bill Is Back, Seeking To Scan Our Messages and Photos"](https://www.eff.org/deeplinks/2023/04/earn-it-bill-back-again-seeking-scan-our-messages-and-photos) - [Johana Bhuiyan, "Google refuses to reinstate man's account after he took medical images of son's groin"](https://www.theguardian.com/technology/2022/aug/22/google-csam-account-blocked) - [Guenni, "Microsoft's account suspensions and the OneDrive 'nude' photos"](https://borncity.com/win/2020/08/16/microsoft-kontensperrungen-und-die-onedrive-nacktfotos/) - [Andrew Brandt, "apparently #microsoft #Sharepoint now has the ability to scan inside of password-protected zip archives"](https://infosec.exchange/@threatresearch/110373860063222707) Including mine: [Client CSAM scanning: a disaster already (2021)](https://blog.giovanh.com/blog/2021/11/19/client-csam-scanning-a-disaster-already/) Also note unresolved cases with [Google](https://support.google.com/accounts/thread/233830896/banned-google-account?hl=en) and [Microsoft](https://learn.microsoft.com/en-us/answers/questions/5444502/my-microsoft-account-was-suspended-due-to-false-po) and similar. Incidents currently tend to involve CSAM, which is the most politically acceptable thing to tie automated scanning and zero-tolerance enforcement to. But this is a system designed to be expanded on, and we're already seeing data collection used to go after more behavior including [abortion](https://www.eff.org/deeplinks/2025/10/flock-safety-and-texas-sheriff-claimed-license-plate-search-was-missing-person-it), [immigration](https://www.independent.co.uk/news/world/americas/us-politics/texas-immigration-lawyer-deportation-ice-b2744026.html), and even [journalism](https://www.newyorker.com/news/the-lede/how-my-reporting-on-the-columbia-protests-led-to-my-deportation). The point is, this is bad. But right now what I really want to look at is the ways people work around things which are bad.
Which these are, and [which they've been doing.](https://www.ghacks.net/2020/11/16/dont-activate-the-lets-go-button-in-the-windows-10-settings-application/)

Microsoft has rolled this change out progressively, so most normal users have already been using Microsoft accounts for a while now, even on Windows 10. At first, tying your user account to Microsoft was presented as an [exciting new feature](https://www.ghacks.net/2020/11/16/dont-activate-the-lets-go-button-in-the-windows-10-settings-application). Then it became the default at setup, and the only way to circumvent it was to disconnect the machine from the internet entirely. But Microsoft tightened the noose further, and has finally made an internet connection and account registration a hard requirement for setup:

> [Tom Warren, "Microsoft is plugging more holes that let you use Windows 11 without an online account"](https://www.theverge.com/news/793579/microsoft-windows-11-local-account-bypass-workaround-changes){: .cite}
>
> Microsoft is cracking down on bypass methods that let Windows 11 installs use a local account, and avoid an internet requirement during the setup process. In a new [Windows 11 test build](https://blogs.windows.com/windows-insider/2025/10/06/announcing-windows-11-insider-preview-build-26220-6772-dev-channel/) released today, Microsoft says it's removing known workarounds for creating local accounts...
>
> ...
>
> "We are removing known mechanisms for creating a local account in the Windows Setup experience (OOBE)," says Amanda Langowski, the lead for the Windows Insider Program. "While these mechanisms were often used to bypass Microsoft account setup, they also inadvertently skip critical setup screens, potentially causing users to exit OOBE with a device that is not fully configured for use."
>
> The changes mean Windows 11 users will need to complete the OOBE screens with an internet connection and Microsoft account in future versions of the OS.
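For the record, the best-known of these workarounds was a script Microsoft itself shipped with Windows. The widely documented form looked something like this (exact behavior varies by build, and these are among the mechanisms now being removed, so treat this as a historical sketch rather than a working recipe):

```bat
:: From the OOBE setup screen, press Shift+F10 to open a command prompt, then:
oobe\bypassnro
:: The machine reboots and setup offers a "continue with limited setup" / local-account path.

:: After the bypassnro script was removed, the equivalent registry edit was:
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\OOBE /v BypassNRO /t REG_DWORD /d 1 /f
shutdown /r /t 0
```

Note that both forms rely on a switch Microsoft built and can (and did) take away, which is rather the point.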
In its eagerness to seize more control over the entire desktop experience, Microsoft sees the Windows setup as a process that, as one of its core requirements, "[ensures that all users exit setup with internet connectivity and a Microsoft Account.](https://blogs.windows.com/windows-insider/2025/03/28/announcing-windows-11-insider-preview-build-26200-5516-dev-channel/)" Creating a regular on-device user account is [now understood as an "exploit", a "loophole"](https://www.tweaktown.com/news/108130/microsoft-once-again-tightens-grip-on-windows-setup-freedom/index.html) to get around "correct procedure." And, under that framing, of course Microsoft is within its rights to "crack down" on people using unsupported techniques that leave their machine "[misconfigured](https://www.theverge.com/news/793579/microsoft-windows-11-local-account-bypass-workaround-changes)."

The remaining holdouts are competent technical users who understand the problems and are willing to go to the effort of working around the new demand. But as Microsoft gets stricter and stricter, the pool of holdouts gets smaller and smaller, and as the pool of holdouts shrinks they get more and more marginalized. "Privacy weirdos", people who care more about imagined danger than actually getting work done.

All of this is intentional, and so far the strategy is working. If this change had been rolled out in a way that required informed consent from users, opinion would've been split. Some people would've used it, some people wouldn't. Instead the people who objected were temporarily placated until the Overton window shifted and the rug could be pulled from under them.

When linking your Microsoft account was first announced, a lot of people assumed it would *have* to be optional at some level. It was too egregiously bad a practice to be delivered as a standard, let alone a requirement. Requiring the OS to phone home to complete setup was too outrageous, too out-of-line with how computers actually work.
Microsoft would never actually be able to dictate which computers could run, even though -- as the game industry did with its consoles, and Apple did with mobile devices -- it had constructed a technical and legal environment where doing that was possible. No matter what, a workaround would have to exist in some capacity. They couldn't *really* make it a requirement, surely.

But that's exactly what's happening!

![EmilyAYoung1: People kept saying "you can just work around it!" - I've always asked "for how long?" - The answer was precisely this long. - I'm sure there'll be another workaround, but again for how long and how many mods will you need to do? oobe\bypassnro was always a Microsoft-supplied fix.](https://twitter.com/EmilyAYoung1/status/1975676073808699652)

### NFTs fighting a social acceptability war

Let's go back to NFTs, a distinctly goofy technology. This isn't a war that's being fought anymore; NFTs squarely lost this one. But let's look at the conflict anyway.

First, the technical backbone. An NFT necessarily references the resource it identifies as a matter of public record, and in order to "display" an NFT, you have to be able to look up and show whatever it's referencing. Web browsers download the resource any time they "display" the NFT, and so anyone can in fact right-click and save it regardless of any "ownership information" present on a blockchain. This was used as a dismissal of NFT hype; the value proposition was effectively deflated by tools everyone already had for free.

There was a good bit of clowning as people learned for the first time how images worked, and even some feeble cries for new technical controls to protect the exclusivity of NFT assets against "right-clickers."

![](./nftavatar.png){: .size-s}

![DegenNFTS: Can you disable right click save? @mr52pickup: The NFT space needs this if we ever hope to survive](./degennfts.jpg){: .size-s}

But what if you really couldn't right-click? I don't think it's as far-fetched as it sounds.
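The lookup-and-display mechanism is worth making concrete. A minimal sketch of why "right-click save" is inherent to displaying an NFT (the metadata, content hash, and gateway URL here are hypothetical examples, not any real token):

```python
# An NFT's on-chain record only stores a pointer to its content, typically an
# ipfs:// URI returned by the token contract's tokenURI() function. Any client
# that *displays* the image must first resolve and download it -- and anything
# downloaded can be saved.

def resolve_uri(uri: str, gateway: str = "https://ipfs.io/ipfs/") -> str:
    """Turn an ipfs:// content URI into a plain HTTP URL any browser can fetch."""
    if uri.startswith("ipfs://"):
        return gateway + uri[len("ipfs://"):]
    return uri  # already an ordinary HTTP(S) URL

# A typical ERC-721-style metadata document (hypothetical values):
metadata = {
    "name": "Example NFT #1",
    "image": "ipfs://QmExampleHash/1.png",
}

image_url = resolve_uri(metadata["image"])
# Fetching image_url (and saving the bytes) is just an ordinary HTTP GET --
# exactly what the browser already does in order to show the image at all.
```

There's no enforcement layer anywhere in this path: the "ownership" record and the content delivery are completely decoupled, which is why a blockchain entry can't stop a right-click.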
Let's imagine a world where Twitter had [followed through with integrating NFT display](https://x.com/XDevelopers/status/1585707921433923585) into the main platform, like they made such a fuss of doing. People would link NFTs to their Twitter account as a form of wealth display, and you would interact with the display side of the system right in the Twitter app.

![wongmjane: Twitter is working on Collectibles profile tab, NFT view and NFT details view https://t.co/BrqPyvaLOp](https://twitter.com/wongmjane/status/1452373149689909248)

There's no right-clicking here. On a mobile app you can only save whichever elements the app offers you the ability to save. Actual user agency is already heavily regulated in ways that are designed to hijack end-user devices to further business interests at user expense. [Screenshots can even be selectively disabled](https://screenshieldkit.com) to hide content the app deems sensitive.

But doesn't this break down with the Twitter website? They haven't taken the operating system hostage there yet, and people would still be able to inspect pages and save resources, including NFT content. In this case Twitter could treat web browsers the same way YouTube treats "insecure" clients. It could deny access to "sensitive" resources like NFTs on the web client -- displaying some downscaled preview if not blocking access entirely -- with an upsell to use the official app instead, [as so many sites already do](https://discussions.apple.com/thread/253455157?sortBy=rank). And even if they did keep some form of web client for this view, it's not true that a web interface in any way "cracks open the system". Even today, on the web view, you can't save videos without special tooling.

But that kind of technical enforcement is not what happened. Instead, Twitter [quietly rolled back its brief foray into NFT support in 2024.](https://techcrunch.com/2024/01/10/x-removes-support-for-nft-profile-pictures/) It wasn't stopped because it was impossible; it was stopped because the NFT "movement" just didn't have the juice. This wasn't the entire electronics industry moving to support the entertainment industry; this was Elon Musk grifting the world's dumbest investors, backed by the spare cycles of a [skeleton crew](https://www.nytimes.com/2022/11/04/technology/elon-musk-twitter-layoffs.html). They barely put together a functional webpage, let alone effective technical restrictions.

![nervouswaffle: I think the funniest thing about this shitty Twitter NFT integration is that it still gives you the option to save the image. Great job, dipfucks. - #NewNFTProfilePic https://t.co/hQfRf8cTDo](https://twitter.com/nervouswaffle/status/1484347461711790082)

In order to effectively institute a restriction like this, you also need to manufacture a social consensus. This is why there was an intentional campaign against the social acceptability of refusing to dignify the society NFTs aimed to create. There was a miniature culture war over the social acceptability of NFTs and the social structures their legitimacy would require, but like everything having to do with NFTs it was clumsy and heavy-handed. NFT people tried to turn having their "property" right-click-saved into a form of victimhood. They tried to leverage mockery against their critics, [accusing people of just "not getting it" and generally being lesser people for rejecting their vision.](https://www.vice.com/en/article/5dgzed/what-the-hell-is-right-clicker-mentality) You even saw NFT people [trying to frame right-clicking as a form of retaliatory violence.](https://twitter.com/CoinersTakingLs/status/1458139621242523649)

The goal here is to deny social acceptability to behavior that breaks the business model. But it failed: there was a refusal to dignify the attempt to impose a social structure.
Their demands weren't humored, and the technical enforcement they would have required wasn't built.

### Piracy?

This obviously seems goofy. Of course legitimizing NFTs as a meaningful form of "ownership" never could have worked; it's simply too stupid. Why would people accept the introduction of artificial scarcity into a market that didn't need it? Who would care this much about ownership of an idea? And what society would allow the imposition of an artificial social structure that benefits some at the expense of others?

This happens all the time, of course. Structurally, this isn't really that different from the delegitimization of media copying as piracy, which more or less worked. The Intellectual Property people had better public relations, better propaganda, and better arguments. But they accomplished the same thing the NFT people wanted: social acceptability of an ownership status over non-rivalrous goods made artificially scarce via institutions.

And you know what the craziest part is? They were able to kill the social acceptability of right-clickers. And they had to really work for it.

![](./fast-hands.png)

![](./fast-this-disk-crop.png)

Importantly, when these advertisements were published, **they were false**. At the time in Europe, "theft" and "stolen goods" were specifically defined in criminal law, and those definitions didn't include file copying! None of the behavior described by these antipiracy ads involved depriving someone else of the use of their property. It was copyright infringement, sure, but it wasn't theft, or even piracy!

![](./nintendo-commercial-rape.png)

Nintendo famously called video game rental "commercial rape", but at least you could argue they were using a metaphor. The software industry was making a false equivalence: the assertion was that copyright infringement **is theft**, that they are literally the same thing. This was false, and the reason they had to lie and call one thing another was to manipulate public opinion.
It's arguable how effective this kind of messaging was. I doubt heavy-handed propaganda changed the minds of [the rotters picking up a tosh copy of Bloggo's Pow from a dodgy monger](./bloggos-pow.png), which was definitely real. But between public relations and legal pressure, most unapproved copying has been successfully branded as shameful piracy and relegated to the underground, with many techniques already dying out. We already live in the world where the ownership weirdos won out over right-clicker mentality. That's fully a thing that happened.

## Conclusion

It's easy to see an absurd demand and write it off as impossible to enforce, but the technical infeasibility of a restriction doesn't keep it from sticking. Social acceptability determines not just legality but day-to-day technical abilities in the long term. Hard restrictions preventing technically "natural" behavior are already in place and effective, and campaigns are paving the way for more right now.

What can and can't be effectively enforced depends on motivation and social acceptability. Policies can be implemented with minimal blowback by rolling restrictions out gradually, starting with people who are uninterested or otherwise disenfranchised. The holdouts can be coerced and forced later, once the policy has momentum and people have already acquiesced to it.

In a cruel irony, the ability to work around impositions makes the people most knowledgeable about the dangers, and most equipped to communicate them, complacent. Relying on hacks and workarounds can be a way of objecting to something without actually imposing pushback: a smug "well, we'll see about that", up until you do in fact see yourself lose.

And what we lose is our way of life. Apple killing support for my perfectly good Pebble smartwatch isn't something that shows up on my taxes or as a line item anywhere. But it does affect how I actually live my life every day, and that's the thing that matters. And it's extremely easy for it to be attacked.
We live in a world where a corporation changing its policy about its own products is materially destructive to people, with no right to remedy.

I've been talking about computers here, but this is just an expression of a much older truth. This is gradualism, solidarity, deontology. Just because something doesn't affect you now doesn't mean it won't later. Being able to avoid trouble at first is a shortcut to a false sense of security, and it allows you to ignore structural problems even as they grow, eager to consume you too. You're vulnerable when *anyone* is vulnerable. You're one of everyone!

## Related

::: container related-reading
- [Lawrence Lessig, "Code 2.0"](https://lessig.org/product/codev2/)
- [Defend VPNs from government bans!](https://www.defendvpns.com/)
- [Juliet Samuel, "Online Safety Act was botched from the start"](https://archive.ph/2025.08.13-190800/https://www.thetimes.com/comment/columnists/article/online-safety-act-botched-2xk8xwlps)
- [Ernie Smith, "Why We Won't Stop Trying To Filter The Internet"](https://tedium.co/2025/08/26/internet-content-filters-history/)
- [Samantha Cole, "Wyoming and South Dakota Age Verification Laws Could Include Huge Parts of the Internet"](https://www.404media.co/wyoming-and-south-dakota-age-verification-laws/)
- [Andy Baio, "Why You Should Never, Ever Use Quora"](https://waxy.org/2018/12/why-you-should-never-ever-use-quora/)
- [Ryan Whitwam, "Judge: Google can keep Chrome, must share search data with qualified competitors"](https://arstechnica.com/gadgets/2025/09/google-wont-have-to-sell-chrome-judge-rules/)
- [Analog Hole](https://en.wikipedia.org/wiki/Analog_hole)
- [Ken Fisher, "Privately, Hollywood admits DRM isn't about piracy" (2007)](https://arstechnica.com/tech-policy/2007/01/8616/)
- [Web3 is Going Just Great](https://www.web3isgoinggreat.com/?tech=nft)
- [smea, "Jailbreaking the 3DS Through 7 Years of Hardening" (video)](https://www.youtube.com/watch?v=WNUsKx2euFw)
- [Stop Killing Games](https://www.stopkillinggames.com)
- [Cory Doctorow, "The Shitty Tech Adoption Curve Has a Business Model"](https://pluralistic.net/2023/06/11/the-shitty-tech-adoption-curve-has-a-business-model/)
- [Apple Blocks Immigration-Tracking App From App Store](https://www.businessinsider.com/apple-iceblock-app-store-removed-2025-10)
- [Anthony Cuthbertson, "YouTube stops working for millions as war against ad blockers intensifies"](https://www.the-independent.com/tech/youtube-down-not-working-ad-blocker-b2552387.html)
- [Kim Cameron, "The Laws of Identity"](https://learn.microsoft.com/en-us/previous-versions/dotnet/articles/ms996456(v=msdn.10))
- [Sangiovanni, Andrea and Juri Viehoff, "Solidarity in Social and Political Philosophy"](https://plato.stanford.edu/archives/sum2023/entries/solidarity/)
- [Johnson, Robert and Adam Cureton, "Kant's Moral Philosophy"](https://plato.stanford.edu/archives/win2025/entries/kant-moral/)