On August 5, 2021, Apple presented their grand new Child Safety plan. They promised “expanded protections for children” by way of a new system of global phone surveillance, where every iPhone would constantly scan all your photos and sometimes forward them to local law enforcement if it identifies one as containing contraband. Yes, really.
August 5 was a Thursday. This wasn’t dumped on a Friday night in order to avoid scrutiny; this was published with fanfare. Apple really thought they had a great idea here and expected to be applauded for it. They really, really didn’t. There are almost too many reasons this is a terrible idea to count. But people still try things like this, so as much as I wish it were, my work is not done. God has cursed me for my hubris, et cetera. Let’s go all the way through this, yet again.
Thu Aug 05 22:16:28 +0000 2021: I am so deeply frustrated at how much we have to repeat these extremely basic principles because people just refuse to listen. Like, yes, we know. Everyone should know this by now. It’s mind-boggling. twitter.com/sarahjamielewi…
The architectural problem this is trying to solve
Believe it or not, Apple actually does address a real architectural issue here. Half-heartedly addressing one architectural problem of many doesn’t mean your product is good, or even remotely okay, but they do at least do it. Apple published a 14-page summary of the problem model (starting on page 5). It’s a good read if you’re interested in that kind of thing, but I’ll summarize it here.
Client-side CSAM detection is designed to detect CSAM (child sex-abuse material, aka child porn) in personal photo libraries and message attachments. It does this by comparing images to a large database of known CSAM (more on this later). If an image is identified as known CSAM, it is sent to human moderators for review, who can disable accounts and send evidence to law enforcement agencies.
The purpose of the “client-side” part of client-side scanning is that the scan and check are performed right on the phone (the client) without sending any private pictures to an internet-connected Apple server. It works like this: For each image, your phone generates a hash for that image with some sort of hashing algorithm. A hashing algorithm is a one-way function: given some input file, hashing it will always produce the same hash, on any device. However, you cannot reconstruct the original image from the hash. The only way to identify some query hash as belonging to an image is to have the image already, hash it, and compare that to the query.
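As a rough sketch of that flow (and only a sketch: Apple’s real system uses the perceptual NeuralHash algorithm and blinded on-device matching, while this stand-in uses an ordinary cryptographic hash, and every name and value in it is hypothetical):

```python
import hashlib

def hash_image(path: str) -> str:
    """One-way hash of the raw image bytes; the image cannot be recovered from the digest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Hypothetical on-device database of known-bad hashes, shipped with the OS.
KNOWN_BAD_HASHES = {
    "0f1e2d3c4b5a69788796a5b4c3d2e1f00112233445566778899aabbccddeeff",  # placeholder entry
}

def scan_photo(path: str) -> bool:
    """True if this photo's hash matches an entry in the shipped database."""
    return hash_image(path) in KNOWN_BAD_HASHES
```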
Apple themselves cannot legally generate the hash database, because they cannot legally possess CSAM. Only NCMEC (the National Center for Missing & Exploited Children, a government non-profit) can legally possess CSAM and therefore generate the databases of known CSAM to check against. As a defence against non-CSAM being inserted into the database, Apple generates its database by taking the intersection of CSAM databases from multiple countries. Only hashes present in all of those lists will set off alarms.
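In code terms, that defence is just a set intersection; a minimal sketch, with made-up list names and placeholder hashes:

```python
# Hashes reported by two hypothetical child-safety organizations in different jurisdictions.
list_a = {"hash_1", "hash_2", "hash_3"}
list_b = {"hash_2", "hash_3", "hash_4"}

# A hash slipped into only one list (say, by one government) never ships.
shipped_database = list_a & list_b
print(shipped_database)  # {'hash_2', 'hash_3'}
```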
There are two major problems with this. First, even with the protection of not sending Apple the content of your files, this system is still fundamentally really bad. Second, the system cannot work and the protections it tries to offer make people far, far more vulnerable to attack. More on both of these issues later.
This is really to sate governmental concerns
So, given that Apple prides itself on being a privacy-respecting company, why would they want to set up a system like this in the first place? Well, it makes a lot more sense if you look at this announcement in its political context.
iCloud Photos “automatically keeps every photo and video you take in iCloud, so you can access your library from any device, anytime you want”. Because your iCloud data is encrypted end-to-end, though, your pictures aren’t stored in a viewable format anywhere except your own devices. Apple can’t read your data, hackers can’t read your data, and the government can’t read your data. This is the security provided by end-to-end encryption.
But there is a fierce war between the law enforcement community and user privacy. The Department of Justice despises true encryption and routinely pushes for “lawful access” in an attempt to criminalize encryption that would prevent outside attackers (in this case, law enforcement officers, possibly with a warrant) from accessing a secure device.
There’s a reason it’s child abuse. It’s always child abuse. The rhetoric of child abuse is second only to “national security” in its potency in policy discussion. Child abuse is so thoroughly despised that it’s always the wedge used for some new method of policing. From US v. Coreas:
Child pornography is so repulsive a crime that those entrusted to root it out may, in their zeal, be tempted to bend or even break the rules. If they do so, however, they endanger the freedom of all of us.
All this (debunked) rhetoric of “think of the children” and “if you have nothing to hide you have nothing to fear” serves to manipulate public sentiment and institute law enforcement systems that presume guilt and lack due process.
So, when Facebook discussed using end-to-end encryption on their services in 2018, William “serial liar” Barr used child abuse to justify his open letter urging Facebook to leave its users vulnerable to scanning. Child safety quickly became one of DOJ’s favourite rhetorical devices. A full section of their lawful access page is dedicated to rhetoric about the dangers of online child exploitation. “If only you’d let us use your services as a panopticon surveillance network”, they say, “then we could keep the children safe.”
Apple doesn’t want to have its services turned into a panopticon surveillance network for law enforcement, but law enforcement keeps threatening to do so by force. The grotesquely-named “EARN IT Act” was a recent attempt to strong-arm platforms into disabling encryption or else lose Section 230 immunity, a basic legal protection any web platform needs to stay in business.
If Apple comes up with some alternative way to handle CSAM, that might defuse the DOJ’s favourite argument for seizing personal data. This is more than a defusal; it’s an olive branch to law enforcement. Apple gets a chance to show they’re making a legitimate effort to combat CSAM and let law enforcement prosecute peddlers.
The problem is that olive branches to law enforcement are futile, because law enforcement has an infinite appetite. No police force will ever have enough power to be satisfied. Apple is hoping this will make the DOJ drop their objections to encrypting user data, but this is folly. Once Apple can do client-side scanning, the DOJ can make them scan iMessage, apps, Facebook. This doesn’t sate anybody; it just opens the door for law enforcement to demand more power.
Replying to matthew_d_green: Thu Aug 12 17:56:47 +0000 2021: @hackermath @socrates1024 @mvaria @TheAaronSegal A lot of people in our field think they can negotiate some kind of deal with law enforcement. But we can’t, and many of the proposals I see people put forward don’t have good answers for what happens when law enforcement just renegotiates the technical countermeasure.
The slippery slope
It’s very easy to write an article about how a thing might be a slippery slope. “Oh, no”, the lazy writer pens, “this isn’t bad, but I imagine it might become bad later!” But that’s not what I’m doing here, because you don’t have to imagine anything. A gambling addiction isn’t a “slippery slope” to crippling debt; crippling debt is a result of a gambling addiction. There’s no fallacy here, only well-understood cause and effect. To say that understanding the consequences of this is jumping to conclusions is to say “I aimed my rifle and fired, but what the bullet will do, who can say”, and saying so should get anyone laughed out of polite society.
Apple’s threat model overview focuses on the threat of a covert attempt to add hashes to the DB (remember the intersecting-database defence), but neglects the possibility of overt attempts. Governments don’t have to use subterfuge to add hashes to a database; they have the law. There’s nothing here that would prevent Apple from substituting another hash database in a system update; in fact, Apple will have to update the database to keep up with new CSAM.
Replying to kurtopsahl: Fri Aug 13 19:40:26 +0000 2021: Another realistic threat is a gov't, overtly in a public law, requiring providers to scan for and report matches with their own database of censored hashes (political, moral, etc), on pain of fines and arrest of local employees. Apple says it will refuse.
Sat Aug 14 05:30:58 +0000 2021: In their new FAQ, Apple says they will refuse govt requests to use their CSAM-scanning tech to scan for other forms of content. How exactly will they refuse? Will they fight it in court? Will they pull out of the country entirely? This is not a time to get vague. twitter.com/kurtopsahl/sta…
If demanded to institute “government-mandated changes that degrade the privacy of users”, Apple says it will not, but that’s not nearly as strong a protection as saying it cannot. And, of course, Apple’s claim that it will refuse government demands to change its policy is ludicrous.
One just has to look at the despots’ playbooks in India and Nigeria to see why. In an effort to quell political unrest, India has attempted to clamp down on digital services and censor protest and organizing by instituting laws that require social media to delete any post the government requests. The removals demanded are frequently arbitrary and illegal.
Despite this, the Indian government has successfully roped Twitter and Facebook into censoring whatever sentiment the government wants to go away. Governments do this by demanding companies incorporate with a local presence (in Russia, too) and then threatening those local employees with violence if their demands aren’t met. The same happened with Google in Hong Kong, with Google making and breaking the same meaningless little pledge.
Apple isn’t shy about its commitment to adhere to all applicable laws everywhere it does business. Tim Cook himself recently confirmed this, explicitly, at trial. So Apple saying it will somehow be able to resist any amount of government pressure in order to protect user privacy is absurd. What makes it even more absurd is that Apple has literally already failed at exactly that.
In China, the government has forced Apple to store all the personal data of Chinese customers on servers run by a Chinese firm, unencrypted and physically managed by government employees. When Apple develops a new technology to scan encrypted content, the Chinese government getting their hands on that technology isn’t a theoretical risk. It’s a fact of life. In China, Apple works for the Chinese government. In the US, Apple works for the US government. Whatever tools Apple has are tools the state has. Any surveillance tools Apple builds for itself are tools Apple builds for the world’s despots.
And, of course, the idea that Apple can resist government pressure is trivially subverted by the fact that this very scanning feature is being developed as a result of government pressure. This story starts with Apple folding to government pressure.
But one might say: “So, then, what is the issue? If the government can force Apple to backdoor its software and use consumer iPhones as it wishes regardless, why fight against potentially beneficial uses of surveillance at all?”
Well, this idea that “there’s no stopping it” is fundamentally wrong, because it discounts the significant jump from “not existing” to “existing”. It is far easier for a state to force a company to let the state use a tool than it is to force a company to make a tool. A top-down mandate to build a backdoor where there wasn’t one before would leak, and people would resign. But if the software already exists, it just takes one executive decision to allow it. Heck, in states like China, where the military already controls the physical infrastructure, a surveillance apparatus could be seized by force. That just isn’t the case if the backdoor doesn’t exist.
And this is yet another scenario that has literally happened. After the San Bernardino attack, Apple refused to assist the government in unlocking an encrypted iPhone. CEO Tim Cook wrote an open letter explaining that Apple did not have the tools to do this and would not create them:
Up to this point, we have done everything that is both within our power and within the law to help [law enforcement]. But now the U.S. government has asked us for something we simply do not have, and something we consider too dangerous to create. They have asked us to build a backdoor to the iPhone.
Specifically, the FBI wants us to make a new version of the iPhone operating system, circumventing several important security features, and install it on an iPhone recovered during the investigation. In the wrong hands, this software - which does not exist today - would have the potential to unlock any iPhone in someone’s physical possession.
The FBI may use different words to describe this tool, but make no mistake: Building a version of iOS that bypasses security in this way would undeniably create a backdoor. And while the government may argue that its use would be limited to this case, there is no way to guarantee such control.
It is crucial to this argument that the tool to perform this “unlock” does not exist. Apple never did unlock the device or build any tools to do so, but it would have been an entirely different story if it had tools it simply refused to use. Tools like that would be seized and used without any of the due process that played out in this case. The question of whether the tool already exists is hugely significant, and must not be discounted.
Replying to SarahJamieLewis: Thu Aug 05 22:13:24 +0000 2021: How long do you think it will be before the database is expanded to include "terrorist" content? "harmful-but-legal" content? state-specific censorship?
We also know that once a government gets a new tool, there’s no room for restraint. A similar case happened in the UK: ISPs created a system to scan for and block child-abuse images, but it was only a matter of years before the government demanded they use those same systems to block a wide range of politically motivated content, going as far as censoring alleged trademark infringement. As the court explicitly put it:
the ISPs did not seriously dispute that the cost of implementing a single website blocking order was modest. As I have explained above, the ISPs already have the requisite technology at their disposal. Furthermore, much of the capital investment in that technology has been made for other reasons, in particular to enable the ISPs to implement the IWF blocking regime and/or parental controls. Still further, some of the ISPs’ running costs would also be incurred in any event for the same reasons. It can be seen from the figures I have set out in paragraphs 61-65 above that the marginal cost to each ISP of implementing a single further order is relatively small, even once one includes the ongoing cost of keeping it updated.
That is to say, once you have a system, it’s not hard to add one more entry. Any old entry, even if the system was meant to be tightly scoped.
We know exactly how this works, down to a science. It’s happened before, it’s actively happening now, and it will happen to this.
And when the government does encroach on this, we won’t know. There won’t be a process, there won’t be restitution made when it’s found out. It took 12 years after 2001 to get hard evidence of mass warrantless surveillance. It took a full 19 for a court to even formally acknowledge the NSA collection program was illegal, and the repulsive laws that allowed it to spin up in the first place still haven’t been repealed. Oh, and even under a pro-censorship government that rubber-stamped even more NSA spying, it still managed to breach its scope.
When the current security executive decides to use Facebook or Twitter or Apple for their latest political goal, there will be no process or deliberation. It will just be yet another crushing violation in one fell swoop.
Replying to matthew_d_green: Thu Aug 05 02:36:50 +0000 2021: Whether they turn out to be right or wrong on that point hardly matters. This will break the dam - governments will demand it from everyone.
And by the time we find out it was a mistake, it will be way too late.
Mass surveillance is always wrong
Mass surveillance - searching everyone’s data without a warrant, without probable cause, and without due process - is utterly abhorrent. It’s unconstitutional and in direct conflict with the human right to privacy and autonomy.
There’s a huge difference between published information and personal information. When you post something on Facebook, it’s published, as in public. When you post something on Twitter, it’s published. (Yes, even on your private account.) It’s Twitter’s to disseminate now, not yours. Not so with your personal files. On-device photos, not even received over iMessage but simply backed up by Apple’s automatic backup system, are not published.
Both common law and common sense understand the difference between these two contexts. While there’s room for debate on how published works should be moderated, there’s no expectation of privacy from the government there. But personal effects should never be searched - not by the government or a private corporation - without following the due process of documenting probable cause and having a warrant issued describing the particular artifact to be searched. The surveillance Apple proposes is a suspicionless search of personal effects, which is utterly unacceptable.
Apple’s proposition is a surveillance system. Apple sending out press releases focusing on the technical measures and the cryptography used is intentionally misleading. “It’s not surveillance, we’re hashing things and putting them in vouchers” is a distinction without a difference. Sure, they made a new system to perform surveillance, but it’s still surveillance, and that means it’s wrong for all the same reasons all surveillance is wrong.
These surveillance systems will always compromise users’ security, and that becomes increasingly dangerous as the systems are scaled up. On the other hand, these systems will never be enough for the law enforcement types who want them - even the “good” ones who legitimately only want public safety.
The Columbia School of Engineering’s Bugs in our Pockets: The Risks of Client-Side Scanning (CSS) captures the issue very well in its summary:
…we argue that CSS neither guarantees efficacious crime prevention nor prevents surveillance. Indeed, the effect is the opposite. CSS by its nature creates serious security and privacy risks for all society while the assistance it can provide for law enforcement is at best problematic. There are multiple ways in which client-side scanning can fail, can be evaded, and can be abused.
Its proponents want CSS to be installed on all devices, rather than installed covertly on the devices of suspects, or by court order on those of ex-offenders. But universal deployment threatens the security of law-abiding citizens as well as lawbreakers. Technically, CSS allows end-to-end encryption, but this is moot if the message has already been scanned for targeted content. In reality, CSS is bulk intercept, albeit automated and distributed. As CSS gives government agencies access to private content, it must be treated like wiretapping. In jurisdictions where bulk intercept is prohibited, bulk CSS must be prohibited as well.
You can’t have privacy with a system like this in place
This is another point that should go without saying, but I’m going to say it anyway: you can’t have strong privacy while also surveilling every photo anyone takes and every message anyone sends. Those aren’t compatible ideas.
Nuance is important, and it’s worth having careful discussions about these ideas and this technology, but it’s important not to lose high-level comprehension of the issue. When nuanced discussions start implying that, with enough nuance, surveillance systems can be built so they can somehow only ever be used for good, it’s time to step back from the detailed language for a moment and put the issue back into perspective, because we already know that can’t be done. Nuance can’t get you out of that hole, and when it tries, the best it can do is deceive.
Surveillance and privacy are diametrically opposed. No amount of cryptography can reconcile that. Surveillance, like most powers, cannot be designed so it can only ever be used by good people for good things. In fact, we know that secret surveillance in particular is prone to some of the worst forms of abuse.
Apple positions itself as a privacy-oriented company. It’s a huge part of its marketing, yes, but Apple also took a real stand in the San Bernardino case. What’s more, Apple is using the importance of privacy right now as a defence for why they maintain an anti-competitive marketplace: to enforce strict privacy rules.
Replying to matthew_d_green: Fri Aug 06 21:22:34 +0000 2021: The promise could not have been more clear.
Turning their entire platform into a mass surveillance apparatus compromises all of that. You can’t have it both ways. Going from “the privacy phone” to “the surveillance phone” is a bait-and-switch of incomparable scale.
Fri Aug 06 02:23:54 +0000 2021: No matter how well-intentioned, @Apple is rolling out mass surveillance to the entire world with this. Make no mistake: if they can scan for kiddie porn today, they can scan for anything tomorrow.
They turned a trillion dollars of devices into iNarcs - *without asking.* twitter.com/Snowden/status…
In the words of Cory Doctorow,
Apple has a tactical commitment to your privacy, not a moral one. When it comes down to guarding your privacy or losing access to Chinese markets and manufacturing, your privacy is jettisoned without a second thought.
No one is giving away free Iphones [sic] in exchange for ads. You can pay $1,000 for your Apple product and still be the product.
Apple’s response
Replying to matthew_d_green: Tue Aug 10 20:39:21 +0000 2021: People are telling me that Apple are “shocked” that they’re getting so much pushback from this proposal. They thought they could dump it last Friday and everyone would have accepted it by the end of the weekend.
Apple (corporate, not normal Apple employees) responded to scepticism by doubling down, because they’re a company and of course they did. First, they circulated language from NCMEC calling their critics “screeching voices of the minority”:
Marita Rodrigues, executive director of strategic partnerships at NCMEC:
Team Apple,
I wanted to share a note of encouragement to say that everyone at NCMEC is SO PROUD of each of you and the incredible decisions you have made in the name of prioritizing child protection.
It’s been invigorating for our entire team to see (and play a small role in) what you unveiled today.
I know it’s been a long day and that many of you probably haven’t slept in 24 hours. We know that the days to come will be filled with the screeching voices of the minority.
Our voices will be louder.
After insulting the concerned, Apple had their Senior Vice President of Software Engineering Craig Federighi put out this little number:
Because it’s on the [iPhone], security researchers are constantly able to introspect what’s happening in Apple’s software. … So if any changes were made that were to expand the scope of this in some way - in a way we had committed to not doing - there’s verifiability, they can spot that that’s happening.
Now, everyone even remotely familiar with iPhone security knows this is a ludicrous claim to make, because until the week before the CSAM announcement, Apple had been in a vicious lawsuit against the security firm Corellium for the crime of doing exactly that. Specifically, Apple argued that DMCA 1201 should outlaw pretty much any reverse engineering tool that could be used for security research outside Apple’s tightly controlled internal program, which leaves researchers and users alike in the lurch.
Matt Tait of Corellium responded that “iOS is designed in a way that’s actually very difficult for people to do inspection of system services. … [Apple,] you’ve engineered your system so that they can’t. The only reason that people are able to do this kind of thing is despite you, not thanks to you.” David Thiel commented that “Apple has spent vast sums specifically to prevent this and make such research difficult.” Both are exactly right.
My first thought when I heard that Apple dropped the lawsuit just prior to the announcement was that it must have been a strategic move to defuse this argument. But no, just days later Apple filed an appeal to start a war with Corellium all over again. A generous interpretation would be that Apple’s right hand doesn’t know what its left is doing, but I’m convinced its head knows exactly what it’s doing: lying for profit. Because of course Federighi knows all that. He’s on the front lines of the fight to defend Apple’s closed ecosystem against scrutiny and legal attempts to empower users to assert their rights over their devices by installing their own software. And he’s willing to make himself into a cartoonish liar to keep that from happening.
Replying to elegant_wallaby: Fri Aug 13 18:36:16 +0000 2021: This pivot is totally disingenuous. If @Apple wants to lean on the iOS security community as their independent verification, it needs to stop treating that community like enemies and let us properly unlock and examine devices. 4/4
New attacks using this technology
But is Apple at least right that this hashing technology can work? Also no.
Let’s take a closer look at that “hash the image and check if the hash is in the database” step. As I said earlier,
For each image, your phone generates a hash for that image with some sort of hashing algorithm. A hashing algorithm is a one-way function: given some input file, hashing it will always produce the same hash, on any device. However, you cannot reconstruct the original image from the hash. The only way to identify some query hash as belonging to an image is to have the image already, hash it, and compare that to the query.
Of particular note here is the hashing algorithm. When we talk about hashing algorithms, we usually mean a cryptographic hashing algorithm. A good cryptographic hashing algorithm has the following properties (there’s a short demonstration after the list):
- Determinism: Any given input will always produce the same hash
- One-way: Starting with an input and computing the hash is easy and quick, but starting with a hash and computing a message that yields that hash value is infeasibly hard
- Collision resistance: Finding two different inputs that generate the same hash value is infeasibly hard
- Avalanche: Any change to the input, no matter how small, should result in a completely different hash. The new hash should be uncorrelated with the hash of the original image
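For instance, here’s a quick illustration of the first and last properties using SHA-256, a standard cryptographic hash (just a sketch; nothing here is Apple’s code):

```python
import hashlib

def digest(data: bytes) -> str:
    """Hex digest of the SHA-256 hash of the given bytes."""
    return hashlib.sha256(data).hexdigest()

print(digest(b"some input"))  # determinism: the same digest every time, on any machine
print(digest(b"some input"))  # identical to the line above
print(digest(b"some inpuT"))  # avalanche: change one byte and the digest is completely different
```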
However, when we want to compare images instead of bytes, we start valuing different criteria. We no longer care about the exact value of the input; we now care that images that look alike produce the same hash value. It’s important that you can’t just change one pixel in an image (or even just save it in a different format) and change the hash, foiling the desired fingerprinting effect. That means we want something with collision tolerance: similar images should produce the same hash, within some tolerance, even if they’re slightly different or saved in a different format. (Usually this also comes with a reverse avalanche effect.) This is called a perceptual hashing algorithm. The specific algorithm Apple is planning to use is NeuralHash, which has been reverse-engineered and can be tested today.
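To make that concrete, here’s a toy perceptual hash, the classic “average hash”. It is far cruder than NeuralHash and is not Apple’s algorithm; it’s only meant to show what collision tolerance looks like. (It uses the Pillow imaging library, and the later sketches in this post build on it.)

```python
from PIL import Image  # pip install Pillow

def average_hash(img: Image.Image, size: int = 8) -> int:
    """Shrink to a size x size grayscale thumbnail, then emit one bit per pixel:
    1 if that pixel is brighter than the thumbnail's mean, else 0."""
    small = img.convert("L").resize((size, size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | int(p > mean)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance means 'perceptually similar'."""
    return bin(a ^ b).count("1")

# Re-encoding or lightly resizing a photo barely moves its hash, while an unrelated
# photo usually lands many bits away (filenames here are placeholders):
# print(hamming_distance(average_hash(Image.open("photo.jpg")),
#                        average_hash(Image.open("photo_reencoded.png"))))
```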
There are many secure, robust, precise cryptographic hashing algorithms available today. The same cannot be said of perceptual hashing algorithms, because by design they can be neither precise nor secure; in order to maintain their collision tolerance, they need to be acceptably “fuzzy”. Once fuzziness is intentionally introduced into an algorithm, that algorithm can’t be considered precise or secure enough to be used in highly sensitive security applications, such as procedurally determining whether or not someone is a sex criminal. That would be folly.
A system using a fuzzy algorithm could still be effective at catching contraband images, though, if it were hard to manipulate an image to create an identical-looking image that produces a different perceptual hash. Bad news: it’s not. Here’s 60 lines of Python that do just that for any perceptual hash function, courtesy of Hector Martin, produced very shortly after he was challenged to do exactly that. And this attack on perceptual hashing isn’t even implementation-specific. So that’s that; you can start with a contraband image and derive a new image that looks identical but isn’t detected. For all the harm this causes, it turns out it’s not even good at catching criminals.
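To get a feel for why this is so easy, here’s my own sketch of the attack class against the toy average hash from earlier (this is not Hector Martin’s script, and it does not target NeuralHash, which attackers approach through the network itself): nudge the brightness of the one region that sits closest to the hash’s decision threshold until a bit flips.

```python
from PIL import Image, ImageEnhance

def flip_one_bit(img: Image.Image, size: int = 8, max_steps: int = 1000) -> Image.Image:
    """Return a visually near-identical copy of img whose average_hash differs."""
    original = average_hash(img, size)
    small = img.convert("L").resize((size, size))
    cells = list(small.getdata())
    mean = sum(cells) / len(cells)

    # The thumbnail cell closest to the threshold is the cheapest bit to flip.
    idx = min(range(len(cells)), key=lambda i: abs(cells[i] - mean))
    cx, cy = idx % size, idx // size
    w, h = img.size
    box = (cx * w // size, cy * h // size, (cx + 1) * w // size, (cy + 1) * h // size)

    candidate = img.copy()
    factor = 1.01 if cells[idx] <= mean else 0.99  # ~1% brightness nudge per step
    for _ in range(max_steps):
        region = ImageEnhance.Brightness(candidate.crop(box)).enhance(factor)
        candidate.paste(region, box)
        if average_hash(candidate, size) != original:
            return candidate  # hash changed; the image still looks the same to a human
    raise RuntimeError("no bit flipped within the step budget")
```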
@synopsi These two [images with different perceptual hashes] differ only in pixel coordinate (197, 201), which is #d3ccc8 and #d4ccc8 respectively. In fact, the raw RGB data only differs in one byte, by one.
That breaks the property we actually wanted: two images that look the same can be made to hash differently. But what about collision resistance in the case of truly dissimilar images? Remember, this technology is designed to scan incoming images too, so an image someone sends you that the algorithm flags is a strike against you. Starting from the CSAM hashes, can you produce a harmless-looking image whose hash matches an entry in the CSAM database, in order to force a user to be flagged? Yes. In this regard, too, NeuralHash is completely broken. Anish Athalye developed a tool that will arbitrarily construct an input that generates a known hash. You don’t even need to start with real CSAM to use it to generate forced false positives.
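Against the toy average hash from earlier, a preimage of this kind is almost embarrassingly easy to build: just render the target hash’s bits directly as an image. (This is only my illustration of how weak a simple perceptual hash is; Athalye’s tool against NeuralHash works very differently, using the network’s gradients, and produces far more natural-looking images.)

```python
from PIL import Image

def image_with_hash(target: int, size: int = 8, scale: int = 32) -> Image.Image:
    """Build an image whose average_hash equals `target` (assuming the target has
    at least one 0 bit and one 1 bit) by drawing its bits as white/black cells."""
    bits = [(target >> (size * size - 1 - i)) & 1 for i in range(size * size)]
    img = Image.new("L", (size, size))
    img.putdata([255 if b else 0 for b in bits])
    # Scale up with nearest-neighbour so downsampling recovers the same cells.
    return img.resize((size * scale, size * scale), Image.NEAREST).convert("RGB")

# Hypothetical usage: forge an image that matches some target image's hash.
# target_hash = average_hash(Image.open("some_target.png"))
# forged = image_with_hash(target_hash)
# assert average_hash(forged) == target_hash
```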
Cryptography at this level is not one of my areas of expertise, so I’ll quote instead from Sarah Jamie Lewis’s excellent article about NeuralHash collisions:
- preimage resistance: given y, it is difficult to find an x such that h(x) = y.
- second-preimage resistance: given x, it is difficult to find a second preimage z ≠ x such that h(x) = h(z).
Both of these have been fundamentally broken for NeuralHash. There is now a tool by Anish Athalye which, when given a target hash y, can trivially find an x such that h(x) = y.
Given such a tool, constructing second preimages is trivial, but even without it, it is important to note that, because of the way NeuralHash works, it is very easy to construct images x and y such that h(x) = h(y).
From left to right: a dog with hash D, a cat with hash C, and a constructed cat with hash D
What we have here is not an airtight cryptographic machine ready to weather the trial of being used to adjudicate the most sensitive criminal justice cases. We’ve instead been given a matching algorithm fuzzy enough for an attacker to misuse, yet complicated enough for false positives to be used as evidence against innocent people by a legal system not equipped to interrogate cryptographic evidence.
The ends justify the means
In modern political theory, we understand these things called “rights”. Rights are a special kind of inviolable demand that stand above utilitarian calculations. If someone has an absolute right to due process, the government absolutely must give them due process. It doesn’t matter how expedient it might be to go around unilaterally executing people, you absolutely cannot do it. That’s a hard line.
Another key principle: individual procedures have their own mandates. If a system was deployed under the strict legal requirement that it only be used to solve a particular kind of crime, it must stay within that scope; it must never be used beyond it without an explicit mandate and authorization.
Modern governments respect neither of these things. Keeping tools within their scope is a restriction, and to law enforcement, restrictions are abhorrent. To the authoritarian, their own felt need to use something should be authorization enough. Power is a convergent instrumental goal, perhaps even tautologically. Someone will focus in on the cause they care about the most and use whatever tools are available to pursue the outcome they view as right, because to them their cause is the most important thing. The discretionary judgement of the individual that some “good” could be done should be, in the eyes of law enforcement, all that matters. And they’re the ones behind the wheel.
This “ends justify the means” utilitarian philosophy is extremely prevalent among those with authoritarian and fascist mindsets. There is nothing quite so dangerous as a utilitarian who finds god. It wasn’t a coincidence that the mind who thought alcohol should be illegal thought poisoning drinkers was a good idea. Nor was it a surprise that the minds worked into a frenzy by militaristic nationalist rhetoric thought storming the Capitol would be worth it to save America. Nor is it a surprise that ol’ Billy “I let Trump use the DOJ to spy on his political opponents and then lied about it to Congress” Barr, who wrote all those strong DOJ letters about the importance of banning encryption, thinks part of his job is to use his position to keep federal prosecutors from challenging people associated with Trump. To these people, their cause is so infinitely important as to drown out all other concerns, all other principles, all other lives.
The government has a long history of doing this: using whatever tools are available to pursue whatever the current political ends are. Usually it’s something at least disguised as safety (as with NSA spying, encryption backdoors, and the broader war on users’ ability to protect their documents from malicious actors, government included), but it can escalate to absurdity in politically charged environments or wartime.
“We are obligated to use all the tools at our disposal to pursue our ends”, the utilitarian says. And, to them, the only limit on that disposal is what they can get away with - not what those tools are for, or how they were authorized. But that’s not how it works. It doesn’t matter who thinks it, or if there’s even consensus. That’s not how rights work.
Thu Aug 05 21:57:00 +0000 2021: Basically, until you find a way to eradicate the philosophy “the end justifies the means” entirely, every universal policing system is a terrible idea, because people will use it beyond its scope for something they think is more important
This is why it’s critically important not to make these tools available in the first place. When it comes to technology - especially surveillance - this predictable dynamic arises where citizens are put in progressively more danger while law enforcement sets a stated goal to aggressively pursue further and further deployment of the technology at the people’s expense. It’s not a terribly complicated dynamic, but the conclusion is even simpler: you must not ever allow this category of technology. Surveillance is not a gap in law enforcement’s range for them to fill, it must be an absolute rule for them to work around. Forever.