The AI tools are efficient according to the numbers, but unfortunately that doesn’t mean there isn’t a power problem.
If we look at the overall effects in terms of power usage (as most people do), there are some major problems.
But if we’ve ruled out operational inefficiency as the reason, what’s left?
The energy problems aren’t coming from inefficient technology, they’re coming from inefficient economics.
For the most part, the energy issues are caused by the AI “arms race” and how irresponsibly corporations are pushing their AI products on the market.
Even with operational efficiency ruled out as a cause, AI is causing two killer energy problems: waste and externalities.
Recent tech trends have followed a pattern of being huge society-disrupting systems that people don’t actually want.
Worse, it then turns out there’s some reason they’re not just useless, they’re actively harmful.
While planned obsolescence means this applies to consumer products in general, the recent major tech fad hypes (cryptocurrency, “the metaverse”, artificial intelligence…) all seem to be comically expensive boondoggles that only really benefit the salesmen.
It’s a narrative that’s very much in line with what a disillusioned tech consumer expects.
There is a justified resentment boiling for big tech companies right now, and AI seems to slot in as another step in the wrong direction.
The latest tech push isn’t just capital trying to control the world with a product people don’t want, it’s burning through the planet to do it.
But, when it comes to AI, is that actually the case?
What are the actual ramifications of the explosive growth of AI when it comes to power consumption?
How much more expensive is it to run an AI model than to use the next-best method?
Do we have the resources to switch to using AI on things we weren’t before, and is it responsible to use them for that?
Is it worth it?
These are really worthwhile questions, and I don’t think the answers are as easy as “it’s enough like the last thing that we might as well hate it too.”
There are proportional costs we have to weigh in order to make a well-grounded judgement, and after looking at them, I think the energy numbers are surprisingly good, compared to the discourse.
Remember when Elon Musk was trying to weasel out of overpaying for Twitter?
During this very specific May 2022-July 2022 period, there was a very artificial discourse manufactured over the problem of “fake accounts” on Twitter.
The reason it was being brought up was very stupid, but the topic stuck with me, because it’s deeply interesting in a way that the conversation at the time never really addressed.
So this is a ramble on it. I think this is all really worth thinking about, just don’t get your hopes up that it’s building to a carefully-constructed conclusion. ;)
First, to be clear, what was actually being argued at the time was exceedingly stupid. I’m not giving that any credit.
After committing to significantly overpay to purchase Twitter with no requirement that he do due diligence (yes, really!), Elon Musk tried to call off the deal.
That is why we must clear out bots, spam & scams. Is something actually public opinion or just someone operating 100k fake accounts? Right now, you can’t tell.
And algorithms must be open source, with any human intervention clearly identified.
Twitter deal temporarily on hold pending details supporting calculation that spam/fake accounts do indeed represent less than 5% of users https://www.reuters.com/technology/twitter-estimates-spam-fake-accounts-represent-less-than-5-users-filing-2022-05-02/ …
This was a pretty transparent attempt to get out of the purchase agreement after manipulating the price, and it was correctly and widely reported as such.
Elon Musk has buyer’s remorse. On April 25, the billionaire Tesla and SpaceX CEO agreed to buy Twitter for $44 billion, but since then the stock market has tanked. Twitter agreed to sell to Musk at $54.20 per share, a 38% premium at the time; today it’s trading around $40.
That’s probably the real reason Musk is spending so much time talking about bots.
I don’t want to get too bogged down in the details of why Elon was using this tactic, but fortunately other people wrote pages and pages about it, so I don’t have to.
Reddit is going the same route as Twitter by making “API access” prohibitively expensive. This is something they very famously, very vocally said they would not do, but they’re doing it anyway. This is very bad for Reddit, but what’s worse is it’s becoming clear that companies think that this is a remotely reasonable thing to do, when it’s very critically not.
It’s the same problem we see with Twitter and other late-capitalist hell websites: Reddit’s product is the service it provides, which is its API. The ability for users to interact with the service isn’t an auxiliary premium extra, it’s the whole caboodle!
I’ll talk about first principles first, and then get into what’s been going on with Reddit and Apollo.
The Apollo drama is very useful in that it directly converts the corporate bullshit that sounds technical enough to make sense into something very easy to understand: a corporation hurting its users, today, for money.
Reddit and all these other companies who are making user-level API access prohibitively expensive have forgotten that the API is the product. The API is the interface that lets you perform operations on the site. The operations a user can do are the product, they’re not auxiliary to it!
“Application programming interface” is a very formal, internal-sounding term for a system that is none of those things.
The word “programming” in the middle comes from an age where using a personal computer at all was considered “programming” it.
What an API really is is a high-level interface to the web application that is Reddit. Every action a user can take (viewing posts, posting, voting, commenting) goes from the app (which interfaces with the user) to the API (which interfaces with the Reddit server), gets processed by the server using whatever-they-use-it-doesn’t-matter, and the response is sent back to the user.
The API isn’t a god mode and it doesn’t provide any super-powers. It doesn’t let you do anything you can’t do as a user, as clearly evidenced by the fact that all the actions you do on the Reddit website go through the API too.
The Reddit website, the official Reddit app, and the Apollo app all interface with the user in different ways and on different platforms, but go through the same API to interact with what we understand as “Reddit”. The fact that the API is the machine interface without the human interface should also concisely explain why “API access” is all Apollo needs to build its own app.
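To make that concrete, here’s a minimal sketch of what “API access” actually looks like, using Reddit’s public JSON endpoints (append .json to most Reddit URLs and you get the machine-readable version of the same page; the subreddit and the client name below are just my illustrative picks, not anything official):

```python
# A minimal sketch: fetching a subreddit listing the same way any client
# (the website, the official app, or Apollo) ultimately does -- via the API.
import requests

resp = requests.get(
    "https://www.reddit.com/r/programming/hot.json",  # example subreddit
    headers={"User-Agent": "demo-api-client/0.1"},    # Reddit expects a UA string
    params={"limit": 5},
)
resp.raise_for_status()

# The same "view posts" action a browser performs, minus the human interface.
for post in resp.json()["data"]["children"]:
    print(post["data"]["title"], "|", post["data"]["score"])
```

That’s it. Apollo is doing this same dance, just authenticated and at scale.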
Public APIs are good for both the user and the company. They’re a vastly more efficient way for people to interact with the service than by automating interaction (or “scraping”). Having an API cuts out an entire layer of expense that, without an API, Reddit would pay for.
The Reddit service is the application, and you interface with it through WHATEVER. Whatever browser you want, whatever browser extensions you want, whatever model phone you want, whatever app you want. This is fundamentally necessary for operability and accessibility.
The API is the service. The mechanical ability to post and view and organize is what makes Reddit valuable, not its frontend. Their app actually takes the core service offering and makes it less attractive to users, which is why they were willing to pay money for an alternative!
Hi, The EFF, Creative Commons, Wikimedia, World Leaders, and whoever else,
Do you want to write a license for machine vision models and AI-generated images, but you’re tired of listening to lawyers, legal scholars, intellectual property experts, media rightsholders, or even just people who use any of the tools in question even occasionally?
You need a real expert: me, a guy whose entire set of relevant qualifications is that he owns a domain name. Don’t worry, here’s how you do it:
Given our current system of how AI models are trained and how people can use them to generate new art, which is this:
If you’ve been subjected to advertisements on the internet sometime in the past year, you might have seen advertisements for the app Replika. It’s a chatbot app, but personalized, and designed to be a friend that you form a relationship with.
That’s not why you’d remember the advertisements though. You’d remember the advertisements because they were like this:
And, despite these being mobile app ads (and, frankly, really poorly-constructed ones at that), the ERP (erotic roleplay) function was a runaway success. According to founder Eugenia Kuyda, the majority of Replika subscribers had a romantic relationship with their “rep”, and accounts point to those relationships getting as explicit as their participants wanted to go:
So it’s probably not a stretch of the imagination to think this whole product was a ticking time bomb. And, on Valentine’s Day no less, that bomb went off.
Not in the form of a rape or a suicide or a manifesto pointing to Replika, but in a form much more dangerous: a quiet change in corporate policy.
Features started quietly breaking as early as January, and the whispers sounded bad for ERP, but the final nail in the coffin was the official statement from founder Eugenia Kuyda:
“update” - Kuyda, Feb 12
These filters are here to stay and are necessary to ensure that Replika remains a safe and secure platform for everyone.
I started Replika with a mission to create a friend for everyone, a 24/7 companion that is non-judgmental and helps people feel better. I believe that this can only be achieved by prioritizing safety and creating a secure user experience, and it’s impossible to do so while also allowing access to unfiltered models.
People just had their girlfriends killed off by policy. Things got real bad. The Replika community exploded in rage and disappointment, and for weeks the pinned post on the Replika subreddit was a collection of mental health resources including a suicide hotline.
First, let me deal with the elephant in the room: no longer being able to sext a chatbot sounds like an incredibly trivial thing to be upset about, and might even be a step in the right direction. But these factors are actually what make this story so dangerous.
These unserious, “trivial” scenarios are where new dangers edge in first. Destructive policy is never implemented first in serious situations that disadvantage relatable people; it’s always normalized by starting with edge cases and people who can be framed as Other, or somehow deviant.
It’s easy to mock the customers who were hurt here. What kind of loser develops an emotional dependency on an erotic chatbot? First, having read accounts, it turns out the answer to that question is everyone. But this is a product that’s targeted at, and specifically addresses the needs of, people who are lonely and thus specifically emotionally vulnerable, which should make it worse to inflict suffering on them and endanger their mental health, not somehow funny. Nothing I have to content-warning the way I did this post is funny.
Everybody hates paying subscription fees. At this point most of us have figured out that recurring fees are miserable. Worse, they usually seem unfair and exploitative.
We’re right about that much, but it’s worth sitting down and thinking through the details, because understanding the exceptions teaches us what the problem really is.
And it isn’t just “paying people money means less money for me”; the problem is fundamental to what “payment” even is, and vitally important to understand.
or, “Gio is not a marxist, or if he is he’s a very bad one”
First: individual autonomy (our agency, our independence, and our right to make our own choices about our own lives) is threatened by the current digital ecosystem.
Our tools are powered by software, controlled by software, and inseparable from their software, and so the companies that control that software have a degree of control over us proportional to how much of our lives relies on software. That’s an ever-increasing share.
The “blue check” (a silly colloquialism for an icon that isn’t actually blue for the 50% or more of users using dark mode) has become a core aspect of the Twitter experience. It’s caught on other places too; YouTube and Twitch have both borrowed elements from it. It seems like it should be simple. It’s a binary badge; some users have it and others don’t. And the users who have it are designated as… something.
In reality the whole system is massively confused. The first problem is that “something”: it’s fundamentally unclear what the significance of verification is. What does it mean? What are the criteria for getting it? It’s totally opaque who actually makes the decision and what that process looks like. And what does “the algorithm” think about it; what effects does it actually have on your account’s discoverability?
This mess is due to a number of fundamental issues, but the biggest one is Twitter’s overloading the symbol with many conflicting meanings, resulting in a complete failure to convey anything useful.
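Just to illustrate how overloaded that one bit is, here’s a toy sketch (the names are mine, not anything from Twitter’s actual schema):

```python
# Illustrative sketch only: these names are hypothetical, not Twitter's schema.
from enum import Flag, auto

class BadgeMeaning(Flag):
    """The distinct things a verification badge could be asserting."""
    IDENTITY_CONFIRMED = auto()  # "this account is who it claims to be"
    NOTABLE = auto()             # "this account is famous or newsworthy"
    ENDORSED = auto()            # "the platform vouches for this account"

# What the badge actually exposes to readers: a single bit.
has_blue_check: bool = True

# A reader can't recover *which* of the meanings above applies
# from one boolean -- hence the failure to convey anything useful.
```

Collapsing all of those meanings into one boolean is exactly how the badge ends up meaning nothing.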
The other day I had a quick medical question (“if I don’t rinse my mouth out enough at night will I die”), so I googled the topic as I was going to bed. Google showed a couple search results, but it also showed Answers in a little dedicated capsule. This was right on the heels of the Yahoo Answers shutdown, so I poked around to see what Google’s answers were like. And those… went in an unexpected direction.
So, Google went down a little rabbit trail. Obviously these answers were scraped from the web, and included sources like exemplore.com/paranormal/ which is, apparently, a Wiccan resource for information that is “astrological, metaphysical, or paranormal in nature.” So possibly not the best place to go for medical advice. (If you missed it, the context clue for that one was the guide on vampire killing.)
Wait, what’s that? That last one wasn’t funny, you say? Did we just run face-first toward the cold brick wall of reality, where bad information means people die?
Well, sorry. Because it’s not the first time Google gave out fatal advice, nor the last. Nor is there any end in sight. Whoops!
On August 5, 2021, Apple presented their grand new Child Safety plan. They promised “expanded protections for children” by way of a new system of global phone surveillance, where every iPhone would constantly scan all your photos and sometimes forward them to local law enforcement if it identifies one as containing contraband. Yes, really.
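As described, the scheme has roughly this shape. To be clear, this is a deliberately simplified sketch with stand-in values, not Apple’s actual NeuralHash and threshold secret-sharing implementation:

```python
# Simplified sketch of the announced design's shape -- the hash function,
# database, and threshold below are all stand-ins, not Apple's real system.
import hashlib
from typing import Iterable

KNOWN_HASHES: set[str] = set()  # stand-in for the contraband hash database
MATCH_THRESHOLD = 30            # stand-in; Apple described "a threshold" of matches

def perceptual_hash(photo: bytes) -> str:
    # Stand-in only: the real system uses a *perceptual* hash so that
    # near-duplicate images still match; an exact hash like this does not.
    return hashlib.sha256(photo).hexdigest()

def scan_library(photos: Iterable[bytes]) -> bool:
    """Return True once enough on-device matches accumulate to flag the account."""
    matches = sum(1 for p in photos if perceptual_hash(p) in KNOWN_HASHES)
    return matches >= MATCH_THRESHOLD
```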
August 5 was a Thursday. This wasn’t dumped on a Friday night in order to avoid scrutiny, this was published with fanfare. Apple really thought they had a great idea here and expected to be applauded for it. They really, really didn’t. There are almost too many reasons this is a terrible idea to count. But people still try things like this, so as much as I wish it were, my work is not done. God has cursed me for my hubris, et cetera. Let’s go all the way through this, yet again.
I am so deeply frustrated at how much we have to repeat these extremely basic principles because people just refuse to listen. Like, yes, we know. Everyone should know this by now. It’s mind boggling. twitter.com/sarahjamielewi…
The architectural problem this is trying to solve
Believe it or not, Apple actually does address a real architectural issue here. Half-heartedly addressing one architectural problem of many doesn’t mean your product is good, or even remotely okay, but they do at least do it. Apple published a 14-page summary of the problem model (starting on page 5). It’s a good read if you’re interested in that kind of thing, but I’ll summarize it here.