!morewrite
@awful.systems I just want to share a little piece of this provocation, but would like to know how compelling it sounds? I've been sitting on it for a while and starting to think it's probably not earning that much space in words. The overarching point is that anyone who complains about constraints imposed on them as being constraints in general either isn't making something purposeful enough to concretely challenge the constraints, or isn't actually designing because they haven't done the hard work of understanding the constraints between them and their purpose. Anyway, this is a snippet from a longer piece which leads to the point that the scumbags didn't take over; instead, the environment evolved to create the perfect habitat for scumbags who want to make money from providing as little value as possible:
The constraints of taking up space
Software was once sold on physical media packaged in boxes that were displayed with price tags on shelves alongside competing products in brick and mortar stores.
Limited shelf space stifled software makers into making products innovative enough to earn that shelf space.
The box that packaged the product stifled software makers into having a concrete purpose for their product which would compel more interest than the boxes beside it.
The price tag stifled software makers into ensuring that the product does everything it says on the box.
The installation media stifled software makers into making sure their product was complete and would function.
The need to install that software, completely, on the buyer’s computer stifled the software makers further into delivering on the promises of their product.
The pre-broadband era stifled software makers into ensuring that any updates justified the time and effort it would take to get the bits down the pipe.
But then…
Connectivity speeds increased, and always-on broadband connectivity became widespread. Boxes and installation media were replaced by online purchases and software downloads.
Automatic updates reduced the importance of version numbers. Major releases, which once marked a haul of improvements substantial enough to count as a new product, mattered less and less. The concept of completeness in software was replaced by iterative improvement. A constant state of becoming.
The Web matured with advancements in CSS and JavaScript. Web sites gave way to Web apps. Installation via downloads was replaced by Software-as-a-Service. It's all on a web server, not taking up any space on your computer's internal storage.
Software as a service instead of a product replaced the up-front price tag with the subscription model.
…and here we are. All of the aspects of software products that take up space, whether that be in a store, in your home, on your hard disk, or in your bank account, are gone.
https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer
https://buttondown.email/maiht3k/archive/the-grimy-residue-of-the-ai-bubble/
What kind of residue will the AI bubble's popping leave behind? By Alex
I've been hit by inspiration whilst dicking about on Discord - felt like making some off-the-cuff predictions on what will happen once the AI bubble bursts. (Mainly because I had a bee in my bonnet that was refusing to fuck off.)
It's no secret the industry's put all their chips into AI - basically every public company's chasing it to inflate their stock prices, Nvidia's making money hand-over-fist playing gold rush shovel-seller, and every exec's been hyping it like it's gonna change the course of humanity.
Additionally, going by Baldur Bjarnason, tech's chief goal with this bubble is to prop up the notion of endless growth so it can continue reaping the benefits for just a bit longer.
If and when the tech bubble pops, I expect a full-blown crash in the tech industry (much like Ed Zitron's predicting), with revenues and stock prices going through the floor and layoffs left and right. Additionally, I'm expecting those stock prices will take a while to recover, if they ever do, as tech comes to be viewed as a stable, mature industry that's no longer experiencing nonstop growth.
Chance: Near-Guaranteed. I'm pretty much certain on this, and expect it to happen sometime this year.
Extrapolating a bit from Prediction 1, I suspect we might see a lot fewer people going into tech/STEM degrees if tech crashes like I expect.
The main thing which drew so many people to those degrees, at least from what I could see, was the notion that they'd make you a lotta money - if tech publicly crashes and burns like I expect, it'd blow a major hole in that notion.
Even if it doesn't kill the notion entirely, I can see a fair number of students jumping ship at the sight of that notion being shaken.
Chance: Low/Moderate. I've got no solid evidence this prediction's gonna come true, just a gut feeling. Epistemically speaking, I'm firing blind.
The AI bubble's given us a pretty hefty amount of mockery-worthy shit - Mira Murati shitting on the artists OpenAI screwed over, Andrej Karpathy shitting on every movie made pre-'95, Sam Altman claiming AI will soon solve all of physics, Luma Labs publicly embarrassing themselves, ProperPrompter recreating motion capture, But Worse^tm, Mustafa Suleyman treating everything on the 'Net as his to steal, et cetera, et cetera, et fucking cetera.
All the while, AI has been flooding the Internet with unholy slop, ruining Google search, cooking the planet, stealing everyone's work (sometimes literally) in broad daylight, supercharging scams, killing livelihoods, exploiting the Global South and God-knows-what-the-fuck-else.
All of this has been a near-direct consequence of the development of large language models and generative AI.
Baldur Bjarnason has already mentioned AI being treated as a major red flag by many - a "tech asshole" signifier to be more specific - and the massive disconnect in sentiment tech has from the rest of the public. I suspect that "tech asshole" stench is gonna spread much quicker than he thinks.
Chance: Moderate/High. This one's also based on a gut feeling, but with the stuff I've witnessed, I'm feeling much more confident with this than Prediction 2. Arguably, if the cultural rehabilitation of the Luddites is any indication, it might already be happening without my knowledge.
If you've got any other predictions, or want to put up some criticisms of mine, go ahead and comment.
(CW: Every aspect of dog-related trauma. Opiate abuse. Write anything you like in the comments: assume I would otherwise be posting this in some venue appropriate for its content.)
SOONDAE, the hero dog. Remember him? His face was on billboards.
He still kneels when the master approaches. He's strong. Watching him come to my heel again is like seeing a spring being wound up.
He's an old dog now. He only touches his chin to the ground for a moment. Then he shakes his head and pushes beside me, into the narrow space between my shin and the bathroom door.
He's been eating less, so he fits very well. Even if he had to push past me by force, I wouldn't have been able to hold him back. He does not choose to prove his strength in that way, though. I think he doesn't want me to prove the idea that I might try.
He remembers the scent and appearance of this two-room apartment even though it's been over a year since he lived here. The floors are so clean as to be sterile, but I'm still here. It probably smells like me.
After so much exertion he comes to rest on the marble tile. His paws slip -- they have no traction -- and he slips wide, in obvious pain as he slides. There's a swelling on his buttock that will eventually kill him. With a spring this old, it's difficult to know that it will spring back again.
He rolls onto his back and I see what he sees too -- the red rubbing alcohol on the counter. He raises his paws to his face to beg.
Dogs are able to be liked by humans, but that's their appearance, not their personality. Dogs don't know how to speak in a way that humans can understand. No dog in the wild begs like Soondae: to create a personality, I had to train it.
A dog that can't express itself is not, as you might think, a violent creature. Wolves are predators: dogs aren't, and only some contain violence. The tendency to fight without being provoked is also taught.
We don't know what dogs want. A dog has to be taught, in its natural nonverbal language, to express a desire for each little thing it wants. When a dog wants something without being told to, it's like a new color has come into being.
Now Soondae is begging -- for what? I know, and you don't know.
This is the bathroom where we gave Soondae his hero's welcome. You can see the evidence on the floor: marks in the tile made by the thick, astringent soap we used, long ago, to get the blood out of his fur and off his flesh.
As soon as the shower stopped dripping, a cameraman raced past me, thick braided rope of cables trailing behind him like a fox's long tail, and came to a deep squat in it. I brushed Soondae's haunch too quickly and caught a snag in the matted fur. The dog yelped once.
I only wanted to get him clean.
The photographer brought his camera lower, flash dead for now but near enough to go off bright enough to increase his pain. I thought of what I could do for a nice dog, a hero dog. The most expensive sirloin. I felt gratitude that he'd never had it. He'd never been taught to desire it.
You've got to understand that despite what you've seen on the billboards, Soondae never smiled. He wasn't a good boy and he wasn't a bad dog -- he was just a dog. There were dark circles around his eyes from the whole history of his life: reminders of a time, in his infancy, when I didn't know him and didn't control him.
We had always tried to show him love, but he didn't understand it. He couldn't show love back to us in a way that we understood -- only physical submission. Now his ability to show physical submission was strained by all the pain he was in, blood caked around his guard-hairs, even his muzzle.
He wouldn't stop making such painful noises and I looked at the photographer and saw that they were disturbed, effectively cornered on the low ground, hearing him bark. I didn't know Soondae as a killer. Blood around his lips, I didn't think of him that way. I sponged it away, the flecks of foam at the corner of his mouth. He made such awful noise.
In my cabinet I had a magic red bottle bought before the war, ornately labeled, an inheritance. Something very rare that they don't make anymore. It looked like milk. I took it, I opened it. I approached Soondae from behind and brought a needle from my pocket. I put it under his buttock where I knew the fat muscle was, like beef chuck.
He yelped again. I used a washcloth to get rid of the thin blood, his own blood, teeming through the opening. I watched the cameraman's soothed reaction as Soondae, the hero dog, became more quiet.
I had great fear of the hidden power of the droplets of morphine leftover on the surface of my skin. I washed my hands, and again.
The photo was taken. I turned back to look at him. I saw him grinning and drooling, not like a dog does. I knew that he had seen the magic red bottle.
We scrubbed him down so deep that his matted fur began to fall out. When that didn't work, we shaved him. The rare moment of pleasure in his otherwise cruel life.
Soondae, the hero dog. There are crimes a dog is expected to be able to understand -- theft, assault, murder. What a dog actually understands is the flow of aggression between its master and whoever its master is threatened by. A dog is known to charge into a fire or bite an electrical cable if its master is threatened by it.
I couldn't stand living with a dog who had killed someone, even when I found out that it hadn't been rabies. I had expected never to see him again.
Imagine what I saw. Do not imagine the object itself: imagine the looming presence of the object: centered in my window, not so close as to take the entire space but at a distance that made it convenient to view from any corner of my studio room: the room I slept in, cooked food in, watched television in. Imagine my experience -- not from your perspective, from my perspective -- and not on the senses, in my head. How it actually felt to be me and to be oppressed by it.
Now I'll fill in the object. The billboard I have already described to you -- Soondae, the hero dog. His grin, tongue at the corner of his mouth, unable to lift himself from the floor. Imagine it standing for many months.
In this imagined experience I've already sold the dog to his new owner. Now I have the feeling every morning of waking up to his elated face, and the knowledge of what caused that face. And every afternoon, its shadow streaming into my living room.
Then one day, it's not there. I'm not oppressed by it. Instead there's just the open sky behind it.
The appearance of the sky behind it has nothing to do with why I'm no longer oppressed. The goodness of being free is better than the goodness of the clean, open sky, but no attempt I make to explain the goodness of being free is clear. The only explanation that is clear to you is my verbalized account of how the open sky makes me feel.
By staring and by feeling such horrible things, I demand a comprehensible account from Soondae of how much better it is to be free of pain. I am, at the time, acknowledging that the only part of Soondae's account that he can lucidly express to me is the part made visible in Soondae's expression: the feeling of his overpowering morphine high.
Now in my bathroom the signs that he sees the end are telling: he's thin, you can feel his ribs. There may be nothing that it's like to be out of pain, but there's something that it's like to be freed of it.
Soondae's mild aggression would lead one to believe he would prefer to have no master at all. His eyes go out of focus as he softens, now taking in breath, paw-fingers tight at the sides of his face, saliva dripping on his tongue.
He senses the idea of an enduring pleasure just beyond the sensory tableau that forcefully makes itself into objects in his view. He wishes for the shadow puppets to go back to being shadows, as they were in his infancy. He imagines the erasure of everything unpleasant to him -- of going back to a sea of pleasing red.
Now, I'm aware, morphine comes in many kinds, often in pills and much more rarely, today, in syrup. The magic red bottle isn't made and it's not sold to the public, but there are thousands of products in red bottles like it. Often candies, celebratory candles, certain soaps.
Seeing Soondae fall before my rubbing alcohol and beg tells me that he's seen thousands of red bottles in thousands of places, never for him. I see that he's formed a permanent sense-memory like the association of my smell with his former house. I say all this knowing that there's no plausible way he could have tried it a second time.
I have never tried an opiate; I don't intend to try an opiate. What I believed months ago about morphine was that you had to try it twice to become addicted. I believed that well-adjusted people had no reason to try it twice.
Soondae had it once.
There is phenobarbital in my cabinet that can kill an aging dog. Paradoxically and irrationally, I fear the morphine more. I fear putting myself out or even killing myself. I ask myself if it would be so wrong to kill him pleasantly.
Freedom is not ordering what I want from a list of freedoms. I may live a life that others assess as meaningless. I may live a life that seems destructive.
There are freedoms I crave that I won't grant. I fear death so intensely that I'm frightened of pouring it into Soondae who yearns for it. My choice of poison will not matter in an hour.
Every day I do something subtractive. I spend time and the time is gone. I think every day of things I want to delete -- no police officers, no prisons, but also no crime.
To imagine this world, you have to imagine what it's like for me, not just what it would be like for you. You have to think of the erasure as killing pain -- not the goodness of there being nothing, you have to think of the goodness of going from something to nothing at all. The relief.
This imagined world is a happier place -- it's a simpler place -- the shapes that offend me sink into the tableau. Nothing is made for me here -- I imagine making a place for myself in the negative space. I imagine no borders, but what I'm really imagining is the boundary of my body dissolving into the boundary of my physical surroundings.
Every day I take some step towards attainment or away from it. See, I barely know where I'm going -- I know nothing's empty, I see shapes in it, I see thought rising in the medium like bubbles, and I see bubbles pooling at the surface. What do I want? I don't know. I know what I don't want. How happy does a life have to become for it to be meaningful?
Answer fast: you have 70 years.
I think of a thousand things in a list of things I want to delete. I think of everyone standing up and collectively walking out. No work, no scarcity. I imagine everyone marching out to a cliff and looking at the sea.
I look at my dog and watch him smiling and don't understand it, then see that I've stabbed my thumb by accident.
(Gonna expand on a comment I whipped out yesterday - feel free to read it for more context)
At this point, it's already well known AI bros are crawling up everyone's ass and scraping whatever shit they can find - robots.txt, honesty and basic decency be damned.
The good news is that services have started popping up to actively cockblock AI bros' digital smash-and-grabs - Cloudflare made waves when they began offering blocking services for their customers, but Spawning AI's recently put out a beta for an auto-blocking service of their own called Kudurru.
(Sidenote: Pretty clever of them to call it Kudurru.)
I do feel like active anti-scraping measures could go somewhat further, though - the obvious route in my eyes would be to feed complete garbage to scrapers instead, whether by sticking a bunch of garbage on webpages to mislead scrapers or by trying to prompt-inject the shit out of the AIs themselves.
The main advantage I can see is subtlety - it'll be obvious to AI corps if their scrapers are given a 403 Forbidden and told to fuck off, but the chance of them noticing that their scrapers are getting fed complete bullshit isn't that high - especially considering AI bros aren't the brightest bulbs in the shed.
Arguably, AI art generators are already getting sabotaged this way to a strong extent - Glaze and Nightshade aside, ChatGPT et al's slop-nami has provided a lot of opportunities for AI-generated garbage (text, music, art, etcetera) to get scraped and poison AI datasets in the process.
How effective this will be against the "summarise this shit for me" chatbots which inspired this high-length shitpost I'm not 100% sure, but between one proven case of prompt injection and AI's dogshit security record, I expect effectiveness will be pretty high.
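To make the "feed them garbage instead of a 403" idea concrete, here's a minimal sketch. The bot list draws on real, documented AI-crawler user-agent strings (GPTBot, CCBot, ClaudeBot, Bytespider), but everything else - the word list, function names, the whole setup - is illustrative, not any real service's implementation:

```python
import random

# Documented AI-crawler user-agent fragments; illustrative, not exhaustive.
AI_SCRAPER_MARKERS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider", "Google-Extended")

# Tiny vocabulary for generating word salad; a real deployment would want
# something far more plausible-looking (e.g. Markov chains over real text).
WORDS = ("pizza", "glue", "synergy", "blockchain", "moisture", "algorithm",
         "cheese", "paradigm", "gravel", "luncheon", "vortex", "stapler")


def looks_like_ai_scraper(user_agent: str) -> bool:
    """Crude user-agent sniff -- enough for a sketch, trivially evaded in practice."""
    return any(marker in user_agent for marker in AI_SCRAPER_MARKERS)


def garbage_page(seed: int, sentences: int = 5) -> str:
    """Deterministic word salad: looks like content, carries no information."""
    rng = random.Random(seed)
    out = []
    for _ in range(sentences):
        words = rng.choices(WORDS, k=rng.randint(6, 12))
        out.append(" ".join(words).capitalize() + ".")
    return " ".join(out)


def respond(user_agent: str, real_page: str, seed: int = 0) -> tuple[int, str]:
    """The trick: suspected scrapers get 200 + garbage, never a 403,
    so nothing on their end looks blocked."""
    if looks_like_ai_scraper(user_agent):
        return 200, garbage_page(seed)
    return 200, real_page
```

The subtlety argument lives in `respond`: a 403 tells the scraper operator to rotate IPs or spoof a browser, while a clean 200 full of nonsense just quietly poisons the dataset. Real-world versions would also need IP-range checks and rate heuristics, since user-agent strings are the easiest thing in the world to fake.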
After reading through Baldur's latest piece on how tech and the public view gen-AI, I've had some loose thoughts about how this AI bubble's gonna play out.
I don't have any particular structure to this, this is just a bunch of things I'm getting off my chest:
Past AI springs had the good fortune of having no obvious negative externalities to sour their reputation with the public (mainly because they weren't public-facing, going by David Gerard).
This bubble, by comparison, has been pretty much entirely public facing, giving us, among other things:
A veritable slop-nami of garbage-looking art, interesting only when it comes off as completely fucking insane (say hi Biblically-accurate gymnasts)
Copyright infringement and art theft on a Biblical scale, leading to basically every major AI company getting sued out the ass (with Suno and Udio being the latest targets)
Colossal amounts of power consumption, and thus planet-cooking levels of CO2 emissions (for the latest example, Google missed its climate targets as a direct result of AI)
High-profile public embarrassments left and right, with Google's pizza-glue pisstake the most obvious coming to mind
Scammers making use of voice-cloning tech to make their scams more convincing (with a particularly notorious flavour imitating a loved one under duress) (thanks to @mountainriver for pointing this one out)
And probably a few more I'm missing
All of these have done a lot of damage to AI's public image, to the point where its absence is an explicit selling point - damage which I expect to last for at least a decade.
When the next AI winter comes in, I'm expecting it to be particularly long and harsh - I fully believe a lot of would-be AI researchers have decided to go off and do something else, rather than risk causing or aggravating shit like this.
Speaking of copyright, basically every AI company has worked under the assumption that copyright basically doesn't exist and they can yoink whatever they want without issue.
With Gen-AI being Gen-AI, getting evidence of their theft isn't particularly hard - as they're straight-up incapable of creativity, they'll puke out replicas of its training data with the right prompt.
Said training data has included, on the audio side, songs held under copyright by major music studios, and, on the visual side, movies and cartoons currently owned by the fucking Mouse.
Unsurprisingly, they're getting sued to kingdom come. If I were in their shoes, I'd probably try to convince the big firms my company's worth more alive than dead and strike some deals with them, a la OpenAI with Newscorp.
Given they seemingly believe they did nothing wrong (or at least Suno and Udio do), I expect they'll try to fight the suits, get pummeled in court, and almost certainly go bankrupt.
There's also the AI-focused COPIED act which would explicitly ban these kinds of copyright-related shenanigans - between getting bipartisan support and support from a lot of major media companies, chances are good it'll pass.
I feel the tech industry as a whole is gonna see its image get further tainted by this, as well - the industry's image has already been falling apart for a while, but it feels like AI's sent that decline into high gear.
When the cultural zeitgeist is doing a 180 on the fucking Luddites and is openly clamoring for AI-free shit, whilst Apple produces the tech industry's equivalent to the "face ad", it's not hard to see why I feel that way.
I don't really know how things are gonna play out because of this. Taking a shot in the dark, I suspect the "tech asshole" stench Baldur mentioned is gonna spread to the rest of the industry thanks to the AI bubble, and it's gonna turn a fair number of people away from working in the industry as a result.
Who's Scott Alexander? He's a blogger. He has real-life credentials but they're not direct reasons for his success as a blogger.
Out of everyone in the world Scott Alexander is the best at getting a particular kind of adulation that I want. He's phenomenal at getting a "you've convinced me" out of very powerful people. Some agreed already, some moved towards his viewpoints, but they say it. And they talk about him with the preeminence of a genius, as if the fact that he wrote something gives it some extra credibility.
(If he got stupider over time, it would take a while to notice.)
When I imagine what success feels like, that's what I imagine. It's the same thing that many stupid people and Thought Leaders imagine. I've hardcoded myself to feel very negative about people who want the exact same things I want. Like, make no mistake, the mental health effects I'm experiencing come from being ignored and treated like an idiot for thirty years. I do myself no favors by treating it as grift and narcissism, even though I share the fears and insecurities that motivate grifters and narcissists.
When I look at my prose I feel like the writer is flailing on the page. I see the teenage kid I was ten years ago, dying without being able to make his point. If I wrote exactly like I do now and got a Scott-sized response each time, I'd hate my writing less and myself less too.
That's not an ideal solution to my problem, but to my starving ass it sure does seem like one.
Let me switch back from fantasy to reality. My most common experience when I write is that people latch onto things I said that weren't my point, interpret me in bizarre and frivolous ways, or outright ignore me. My expectation is that when you scroll down to the end of this post you will see an upvoted comment from someone who ignored everything else to go reply with a link to David Gerard's Twitter thread about why Scott Alexander is a bigot.
(Such a comment will have ignored the obvious, which I'm footnoting now: I agonize over him because I don't like him.)
So I guess I want to get better at writing. At this point I've put a lot of points into "being right" and it hasn't gotten anywhere. How do I put points into "being more convincing?" Is there a place where I can go buy a cult following? Or are these unchangeable parts of being an autistic adult on the internet? I hope not.
There are people here who write well. Some of you are even professionals. You can read my post history here if you want to rip into what I'm doing wrong. The broad question: what the hell am I supposed to be doing?
This post is kind of invective, but I'm increasingly tempted to just open up my Google drafts folder so people can hint me in a better direction.
Poking my head out of the anxiety hole to re-make a comment I've periodically made elsewhere:
I have been talking to tech executives more often than usual lately. [Here is the statistically average AI take.](https://stackoverflow.blog/2023/04/17/community-is-the-future-of-ai/)
You are likely to read this and see "grift" and stop reading, but I'm going to encourage you to apply some interpretive lenses to this post.
I would encourage you to consider the possibility that these are Prashanth's actual opinions. For one, it's hard to nail down where this post is wrong. Its claims about the future are unsupported, but not clearly incorrect. Someone very optimistic could have written this in earnest.
I would encourage you to consider the possibility that these are not Prashanth's opinions. For instance, they are spelled correctly. That is a good reason to believe that a CEO did not write this. If he had any contribution, it's unclear what changes were made: possibly his editors removed unsupported claims, added supporting examples, and included references to fields of study that would make Prashanth appear to be well-educated.
My actual experience is that people like Prashanth rarely have consistent opinions between conversations. Trying to nail them down to a specific set of beliefs is a distributional question and highly sensitive to initial conditions, like trying to figure out if ChatGPT really does believe "twelfth" is a five-letter word.
Like LLMs, salespeople are conditioned on their previous outputs. Prashanth wrote this (or put his name on it). It is public information that he believes this. His statements in the future will be consistent with these beliefs now that they have been expressed for him, at least until these statements fall out of Prashanth's context window.
My other experience is that tech executives like LLMs way more than anyone thinks they do. There is nothing they like more than LLMs. However much you think they like LLMs, they like LLMs more than that. Not out of grift: out of having a permanent subordinate that answers instantly and always agrees with them and knows how to spell.
Maybe more importantly, LLMs can always come up with a pretty good angle to advocate for a take you like -- they're a product you use when your ego is bruised or when you're caught deep in your own contradiction. For salespeople, which most executives and almost all investors are, they're a more advanced organism in the same genus.
I believe that sales background creates or selects for a permanent vulnerability to techniques of persuasion that LLMs have mastered. Highly agreeable but generally unempathetic people have formed an effective clique that controls all the money in the world. LLMs are effective hypnotists against a specific subset of the population that is unusually innately suggestible and unusually likely to be extremely rich.
I would encourage you to consider a fourth possibility. What if Prashanth's post was written and edited by AI? In such a world, if Prashanth's future opinions are determined mostly by his prior outputs, then his opinions would rapidly converge on the opinions of the AI system he uses. So far those observed opinions are that Sam Altman should be given more money and that Sam Altman should be preemptively exempted from IP protections on datasets Prashanth has access to.
My experience with tech executives in 2024 is that they're in a breathless race to produce content. Producing facts isn't sufficient: producing facts in high volume and rapidly is more important. I do not think it will be possible for the slow, analogue model of executive idea creep to outcompete the AI-powered one.
My proposal for any OpenAI employee reading this post:
I just read Naomi Klein's No Logo, and despite being so late to that party, it's not hard to imagine how big an impact it had in its time, identifying the brand as being the product more than the things the businesses made (*sold).
Because I'm always trying to make connections that might not be there, I can't help but think we're at a stage where "Brand" is being replaced by "UX" in a world of tech where you can't really wear brands on your shoulders.
We're inside the bubble so we talk in terms of brands (i.e. OpenAI) and personalities (sama), which are really part of brand, but outside of the bubble the UX is what gets people talking.
When you think about Slack doing their AI dataset shit, you can really see how much their product is a product of UX, or fashion, that could easily be replaced by a similar collection of existing properties.
As I write this, I already wonder if UX is just another facet of brand or if it's a separate entity.
Anyway, I'm writing this out as a "is this a thing?" question. WDYR?