
Posted by cratermoon 7/4/2025

The Rise of Whatever (eev.ee)
644 points | 508 comments | page 3
bgwalter 7/4/2025|
What a great article, I hope it becomes Internet lore. We have been in the Whatever Economy for quite a while and the LLM hype is the logical conclusion.

Like the author, I'm mystified by those who accept the appearance of output as a valid goal.

Even with constrained algorithmic "AI" like Stockfish, which, unlike LLMs, actually works, chess players frown heavily on using it for cheating. No chess player can go to a tournament and say: "I made this game with Stockfish."

tyre 7/4/2025||
The author is mad at Stripe and PayPal for banning transactions involving unicorn wieners but this is imposed on them by the backing banks.

The reason behind banning adult materials has to do with Puritanism and with the high rates of refunds on adult websites.

anonzzzies 7/4/2025||
I have an issue with Stripe or PayPal banning merchants without recourse who don't sell adult stuff or anything else bad/high-refund, just because 'AI flagged the account'. And I know from PayPal that they have used 'AI' (statistics) to flag accounts, without recourse, almost since they started. It cost us a lot of money; we had no refunds or anything, and PayPal simply kept repeating (in email and on the phone) that there was no recourse. We moved to an EU system which we have been using for 15 years now and have never had any issues, of course (as we never did anything weird, ever); also, I can call them or visit them if anything happens, unlike the impersonal big guys. Far cheaper too. Screw PayPal & Stripe (their fees are an absolute joke; no idea why they are so popular), thanks.
dspillett 7/4/2025|||
> no idea why they are so popular

Momentum. They are the big games in town because so many people use them, and so many people use them because they are the big games in town. There was a time for both when they didn't suck as much as they do now, at least relative to the other options that existed.

movetheworld 7/4/2025|||
Which EU system did you move to?
tavavex 7/4/2025|||
I still think it's mostly puritanism. "Adult transactions" is a massive category of goods and services, and I'm willing to bet that John the average guy buying an overpriced subscription on generic porn website #729 and regretting it 20 minutes later is much more likely to trigger a refund or chargeback than someone commissioning an artist or buying goods (anything from real-life things to 3D models).

Yet, the payment processors all reliably treat anything NSFW equally, suppressing it as much as they can: from banning individuals who dare make transactions they don't approve of, to directly pressuring websites that might tolerate NSFW content by threatening to take away their only means of making money. If they only cared about refunds and profitability, they wouldn't ban individual artists - the fact that these artists often manage to stay undetected for years suggests that many of their customers aren't the kind to start complaining.

It's quite fascinating how this is the one area where the companies are willing to "self-regulate". They don't process sales of illicit drugs because the governments above them said no and put in extensive guardrails to make these illegal uses as difficult as reasonably possible. Yet, despite most first-world governments not taking issue with adult content at large (for now), the payment processors will act on their own and diligently turn away any potential revenue they could be collecting.

N_Lens 7/4/2025||
There's a reason "Paypal mafia" is in the lexicon.
ChrisMarshallNY 7/4/2025||
Reading this person's blog, I came upon this article[0].

I can absolutely relate. That was ten years ago, so I'm not exactly sure where they are, now, but they still seem to be going strong.

[0] https://eev.ee/blog/2015/06/09/i-quit-the-tech-industry/

simonw 7/4/2025||
Looks like I was the inspiration for this post then. https://bsky.app/profile/simonwillison.net/post/3lt2xbayttk2...

> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.

The reaction to that post has been interesting. It's mainly intended to be an argument against the LLM hype! I'm pushing back against all the people who are saying "LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.

Analogies like this will inevitably get people hung up on the details of the analogy though. Lots of people jumped straight to "a table saw does a single job reliably, unlike LLMs which are non-deterministic".

I picked table saws because they are actually really dangerous and can cut your thumb off if you don't know how to use them.

rcxdude 7/4/2025||
Also, if you don't have a table saw, just cutting a straight line efficiently and accurately is a fairly important baseline skill for carpentry. That becomes much less of an issue with a table saw, which makes some of the carpentry skillset less important for getting good results (especially if you then only make things consisting of straight lines, so you also don't need to handle more complex shapes well). I think it's a pretty decent analogy.
latexr 7/4/2025|||
> Looks like I was the inspiration for this post then.

You were not, as is patently obvious from the sentence preceding your quote (emphasis mine):

> Another Bluesky quip I saw earlier today, and the reason I picked up writing this post (which I’d started last week)

The post had already been started, your comment was simply a reason to continue writing it at that point in time. Had your comment not existed, this post would probably still have been finished (though perhaps at a later date).

> It's mainly intended to be an argument against the LLM hype! I'm pushing back against all the people who are saying "LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.

Despite your restatement, your point still reads to me as the opposite of what you claim to have intended. Inventing the table saw is a poor analogy because the problem with the LLM hype has nothing to do with their invention. It’s the grifts and the irresponsible shoving of it down everyone’s throats that’s the problem. That’s why the comparison fails: you’re juxtaposing things which aren’t even slightly related. The invention of a technology and the hype around it are two entirely orthogonal matters.

simonw 7/4/2025||
For your benefit I will make two minor edits to things I have said.

> Looks like I was the inspiration for this post then

I replace that with:

> Looks like I was the inspiration for finishing this post then

And this:

> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.

I can rephrase as:

> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the introduction of the table saw.

latexr 7/4/2025||
> For your benefit

If that’s your true impetus, please don’t bother. There’s nothing which benefits me about your words being clearer and less open to misinterpretation. You are, of course, completely welcome to disagree with and ignore my suggestions.

> thanks to the introduction of the table saw.

That makes absolutely no difference at all. And it doesn’t matter anymore either, the harm to your point is already done, no one’s going back to it now to reinterpret it. I was merely pointing out what I see as having gone wrong so you can avoid it in the future. But again, entirely up to you what you do with the feedback.

alittlebee 7/4/2025||
[dead]
ninetyninenine 7/4/2025|||
You have to realize that we're only a couple of years into widespread adoption of LLMs as agentic coding partners. It's obvious to everyone, including you, that LLMs currently cannot replace coders.

People are talking about the trendline: what AI was 5 years ago versus what AI is today points to a different AI 5 years down the line. Whatever AI will be 5 years from now, it is entirely possible that LLMs may eliminate programming as a career. If not in 5 years, give it 10. If not 10, give it 15. Maybe it happens in a day, with a major breakthrough in AI, or maybe it will be like what's currently happening: slow erosion and infiltration into our daily tasks, where it takes on more and more responsibilities until one day it's doing everything.

I mean do I even have to state the above? We all know it. What's baffling to me is how I get people saying shit like this:

>"LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.

I mean, it's an obvious, complete misrepresentation. People are talking about the future, not the status quo, and we ALL know this, yet we still make comments like that.

simonw 7/4/2025|||
The more time I spend using LLMs for code (and being impressed at how much better they are compared to six months ago) the less I worry for my career.

Using LLMs as part of my process helps me understand how much of my job isn't just bashing out code.

My job is to identify problems that can be solved with code, then solve them, then verify that the solution works and has actually addressed the problem.

An even more advanced LLM may eventually be able to completely handle the middle piece. It can help with the first and last pieces, but only when operated by someone who understands both the problems to be solved and how to interact with the LLM to help solve them.

No matter how good these things get, they will still need someone to find problems for them to solve, define those problems and confirm that they are solved. That's a job - one that other humans will be happy to outsource to an expert practitioner.

It's also about 80% of what I do as a software developer already.

indigoabstract 7/4/2025|||
I don't know what will come in the future, but to me it's obvious that any variation of LLMs, no matter how advanced, won't replace a skilled human who knows what they're doing.

Through no fault of their own, they're literally blind. They don't have eyes to see, ears to hear, or fingers to touch and feel, and they have no clue whether what they've produced is any good for the original purpose. They are still only (amazing) tools.

ninetyninenine 7/4/2025||
LLMs produce video and audio data and can parse and change audio and visual data. They hear, see, and read, and the only reason they can’t touch is because we don’t have the training data.

You do not know whether LLMs in the future can replace humans. You can only say that right now they can’t. In the future the structure of the LLM may be modified, or it may become one module among several required for AGI.

These are all plausible possibilities. But you have narrowed it all down to a “no”: LLMs are just tools with no future.

The real answer is that nobody knows. But there are legitimate possibilities here. We have a five-year trend line projecting further growth into the future.

indigoabstract 7/4/2025||
> In the future the structure of the LLM may be modified or it become one module out of multiple that is required for agi.

> The real answer is nobody knows.

This is all just my opinion of course, but it's easy to expect that an LLM which knows all there is to know about every subject written in books and on the internet would be enough to do any office work that can be done with a computer. Yet strangely enough, it isn't.

At this point they still lack the necessary feedback mechanism (the senses) and ability to learn on the job so they can function on their own independently. And people have to trust them, that they don't fail in some horrible way and things like that. Without all these they can still be very helpful, but can't really "replace" a human in doing most activities. And also, some people seem to possess a sense of aesthetics and a wonderful creative imagination, things that LLMs don't really display at this time.

I agree that nobody knows the answer. If and when they arrive at that point, by then the LLM part would probably be just a tiny fraction of their functioning. Maybe we can start worrying then. Or maybe we could just find something else to do. Because people aren't tools, even when economically worthless.

ninetyninenine 7/4/2025||
I disagree. The output of an LLM is a crapshoot: it might work, it might not, maybe 40 to 60 percent of the time. That in itself tells us it’s not a small component of something bigger. It’s likely a large component and the core structure of what is to come. We’ve closed the gap about halfway.
squidbeak 7/4/2025||
The thing is that at this stage, LLMs, and perhaps AI in other forms, also have careers. Right now they're junior developers. But whose career will develop faster or go further: theirs, or the new programmer's?
wiseowise 7/4/2025||
Who cares?
squidbeak 7/4/2025||
The person who'd chosen programming as a career, if AI overtakes human programmers.
Procrastes 7/4/2025||
This captures something I've been struggling to describe, and "Whatever" is the perfect term for this.

Private Equity & Financialization: "Whatever" for business
Flood the Zone & Deadcatting: "Whatever" for politics

It's what I think about when I hear all of the "AI is going to eliminate all the jobs." That's just a convenient cover story for "Tax laws changed so R&D isn't free money anymore, and we need to fire everyone."

When almost every drop of wealth is in the control of a tiny number of people, it's not surprising that the world turns into one big competition for ways to convince those people that you have a way for them to sop up the remaining thimbleful too.

ddorian43 7/4/2025|
The tax law for R&D got fixed with the big bill, though.
lovich 7/4/2025||
I read through the essay and really resonated with some parts and didn’t resonate with others, but I think they put some words to the feelings I have had on AI and its effect in the tech industry

> There are people who use these, apparently. And it just feels so… depressing. There are people I once respected who, apparently, don’t actually enjoy doing the thing. They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though. That’s management, a fairly different job. I’m not interested in managing. I’m certainly not interested in managing this bizarre polite lying daydream machine. It feels like a vizier who has definitely been spending some time plotting my demise.

Several minutes of reading before this paragraph, it hit me that this person hates managing. Because everyone I’ve met who hates using AI to produce software describes problems like the AI not being correct, or lying to them if the model thought that would please them better, and that’s my experience with junior engineers as a manager.

And everyone I’ve met who loves AI at some point makes an analogy to it, that compares it to a team of eager juniors who can do a lot of work fast but can’t have their output trusted blindly, and that’s my experience with junior engineers as a manager.

And then anyone who’s been trying to get an engineering manager job over the past few months and tracking their applications’ metadata has seen the number of open postings for their requirements go down month after month, unless you drop the manager part and keep all the same criteria but as an IC

And then I read commentary from megacorps about their layoffs and read between the lines like here[1]

>… a Microsoft spokesperson said in a statement, adding that the company is reducing managerial layers …

I think our general consternation around this is coming from creators being forced into management instead of being able to outsource those tasks to their own managers.

I am not really sure what to do with this insight

[1] https://www.cnn.com/2025/07/02/tech/microsoft-layoffs-9000-e...

pm215 7/4/2025|
I think there's still a difference even if you look at it as "supervising a bunch of juniors". I'm happy to review the output of a human in that case because I believe that even if they got some stuff wrong and it might have been quicker and more interesting for me to just do the thing, the process is helping them learn and get better, which is both good in itself and also means that over time I have to do the supervision and support part less and less. Supervising an LLM misses out both of those aspects, so it's just not-very-fun work.
lovich 7/4/2025||
>… the process is helping them learn and get better, which is both good in itself and also means that over time I have to do the supervision and support part less and less. Supervising an LLM misses out both of those aspects, so it's just not-very-fun work.

Legitimately, I think you are missing my point. What I quoted from your response could be applied to prompt engineering/management/tinkering. I think everyone who likes doing this with juniors and hates it with AI is conflating their enjoyment of teaching juniors with the dopamine you get from engaging with other primates.

I think most people I’ve met who hated AI would have the same level of hate for a situation where their boss made them actually manage an underperforming employee instead of letting them continue on as is ad infinitum.

It’s hard work both mentally and emotionally to correct an independent agent well enough to improve their behavior but not strongly enough to break them, and I think most AI haters are choking on this fact.

I’m saying that from the position of an engineer who got into management and choked on the fact that sometimes upper leadership was right, and the employee complaining to me about the “stupid rules”, or trying to lie to me to get a gold star instead of a bronze one, was the agent in the system who was actually at fault

pm215 7/4/2025||
No, I really don't think that prompt engineering is the same thing. Anything I put in the prompt may help this particular conversation, but a fresh instance of the LLM will be exactly the way it was before I started. Improvements in the LLM will happen because the LLM vendor releases a new model, not because I "taught" it anything.
eddiewithzato 7/4/2025||
You also don’t get the satisfaction of watching something grow. Teaching and being a mentor is entirely separate to massaging a prompt
pm215 7/4/2025||
Yeah. I do agree with lovich that there's a lot of stuff about management that's just not fun (and that's part of why I've always carefully avoided it!) -- and one thing about AI is not just that it's management but that it's management with a lot of the human-interaction, mentoring, etc upsides removed.
827a 7/4/2025||
> What are we actually saying here — that even Microsoft has to evaluate usage of “AI” directly, because it doesn’t affect performance enough to have an obvious impact otherwise

Oh wow.

Ezhik 7/4/2025||
I feel it. We can debate AI over and over and over but my ultimate problem is not even the tech itself but the "whatever" part.

I'm a bit annoyed with LLMs for coding, because I care about the craft. But I understand the premise of using them when the end goal is not "tech as a craft" but "tech as a means". But that still requires having some reason to use the tech.

Hell, I feel the "tech as a means to get money" part for people trying to climb up the social ladder.

But for a lot of people who already did get to the top of it?

At some point we gotta ask what the point of SEO-optimizing everything even is.

Like, is the end goal optimizing life out of life?

Why not write a whole app using LLMs? Why not have the LLM do your coursework? Why do the coursework at all? Why not have the LLM make a birthday card for your partner? Why even get up in the morning? Why not just leave and go live in a forest? Why live at all?

What is even the point?

gattr 7/4/2025||
On a more positive note, LLMs (or their successors) could be used to create a perfect tutor. Tailored to every individual, automatically adjusting the difficulty of learning material, etc.

But yeah, first we'll go through a few (?) years of the self-defeating "ChatGPT does my homework" and the necessary adjustments of how schools/unis function.

vrighter 7/4/2025|||
So you suggest training a model for each individual student? Because LLM "inference" sure as hell isn't capable of tailoring anything to anything, or changing in any way.

And also, how is personalized bullshit better than generic bullshit? We'd need to solve the bullshit problem in the first place, which is mathematically guaranteed NOT to be possible with these types of architectures.

Ezhik 7/4/2025|||
Oh yeah, it's going to be really interesting when the hype dies down and we start seeing the actual good use cases get homed in.
wiseowise 7/4/2025||
You need to take a break from your “craft” if fancy autocomplete makes you question reason to live.
Ezhik 7/4/2025||
And do what, given that AI-as-marketed optimizes out having relationships with people and enjoying art/cinema/music?

Touch grass all by myself?

wiseowise 7/4/2025||
[flagged]
Ezhik 7/4/2025||
I think calling me delusional for trying to understand the hype might be a bit much. It's not my own life I was describing.
OvbiousError 7/4/2025||
> Bitcoin failed as a currency because the people who got most invested in it do not care about currency

As far as I understand, bitcoin is fundamentally unusable as a currency. Transactions are expensive and limited to ?7k? every few seconds. It's also inherently deflationary; you want an inflationary currency, because you want people spending, not hoarding.

fsh 7/4/2025|
It's much worse: the maximum on-chain transaction rate is something like 7 per second. Also, the time intervals between blocks have a huge spread, so it can take more than an hour for a transaction to be confirmed if you are unlucky. This is obviously impractical, so people came up with schemes such as Lightning to avoid touching the blockchain as much as possible. Of course, this makes it much more difficult to judge whether the system can be cheated in some way...
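Both figures in this comment fall out of simple arithmetic, if you model block arrival as a memoryless process with Bitcoin's 10-minute target interval. A rough sketch (the ~1 MB block size and ~250-byte average transaction size are assumed ballpark values, not from the thread):

```python
import math

AVG_BLOCK_MIN = 10.0      # Bitcoin's target average block interval
BLOCK_BYTES = 1_000_000   # ~1 MB of transaction data per block (assumed)
AVG_TX_BYTES = 250        # rough average transaction size (assumed)

def throughput_tps() -> float:
    """Rough on-chain throughput: transactions per block divided by block time."""
    return (BLOCK_BYTES / AVG_TX_BYTES) / (AVG_BLOCK_MIN * 60)

def prob_wait_exceeds(minutes: float) -> float:
    """P(the next block takes longer than `minutes` to arrive), modelling
    block arrival as exponential with a 10-minute mean."""
    return math.exp(-minutes / AVG_BLOCK_MIN)

# ~6.7 transactions per second, matching the "something like 7" above
tps = throughput_tps()

# roughly 1 block in 400 takes over an hour to arrive
p_hour = prob_wait_exceeds(60)
```

This is only a sketch of the "huge spread": under the exponential model, waiting an hour for a single block is unlikely but far from negligible.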
m0wer 7/4/2025||
Blockchains don't scale. But that's a feature, not a bug.

Great protocols are built in layers.

You get decentralized instant settlement for an average fee of 0.005%, even for micropayments, with the Lightning Network (another protocol built on top of Bitcoin). That's orders of magnitude better than the settlement time of the current payment networks, with comparable resilience.
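Taking the 0.005% figure above at face value, the gap on a micropayment is easy to work out. The 2.9% + $0.30 card pricing below is an assumed typical rate for illustration, not a figure from the thread:

```python
def fee(amount: float, rate: float, fixed: float = 0.0) -> float:
    """Fee for a payment: a percentage rate plus any fixed per-transaction charge."""
    return amount * rate + fixed

amount = 2.00                         # a $2 micropayment
lightning = fee(amount, 0.00005)      # the 0.005% average cited above -> $0.0001
card = fee(amount, 0.029, 0.30)       # assumed typical card pricing -> $0.358
```

On a $2 payment the fixed $0.30 component alone is 15% of the amount, which is why percentage-only fee schemes are pitched at micropayments.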

immibis 7/10/2025||
This might be the first time I've heard the argument "it's great that my project is shit because a lot of great things are layers on top of shit"
m0wer 7/11/2025||
You missed the point. The IP protocol doesn't scale! The Ethernet MTU is just 1500 bytes - how are we ever going to transfer a movie over the Internet?

Ethernet does not need to carry the whole movie in one packet. If it does the job of delivering an MTU-sized frame to the host on the other side of the cable, it's good. WebSockets can be figured out somewhere else. The IP stack is not shit because each layer does just one thing; it's good because of that.
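The layering argument can be made concrete: a payload larger than the link MTU is simply split into MTU-sized pieces at one layer and reassembled at a layer above. A hypothetical sketch (the function names are illustrative, not any real stack's API):

```python
MTU = 1500  # standard Ethernet payload limit, in bytes

def fragment(payload: bytes, mtu: int = MTU) -> list[bytes]:
    """Split a payload into pieces no larger than the MTU."""
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

def reassemble(pieces: list[bytes]) -> bytes:
    """Join the pieces back into the original payload, in order."""
    return b"".join(pieces)

movie = bytes(10_000)            # stand-in for a large transfer
pieces = fragment(movie)         # 7 pieces: six of 1500 bytes, one of 1000
assert reassemble(pieces) == movie
assert all(len(p) <= MTU for p in pieces)
```

The lower layer never knows or cares that the pieces form a movie; that knowledge lives entirely in the layer doing the reassembly, which is the point being made about layered protocols.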

But hey, time will tell.

More comments...