
Posted by cratermoon 7/4/2025

The Rise of Whatever (eev.ee)
644 points | 508 comments | page 2
ZYbCRq22HbJ2y7 7/4/2025|
IMO, this is more ranting about people who meet the metrics of the platform.

You're a platform drone, you have no mind, yada. Yet, we are reading the author's blog.

The author may hate LLMs, but they will lead to many people realizing things they never were aware of, like the author's superficial ability to take information and present it in a way that engages others. Soon that will be a thing that is known. Not many will make money sharing information in prose.

What the author refers to as "LLMs" today, will continually improve and "get better" at everything the author has issues with, maybe in novel ways we can't think of at the moment.

Alternative take:

"Popular culture" has always been a "lesser" ideal of experience, and that ontological grouping now includes the Internet as a whole. There are no safe corners; everything you experience on the Internet, if someone shared it with you, is now "popular culture".

Everyone knows what you know, and you are no longer special or have special things to share, because awareness is ubiquitous.

This is good for society in many ways.

For example, information asymmetry let assholes make others their food; it will become less common that people are food.

Things like ad-driven social networks will fade away as this realization becomes normalized.

Unfortunately, we are at the very early stages of this, and it takes a very long time for people to become aware of things like hoaxes.

wiseowise 7/4/2025|
I’m in your camp, but what makes you think assholes won’t suffocate the technology in its infancy and use it to oppress others even further?
vasco 7/4/2025||
Well they even call them "content creators", is there anything more whatever? It's literally "whatever" content needed to load the ads around it. It's not painters or musicians or documentarians, it's content creators.
kzrdude 7/4/2025|
I'm delighted to see the point about "content" in the blog post, finally someone saying that (I also think so and have thought so).
vasco 7/4/2025||
I always found it weird that the new generation embraced being called the id of the div they put stuff in.
ranger207 7/4/2025||
I liked the idea in there about how there's so many people out there that look at a thing merely for its potential to "line go up", and the actual thing itself doesn't matter at all, it's just Whatever. The current crypto is the perfect distillation in that regard, but it does apply more broadly to everything. It's the same problem with management as a skill entirely separate from what's being managed: It's abstracting around Whatever (be it a particular crypto token, Netflix show, or plumbing company), and when the abstraction inevitably leaks, the MBA solution is to crush whatever leaks, since that's not purely related to "line goes up". Of course, abstractions don't only leak trivial details, and so when the leaky part is cut off to try to shove the Whatever back into the box, it's cutting off important parts of the Whatever itself, causing it to wither and die. The second half of the article about LLMs and stuff I agree with, but I think that insight from the first part of the article is the more important one
dwedge 7/4/2025||
It's probably just me but I really struggle to read this whiney tone that became common around ten years ago. It's not about the subject matter itself, it's about the jokes that are always sad somehow, the font choice, the tone of the language. Some kind of perpetual-victim style of writing.
lubujackson 7/5/2025|||
I agree, it's immature, repetitive, polemical, and dismissive. Not that it doesn't capture a popular mindset or make a decent point, but it does so without clarity or persuasion. Then again, it's just a blog post.
nottorp 7/4/2025||
Emphasis:

-----

This is why I absolutely cannot fucking stand creative work being referred to as "content". "Content" is how you refer to the stuff on a website when you're designing the layout and don't know what actually goes on the page yet. "Content" is how you refer to the collection of odds and ends in your car's trunk. "Content" is what marketers call the stuff that goes around the ads.

"Content"... is Whatever.

-----

People, please don't think of yourself as "content consumers".

layer8 7/4/2025|
I upvoted your content.
marcus_holmes 7/4/2025||
I think I share the conclusion, but coming at it from the other side.

The point of doing things is the act of doing them, not the result. And if we make the result easily obtainable by using an LLM then this gets reinforced not destroyed.

I'm going to use sketching as an example, because it's something I enjoy but am very bad at. But you could talk in the same way about playing a musical instrument, writing code, writing anything really, knitting, sports, anything.

I derive inspiration from other people who can sketch really well, and I enjoy and admire their ability. But I'm happy that I will never be that good. The point of sketching (for me) is not to produce a fantastic drawing. The point is threefold: firstly to really look at the world, and secondly to practice a difficult skill, and thirdly the meditative time of being fully absorbed in a creative act.

I like the fact that LLMs remove the false idea that the point of this is to produce Art. The LLM can almost certainly produce better Art than I can. Which is great, because the point of sketching, for me, is the process not the result, and having the result be almost completely useless helps make that point. It also helps that I'm really bad at sketching, so I never want to hang the result on my wall anyway.

I understand that if you're really good at something, and take pride in the result of that, and enjoy the admiration of others at your accomplishments, then this might suck. That's gotta be tough. But if you only ever did it for the results and admiration, then maybe find something that you actually enjoy doing?

AndrewDucker 7/4/2025||
The point of nearly everything I do in the office is the result, not the doing.

For art/craft you are completely correct though.

prmph 7/4/2025||
Are you not contradicting yourself? If the point of doing things is the doing itself, not the result, then how is it meaningful to just issue a low-effort prompt for the AI to do it?
Revisional_Sin 7/5/2025||
They are saying that having the option to make AI art, and choosing not to, helped clarify their purpose for making art.
vincnetas 7/4/2025||
After reading "...the press release and in the fine print it says that now it can count the number of letters in “Mississippi” correctly or whatever", I tried to count letters in another word :)

You said: how many letters are in the lithuanian word "nebeprisikaspinaudamas"? Just give me one number. ChatGPT said: 23

You said: how many letters are in the lithuanian word 'nebeprisikaspinaudamas'. Just give me one number. ChatGPT said: 21

Both are incorrect, by the way. It's 22.
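For reference, the count is deterministic and trivial to verify once you operate on the string itself rather than on tokens:

```python
# Counting the letters in the word directly, outside any LLM.
word = "nebeprisikaspinaudamas"
print(len(word))  # 22
```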

fsh 7/4/2025||
I'm all for making fun of LLMs, but asking this from a software that processes vectors of tokens is a bit silly. The information isn't really there in the input.
layer8 7/4/2025||
Then it shouldn’t pretend to know the correct answer.
molteanu 7/4/2025||
I got 22 the first time. But then 21 the second time. And the third time...
thombles 7/4/2025||
Speaking as a grump who recently chilled out, put reservations on hold and gave Claude a crack... it turns out that the anti-AI crowd (which still includes me in many regards) gets a lot wrong about the experience of using it, as demonstrated in TFA. You don't get reams of wishy-washy code unless you ask for it. If you're an experienced developer who Knows What They Want then you can wield it like a scalpel. None of the output is a surprise because you discussed the context and requirements first. It just gets there (probably) faster than you might have typing out keywords yourself. If the goal is Whatever, then sure, you will get that faster.
hyper57 7/4/2025||
> You don't get reams of wishy-washy code unless you ask for it.

I've found that they get pretty wishy-washy when you correct them. As an example, yesterday I was working on porting a function from the open-source CUE4Parse project from C# to Python (for a hobby project), and the LLM (Gemini 2.5 Pro) suggested the following translation of a C# method:

    @staticmethod
    def from_name(name: str) -> 'FPackageId':
        # C# Encoding.Unicode is UTF-16 LE in Python
        lower_name_bytes = name.lower().encode('utf-16-le')
        hashed_id = CityHash64(lower_name_bytes)
        return FPackageId(hashed_id)
I noted that the original used a custom ToLower() implementation:

> This custom ToLower(), does that produce the same result as .lower() in Python?

Gemini answered with a lot of text and concluded: "You should use Python's standard lower() method for your port."

I pushed back with:

> Right, but for this to work (looking up an asset by its hash as contained in global.utoc), I probably have to match the behavior of Unreal Engine...

  Gemini: You are absolutely right. My apologies, your insight is spot on. For the purpose of matching hashes that are pre-calculated by Unreal Engine and stored in files like `global.utoc`, you **must** replicate the engine's behavior exactly. My previous advice to use Python's `.lower()` was incorrect in this specific context.

  You need to implement a function that behaves identically to the C# `ToLower` you highlighted. The engine uses this specific, non-Unicode-aware, ASCII-only lowercasing function before hashing the path to generate the `FPackageId`. Using Python's standard `.lower()` would produce a different byte sequence for any path containing non-English characters, leading to a hash mismatch.
From my experience this is representative of the typical LLM interaction once one ventures into niche topics like Unreal Engine modding.
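To make the hash-matching point concrete, here is a minimal Python sketch of the kind of ASCII-only lowercasing Gemini ended up recommending. This is illustrative only; the function name is hypothetical, and anyone porting should verify against the actual C# ToLower in CUE4Parse rather than trust this sketch:

```python
def to_lower_ascii(s: str) -> str:
    """Lowercase only A-Z, leaving every other character untouched.

    Unlike str.lower(), this is not Unicode-aware, which is the point:
    the hash input must match the engine's byte sequence exactly.
    """
    return "".join(
        chr(ord(c) + 32) if "A" <= c <= "Z" else c
        for c in s
    )

# Non-English characters show where the two diverge:
print(to_lower_ascii("Game/Ärmel/MAP"))  # game/Ärmel/map  (Ä unchanged)
print("Game/Ärmel/MAP".lower())          # game/ärmel/map  (Ä lowered)
```

Any path containing such characters would hash differently under the two functions, which is exactly the mismatch described above.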
swat535 7/4/2025|||
Yes, if they make a mistake and you point it out, they will make another mistake. In many of those cases, it's better to scrap it and start with a fresh context, try a different prompt (or provide a smaller context).

Also, more importantly, they tend to ignore negative directives. Telling it "don't do X" will get ignored. You are better off using positive directives instead.

rcxdude 7/4/2025||||
It's pretty difficult to have a useful back and forth with an LLM, because they're really heavily finetuned to be agreeable (and also they're not particularly smart, just knowledgeable, so their 'system 1' is a lot better than their 'system 2', to analogize with 'thinking fast and slow'). Generally speaking if they don't get a useful answer in one shot or with relatively simple, objective feedback, they're just going to flop around and agree with whatever you last suggested.
prmph 7/4/2025|||
Exactly.

But, to compare with Claude Code: I was initially impressed with Gemini's ability to keep a conversation on track, but it rarely gets the hint when I express annoyance with its output. Claude has an uncanny ability to guess what I find wrong with its output (even when I just respond with WTF!) and will try to fix it, often in actually useful ways; Gemini just keeps repeating its last output after acknowledging my annoyance.

alittlebee 7/4/2025|||
I like this perspective, it’s not a replacement for thinking, it’s a replacement for typing what has already been typed countless times.
nottorp 7/4/2025||
> If you're an experienced developer who Knows What They Want then you can wield it like a scalpel.

But that's not what the marketing says. The marketing says it will do your entire job for you.

In reality, it will save you some typing if you already know what to do.

On HN at least, where most people are startup/hustle culture and experts in something, they don't think long term enough to see the consequences for non experts.

thombles 7/4/2025|||
Well I never set much store by marketing and I'm not planning to start. :) More seriously though it helps explain the apparent contradiction that it sounds scammy at a macro level yet many individuals report getting a lot of value out of it.
nottorp 7/4/2025||
> many individuals report getting a lot of value

I'm not sure it's a lot of value. It probably is in the short term, but in the long run...

There have already been studies saying that you don't retain the info about what an LLM does for you. Even if you are already an expert (a status which you have attained the traditional way), that cuts you off from all those tiny improvements that happen every day without your noticing.

adastra22 7/4/2025|||
> In reality, it will save you some typing if you already know what to do.

This goes too far in the other direction. LLMs can do far more than merely saving you typing. I have successfully used coding agents to implement code which at the outset I had no business writing as it was far outside my domain expertise. By the end I'd gained enough understanding to be able to review the output and guide the LLM towards a correct solution, far faster than the weeks or months it would have taken to acquire enough background info to make an attempt at coding it myself.

nottorp 7/4/2025||
I'd love it if everyone who posts statements like this would also include a link to their professional experience. Or at least state a number of years they've been developers.

I'm sure I can do what you describe as well. I've actually used LLMs to get myself current on some stuff I knew (old) basics for and they were useful indeed as you say.

I'm also sure it wouldn't help your interns to grow to your level.

adastra22 7/4/2025||
Why? LLMs are fantastic tutors, if used right. It's how you use it, not the background knowledge you bring to it.
soueuls 7/6/2025||
I found some interesting bits in this article but overall I think it’s being overly cynical.

In the past ten years, I worked with one guy from Nigeria and a bunch of people from Iran. Bitcoin (or rather cryptocurrencies in general) has been more than a mere gimmick.

Sending money from one side of the world to the other, for such low fees, even when the central bank of Nigeria is blocking USD transfers, even when bank transfers to Iran are being blocked, has been very useful in itself.

As for AI, yes if you use it as a God, you will be disappointed. Yes it can’t do everything. Yes it will hallucinate.

But it’s been a great learning environment for me, I keep asking questions to get an overview of things.

I used it to learn concepts such as "clean architecture", because AI never tires; it can provide endless variations on the same problems until you better understand the underlying principles and the recurring patterns.

When I work on a project, 20-30% is creative, cutting edge, never seen before. 70% is CRUD, necessary boilerplate.

I know what I am supposed to be doing, I can verify the result, double check the validity of it.

Why would I waste hours typing letters?

Do I rely on AI for everything? No I don’t.

But pretending it’s completely useless, is nonsensical.

Yes it’s just statistically inferring the next token, but it’s actually a very simple, powerful concept.

rednafi 7/4/2025|
Software programming used to be a blue-collar thing in the early days, when hardware wiring was all the rage.

Then it became hip, and people would hand-roll machine-specific assembly code. Later on, it became too onerous when CPU architecture started to change faster than programmers could churn out code. So we came up with compilers, and people started coding at a higher level of abstraction. No one lamented the lost art of assembly.

Coding is just a means to an end. We’ve always searched for better and easier ways to convince the rocks to do something for us. LLMs will probably let us jump another abstraction level higher.

I too spent hours looking for the right PHP or Perl snippet in the early days to do something. My hard-earned bash-fu is mostly useless now. Am I sad about it? Nah. Writing bash always sucked, who am I kidding. Also, regex. I never learned it properly. It doesn’t appeal to me. So I’m glad these whatever machines are helping me do this grunt work.

There are sides of programming I like, and implementation isn't one of them. Once upon a time I couldn't have cared less about the binary streams ticking the CPU. Now I'm excited about the probable prospect of not having to think as much about "higher-level" code and jumping even higher.

To me, programming is more like science than art. Science doesn’t care how much profundity we find in the process. It moves on to the next thing for progress.

eddiewithzato 7/4/2025||
LLMs will not be doing that. I wish they could, but they just spit out whatever without verifying anything. Even in Cursor which has the agent tell you to run the test script they generated to verify the output, it just says “yep seems fine to me!”.

AI in its current state is, in my workflow, a decent search engine and Stack Overflow. But it has far greater pitfalls, as OP pointed out (it just assumes the code is always 100% accurate and will "fake" APIs).

wiseowise 7/4/2025||
That’s where you, human, come into the scene.
eddiewithzato 7/4/2025||
And that’s where I end up wasting more time investigating and fixing issues, rather than creating a solution ;)

I only use AI for small problems rather than let it orchestrate entire files.

archagon 7/4/2025||
LLMs are not an abstraction. If anything, they are the opposite of an abstraction.