You're a platform drone, you have no mind, yada yada. Yet here we are, reading the author's blog.
The author may hate LLMs, but they will lead many people to realize things they were never aware of, like the author's superficial ability to take information and present it in a way that engages others. Soon that will be widely known, and not many will make money sharing information in prose.
What the author refers to as "LLMs" today will continually improve and "get better" at everything the author has issues with, maybe in novel ways we can't think of at the moment.
Alternative take:
"Popular culture" has always been a "lesser" ideal of experience, and now that ontological grouping now includes the Internet, as a whole. There are no safe corners, everything you experience on the Internet, if someone shared it with you, is now "Popular culture".
Everyone knows what you know; you are no longer special, nor do you have special things to share, because awareness is ubiquitous.
This is good for society in many ways.
For example, information asymmetry let assholes make other people their food; as that asymmetry erodes, it will become less common for people to be food.
Things like ad-driven social networks will fade away as this realization becomes normalized.
Unfortunately, we are at the very early stages of this, and it takes a very long time for people to become aware of things like hoaxes.
-----
This is why I absolutely cannot fucking stand creative work being referred to as "content". "Content" is how you refer to the stuff on a website when you're designing the layout and don't know what actually goes on the page yet. "Content" is how you refer to the collection of odds and ends in your car's trunk. "Content" is what marketers call the stuff that goes around the ads.
"Content"... is Whatever.
-----
People, please don't think of yourself as "content consumers".
The point of doing things is the act of doing them, not the result. And if we make the result easily obtainable by using an LLM then this gets reinforced not destroyed.
I'm going to use sketching as an example, because it's something I enjoy but am very bad at. But you could talk in the same way about playing a musical instrument, writing code, writing anything really, knitting, sports, anything.
I derive inspiration from other people who can sketch really well, and I enjoy and admire their ability. But I'm happy that I will never be that good. The point of sketching (for me) is not to produce a fantastic drawing. The point is threefold: firstly, to really look at the world; secondly, to practice a difficult skill; and thirdly, the meditative time of being fully absorbed in a creative act.
I like the fact that LLMs remove the false idea that the point of this is to produce Art. The LLM can almost certainly produce better Art than I can. Which is great, because the point of sketching, for me, is the process, not the result, and having the result be almost completely useless helps make that point. It also helps that I'm really bad at sketching, so I never want to hang the result on my wall anyway.
I understand that if you're really good at something, and take pride in the result of that, and enjoy the admiration of others at your accomplishments, then this might suck. That's gotta be tough. But if you only ever did it for the results and admiration, then maybe find something that you actually enjoy doing?
For art/craft you are completely correct though.
You said: how many letters are in the lithuanian word "nebeprisikaspinaudamas"? Just give me one number.
ChatGPT said: 23
You said: how many letters are in the lithuanian word 'nebeprisikaspinaudamas'. Just give me one number.
ChatGPT said: 21
Both are incorrect, by the way. It's 22.
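Easy enough to verify without an LLM, with a one-line Python check:

print(len("nebeprisikaspinaudamas"))  # 22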
I've found that they get pretty wishy-washy when you correct them. As an example, yesterday I was working on porting a function from the open-source CUE4Parse project from C# to Python (for a hobby project), and the LLM (Gemini 2.5 Pro) suggested the following translation of a C# method:
@staticmethod
def from_name(name: str) -> 'FPackageId':
    # C# Encoding.Unicode is UTF-16 LE in Python
    lower_name_bytes = name.lower().encode('utf-16-le')
    hashed_id = CityHash64(lower_name_bytes)
    return FPackageId(hashed_id)
I noted that the original used a custom ToLower() implementation:
> This custom ToLower(), does that produce the same result as .lower() in Python?
Gemini answered with a lot of text and concluded: "You should use Python's standard lower() method for your port."
I pushed back with:
> Right, but for this to work (looking up an asset by its hash as contained in global.utoc), I probably have to match the behavior of Unreal Engine...
Gemini: You are absolutely right. My apologies, your insight is spot on. For the purpose of matching hashes that are pre-calculated by Unreal Engine and stored in files like `global.utoc`, you **must** replicate the engine's behavior exactly. My previous advice to use Python's `.lower()` was incorrect in this specific context.
You need to implement a function that behaves identically to the C# `ToLower` you highlighted. The engine uses this specific, non-Unicode-aware, ASCII-only lowercasing function before hashing the path to generate the `FPackageId`. Using Python's standard `.lower()` would produce a different byte sequence for any path containing non-English characters, leading to a hash mismatch.
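For illustration, here is a minimal sketch of the difference (my own toy code, not CUE4Parse's or the engine's actual implementation; the helper name to_lower_ascii is made up):

def to_lower_ascii(name: str) -> str:
    # Fold only A-Z, leaving non-ASCII characters untouched,
    # unlike str.lower(), which is Unicode-aware.
    return ''.join(chr(ord(c) + 32) if 'A' <= c <= 'Z' else c for c in name)

path = "/Game/Ünreal/Asset"
# The UTF-16-LE byte sequences differ, so the CityHash64 inputs would too.
print(path.lower().encode('utf-16-le') == to_lower_ascii(path).encode('utf-16-le'))  # False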
From my experience this is representative of the typical LLM interaction once one ventures into niche topics like Unreal Engine modding. Also, more importantly, they will ignore negative directives: telling one "don't do X" will get ignored. You are better off using positive directives instead.
But to make a comparison here with Claude Code: I was initially impressed with Gemini's ability to keep a conversation on track, but it rarely gets the hint when I express annoyance with its output. Claude has an uncanny ability to guess what I find wrong with its output (even when I just respond with "WTF!") and will try to fix it, often in actually useful ways; Gemini just keeps repeating its last output after acknowledging my annoyance.
But that's not what the marketing says. The marketing says it will do your entire job for you.
In reality, it will save you some typing if you already know what to do.
On HN at least, where most people are steeped in startup/hustle culture and are experts in something, they don't think long-term enough to see the consequences for non-experts.
I'm not sure it's a lot of value. It probably is in the short term, but in the long run...
There have already been studies saying that you don't retain the information about what an LLM does for you. Even if you are already an expert (a status you attained the traditional way), that cuts you off from all those tiny improvements that used to happen every day without you noticing.
This goes too far in the other direction. LLMs can do far more than merely saving you typing. I have successfully used coding agents to implement code which at the outset I had no business writing as it was far outside my domain expertise. By the end I'd gained enough understanding to be able to review the output and guide the LLM towards a correct solution, far faster than the weeks or months it would have taken to acquire enough background info to make an attempt at coding it myself.
I'm sure I can do what you describe as well. I've actually used LLMs to get myself current on some stuff I knew the (old) basics of, and they were indeed useful, as you say.
I'm also sure it wouldn't help your interns to grow to your level.
In the past ten years, I worked with one guy from Nigeria and a bunch of people from Iran. Bitcoin (or rather, cryptocurrencies in general) has been more than a mere gimmick.
Sending money from one side of the world to the other, for such low fees, even when the central bank of Nigeria is blocking USD transfers, even when bank transfers to Iran are being blocked, has been very useful in itself.
As for AI, yes if you use it as a God, you will be disappointed. Yes it can’t do everything. Yes it will hallucinate.
But it’s been a great learning environment for me, I keep asking questions to get an overview of things.
I used it to learn concepts such as "clean architecture", because AI is never tired; it can provide endless variations on the same problems until you better understand the underlying principles and the recurring patterns.
When I work on a project, 20-30% is creative, cutting edge, never seen before. 70% is CRUD, necessary boilerplate.
I know what I am supposed to be doing, I can verify the result, double check the validity of it.
Why would I waste hours typing letters?
Do I rely on AI for everything? No I don’t.
But pretending it’s completely useless is nonsensical.
Yes it’s just statistically inferring the next token, but it’s actually a very simple, powerful concept.
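A toy illustration of that concept (the probability table below is invented for the example, not taken from any real model):

# Greedy decoding in miniature: pick the most probable continuation.
probs = {"world": 0.6, "there": 0.3, "friend": 0.1}  # P(next token | "hello")
print(max(probs, key=probs.get))  # world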
Then it became hip, and people would hand-roll machine-specific assembly code. Later on, it became too onerous when CPU architecture started to change faster than programmers could churn out code. So we came up with compilers, and people started coding at a higher level of abstraction. No one lamented the lost art of assembly.
Coding is just a means to an end. We’ve always searched for better and easier ways to convince the rocks to do something for us. LLMs will probably let us jump another abstraction level higher.
In the early days, I too spent hours looking for the right PHP or Perl snippet to do something. My hard-earned bash-fu is mostly useless now. Am I sad about it? Nah. Writing bash always sucked, who am I kidding. Also, regex. I never learned it properly. It doesn’t appeal to me. So I’m glad these whatever machines are helping me do this grunt work.
There are sides of programming I like, and implementation isn’t one of them. Once upon a time I couldn’t care less about the binary streams ticking the CPU. Now I’m excited about the probable prospect of not having to think as much about “higher-level” code and jumping even higher.
To me, programming is more like science than art. Science doesn’t care how much profundity we find in the process. It moves on to the next thing for progress.
AI in its current state in my workflow is a decent search engine and Stack Overflow. But it has far greater pitfalls, as OP pointed out (it just assumes its code is always 100% accurate and will “fake” APIs).
I only use AI for small problems rather than let it orchestrate entire files.