Like the author, I'm mystified by those who accept the appearance of output as a valid goal.
Even with constrained algorithmic "AI" like Stockfish, which, unlike LLMs, actually works, chess players frown heavily on using it for cheating. No chess player can go to a tournament and say: "I made this game with Stockfish."
The reason behind banning adult materials has to do with Puritanism and with the high rates of refunds on adult websites.
Momentum. They're the big games in town because so many people use them, and so many people use them because they're the big games in town. There was a time, for both, when they didn't suck as much as they do now, at least relative to the other options that existed.
Yet, the payment processors will all reliably treat anything NSFW equally by suppressing it as much as they can: from banning individuals who dare make transactions they don't approve of, to directly pressuring websites that might tolerate NSFW content by threatening to take away their only means of making money. If they only cared about refunds and profitability, they wouldn't ban individual artists; the fact that these artists often manage to stay undetected for years suggests that many of their customers aren't the kind to start complaining.
It's quite fascinating how this is the one area where the companies are willing to "self-regulate". They don't process sales of illicit drugs because the governments above them said no and put in extensive guardrails to make these illegal uses as difficult as reasonably possible. Yet, despite most first-world governments not taking issue with adult content at large (for now), the payment processors will act on their own and diligently turn away any potential revenue they could be collecting.
I can absolutely relate. That was ten years ago, so I'm not exactly sure where they are now, but they still seem to be going strong.
[0] https://eev.ee/blog/2015/06/09/i-quit-the-tech-industry/
> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.
The reaction to that post has been interesting. It's mainly intended to be an argument against the LLM hype! I'm pushing back against all the people who are saying "LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.
Analogies like this will inevitably get people hung up on the details of the analogy though. Lots of people jumped straight to "a table saw does a single job reliably, unlike LLMs which are non-deterministic".
I picked table saws because they are actually really dangerous and can cut your thumb off if you don't know how to use them.
You were not, as is patently obvious from the sentence preceding your quote (emphasis mine):
> Another Bluesky quip I saw earlier today, and the reason I picked up writing this post *(which I’d started last week)*
The post had already been started, your comment was simply a reason to continue writing it at that point in time. Had your comment not existed, this post would probably still have been finished (though perhaps at a later date).
> It's mainly intended to be an argument against the LLM hype! I'm pushing back against all the people who are saying "LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.
Despite your restating, your point still reads to me as the opposite of what you claim to have intended. Inventing the table saw is a poor analogy because the problem with the LLM hype has nothing to do with their invention. It’s the grifts and the irresponsible shoving of it down everyone’s throats that are the problem. That’s why the comparison fails: you’re juxtaposing things which aren’t even slightly related. The invention of a technology and the hype around it are two entirely orthogonal matters.
> Looks like I was the inspiration for this post then
I would replace that with:
> Looks like I was the inspiration for finishing this post then
And this:
> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.
I would rephrase as:
> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the introduction of the table saw.
If that’s your true impetus, please don’t bother; your words being clearer and less open to misinterpretation doesn’t benefit me in any way. You are, of course, completely welcome to disagree with and ignore my suggestions.
> thanks to the introduction of the table saw.
That makes absolutely no difference at all. And it doesn’t matter anymore either: the harm to your point is already done, and no one’s going back to it now to reinterpret it. I was merely pointing out what I see as having gone wrong so you can avoid it in the future. But again, it’s entirely up to you what you do with the feedback.
People are talking about the trendline: what AI was 5 years ago versus what AI is today points to a different AI 5 years down the line. Whatever AI will be 5 years from now, it is entirely possible that LLMs may eliminate programming as a career. If not in 5 years, give it 10. If not 10, give it 15. Maybe it happens in a day, with a major breakthrough in AI, or maybe it will be like what's currently happening: slow erosion and infiltration into our daily tasks, where it takes on more and more responsibilities until one day it's doing everything.
I mean do I even have to state the above? We all know it. What's baffling to me is how I get people saying shit like this:
>"LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.
I mean it's an obvious and complete misrepresentation. People are talking about the future, not the status quo, and we ALL know this, yet we still make comments like that.
Using LLMs as part of my process helps me understand how much of my job isn't just bashing out code.
My job is to identify problems that can be solved with code, then solve them, then verify that the solution works and has actually addressed the problem.
An even more advanced LLM may eventually be able to completely handle the middle piece. It can help with the first and last pieces, but only when operated by someone who understands both the problems to be solved and how to interact with the LLM to help solve them.
No matter how good these things get, they will still need someone to find problems for them to solve, define those problems and confirm that they are solved. That's a job - one that other humans will be happy to outsource to an expert practitioner.
It's also about 80% of what I do as a software developer already.
Through no fault of their own, they're literally blind. They don't have eyes to see, ears to hear, or fingers to touch and feel, and they have no clue whether what they've produced is any good for the original purpose. They are still only (amazing) tools.
You do not know whether LLMs in the future can't replace humans. You can only say that right now they can't replace humans. In the future the structure of the LLM may be modified, or it may become one module out of multiple that are required for AGI.
These are all plausible possibilities. But you have narrowed it all down to a “no”. LLMs are just tools with no future.
The real answer is nobody knows. But there are legitimate possibilities here. We have a 5 year trend line projecting higher growth into the future.
This is all just my opinion of course, but it's easy to expect that an LLM that knows all there is to know about every subject written in books and on the internet would be enough to do all the office work that can be done with a computer. Yet strangely enough, it isn't.
At this point they still lack the necessary feedback mechanisms (the senses) and the ability to learn on the job, so they can't yet function independently. People also have to trust them not to fail in some horrible way. Without all of that they can still be very helpful, but they can't really "replace" a human in most activities. And some people possess a sense of aesthetics and a wonderful creative imagination, things that LLMs don't really display at this time.
I agree that nobody knows the answer. If and when they arrive at that point, by then the LLM part would probably be just a tiny fraction of their functioning. Maybe we can start worrying then. Or maybe we could just find something else to do. Because people aren't tools, even when economically worthless.
Private Equity & Financialization: "Whatever" for business
Flood the Zone & Deadcatting: "Whatever" for politics
It's what I think about when I hear all of the "AI is going to eliminate all the jobs." That's just a convenient cover story for "Tax laws changed so R&D isn't free money anymore, and we need to fire everyone."
When almost every drop of wealth is in the control of a tiny number of people, it's not surprising that the world turns into one big competition for ways to convince those people that you have a way for them to sop up the remaining thimbleful too.
> There are people who use these, apparently. And it just feels so… depressing. There are people I once respected who, apparently, don’t actually enjoy doing the thing. They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though. That’s management, a fairly different job. I’m not interested in managing. I’m certainly not interested in managing this bizarre polite lying daydream machine. It feels like a vizier who has definitely been spending some time plotting my demise.
I was still several minutes of reading away from this paragraph when it hit me that this person hates managing. Everyone I’ve met who hates using AI to produce software describes problems like the AI not being correct, or lying to them when the model thought that would please them better, and that’s my experience with junior engineers as a manager.
And everyone I’ve met who loves AI at some point makes an analogy to it, that compares it to a team of eager juniors who can do a lot of work fast but can’t have their output trusted blindly, and that’s my experience with junior engineers as a manager.
And then anyone who's been trying to get an engineering manager job over the past few months and tracking their application metadata has seen the number of open postings for their requirements go down month after month, unless you drop the manager part and keep all the same criteria as an IC.
And then I read commentary from megacorps about their layoffs and read between the lines, like here [1]:
>… a Microsoft spokesperson said in a statement, adding that the company is reducing managerial layers …
I think our general consternation around this is coming from creators being forced into management instead of being able to outsource those tasks to their own managers.
I am not really sure what to do with this insight.
[1] https://www.cnn.com/2025/07/02/tech/microsoft-layoffs-9000-e...
Legitimately, I think you are missing my point. What I quoted out of your response could be applied to prompt engineering/management/tinkering. I think everyone who likes doing this with juniors and hates it with AI is conflating their enjoyment of teaching juniors with the dopamine you get from engaging with other primates.
I think most people I’ve met who hated AI would have the same level of hate for a situation where their boss made them actually manage an underperforming employee instead of letting them continue on as is ad infinitum.
It’s hard work both mentally and emotionally to correct an independent agent well enough to improve their behavior but not strongly enough to break them, and I think most AI haters are choking on this fact.
I’m saying that from the position of an engineer who got into management and choked on the fact that sometimes upper leadership was right, and that the employee complaining to me about the “stupid rules”, or trying to lie to me to get a gold star instead of a bronze one, was the agent in the system who was actually at fault.
Oh wow.
I'm a bit annoyed with LLMs for coding, because I care about the craft. But I understand the premise of using them when the end goal is not "tech as a craft" but "tech as a means". But that still requires having some reason to use the tech.
Hell, I feel the "tech as a means to get money" part for people trying to climb up the social ladder.
But for a lot of people who already did get to the top of it?
At some point we gotta ask what the point of SEO-optimizing everything even is.
Like, is the end goal optimizing life out of life?
Why not write a whole app using LLMs? Why not have the LLM do your course work? Why do the course work at all? Why not have the LLM make a birthday card for your partner? Why even get up in the morning? Why not just leave and go live in a forest? Why live at all?
What is even the point?
But yeah, first we'll go through a few (?) years of the self-defeating "ChatGPT does my homework" and the necessary adjustments of how schools/unis function.
And also, how is personalized bullshit better than generic bullshit? We'd need to solve the bullshit problem in the first place, which is mathematically guaranteed NOT to be possible with these types of architectures.
Touch grass all by myself?
As far as I understand, Bitcoin is fundamentally unusable as a currency. Transactions are expensive, and base-layer throughput is capped at a few thousand transactions per block, with one block roughly every ten minutes. It's also inherently deflationary, and you want an inflationary currency: you want people spending, not hoarding.
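To put rough numbers on that throughput (ballpark figures, not exact protocol constants): at around 2,500 transactions per block and one block every ~600 seconds, that's 2500 / 600 ≈ 4 transactions per second, and the commonly cited ceiling for Bitcoin's base layer is about 7 per second. For comparison, conventional card networks reportedly process thousands per second.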
Great protocols are built in layers.
You have decentralized, near-instant settlement for an average fee of around 0.005%, even for micropayments, with the Lightning Network (another protocol built on top of Bitcoin). That's orders of magnitude ahead of current payment networks in settlement time and resilience.
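Taking that 0.005% figure at face value, a $20 payment would carry about $0.001 in routing fees (20 × 0.00005), which is what makes micropayments plausible at all.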
Ethernet does not need to carry the whole movie in one packet. If it does the job of delivering an MTU's worth of data to the host on the other side of the cable, it's good. WebSockets can be figured out somewhere else. The IP stack is not shit because each layer does just one thing; it's good because of that.
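To make that layering argument concrete, here's a minimal, hypothetical sketch (the function names and the single-cable model are invented for illustration, not any real networking API): the link layer's only job is moving one MTU-sized frame, and splitting up and reassembling the "movie" happens entirely in the layer above.

```python
MTU = 1500  # bytes per frame, roughly classic Ethernet

def link_send(frame: bytes) -> bytes:
    # The link layer's single job: deliver one frame, unchanged.
    # It never knows (or cares) what the bytes mean.
    assert len(frame) <= MTU, "the link layer never sees oversized data"
    return frame  # stand-in for putting bits on the wire

def transport_send(payload: bytes) -> list[bytes]:
    # The layer above splits an arbitrarily large payload into frames.
    return [link_send(payload[i:i + MTU]) for i in range(0, len(payload), MTU)]

def transport_receive(frames: list[bytes]) -> bytes:
    # ...and reassembles them on the far side.
    return b"".join(frames)

movie = b"x" * 10_000_000  # the "whole movie" from the comment above
assert transport_receive(transport_send(movie)) == movie  # round-trips intact
```

Each function stays deliberately dumb; that's the same division of labor the comment is claiming for Bitcoin as a settlement layer with Lightning on top.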
But hey, time will tell.