
Posted by mips_avatar 12/3/2025

Everyone in Seattle hates AI (jonready.com)
967 points | 1065 comments
assemblyman 12/3/2025|
I am not in Seattle. I do work in AI but have shifted more towards infrastructure.

I feel fatigued by AI. To be more precise, this fatigue has several factors. The first is that a lot of people around me get excited by events in the AI world that I find distracting. These might be new FOSS library releases, news announcements from the big players, new models, new papers. As one person, I can only work on two or three things at any given time. Ideally I would like to focus and go deep on those things. Often, I need to learn something new, and that takes time, energy, and focus. This constant Brownian motion of ideas gives a sense of progress and "keeping up" but, for me at least, acts as a constantly tapped brake.

Secondly, there is a sentiment that every problem has an AI solution. Why sit and think, run experiments, or try to build a theoretical framework when one can just present the problem to a model? I use LLMs too, but it is more satisfying, productive, and insightful to actually think hard about and understand a topic before using them.

Thirdly, I keep hearing that the "space moves fast" and "one must keep up". The fundamentals actually haven't changed that much in the last 3 years, and new developments are easy to pick up. Even if they had, trying to keep up results in knowledge so broad and shallow that one can't actually use it. There are a million things going on and I am completely at peace with not knowing most of them.

Lastly, there is pressure to be strategic. To guess where the tech world is going, to predict and plan, to somehow get ahead. I have no interest in that. I am confident many of us will adapt and if I can't, I'll find something else to do.

I am actually impressed with and heavily use models. The tiresome part now is some of the humans around the technology who engage in the behaviors listed above.

checker659 12/4/2025||
> The fundamentals actually haven't changed that much in the last 3 years

Even said fundamentals don't have much in the way of foundations. It's just brute-forcing your way with an O(n^3) algorithm, a lot of data, and a lot of compute.

1x_engineer 12/10/2025|||
Brute force!? Language modeling is a factorial time and memory problem. Someone comes up with a successful method that’s quadratic in the input sequence length and you’re complaining…?
tech_ken 12/4/2025|||
O(n^(~2.8)) because fast matrix mult?
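For the curious, here's a minimal numpy sketch of where both claims come from: single-head self-attention materializes an (n, n) score matrix, which is the "quadratic in the input sequence length" part, while the dense matrix products inside it are the classic cubic-ish kernels that Strassen-style tricks bring down to roughly O(n^2.8). All shapes and sizes here are illustrative, not any particular model's.

  import numpy as np

  def naive_self_attention(X, Wq, Wk, Wv):
      """Single-head self-attention over a length-n sequence X of shape (n, d)."""
      Q, K, V = X @ Wq, X @ Wk, X @ Wv             # three (n, d) projections
      scores = (Q @ K.T) / np.sqrt(K.shape[1])     # (n, n) matrix: the quadratic term
      weights = np.exp(scores - scores.max(axis=1, keepdims=True))
      weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
      return weights @ V                           # (n, d) output

  n, d = 1024, 64
  rng = np.random.default_rng(0)
  X = rng.standard_normal((n, d))
  Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
  out = naive_self_attention(X, Wq, Wk, Wv)  # doubling n quadruples the score matrix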
viking123 12/4/2025|||
I hate scammers like many of the Anthropic employees that post every other week "brooo we have this model that can break out of the system bro!"

"broo it's so dangerous let me tell you how dangerous it is! you don't want to get this out! we have something really dangerous internally!"

Those are the worst, Dario included there btw, almost a worse grifter than Altman.

The models themselves are fine, except Claude, which calls the police if you say the word boob.

jordanb 12/4/2025||
Dario wishes he was the grifter Altman is. He's like a Kirkland-brand grifter compared to Altman. Altman is a generational talent when it comes to grifting.
red-iron-pine 12/4/2025|||
> I am actually impressed with and heavily use models. The tiresome part now is some of the humans around the technology who engage in the behaviors listed above.

The AI is just an LLM, and it just does what it's told to.

There's no limit to human greed, though.

mkoubaa 12/4/2025||
I get excited by new model releases, try them, switch my default if one feels better, and then I move on. I don't understand why any professional SWE should engage in weird cultish behavior around these models; it's a better mousetrap as far as I'm concerned.
agobineau 12/4/2025||
It's just the old PC vs. Mac cultism. Nobody who actually has work to do cares. Much like authors obsessed with typewriters, transport companies with auto brands, etc.
shepardrtc 12/3/2025||
Ok so a few thoughts as a former Seattleite:

1. You were a therapy session for her. Her negativity was about the layoffs.

2. FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.

3. The AI scene in Seattle is pretty good, but as with everywhere else it was/is a victim of the AI hype. I see estimates of the hype being dead within a year. AI won't be dead, but throwing money at whatever Uber-for-pets-AI-ly idea pops up won't happen.

4. I don't think people hate AI, they hate the hype.

Anyways, your app actually does sound interesting so I signed up for it.

hexator 12/3/2025||
Some people really do hate AI; it's not entirely about the layoffs. This is a well-insulated bubble, but you can find tons of anti-AI forums online.
abustamam 12/4/2025|||
Yeah, as a gamer I get a lot of game news in my feeds. Apparently there's a niche of indie games that claim to be AI-free. [0]

And I read a lot of articles about games that seem to love throwing a dig at AI even if it's not really relevant.

Personally, I can see why people dislike Gen AI. It takes people's creations without permission.

That being said, morality of the creation of AI tooling aside, there are still people who dislike AI-generated stuff. Like, they'd enjoy a song, or an image, or a book, and then when they find out it's AI they suddenly hate it. In my experience playing with ComfyUI to generate images, it's really easy to get something half decent and really hard to get something very high quality. It really is a skill in itself, but people who hate AI think it's just "type a prompt, get an image". I've seen workflows with 80+ nodes, multiple prompts, multiple masks, and multiple LoRAs to generate one single image (a rough sketch below gives a flavor). It's a complex tool to learn, just like Photoshop. Sure, you can use Nano-Banana to get something, but even then it can take dozens of generations and prompt iterations to get what you want.

[0] https://www.theverge.com/entertainment/827650/indie-develope...
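To give a sense of what those node graphs automate, here's a minimal sketch with the diffusers library: one base model plus one LoRA, i.e. the simplest possible version of a workflow that ComfyUI graphs extend with masks, upscalers, and more. The model ID is a public SDXL checkpoint; the LoRA file and prompts are made-up placeholders.

  import torch
  from diffusers import StableDiffusionXLPipeline

  # Load a base text-to-image model.
  pipe = StableDiffusionXLPipeline.from_pretrained(
      "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
  ).to("cuda")

  # Layer a style LoRA on top; real workflows often stack several.
  pipe.load_lora_weights("./loras", weight_name="my_style.safetensors")  # hypothetical file

  image = pipe(
      prompt="a watercolor lighthouse at dusk, soft light",
      negative_prompt="blurry, low quality",
      num_inference_steps=30,
      guidance_scale=7.0,
  ).images[0]
  image.save("lighthouse.png")

Even this toy version has half a dozen knobs (steps, guidance, negative prompt, LoRA strength) whose interactions you only learn by iterating.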

johnnyanmac 12/4/2025|||
>morality of the creation of AI tooling aside,

That's a big aside

>Like, they'd enjoy a song, or an image, or a book, and then when they find out it's AI they suddenly hate it.

Yes, because for some people it's about supporting human creation. Finding out it's part of a grift that takes from those humans can be infuriating. People don't want to be a part of that.

rng-concern 12/4/2025|||
That is part of it, but the bigger part for me is that art is an expression of human emotion. When I hear music, I am part of that artist's journey and struggles. The emotion in their songs comes from their first break-up, or an argument they had with someone they loved. I can understand that on a profound, shared level.

Way back, my friends and I played a lot of StarCraft. We only played cooperatively against the AI. Until one day a friend and I decided to play against each other. I can't put into words how intense that was. When we were done (we played in different rooms of the house), we got together and laughed. We both knew what the other had gone through. We both said "man, that was intense!".

I don't get that feeling from an amalgamation of all human thoughts/emotions/actions.

One death is a tragedy. A million deaths is a statistic.

runarberg 12/5/2025||
I am curious how your GP feels about art forgery. I can personally see myself enjoying a piece of art only to find out it was basically copied from a different artist, and then being totally put off by the experience, totally unable to enjoy that stolen piece of art ever again.
abustamam 12/5/2025||
I think context is important though. If someone pays for an original and it ends up being a forgery, then yeah, that's terrible and the purchaser was scammed. There is no similar concept for art online because it's all just pixels, unless you buy into the concept of NFTs as art ownership (I don't). Someone who steals someone else's online art and claims authorship would be the closest analogy to forgery, I suppose, but the aim of forgery is to sell the work as the original author's, not as the thief's own. But if I did find out that an image by an artist I liked was indeed drawn by someone else, I'd try to find a way to support that someone else. It doesn't make me enjoy the art any less.

Of course knowing the provenance of something you enjoy, and learning that it has dark roots, can certainly tarnish your enjoyment of said thing, like knowing where your diamonds came from, or how sausage is made. It's hard to make a similar connection to AI generated stuff.

I listen to a lot of EDM. Some of the tracks on my playlist are almost certainly AI generated. If I like a song and check out the artist and find that it's a bot then I'm disappointed because that means I can never see them live, but I can still bop my head to the beat.

runarberg 12/5/2025||
My original post said “forgery” but what I meant was “plagiarism”. I’ve looked up what the terms mean, and I definitely used the wrong word. My post is quite confusing because of that. I am sorry about that.
code_for_monkey 12/4/2025|||
If I learn music had AI involved, it actually makes me feel awful. It totally strips it of any appeal for me.
saulpw 12/4/2025||
I agree, but out of curiosity, how does invisible autotuning make you feel?
DrSiemer 12/4/2025|||
Most of the people that dislike genAI would have the exact same opinion if all the training data had been paid for in full (whatever a fair price would be for what is essentially just reference material).
Yizahi 12/4/2025||
That "if" carries a lot of weight here. In reality it is and was impossible to pay for all the stolen data. Also, the LLM corpos not only didn't pay for the data, they never even asked. I know it may be a surprise, but some people would refuse to sell their data to a mechanical parrot.
nateglims 12/4/2025||||
Outside of tech, I think the opinion is generally negative. AI has lost a lot of the narrative due to things like energy prices and layoffs.
throw234234234 12/4/2025|||
Would agree with this, and I think there's more to it than just your reasons, especially if you venture outside the US, at least in my experience. I've seen it personally more in places where AI tech hubs aren't around and there is no way to "get in on the action". Blue-collar workers, who have less to lose, ask me directly: why would anyone want to invent this? It's one of the reasons the average person on the street doesn't relate well to tech workers in general; there is a perceived lack of "street smarts" and self-preservation.

Anecdotally, it's almost like they see them as mad scientists who are happy blowing up themselves and the world if they get to play with the new toy; almost childlike, usually thinking they are doing "good" in the process. Most people read that as a sign of a missing kind of intelligence or maturity.

sunaookami 12/4/2025||||
ChatGPT is one of the most used websites in the world and it's used by the most normal people in the world, in what way is the opinion "generally negative"?
throw234234234 12/5/2025|||
A big reason is relative advantage. The "I have to use it because it's there now and everyone else is using it, but I would rather no one had to use it at all" argument.

Let's say I'm a small business and I want a new logo for some marketing material. In the past I would have paid someone, either via a platform or some local business, to do it. That would have just been the cost of doing business.

Now, since there is a lower-cost technology, and I know my competition is using it, I should use it too; else, all else equal, I'm losing margin compared to my competition.

It's happening in software development too. It's the reason they say "if you don't use AI you will be overtaken by someone who does". It may be true; but that person may have wished the AI genie had never been let out of the bottle.

KptMarchewa 12/4/2025||||
This is the epitome of the "yet you participate in society" gotcha.
sunaookami 12/4/2025||
No it's not. No one is forced to use ChatGPT, it got popular by itself. When millions use it voluntarily, that contradicts the 'generally negative' statement, even if there are legitimate criticisms of other aspects of AI.
KptMarchewa 12/4/2025|||
I can criticize cities' overreliance on cars for transport, yet own a car and even sporadically use it. The same applies here.
taurath 12/4/2025|||
Comment sections here are filled with stories of people forced to use LLMs. You are clearly not even paying attention.
nateglims 12/4/2025||||
You can use ChatGPT for minor stuff and still have a negative view of AI. In fact, the non-tech white-collar workers I know use ChatGPT for things like business writing at work but are generally concerned.

Negative sentiment also comes through in opinion polling in the US.

goatlover 12/4/2025||||
We'll see how long that lasts with their new ad framework. Probably most normal people are put off by all the other AI being marketed at them. A useful AI website is one thing; AI forced into everything else is quite another. And then they get to hear on the news or from their friends how AI-everything is going to take all the jobs so a few controversial people in tech can become trillionaires.
johnnyanmac 12/4/2025||||
ChatGPT for the common folk is used in the same way PirateBay is. Something can be "popular" and also "bad"
sunaookami 12/4/2025||
The argument was that common folk see it as "bad" which is clearly not the case.
johnnyanmac 12/4/2025||
Yes, and I made an argument supporting that "used" and "bad" are not mutually exclusive. You simply repeated what I responded to and asserted that your opinion is the right one.

It's clearly not that straightforward.

sunaookami 12/4/2025||
I get your argument but in this case it is that straightforward because it's not a forced monopoly like e.g. Microsoft Windows. Common folk decided to use ChatGPT because they think it is good. Think Google Search, it got its market position because it was good.
johnnyanmac 12/4/2025||
>Common folk decided to use ChatGPT because they think it is good.

That is not the only reason to use a tool you think is bad. "Good enough" doesn't mean "good". If you think it's better to generate an essay due in an hour than to rush something by hand, that doesn't mean the result is "good". If I make a toy app full of useless branches, no documentation, and tons of sleep calls, that doesn't mean the program is "good". It's just "good enough".

That's the core issue here. "Good enough" varies with context, and not many people are using it the way the sales pitch imagines - to boost the productivity of the already productive.

mlrtime 12/4/2025||
I don't agree with your comments, especially using PirateBay as an example. Calling either "bad" is purely subjective. I find both PirateBay and ChatGPT to be good things. They both bring value to me personally.

I'd wager that most people would find both as "good" depending on how you framed the question.

AlexCoventry 12/4/2025||||
Go express a pro-AI opinion or link a salient, accurate AI output on reddit, and watch the downvotes roll in.
sunaookami 12/4/2025||
We are talking about the common folk here, not redditors.
taurath 12/4/2025|||
Seemingly the primary economic beneficiaries of AI are people who own companies and manage people. What this means for the average person working for a living is probably a lot of change, additional uncertainty, and additional reductions in their standard of living. Rich get richer, poor get poorer, and they aren't rich.
sunaookami 12/4/2025||
What has this to do with what I wrote? Go take your class conflict somewhere else.
taurath 12/4/2025||
Responding to your message:

> in what way is the opinion "generally negative"

I'm just trying to tell you what people outside your bubble think: that AI is VERY MUCH a class thing. Using AI images at people is seen as completely not cool; it makes one look like a corporate stooge.

deaux 12/4/2025|||
Globally, the opinion isn't generally negative. It's localized.
ajkjk 12/4/2025|||
What does that mean?
nateglims 12/4/2025|||
Sure, I meant the anglosphere. But in most countries, the less aware people are of technology, or the less they use the internet, the less enthusiastic they are about AI.
deaux 12/4/2025||
I don't see the correlation between technology/internet use and man-on-the-street attitudes towards AI. Compare Sweden with Japan.
nateglims 12/4/2025||
It's a weak proxy for people who are not in tech.

In polling, Japan and Sweden are actually very similar in terms of sentiment, though: https://www.pewresearch.org/global/2025/10/15/how-people-aro...

deaux 12/5/2025||
I still don't see it. Look at some of the countries with relatively high individual "personal tech usage" as well as "percentage of workers/economy connected to tech": South Korea, Israel, Japan, the US, the UK, the Netherlands. The first three are on the positive end, the next two on the negative end, and the last one in the middle.

"Region of the world" correlation looks a lot stronger than that.

zubiaur 12/3/2025||||
Some people find their life's meaning through craft and work. When that craft suddenly becomes less scarce, less special, so does the meaning tied to it.

I wonder if these feelings are what scribes and amanuenses felt when the printing press arrived.

I do enjoy programming, I like my job and take pride in it, but I actively try not to make it the meaning-giving activity of my life. I'm just a mercenary of my trade.

recursive 12/4/2025|||
The craft isn't any less scarce. If anything, it's only more so. The craft of building wooden furniture is just as scarce as ever, despite the existence of Ikea.
pjmlp 12/4/2025||
Which is why the only woodworkers that survive are the ones with enough customers willing to pay premium prices for furniture, or lucky enough to live in countries where Ikea-like shops aren't yet a thing.
Antibabelic 12/4/2025||||
They are also the people who are able to see the most clearly how subpar generative-AI output is. When you can't find a single spot without AI slop to rest your eyes on and see it get so much praise, it's natural to take it as a direct insult to your work.
sjsdaiuasgdia 12/4/2025||
Yes, the general acceptance of generally mediocre AI output is quite frustrating.

Cool, you "made" that image that looks like ass. Great, you "wrote" that blog post with terrible phrasing and far too many words. Congrats, I guess.

venturecruelty 12/4/2025|||
I mean, I would still hate to be replaced by some chatbot (without being fairly compensated, because, societally, it's kind of a dick move for every company to just fire thousands of people so that nobody can find a job elsewhere), but I wouldn't be as mad if the damn tools actually worked. They don't. It's one thing to be laid off; it's another to be laid off, ostensibly, to be replaced by some tool that isn't even actually thinking or reasoning, just crapping out garbage.

And I will not be replying to anyone who trots out their personal AI success story. I'm not interested.

DrSiemer 12/4/2025||
The tech works well enough to function as an excuse for massive layoffs. When all that is over, companies can start hiring again. Probably with a preference for employees that can demonstrate affinity with the new tools.
utopiah 12/4/2025||||
> Some people really do hate AI

That's probably me, for a lot of people. The reality is a bit more nuanced than that, namely:

- I hate VC-funded AI that is actually super shallow (basically OpenAI/Claude wrappers)

- I hate VC-funded genuine BigAI that sells itself as the literal opposite of what it is, e.g. OpenAI... being NOT open.

- I hate AI that hides its ecological cost. Generating text, videos, etc. is actually fascinating, but not if making the shittiest video with the dumbest script is taking the same amount of energy I'd need to fly across the globe.

- I hate AI that hides its human cost, namely using cheap labor from "far away" where people have to label atrocities (murders, rape, child abuse, etc.) without being provided proper psychological support.

- I hate AI that embodies capitalist principles of exploitation. If your entire AI business relies on a pyramid of everything listed above to capture a market and then hike the price once dependency is entrenched, you might be a brilliant businessman but you suck as a human being.

etc... I could go on but you get the idea.

I do love open source public AI research though. Several of my very good friends are researchers in universities working on the topic. They are smart, kind and just great human beings. Not fucking ghouls riding the hype with 0 concern for our World.

So... yes, maybe AI haters have a slightly more refined perspective, but of course when one summarizes whatever text they see in 3 words via their favorite LLM, it's hard to see.

pbmonster 12/4/2025|||
> making the shittiest video with the dumbest script is taking the same amount of energy I'd need to fly across the globe.

I get your overall point, but the hyperbole is probably unhelpful. Flying a human across the globe takes several MWh. That's billions of tokens created (give or take an order of magnitude...).
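The back-of-envelope, with both inputs loudly flagged as assumptions:

  # Per-passenger share of a long-haul flight vs. LLM inference energy.
  flight_mwh = 4                        # assumed ~4 MWh per passenger (round trip)
  flight_joules = flight_mwh * 3.6e9    # 1 MWh = 3.6e9 J
  joules_per_token = 3                  # assumed; varies ~10x by model and hardware
  tokens = flight_joules / joules_per_token
  print(f"{tokens / 1e9:.1f} billion tokens")  # ~4.8 billion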

utopiah 12/4/2025||
Does your comparison include training, data center building, GPU production, etc., or solely inference? (Genuine question, I don't know the total cost for e.g. Sora 2, only inference, which AFAIK is significant yet pales in comparison to everything upstream.)
pbmonster 12/4/2025|||
No, that's one reason why there's at least an order of magnitude of wiggle room there. I just took the first number for J/token I found on arXiv from 2025. The exact model and the hardware it runs on also make a large difference (probably larger than the one-time upfront costs, since those are incurred only once and spread out across years of inference).

My point is that mobility, especially commercial flight, is extremely energy-intensive, and the average westerner will burn much more resources there than on AI use. People get mad at the energy and water use of AI, and they aren't wrong, but right now it really is only a drop in the ocean of energy and water we're wasting anyways.

utopiah 12/4/2025||
> right now it really is only a drop in the ocean of energy and water we're wasting anyways.

That's not what I heard. Maybe that was true in 2024, but now data centers have their own category in energy-consumption statistics, whereas before they were lumped under "other". I think we need to update our collective understanding of the actual energy consumed. It was all fun and games until recently, and slop was a kind of harmless consequence ecologically speaking, but from what I can tell, in terms of energy, water, etc. it is not negligible anymore.

pbmonster 12/4/2025||
Probably just a matter of perspective. It's a few hundred TWh per year in 2025 - that's huge, and it's growing quickly. But again, that's still only a small fraction of a percent of total human primary energy consumption over the same period.
mlrtime 12/4/2025|||
You could say the same about the airplane: do the CO2 emissions the airline states for my seat include building the plane, the R&D, training the pilot?
utopiah 12/4/2025||
Sure, and I do; it's LCA: https://en.wikipedia.org/wiki/Life-cycle_assessment The problem IMHO is that the entire AI hype ecosystem is hiding everything it can about this behind the excuse of not giving information to competitors. We have CO2eq on model cards, but we don't have many data points on proprietary models running on Azure or wherever. At best we infer from research papers that are close enough, but we don't know for the most popular models, and that's quite problematic. The car industry did everything it could too, e.g. the Volkswagen scandal, so let's not repeat that.
thefz 12/4/2025||||
> Some people really do hate AI

AGI? No, although it's not there yet. LLMs? Yes, lots. The main benefit they can give is to sort of speed up internet search, but I have to go and check the sources anyway, so I'll fall back on 20+ years of experience doing it myself. Other applications of machine learning, such as almost-instant speech-to-text? No, those are useful.

jama211 12/4/2025||||
Emphasis on ‘some’. Compare that to the article title!
isodev 12/4/2025||||
I don’t think people hate models. They hate that techbros are putting LLMs in places they don’t belong … and then trying to anthropomorphize the thing that finds what best rhymes with your prompt as “reasoning” and “intelligence” (which it isn’t).
crote 12/4/2025||||
I'd extend that to "very few people love AI".

In real life, I don't know anyone who genuinely wants to use AI. Most of them think it's "meh", but don't have any strong feelings about using it if it's more convenient - like Google shoving it in their face during a search. But would they pay for it, or miss it if it's gone? Nope, not a chance.

skwirl 12/4/2025|||
On this topic I think it’s pretty off base to call HN a “well insulated bubble” - AI skepticism and outright hate is pretty common here and AI negative comments often get a lot of support. This thread itself offers plenty of examples.
didibus 12/4/2025|||
> You were a therapy session for her. Her negativity was about the layoffs.

I think there is no "her"; the article ends by saying:

> My former coworker—the composite of three people for anonymity—now believes she's [...]

I think it's just 3 different people, and they made up a single "she" coworker as a kind of example person.

I don't know, that's my reading at least, maybe I got it wrong.

mips_avatar 12/4/2025||
I hate to be cagey here but I just really don’t want to make anyone’s life harder than it needs to be by revealing their identity. Microsoft is a really tough place to be an employee right now.
iamkonstantin 12/4/2025|||
The hate starts with the name. LLMs don't have the I in AI. It's like marketing a car as self-driving while all it can do is lane assist.
wongarsu 12/4/2025|||
That's because there are at least 5 different definitions of AI.

- At its inception in 1955 it was "learning or any other feature of intelligence" simulated by a machine [1] (fun fact: both neural networks and computers using natural language were on the agenda back then)

- Following from that we have the "all machine learning is AI" which was the prevalent definition about a decade ago

- Then there's the academic definition that is roughly "computers acting in real or simulated environments" and includes such mundane and algorithmic things as path finding

- Then there's obviously AGI, or the closely related Hollywood/SciFi definition of AI

- Then there's just "things that the general public doesn't expect computers to be able to do". Back when chess computers used to be called AI, this was probably the closest definition that fit. Clever salespeople also used to love calling prediction via simple linear regression AI

Notably, four out of five of these don't involve computers actually being intelligent. And just a couple of years ago we were still selling simple face detection as AI.

1: https://www-formal.stanford.edu/jmc/history/dartmouth/dartmo...

ACCount37 12/4/2025||
And yet, somehow, "it's not actually AI" has wormed its way into the minds of various redditors.
KptMarchewa 12/4/2025||||
It's the opposite. It is doing the driving, but you really have to provide the lane assist, otherwise you hit a tree or start driving in the opposite direction.

Many people claim it's doing great because they have driven hundreds of kilometers, but don't particularly care whether they arrived at the exact place, and are happy with an approximate destination.

ACCount37 12/4/2025||||
Then what do they have?

Is the siren song of "AI effect" so strong in your mind that you look at a system that writes short stories, solves advanced math problems and writes working code, and then immediately pronounce it "not intelligent"?

isodev 12/4/2025|||
It doesn’t actually solve those math problems though, does it? It replies with a solution if it has seen one often enough in the training data, or with something that looks like a solution but isn’t. In the end, the human still needs to verify it.

Same for short stories, it doesn’t actually write new stories, it rehashes stories it (probably illegally) ingested in training data.

LLMs are good at mimicking the content they were trained on, they don’t actually adopt or extend the intelligence required to create that content in the first place.

ACCount37 12/4/2025||
Oh, I remember those talks. People actually checking whether an LLM's response is something that was in the training data, something that was online that it replicated, or something new.

They weren't finding a lot of matches. That was odd.

That was in the days of GPT-2. That was when the first weak signs of "LLMs aren't just naively rephrasing the training data" emerged. That finding was controversial, at the time. GPT-2 couldn't even solve "17 + 29". ChatGPT didn't exist yet. Most didn't believe that it was possible to build something like it with LLM tech.

I wish I could say I was among the people who had the foresight, but I wasn't. Got a harsh wake-up call on that.

And yet, here we are, in year 20-fucking-25, where off-the-shelf commercially available AIs burn through math competitions and one shot coding tasks. And people still say "they just rehash the training data".

Because the alternative is: admitting that we found an algorithm that crams abstract thinking into arrays of matrix math. That it's no longer human exclusive. And that seems to be completely unpalatable to many.

kahnclusions 12/4/2025|||
Based on the absolute trash I usually get out of ChatGPT, Claude, etc, I wouldn’t say that it writes “working” code.
rtp4me 12/4/2025|||
You and I must be using very different versions of Claude. As an infra/systems guy (non-coder), the ability to develop some powerful tools simply by leveraging Claude has been nothing short of amazing. I started using Claude about 8 months ago and have since created about 20 tools, ranging from simple USB detection scripts (for securely erasing SSDs) to complex tools like an Azure File Manager and a production-ready data migration tool (Azure to Snowflake). Yes, I know bash and some Python, but Claude has really helped me create tools that would have taken many weeks/months to build using the right technology stack. I am happy to pay for the Claude Max plan; it has returned huge dividends in my productivity.

And maybe that is the difference. Non-coders can use AI to help build MVPs and tooling they otherwise could not (or that would take them a long time to get done). On the other hand, professional coders see this as an intrusion into their domain, become very skeptical because it does not write code "their way" or introduces some bugs, and push back hard.

habinero 12/4/2025||
Yeah. You're not a coder, so you don't have the expertise to see the pitfalls and problems with the approach.

If you want to use concrete to anchor some poles in the ground, great. Build that gazebo. If it falls down, oh well.

If you want to use concrete to make a building that needs to be safe and maintained, it's critical that you use the right concrete mix, use rebar in the right way, and seal it properly.

Civil engineers aren't "threatened" by hobbyists building gazebos. Software engineers aren't "threatened" by AI. We're pointing out that the building's gonna fall over if you do it this way, which is what we're actually paid to do.

rtp4me 12/4/2025||
Sorry, carefully read the comments on this thread and you will quickly realize "real" coders are very much threatened by this technology - especially junior coders. They are frightened that their jobs are at stake because of a new tool, and they take a very anti-AI view of the entire domain - probably more so for those who live in areas where wages are not high to begin with. People who come from a different perspective truly see the value of what these tools can help you do. To say all AI output is slop or garbage is just wrong.

The flip of this is to understand and appreciate what the new tooling can help you do and adopt. Sure, junior coders will face significant headwinds, but I guarantee you there are opportunities waiting to get uncovered. Just give it a couple of years...

habinero 12/4/2025||
No. You're misreading the reactions because you've made some incorrect assumptions and you do a fundamentally different job than those people.

I legit don't know any professional SWE who feels "threatened" by AI. We don't get hired to write the kind of code you're writing.

mlrtime 12/4/2025||||
Every HN thread about AI eventually has someone claiming the code it produces is “trash” or “non-working.” There are plenty of top-tier programmers here who dismiss anyone who actually finds LLM-generated code useful, even when it gets the job done.

I’m tempted to propose a new law—like Poe’s or Godwin’s—that goes something like: “Any discussion about AI will eventually lead to someone insisting it can’t match human programmers.”

ACCount37 12/4/2025|||
By that metric: do you?

Seeing an AI casually spit out an 800-line script that works first try is really fucking humbling to me, because I know I wouldn't be able to do that myself.

Sure, it's an area of AI advantage, and I still crush AI in complex codebases or embedded code. But AI is not strictly worse than me, clearly. The fact that it already has this area of advantage should give you a pause.

rtp4me 12/4/2025||
Humbling indeed. I am utterly amazed at Claude's breadth of knowledge and ability to understand the context of our conversations. Even if I misspell words, don't use the exact phrase, or call something a function instead of a thread, Claude understands what I want and helps make it happen. Not to mention the ability to read hundreds of lines of debug output and point out a tiny error that caused the bug.
dekoidal 12/4/2025|||
See also hoverboards
mips_avatar 12/3/2025|||
I think these companies would benefit from honesty. If they're right and their new AI capabilities are really powerful, then poisoning their workforce against AI is the worst thing they could do right now. A generous severance approach and compassionate layoffs would go a long way.
ozfive 12/4/2025|||
The layoffs are due to tax incentives in the tax cut bills that financially incentivize offshoring work.
shepardrtc 12/4/2025||
That makes sense. And it's an even worse reason, at least for people living in the US.
Karrot_Kream 12/3/2025|||
I was an early employee at a unicorn and I saw this culture take hold once we started hiring from Big Tech talent pools and offering Big Tech comp packages, though before the AI hype took off. There's a crazy lack of agency that kicks in for Big Tech folks that's really hard to explain. This feeling that each engineer is a mercenary trying really hard not to get screwed by the internal system.

Most of it is because there's little that ties actual output to organizational outcomes. AI mandates, after all, are just a blunt way to force engineers to use AI, whereas if you were at a startup or smaller company you would probably organically find out how much an LLM helps you where. It may not even help your actual work even if it helps your coworkers'. That market feedback is sorely missing from the Big Techs, and so ham-fisted engineering mandates have to do in order to force engineers to become more efficient.

In these cases I always try to remind friends that you can always leave a Big Tech. The thing is, from what I can tell, a lot of these folks have developed lifestyle inflation from working in Big Tech and some of their anger comes from feeling trapped in their Big Tech role due to this. While I understand, I'm not particularly sympathetic to this viewpoint. At the end of the day your lifestyle is in your hands.

mips_avatar 12/4/2025|||
Thanks for signing up. I’m going to try really hard to open up some beta slots next week so more people can try it. There are some embarrassingly bad bugs in prod right now…
johnnyanmac 12/4/2025|||
>FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.

Close. We're in a recession and they are using AI as an excuse for another wave of outsourcing.

>I don't think people hate AI, they hate the hype.

I hate the grift. I hate having it forced on me after refusing multiple times. That's pretty much 90% of AI right now.

nektro 12/6/2025|||
> 4. I don't think people hate AI, they hate the hype.

except a lot of people really do hate AI

pier25 12/3/2025|||
It’s not only the hype though.

What about the complete lack of morality some (most?) AI companies exhibit?

What about the consequences in the environment?

What about the enshittification of products?

What about the usage of water and energy?

Etc.

abustamam 12/4/2025|||
Not to diminish your overall point, but enshittification has been happening well before AI, AI just made it much easier and faster to enshittify everything.
jjgreen 12/4/2025||
But AI allows us to make customised enshittification, think of the possibilities!
pier25 12/4/2025||
We don't need to spend money on customer support!
red-iron-pine 12/4/2025||
Hell, we don't need customers. We'll just get MS or Nvidia to invest in us, while leaning on their offerings.
parineum 12/4/2025|||
Is your "etc." keep repeating the same two points you did in your list of four?
pier25 12/4/2025||
What about the RAM price surge?

What about diverting funding from much more useful and needed things?

What about automation of scams, surveillance, etc?

I can keep going.

There are plenty of reasons to hate on AI beyond hype.

parineum 12/4/2025||
> What about the RAM price surge?

It's a bit more expensive. It's not the end of the world. Production will likely increase if the demand is consistent.

> What about diverting funding from much more useful and needed things?

And who determines that? People put their money where they want to. People think AI will provide value to other people, and those people will therefore pay money for AI. So the funding AI receives is directly proportional to how useful and needed people think it is. I disagree, but I'm not a dictator.

> What about automation of scams, surveillance, etc?

Technology makes things easier, including bad things. This isn't the first time this happened and it won't be the last. It also makes avoiding those things easier though but that usually lags a bit behind.

> I can keep going.

Please do because it seems like you're grasping at straws.

conartist6 12/4/2025||
Why would you not hate AI? What is there to like?

It's the closing trash compactor of soullessness and hate of the human, described vividly as having affected Microsoft culture as thoroughly as intergranular corrosion can turn a solid block of aluminum to dust.

Fuck Microsoft for both hating me and hating their own people. Fuck. That. Shit.

NoGravitas 12/4/2025||
> It's the closing trash compactor of soullessness and hate of the human, described vividly as having affected Microsoft culture as thoroughly as intergranular corrosion can turn a solid block of aluminum to dust.

That's a great way to describe it. There's a good article that points out AI is the new aesthetic of fascism. And, of course, in Miyazaki's words, "I strongly feel that this is an insult to life itself."

p0w3n3d 12/4/2025||
Someone wrote on HN the (IMO) main reason why people do not accept AI.

  AI is about centralisation of power
So basically, only a few companies that hold the large models will have all the knowledge required to do things, and will rent you your computer while collecting monthly fees. Also see https://be-clippy.com/ for more arguments (like Adobe moving to the cloud to train their model on your work).

For me, AI is just a natural language query model for texts. So if I need to find something in text, join it with other knowledge, etc. - things I'd do in SQL if SQL processed natural language - I do in an LLM (a rough sketch below). This enhances my work. However, other people seem to feel threatened. I know a person who quit a CS course because AI was solving the algorithmic exercises better than him. This can cause widespread depression, as we are no longer on "top". Moreover, he went into medicine, where people will basically be using AI to diagnose patients and AI operators are required (i.e. there is no threat of AI-driven reductions in the Public Health Service).
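For illustration, a minimal sketch of that "projection over unstructured text" pattern using the OpenAI Python SDK - the model name, file name, and output schema are all placeholders, and any chat-completions-style API would look much the same:

  from openai import OpenAI

  client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

  document = open("meeting_notes.txt").read()  # hypothetical input file

  query = (
      "From the text below, list every action item as 'owner | task | due date', "
      "one per line, using NULL for missing fields - like a SQL projection.\n\n"
      + document
  )

  resp = client.chat.completions.create(
      model="gpt-4o-mini",  # placeholder; use whichever model you prefer
      messages=[{"role": "user", "content": query}],
  )
  print(resp.choices[0].message.content)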

So the world is changing, the power is being gathered, and there is no longer the possibility to "run your local cloud with open office, and a mail server" to take that power back from the giants.

encyclopedism 12/4/2025||
> AI is about centralisation of power

I do not believe this is the main reason at all.

The core issue is that AI is taking away, will take away, or threatens to take away experiences and activities that humans WANT to do. Things that give them meaning, many of which are tied to earning money and producing value by doing just that thing. As someone said, "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".

Much of the meaning we humans derive from work is tied to the value it provides to society. One can do coding for fun but doing the same coding where it provides value to others/society is far more meaningful.

Presently some may say: AI is amazing, I am much more productive; AI is just a tool; AI empowers me. The irony is that this in itself shows the deficiency of AI. It demonstrates that AI is not yet powerful enough to not need to empower you, to not need to make you more productive. Ultimately AI aims to remove the need for a human intermediary altogether; that is the AI holy grail. Everything in between is just a stop along the way, so those it empowers should stop and think a little about the long-term implications. It may be that for you right now it is a comfortable position financially or socially, but your future you, in just a few short months, may be dramatically impacted.

I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder, the legal secretary whose dream job is being automated away, a dream dreamt from a young age, the journalist whose value has been substituted by a white text box connected to an AI model.

Thews 12/4/2025|||
It sounds like you agreed by the end, just with a slightly different way of getting there.
encyclopedism 12/4/2025||
> AI is about centralisation of power

> So basically, only a few companies that hold the large models will have all the knowledge required to do things

There are open source models, and these will continue to keep abreast of new features. On-device-only models are likely to be available too. Both will be good enough, especially for consumer use cases. Importantly, it is not corporations alone that have access to AI. I foresee whole countries releasing their versions in an open source fashion, and much more. After all, you can't stop people applying linear algebra ;-)

There doesn't appear to be a moat for these organisations. HN users mention hopping from model to model like rabbits. The core mechanic is interchangeable.

There is a 'barrier to entry' of sorts that does exert some pressure toward centralisation, particularly at scale. It conveniently aligns with large corporations' interests: GPUs are expensive and AI requires a lot of processing power. But it isn't the core issue.

fragmede 12/4/2025||||
Hardware is a different problem from software. Once the chores robot gets here, then what? Millions of houseworkers are now out of a job. Then how will you feel about AI?
p0w3n3d 12/5/2025||||
> I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes

You're absolutely right, this is another face of the AI coin... We are taught to do things and to love doing them, and we're scared that's going to be taken away from us. This is what I thought of when writing about the man who quit the CS course. He apparently predicted that solving algorithmic exercises wouldn't make him happy anymore, because AI would do it for him.

roncesvalles 12/5/2025||||
I don't think that's the reason either.

AI is just not that good. If it really made me more productive, why wouldn't I use it all the time? I'd get everything done before lunch and go home. Or I'd use it all day to do the work of 3 people and be on the fast track to promotions.

The problem is simply that it gets in the way. For things I know nothing about, AI is excellent. For things that I'm good at and have literally been doing for a decade+, I can just do it better and faster myself, and I'm tired of people who know nothing about my profession gaslighting me into thinking that LLMs do the same thing. And I'm really tired of people saying "oh AI is not good today, but it'll be good tomorrow so just start using them" -- fine, wake me up when it's good because I've been waiting and patiently testing every new SOTA model since 2023.

Just get the facts right, that's all I ask of tech execs. Why has AI become a religion?

yetihehe 12/5/2025||
> Or I'd use it all day to do the work of 3 people and be on the fast track to promotions.

Or, like the execs want, you do the work of 3 people, so they can fire two and get the bonus, plus maybe a 5% pay increase for you. "If someone is good at digging, give him a bigger shovel".

milch 12/5/2025||
My impression is that, purely from a metrics perspective, people who were underperforming can really look like they are 3x as productive. AI is a real increase for them because it can do things they couldn't. It gives them the ability to publish a thousand lines' worth of PRs a day, which the regular and over-performers have to review, but that shows up in THEIR metrics, not in the underperformer's. If all you look at is metrics and KPIs and you have no technical understanding, this looks amazing.

Most people I've worked with that were already some of the most productive before AI took off are still at the top, and AI didn't move the needle much for them. There's simply no way for them to do 3x the work.

Rebuff5007 12/4/2025|||
But why not? AI also has very powerful open models (that can actually be fine-tuned for personal use) that can compete against the flagship proprietary models.

As an average consumer, I actually feel less locked into Gemini/ChatGPT/Claude than I am into Apple or Google for other tech (i.e. photos).

futuraperdita 12/4/2025|||
> AI also has very powerful open models (that can actually be fine-tuned for personal use) that can compete against the flagship proprietary models.

It was already tough to run flagship-class local models and it's only getting worse with the demand for datacenter-scale compute from those specific big players. What happens when the model that works best needs 1TB of HBM and specialized TPUs?

AI computation looks a lot like early Bitcoin: first the CPU, then the GPUs, then the ASICs, then the ASICs mostly being made specifically by syndicates for syndicates. We are speedrunning the same centralization.

ModernMech 12/4/2025||
It appears to me the early exponential gains from new models have plateaued. Current gains seem very marginal; it could be that the future best model, the one needing "1TB of HBM and specialized TPUs", won't be all that much better than the models we have today. All we need to do is wait for commodity hardware that can run current models, and OpenAI / Anthropic et al. are done if their whole plan to monetize this is to inject ads into the responses. That is, unless they can actually create AGI that requires infrastructure they control, or some other advancement.
troyvit 12/4/2025||||
That's what I was thinking as I was listening to the "be like clippy" video linked in the parent. Those local models probably won't be able to match the quality of the big guys' for a long time to come, but for now the local, open models have a lot of potential to let us escape this power consolidation before it's complete and still give their users 75-80% of the functionality. The remaining 20-25%, combined with the new skill of managing an LLM, is where the self-value comes in, the bit that says, "I do own what I built or learned or drew."

The hardest part with that IMO will be democratizing the hardware so that everybody can afford it.

wartywhoa23 12/4/2025||
Hopes that we will all be running LLMs locally in the face of skyrocketing prices for all kinds of memory sound very similar to the cryptoanarchists' ravings about full copies of the blockchain stored locally on every user's device in the face of the exponential growth of its size.
troyvit 12/4/2025||
The only difference is that memory prices skyrocketing is a temporary thing resulting from a spike in demand from incompetent AI megalomaniacs like Sam Altman who don't know how to run a company and are desperate to scale because that's the only kind of sustainability they understand.

Once the market either absorbs that demand (if it's real) or else over-produces for it, RAM prices are going to either slowly come back down (if it's real) or plunge (if it isn't).

People are already running tiny models on their phones, and there's a Mistral 3B model that runs locally in a browser (https://huggingface.co/spaces/mistralai/Ministral_3B_WebGPU).

So we'll see what happens. People used to think crypto currencies were going to herald a new era of democratizing economic (and other) activity before the tech bros turned Bitcoin into a pyramid scheme. It might be too late for them to do the same with locally-run LLMs but the NVidias and AMDs of the world will be there to take our $.
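For anyone who hasn't tried it, running a small model locally really is just a few lines nowadays. A sketch with the transformers library, where the model ID is one small instruct model among many and should be treated as a placeholder:

  from transformers import pipeline

  # Runs entirely on local hardware; a ~0.5B-parameter model fits on CPU.
  generate = pipeline(
      "text-generation",
      model="Qwen/Qwen2.5-0.5B-Instruct",  # placeholder small model
  )

  out = generate(
      [{"role": "user", "content": "Summarize: RAM prices rose on AI datacenter demand."}],
      max_new_tokens=80,
  )
  print(out[0]["generated_text"][-1]["content"])  # the model's reply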

runarberg 12/4/2025||||
There is a case that the indices owned by the major search engines are a form of centralization of power. Normal people and smaller companies would have to pay a lot of money to get indices for their new competing search engine. However the analogy falls apart when you look at a) the scale of the investments involved and b) the pervasiveness of the technology.

Creating a search engine index requires several orders of magnitude less computing power than creating the weights of an LLM. It is theoretically possible for somebody with a lot of money to spare to create a new search index, but only the richest of the rich can do the equivalent with an LLM.

And search engines are there to fulfill exactly one technical niche, albeit an important one. LLMs are stuffed into everything, whether you like it or not. Like if you want to use Zoom, you are not told to “enrich your experience with web search”, you are told, “here is an AI summary of your conversation”.

waffletower 12/4/2025||||
Exactly. I was paying for Gemini Pro and moved to a Claude subscription, and I'm going to switch back to Gemini for the next few months. The cloud centralization, at its current product stage, allows you to be a model butterfly. And these affordable and capable frontier-model subscriptions help me train and modify my local open-weight models.
marginalia_nu 12/4/2025|||
Economies of scale makes this a space that is really difficult to be competitive in as a small player.

If it's ever to be economically viable to run a model like this, you basically need to run it non-stop, and make money doing so non-stop in order to offset the hardware costs.
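A toy amortization model of that point (every number is a made-up placeholder, chosen only to show how utilization dominates the unit cost):

  # Amortize accelerator cost over the tokens it serves in its lifetime.
  gpu_cost_usd = 30_000           # assumed accelerator price
  lifetime_s = 3 * 365 * 86_400   # 3-year depreciation window
  tokens_per_s = 5_000            # assumed batched throughput

  for utilization in (1.0, 0.5, 0.05):
      tokens = lifetime_s * tokens_per_s * utilization
      print(f"{utilization:.0%} busy -> ${gpu_cost_usd / tokens * 1e6:.2f} per million tokens")

At full load the hardware cost per million tokens is cents; at 5% utilization it is roughly 20x higher, which is why idle capacity kills small operators.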

stickfigure 12/4/2025|||
No, the main reason people don't accept AI is that it isn't very good[1] at the things they want to accomplish.

Everyone I know accepts AI for the things it is good at, and rejects it for things it sucks at. The dividing line varies by task and the skill of the operator (both "how good at persuading the AI" and "how easy would it be to just do the job by hand").

In some companies the problem is management layers with thin understanding trying to force AI into the organization because they read some article in CIO Magazine. In other companies (like Microsoft) I suspect the problem is that they're forcing the org to eat their own dogfood and the dogfood kinda sucks.

[1] Yet.

latexr 12/4/2025|||
> This enhances my work. However other people seem to feel threatened.

I wish people would stop spreading this as if it were the main reason. It’s a weak argument and disconnected from reality, like those people who think the only ones who dislike cryptocurrencies are the ones who didn’t become rich from it.

There are plenty of reasons to be against the current crop of AI that have nothing to do with employment. The threat to the environment, the consolidation of resources by the ones at the top, the spread of misinformation and lies, the acceleration of mass surveillance, the decay of critical thinking, the decrease in quality of life (e.g. people who live next to noisy data centres)… Not everything is about jobs and money, the world is bigger than that.

Zoquento 12/4/2025|||
Don't forget the fact that most of the time, the AI tools don't actually work well enough to be worth the trouble.

AI meeting notes are great! After you spend twice as long editing out the errors, figuring out which of the two Daves was talking each time, and removing all the unimportant side-items that were captured in the same level of detail as the core decision.

AI summaries are great - if you're the sort of person that would use a calculator that's wrong 10% of the time. The rest of us realize that an hour spent reading something is more rewarding and useful than an hour spent double checking an AI summary for accuracy.

AI-as-asbestos isn't even an apt comparison: both are toxic and insidious, but asbestos at least had a clear and compelling use case at the time. It solved some problems both better and cheaper than the available alternatives. AI solves problems poorly and at higher cost, and people call you "threatened" if you point that out.

crote 12/4/2025|||
Asbestos was a great product! We used it absolutely everywhere for a reason: fireproof, a great insulator, and you can literally make fabric out of it. If it wasn't for the whole lung cancer thing, it would've still been as common as plastic.

AI, on the other hand? Seems like we're mostly getting cancer.

rvba 12/4/2025|||
AI summaries are great for orgs that don't do them at all (AI is better than nothing), but not that great for orgs that have an agenda and a note-taker - which is very rare, but better quality.
crote 12/4/2025|||
No, having no summary can be more valuable than having a wrong summary.

The problem is that the wrong summary will be treated as the truth, as the original recording will of course have been deleted after a grace period. Oh, you're looking into a way to clean up hanging child processes spawned by your CI worker? Guess it's now on the record that "rvba mentions looking into the best way to kill his children without leaving a trace"! There's no way that could possibly be misinterpreted a few years down the line, right?

stockresearcher 12/4/2025|||
No way. Anything written down becomes the source of truth 3+ months later. Either write it down correctly or don’t write it down at all.
kouru225 12/5/2025||||
Ngl, I feel like most people only accept these criticisms of AI because they're against AI to begin with. If you look at the claims, they fall apart pretty quickly. The environment issue is negligible and has less to do with AI than with computing in general; the consolidation-of-resources argument assumes that larger, more expensive AI models will outcompete smaller local models, and that's not necessarily happening; the spread of misinformation doesn't seem to have accelerated at all since AI came about (probably because we were already at peak misinformation and AI can't add much more); and the decay in critical thinking is far overblown, if not outright manipulated data.

About the only real problem here is the increase in surveillance, and you can avoid that by running your own models, which are getting better by the day. The fact that people are so willing to accept these criticisms without much scrutiny is really just indicative of prior bias.

wseqyrku 12/4/2025|||
Would you say at least fifty percent of people are informed about these?
latexr 12/4/2025|||
It’s not clear to me what exactly you’re asking, but I’ll try to answer it anyway. I’d say that of the people who are against AI, fewer than 50% (to use your number) aren’t against it solely (or primarily, or at all) because they feel threatened for their job. Does that answer your question?
wseqyrku 12/4/2025||
I meant more about this part:

> The threat to the environment, the consolidation of resources by the ones at the top, the spread of misinformation and lies, the acceleration of mass surveillance, the decay of critical thinking

My question was: how many people are actually concerned about those things? If you think about it, it's kind of obvious, but it takes conscious effort to see it, and I suspect not many people do.

code_for_monkey 12/4/2025||
anecdotal but it seems many people are concerned about those things
balamatom 12/4/2025||
I know, right? And very often I have the notion that these things are obvious concerns for all sensible people. Except that lately I've been noticing how, for some reason, I only get that notion as long as I'm regularly consuming one information-containing product or another.

When I turn off my browser, video player, and ebook reader, outside it's a bit of a hellscape really, I really can't wait to get back online where people care about the real things, such as systemic collapse. But while I'm disconnected I do notice how the only thing that people seem to actually be enjoying right now are those self-same glass beads and plague blankets of Big Tech that we're dissing while trapped within them.

th0rine 12/4/2025|||
Yes.

Source: My ass.

Would it make their concerns any less valid, however, if it wasn't?

wseqyrku 12/5/2025||
You might have read that wrong.

Of course not. I think more people should be aware of this, especially when talking about the majority who are outside of our own bubble. If you go to a random place and interact with a random person, you will likely encounter the dominating group, and that, I think, directly correlates with what is going to happen next.

waffletower 12/4/2025|||
I think it is incredibly healthy to be critical and perhaps even a tinge cynical about the intentions of companies developing and productizing large language models (AI). However, the argument here completely ignores the evolving ecosystem of open weight models. Yes, the prominent companies developing frontier models are attempting to build markets and moats where possible, and the capital cloud investments are incredibly centralized. But even in 2025 the choice is there, with your own capital investment (RTX, MacBook etc.), for completely private and decentralized AI. You can also choose your own cloud too -- Cloudflare just acquired Replicate. If enough continue to participate in the open weight ecosystem, this centralization need not be totalitarian.
raxxorraxor 12/4/2025|||
You can still run your own local cloud, but AI providers will be heavily consolidated down to a few.

While for programming tasks I do use Claude currently, local models can be tuned to deliver 80% of the time savings you get from using AI. It depends a bit on the work you do. This will probably improve, while frontier models seem to be hitting hard ceilings.

Where I would disagree is the claim that joining concepts or knowledge works at all with current AI. It works fairly badly in my opinion. Even the logical and mathematical improvements of the latest Gemini model don't impress much yet.

adamhartenz 12/4/2025||
Local models are fine for the way we have been using AI, as a chatbot or a fancy autocomplete. But everyone is cramming AI into everything. Windows will be an agentic OS whether we like it or not. There will be no using your own local model for that use case. It is looking like everything is moving that way.
waffletower 12/4/2025||
Hmmm, maybe use a different OS? I would never dream of using Windows to get any type of work done myself, and there are many others like me. There certainly are choices. If you prefer to stay, MCP services can be configured to use local models, and people are doing so on Windows as well (and definitely on macOS and Linux). From an OS-instrumentation perspective, I think macOS is probably the most mature - Apple has acknowledged MCP and intends a hybrid approach defaulting to their own in-house, on-device models, but by embracing MCP appears to be allowing local model access.
kouru225 12/5/2025|||
Karpathy recently did an interview where he says that the future of AI is 1B models, and I honestly believe him. The small models are getting better and better, and it's going to end up decentralizing power more so than anything else.
lerp-io 12/4/2025|||
But the opposite is actually true. You can use AI to bypass a lot of SaaS solutions.
entropi 12/4/2025||
So you are saying now that you can bypass a lot of solutions offered by a mix of small/large providers by using a single solution from a huge provider, this is the opposite of a centralization of power?
fauigerzigerk 12/4/2025|||
>"by using a single solution from a huge provider"

The parent didn't say that though and clearly didn't mean it.

Smaller SaaS providers have a problem right now. They can't keep up with the big players in terms of features, integrations and aggressive sales tactics. That's why concentration and centralisation is growing.

If a lot of specialised features can be replaced by general purpose AI tools, that could weaken the stranglehold that the biggest Saas players have, especially if those open weights models can be deployed by a large number of smaller service providers or even self hosted or operated locally.

That's the hypothesis I think. I'm not sure it will turn out that way though.

I'm not sure whether the current hyper-competitive situation where we have a lot of good enough open weights models from different sources will continue.

I'm not sure that AI models alone will ever be reliable enough to replace deterministic features.

I'm not sure whether AI doesn't create so many tricky security issues that once again only the biggest players can be trusted to manage them or provide sufficient legal liability protection.

lerp-io 12/4/2025|||
With AI-specialized hardware you can run the open-source models locally too, without the huge provider stealing your precious IP.
entropi 12/4/2025||
Ah, so what you are saying is this: now you can buy your own specialized hardware (realistically produced and sold by a single company on earth), compete with ~3 of the largest multinational corporations to do so (consider RAM prices lately to get a sense of the effect of this competition), spend tens of thousands in the process, and run your "own" model, which someone spent millions to train and released openly for some reason (this is not a point about its existence, it's about its reliability; I don't think it's wise to assume that open models will stay roughly in line with SOTA forever). This way, by spending roughly 1-2 orders of magnitude more, you can eliminate a handful of the SaaS products that you use.

Sorry, I don't see this happening, at least not for the majority. Even if it does, it would still be arguably centralizing.

hatefulheart 12/4/2025||
Sorry, but your SQL comparison is way off. SQL is deterministic, has a defined specification that databases must follow, and when you run a statement it presents a query plan.

This is the absolute opposite to using an LLM. Please stop using this comparison and perhaps look for others, like for example, a randomised search engine.

code_for_monkey 12/4/2025|||
Hear me out though: what if every time you ran an SQL query it made a bunch of stuff up? 80% of the time it's what you want, but sometimes it just imagines data and pulls it out of its butt.
hatefulheart 12/4/2025||
You’re absolutely right!
nutjob2 12/4/2025|||
You're missing the point entirely. He's saying it's horses for courses, each tool has its use and you use the right tool for the job.

And he's right. LLMs are fancy text query engines and work very well as such.

The problem is when people try to shoehorn everything into LLMs. That's a disaster, yet one being pursued vigorously by some.

WorldMaker 12/4/2025||
I think the conflicting point you missed is that "right tool for the job" also implies "right tool". If you don't think that probabilistic output counts as "query response" then LLMs are the "wrong tool" for any text query engine. If a database engine returned the right answer only X% of the time, you would say the database engine is faulty and find another. LLMs are probabilistic algorithms that by their very nature cannot hit 100% accuracy. (Especially as you get into the specifics of things like the lossy "compression" mechanics of tokenization and vectorization. The training set isn't 100% represented without some loss, either.) That doesn't seem like a good fit for a "query engine" tool in a database sense to some of us.

In practice they seem to work well for that at a surface level, most of the time. The complaint is not that LLMs are not a tool for the job of "fancy text query engine", the complaint is that at scale and in the long run, LLMs are not a good tool for that.

tinodb 12/5/2025||
For lots of "text querying" jobs they do a good enough job to be on par with humans (who are not infallible either).

And there are applications where you don’t have/wouldn’t pay another human, and the job that an AI does for mere cents is good enough most of the times. Like doing an analysis on a legacy codebase. I’ll read and verify, but running that “query” then saved me a lot of time.

Not everything needs to be deterministic to be of value.

WorldMaker 12/5/2025||
I agree, they can be "practical tools for the job"; that's where I ended my comment. The disagreement seems to be over whether "practical tool for the job" is the same as "right tool for the job". A hammer can be a practical tool for the job of driving a screw into a wall (once, at least), but few would call it the right tool for that job. An LLM can be a practical tool for a text query (at least as a first pass, at least with review and a grain of salt), but if you need reliability or repeatability or the ability to send results directly to a customer without a human in the loop, it may not be the right tool for the job.

There's obviously a value in practical tools, deterministic or not. It's just worth making the distinction that a practical tool is not always fit for purpose as the "right" tool if you really are seeking the (most) right tool for the job.

mlavrent 12/4/2025||
I was at Microsoft until July of this year, when I left for an SF-based company (not AI though).

The two couldn't be more different with regard to AI tool usage. At Microsoft, they had started penalizing you in perf if you didn't use the AI tools, which were often subpar and which you had no choice in. At the new place, perf doesn't care whether you use AI or not - just what you actually deliver. And, shocker, it turns out they actually spend a lot building and getting feedback on internal AI tooling, and so it gets a lot of use!

The Microsoft culture is a sort of toxic “get AI usage by forcing it down the engineer throats” vs the new “make it actually useful and win users” approach at that new place. The Microsoft approach builds resentment in the engineering base, but I’m convinced it’s the only way leadership there knows how to drive initiatives.

stickfigure 12/4/2025||
Microsoft is forcing the company to dogfood their own tools. They do this because they need the feedback so they can improve their tools, and they think these tools are a critical part of their future.

Presumably your new company isn't building AI tools, so they don't care what you use.

Imagine a developer in 1990s Microsoft saying "I want to use Borland C++ because it's better than the Microsoft IDE". Maybe it is, maybe it isn't, but that's not the point.

sedawk 12/4/2025|||
This is true, but it's effective only if the dogfooders' feedback is accepted and acted upon. Which is not the case; I can tell you this from first-hand experience (I currently work at Msft). Also, unleashing and forcing tech-savvy people to use immature tools is only asking for trouble when you haven't allocated enough manpower to deal with the fallout (that is, the incessant downpour of improvement feedback). One cannot just force engineers to dogfood tools and then ignore them. This is precisely like the Win-8-era mania, only this time it's infecting the whole company, not just a single org!
taurath 12/4/2025||
Not only are you discouraged from criticizing half-baked, manager-metric-led implementations, you're deeply incentivized to openly praise them if you want to be considered for the next well-funded initiative.

People with fiefdoms don't like criticism. Microsoft pays its dependent vassal companies to use its products; no users actually like or would choose the products (Teams? 365 Copilot? Azure?), and the whole enclosed ecosystem is pretty awful.

soco 12/5/2025|||
Count me as that weird user who would choose Azure any second over AWS. The integration and interface stability they offer is simply better. Teams sucks indeed, but as I don't know any less-sucky alternative I'll have to trust you, and with Copilot I never bothered much, so again I can't tell.
sedawk 12/12/2025|||
> ...deeply incentivized to openly praise...

Sadly, you're not far off from describing the reality in many cases!
mlavrent 12/4/2025||||
They do actually build internal tooling! The key is that it's actually good enough that the feedback is limited, targeted, and quickly actionable. Microsoft's internal tooling was immature enough that the general feedback you'd always have is "this is unusable", which is something the teams building the tools could probably figure out themselves before making the whole company spend time beta testing them.

The main point is that the tools need to be of a certain quality/maturity for dogfooding to be effective.

capr 12/4/2025||||
Maybe if they'd let their engineers use Borland C++ they would've learned a thing or two for their own product.
pjmlp 12/5/2025||
I keep telling the WinUI marketing team that instead of talking about how "great" doing XAML C++ is, they should actually buy a copy of C++ Builder.
roncesvalles 12/5/2025||||
Microsoft either doesn't care about feedback or doesn't have the engineering ability to act on them, otherwise Visual Studio and Microsoft Teams wouldn't be such terrible pieces of software despite tens of thousands of Microsoft employees using them daily.
pjmlp 12/5/2025||||
In 2025, C++ Builder is still better than Visual C++ where Windows GUI development in C++ is concerned; some things never change, and management keeps being blind to them.

Regarding dogfooding, Project Reunion was also a victim of the all-in-on-AI push; now the damage is done, and only the Windows team cares, because their jobs depend on using it.

vkou 12/4/2025||||
Look, if you want people to dig trenches with spoons, you can expect them to do it. But if all you're giving them is spoons, you're going to need to give them a lot of slack on the expected digging schedules.

Being forced to use a shit tool because <some other department somewhere in the company wants your feedback>, while your deadlines haven't been adjusted for all this wasted time, is not acceptable behaviour. It's the kind of authoritarian horseshit that's so often pushed by unproductive parasites onto people who do actual work.

OhMeadhbh 12/5/2025|||
Dogfooding is great and all, but if you're forcing your engineering staff to dogfood something, you should make sure you're in the same industry as your customers. I've always had a bit of respect for MSFT products in the "I'm a company with about 5 reasonable, but not stellar developers" space. Do I want to build an operating system with Visual Basic? No. Do I force C++ on our loading dock foreman who upskilled to a VB4 dev 'cause he knows the problem domain inside and out? Also no. MSFT traditionally attracted "above average" devs who had the support to work on big projects for a (comparatively) long time.

As J. R. "Bob" Dobbs once said, "I don't practice what I preach because I'm not the kind of person I'm preaching to." ( see https://en.wikiquote.org/wiki/J._R._%22Bob%22_Dobbs )

Maybe the engineers complaining about dogfooding vibe-coding tools aren't the kind of developers you should have vibe-coding.

etruong42 12/5/2025|||
> At the new place, perf doesn’t care if you use AI or not- just what you actually deliver

I work at Google, and I am of the overall opinion that it doesn't matter what you deliver from an engineering perspective. I've seen launches that changed some behavior from opt-in to opt-out get lauded as worth engineering-years of investment. I've seen demos that were 1-2 years ahead of our current product performance get buried under bureaucracy and nitpicking while the public product languishes with nearly no usage. The point being, what you objectively deliver doesn't matter, but what ends up mattering is how the people in your orbit weave the narrative about what you built.

So if "leadership" wants something concretely done, they must mandate it in such a way that cuts through all the spin that each layer of bureaucracy adds before presenting it to the next layer of bureaucracy. And "leadership" isn't a single person, so you might describe leaders as individual vectors in a vector space, and a clear eigenvector in this space of leadership decisions in many companies is the vector of "increase employee usage of AI tools".

wseqyrku 12/4/2025|||
> they had started penalizing you in perf if you didn’t use the AI tools

That is kind of insane, right? They are practically mining their own people for data; one wonders what they wouldn't do to their customers.

indrora 12/4/2025|||
I too was at MSFT until the July layoffs.

Hang around old Microsofties and you'll encounter a phrase: "The Deal." The Deal is this informal agreement: Microsoft doesn't pay amazingly but you're given the time to have work-life balance, you can be relatively assured that upper leadership gives a shit about the ICs, there's space for "... So I was thinking..." to become real "... and that's our next product" discussions and that it's okay to fall so long as you can get back up and keep walking afterwards.

The Deal is dead.

People fired for performance after a bad review their manager never gave them. The constant slimming of orgs and the relentless gnawing at budgets. I watched as a team went from reasonable to gutted because it got the short straw in "unregretted attrition quotas".

AI is driving this, and I want to see the chat logs between executives and copilot. What sycophantic shit is it producing that is driving them to make horrible decisions?

roncesvalles 12/5/2025|||
The Deal died when Microsoft got on the layoff bandwagon in Q1 2023 for no good reason and became very aggressive with perf after that. If Microsoft is just as toxic and unstable as Meta, why not just work at Meta for double the money?

Funnily, Apple also has an unspoken "deal" (pay a bit low but treat really well) and they stuck to it even through the layoff era.

taurath 12/4/2025||||
AI is busy quietly convincing every executive who uses it that they no longer need people to work out the details of their ideas. It's so frustrating to have these drive-by executives come into a space you're working in, drop a 15-page deep-think report they got from a 2-sentence prompt, and call that contributing. Bonus points if the report is from an AI platform your company hasn't approved, so you as a line employee could get written up for using it.
HacklesRaised 12/4/2025||||
Is it AI, or is it being run by people entirely divorced from the founder's vision?
balamatom 12/4/2025||
I think it's being compliant with the founder's vision entirely.
pjmlp 12/5/2025||||
From the outside it's clearly visible how Project Reunion crashed, C++/WinRT went into maintenance, VC++ lost steam on ISO compliance after boasting about C++20 and C11/17, and .NET's focus shifted to Aspire/Blazor and all-in AI to the detriment of the rest, ...

Thankfully I am a technology mercenary and a polyglot, and I use whatever the clients need, regardless of my point of view, but it is sad to see the human part behind those decisions being affected.

joshwa 12/4/2025|||
URA quotas—I see the Amazon infection has spread from Seattle to Redmond.
zeristor 12/4/2025||
Perhaps the managers' performance goals are linked to uptake; this sometimes happens, and then it all becomes too blunt.
bccdee 12/3/2025||
> Engineers don't try because they think they can't.

This article assumes that AI is the centre of the universe, failing to understand that that assumption is exactly what's causing the attitude they're pointing to.

There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money). This isn't a strict dichotomy; often companies with real products will mix in tidbits of hype, such as Microsoft's "pivot to AI" which is discussed in the article. But moving toward one pole moves you away from the other.

I think many engineers want to stay as far from hype-driven tech as they can. LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated. I'd rather spend my time delivering value to customers than performing "big potential" to investors.

So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.

averyvery 12/3/2025||
Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out. Some models are better at some tasks, some tools are better at finding the right text and connecting it to the right actions, some tools provide a better wrapper around the text-generation process. Certain jobs are very easy for AI to do, others are impossible (but the AI lies about them).

A lot of us tried it and just said, "huh, that's interesting" and then went back to work. We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.

There's just not much value in chasing the endless AI news cycle, constantly believing that I'll fall behind if I don't read the latest details of Gemini 3.1 and ChatGPT 6.Y (Game Of The Year Edition). The engineers I know who use AI don't seem to have any particular insights about it aside from an encyclopedic knowledge of product details, all of which are changing on a monthly basis anyway.

New products that use gen AI are — by default — uninteresting to me because I know that under the hood, they're just sending text and getting text back, and the thing they're sending to is the same thing that everyone is sending to. Sure, the wrapper is nice, but I'm not paying an overhead fee for that.

layer8 12/3/2025|||
> Yeah, "Engineers don't try" is a frustrating statement. We've all tried generative AI, and there's not that much to it — you put text in, you get text back out.

"Engineers don't try" doesn’t refer to trying out AI in the article. It refers to trying to do something constructive and useful outside the usual corporate churn, but having given up on that because management is single-mindedly focused on AI.

One way to summarize the article is: The AI engineers are doing hype-driven AI stuff, and the other engineers have lost all ambition for anything else, because AI is the only thing that gets attention and helps the career; and they hate it.

cs02rm0 12/3/2025|||
> the other engineers have lost all ambition for anything else

Worse, they've lost all funding for anything else.

zwnow 12/3/2025||
Industries are built upon shit people built in their basements, get hacking
r14c 12/3/2025|||
I think it should be noted that a garage or basement in California costs like a million dollars.
mlrtime 12/4/2025||
That was true before Crypto and AI.
r14c 12/5/2025||
Yes, it just puts the whole "I started Apple in my garage"-style narrative into context.
cs02rm0 12/4/2025|||
I am! No one's interested in any of it though...
LtWorf 12/4/2025||
You need to buy fake stars on GitHub, fake-download it 2 million times a day, and ask an AI to spam about it on Twitter/LinkedIn.
zdragnar 12/4/2025||||
ZIRP is gone, and so are the Good Times when any idiot could get money with nothing but a PowerPoint slide deck and some charisma.

That doesn't mean investors have gotten smarter, they've just become more risk averse. Now, unless there's already a bandwagon in motion, it's hard as hell to get funded (compared to before at least).

didibus 12/4/2025||||
Are you sure it refers to that? Why would it later say:

> now believes she's both unqualified for AI work

Why would she believe she's unqualified for AI work if "Engineers don't try" wasn't about her trying to adopt AI?

FridgeSeal 12/3/2025|||
“Lost all ambition for anything else” is a funny way for the article to frame “hate being forced to run on the hamster wheel of AI, because an exec with the power to fire everyone is foaming at the mouth about AI and seemingly needs everyone to use it”
badbird33 12/3/2025||
To add another layer to it, the reason execs are foaming at the mouth is that they are hoping to fire as many people as possible. Including those who implemented whatever AI solution in the first place.
shahbaby 12/4/2025||||
The most ironic part is that AI skills won't really help you with job security.

You touched on some of the reasons; it doesn't take much skill to call an API, the technology is in a period of rapid evolution, etc.

And now with almost every company trying to adopt "AI" there is no shortage of people who can put AI experience on their resume and make a genuine case for it.

gedy 12/3/2025||||
Maybe not what the OP or article is talking about, but it's super frustrating recently dealing with non/less technical mgrs, PMs, etc. who now think they have this Uno card to bypass technical discussion just because they vibe coded some UI demo. Like no shit, that wasn't the hard part. But since they don't see the real/less visible parts like data/auth/security, etc., they act like engineers "aren't trying", are less innovative, anti-AI or whatever when you bring up objections to their "whole app" they made with their AI snoopy snow cone machine.
spaniard89277 12/3/2025|||
My experience too. They are so convinced that AI is magical that pushing back makes you look bad.

Then things don't turn out as they expected and you have to deal with a dude thinking his engineers are messing with him.

It's just boring.

pzs 12/4/2025||||
Hmm, (whatever is in execs' heads about) AI appears to amplify the same kind of thinking fallacies that are discussed in the eternal Mythical Man-Month essay, which was written like half a century ago. Funny how some things don't change much...
anonymars 12/3/2025|||
It reminds me of how we moved from "mockups" to "wireframes" -- in other words, deliberately making the appearance not look like a real, finished UI, because that could give the impression that the project was nearly done

But now, to your point: they can vibe-code their own "mockups" and that brings us back to that problem

palmotea 12/3/2025||||
> We hear AI advocates say that their workflow is amazing, but we watch videos of their workflow, and it doesn't look that great. We hear AI advocates say "the next release is about to change everything!", but this knowledge isn't actionable or even accurate.

There's a lot of disconnected-from-reality hustling (a.k.a lying) going on. For instance, that's practically Elon Musk's entire job, when he's actually doing it. A lot of people see those examples, think it's normal, and emulate it. There are a lot of unearned superlatives getting thrown around automatically to describe tech.

HumblyTossed 12/4/2025||
Yes, much the way some used to (still do?) try and emulate Steve Jobs. There's always some successful person these types are trying to be.
SV_BubbleTime 12/3/2025||||
This isn’t “unfair”, but you are intentionally underselling it.

If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

I’m not making any case for anything, but it’s just not that hard to get excited for something that sure does seem like magic sometimes.

Edit: lol this forum :)

nosianu 12/3/2025|||
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right

I AM very impressed, and I DO use it and enjoy the results.

The problem is the inconsistency. When it works it works great, but it is very noticeable that it is just a machine from how it behaves.

Again, I am VERY impressed by what was achieved. I even enjoy Google AI summaries to some of the questions I now enter instead of search terms. This is definitely a huge step up in tier compared to pre-AI.

But I'm already done getting used to what is possible now. Changes after that have been incremental, nice to have and I take them. I found a place for the tool, but if it wanted to match the hype another equally large step in actual intelligence is necessary, for the tool to truly be able to replace humans.

So, I think the reason you don't see more glowing reviews and praise is that the technical people have found out what it can do and can't, and are already using it where appropriate. It's just a tool though. One that has to be watched over when you use it, requiring attention. And it does not learn - I can teach a newbie and they will learn and improve, I can only tweak the AI with prompts, with varying success.

I think that by now I have developed a pretty good feel for what is possible. Changing my entire workflow to using it is simply not useful.

I am actually one of those not enjoying coding as such, but wanting "solutions", probably also because I now work for an IT-using normal company, not for one making an IT product, and my focus most days is on actually accomplishing business tasks.

I do enjoy being able to do some higher level descriptions and getting code for stuff without having to take care of all the gritty details. But this functionality is rudimentary. It IS a huge step, but still not nearly good enough to really be able to reliably delegate to the AI to the degree I want.

jandrese 12/3/2025||||
The big problem is that AI is amazing at the rote boilerplate stuff that generally wasn't a problem to begin with, but if you were to point a codebot at your trouble-ticket system and tell it to go fix the issues, it would be hopeless. Once your system gets complex enough, AI effectiveness drops off rapidly, and you as the engineer have to spend more and more time babysitting every step to make sure it doesn't go off the rails.

In the end you can save like 90% of the development effort on a small one-off project, and like 5% of the development effort on a large complex one.

I think too many managers have been absolutely blown away by canned AI demos and toy projects and have not been properly disappointed when attempting to use the tools on something that is not trivial.

DrewADesign 12/3/2025||
I think the 90/90 rule comes into play. We all know the Tom Cargill quote (even if we’ve never seen it attributed):

The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.

It feels like a gigantic win when it carves through that first 90%… like, “wow, I’m almost done and I just started!”. And it is a genuine win! But for me it’s dramatically less useful after that. The things that trip up experienced developers really trip up LLMs and sometimes trying to break the task down into teeny weeny pieces and cajole it into doing the thing is worse than not having it.

So great with the backhoe tasks but mediocre-to-counterproductive with the shovel tasks. I have a feeling a lot of the impressiveness depends on which kind of tasks take up most of your dev time.

zeroonetwothree 12/4/2025||
The other problem is that if you didn't actually write the first 90% then the second 90% becomes 2x harder since you have to figure out wtf is actually going on.
DrewADesign 12/5/2025||
Right— that’s bitten me ‘whipping up’ prototypes. My assumption about the way the LLM would handle some minutiae ends up being wrong, and finding out why something isn’t working ends up taking more time than doing it right the first time by hand. The worst part is that you can’t even factor it into your already inaccurate work-time estimates, because it could strike anywhere — including things you’d never mess up yourself.
sydd 12/3/2025||||
The more I use AI for coding, the more I realize it's a toy for vibe coding and fun projects. It's not for serious work.

When you work with a large codebase that has a very high complexity level, the bugs AI puts in there aren't worth the cost of the easily added features.

Libidinalecon 12/3/2025||
Many people also program and have no idea what a giant codebase looks like.

I know I don't. I have never been paid to write anything beyond a short script.

I actually can't even picture what a professional software engineer actually works on day to day.

From my perspective, it is completely mind blowing to write my own audio synth in python with Librosa. A library I didn't know existed before LLMs and now I have a full blown audio mangling tool that I would have never been able to figure out on my own.
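
For the curious, the kind of mangling I mean can start from just a few lines. A minimal sketch, assuming an input file named in.wav (illustrative only, not my actual tool):

    import librosa
    import soundfile as sf

    # load a sample at its native sample rate (mono float array)
    y, sr = librosa.load("in.wav", sr=None)

    # slow it down and pitch it down a fourth - cheap but satisfying mangling
    y = librosa.effects.time_stretch(y, rate=0.75)
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=-5)

    sf.write("out.wav", y, sr)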

It seems to me professional software engineering must be at least as different to vibe coding as my audio noodlings are to being a professional concert pianist. Both are audio and music related but really two different activities entirely.

xwolfi 12/4/2025||
I work on a stock market trading system in a big bank, in Hong Kong.

The code is split between a backend in Java (no GC allowed during trading) and C++ (for algos), a frontend in C# (as complex as the backend, used by 200 traders), and a "new" frontend in Javascript in infinite migration.

Most of the code was written before 2008, but that was the CVS-to-SVN switch, so we lost the history before that. We have employees dating back to 1997 who remember the platform already existing.

It's made of millions of lines of code, hundreds of people worked on it, it does intricate things in 10 stock markets across Asia (we have no clue how the others in US or EU do, not really at least - it's not the same rules, market vendors, protocols etc)

Sometimes I need to configure new trading robots for random little things we want to do automatically, and I ask the AI the company is shoving down our throats. It is HOPELESS, literally hopeless. I had to write a review for my manager, one that was absolutely scathing, and they will never pass it up the ladder for fear of the response. It cannot understand the code, let alone write some; it cannot write the tests; it cannot generate configuration; it cannot help with anything. It's always wrong, it never gets it, it doesn't know what the fuck these 20 different repos of thousands of files are, how they connect to each other, why it's in so many languages, why it's so quirky sometimes.

Should we change it all to make it AI-compatible, or give up? Fuck do I know... When I started working on it 7 years ago, coming from little startups doing little things, it took me a few weeks to totally get the philosophy of it all and be productive. It's really not that hard, it's just really really really really large, so you have to embrace certain ways of working (for instance, you'll write bugs, and you'll find them too late, and you'll apologize in post-mortems; don't be paralyzed by it). AIs that cost all that money and are still so dumb and useless are disappointing :(

ungovernableCat 12/4/2025||
There’s a reason why it’s so much better at writing JavaScript than HFT C++.

The latter kind of codebase doesn't tend to be in GitHub repos as much.

rconti 12/3/2025||||
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

Or your job isn't what AI is good at?

AI seems really good at greenfield projects in well known languages or adding features.

It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.

perardi 12/3/2025|||
> It's been pretty awful, IME, at working with less well-known languages, or deep troubleshooting/tweaking of complex codebases.

This is precisely my experience.

Having the AI work on a large mono repo with a front-end that uses a fairly obscure templating system? Not great.

Spinning up a greenfield React/Vite/ShadCN proof-of-concept for a sales demo? Magic.

Aeolun 12/3/2025|||
> It's been pretty awful, IME, at working with less well-known languages

Well, there’s your problem. You should have selected React while you had the chance.

bigstrat2003 12/3/2025||||
This shit right here is why people hate AI hype proponents. It's like it never crosses their mind that someone who disagrees with them might just be an intelligent person who tried it and found it was lacking. No, it's always "you're either doing it wrong or weren't really trying". Do you not see how condescending and annoying that is to people?
ModernMech 12/3/2025||||
> If you haven’t had a mind blown moment with AI yet...

Results are stochastic. Some people the first time they use it will get the best possible results by chance. They will attribute their good outcome to their skill in using the thing. Others will try it and will get the worst possible response, and they will attribute their bad outcome to the machine being terrible. Either way, whether it's amazing or terrible is kind of an illusion. It's both.

nutjob2 12/4/2025||||
Your whole comment reads like someone who is a victim of hype.

LLMs are great in their own way, but they're not a panacea.

You may recall that magic is a way to trick people into believing things that are not true. The mythical form of magic doesn't exist.

Jblx2 12/3/2025||||
I wonder if this issue isn't caused by people who aren't programmers, who now can churn out AI-generated stuff they couldn't before. So to them, this is a magical new ability. Whereas people who are already adept at their craft just see the slop. Same thing in other areas. In the before-times, you had to painstakingly handcraft your cat memes. Now a bot comes along and allows someone to make cat memes they wouldn't have bothered with before. But the real artisan cat memeists just roll their eyes.
etempleton 12/3/2025||
AI is better than you at what you aren’t very good at. But once you are even mediocre at something, you realize AI is wrong / pretty bad at doing most things and every once in a while makes a baffling mistake.

There are some exceptions where AI is genuinely useful, but I have employees who try to use AI all the time for everything and their work is embarrassingly bad.

Jblx2 12/4/2025||
>AI is better than you at what you aren’t very good at.

Yes, this is better phrased.

ggerni 12/3/2025||||
[flagged]
antonvs 12/3/2025|||
> If you haven’t had a mind blown moment with AI yet, you aren’t doing it right or are anchoring in what you know vs discovering new tech.

Much of this boils down to people simply not understanding what’s really happening. Most people, including most software developers, don’t have the ability to understand these tools, their implications, or how they relate to their own intelligence.

> Edit: lol this forum :)

Indeed.

senordevnyc 12/3/2025|||
I’ve been an engineer for 20 years, for myself, small companies, and big tech, and now working for my own saas company.

There are many valid critiques of AI, but “there’s not much there” isn’t one of them.

To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems. Maybe AI isn’t the right tool for the job, but that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction. Either way, the market is going to punish them accordingly.

jacquesm 12/3/2025|||
Punishment eh? Serves them right for being skeptical.

I've been around long enough that I have seen four hype cycles around AI like coding environments. If you think this is new you should have been there in the 80's (Mimer, anybody?), when the 'fourth generation' languages were going to solve all of our coding problems. Or in the 60's (which I did not personally witness on account of being a toddler), when COBOL, the language for managers was all the rage.

In between there was LISP, the AI language (and a couple of others).

I've done a bit more than look at this and say 'huh, that's interesting'. It is interesting. It is mostly interesting in the same way that when you hand an expert a very sharp tool, they can probably carve wood better than with a blunt one. But that's not what is happening. Experts are already pretty productive; they might become a little more productive, but the AI has its own envelope of expertise, and the closer you are to the top of the field, the smaller your returns in that particular setting will be.

In the hands of a beginner there will be blood all over the workshop and it will take an expert to sort it all out again, quite possibly resulting in a net negative ROI.

Where I do get use out of it: to quickly look up some verifiable fact, to tell me what a particular acronym stands for in some context, to be slightly more functional than wikipedia for a quick overview of some subfield (but you better check that for gross errors). So yes, it is useful. But it is not so useful that competent engineers that are not using AI are failing at their job, and it is at best - for me - a very mild accelerator in some use cases. I've seen enough AI driven coding projects strand hopelessly by now to know that there are downsides to that golden acorn that you are seeing.

The few times that I challenged the likes of ChatGPT with an actual engineering problem to which I already knew the answer by way of verification the answers were so laughably incorrect that it was embarrassing.

dgacmu 12/3/2025||
I'm not a big llm booster, but I will say that they're really good for proof of concepts, for turning detailed pseudocode into code, sometimes for getting debugging ideas. I'm a decade younger than you, but I've programmed in 4GLs (yuch), lived through a few attempts at visual programming (ugh), and ... LLM assistance is different. It's not magic and it does really poorly at the things I'm truly expert at, but it does quite well with boring stuff that's still a substantial amount of programming.

And for the better. I've honestly not had this much fun programming applications (as opposed to student stuff and inner loops) in years.

jacquesm 12/3/2025||
> but it does quite well with boring stuff that's still a substantial amount of programming.

I'm happy that it works out for you, and probably this is a reflection of the kind of work that I do, I wouldn't know how to begin to solve a problem like designing a braille wheel or a windmill using AI tools even though there is plenty of coding along the way. Maybe I could use it to make me faster at using OpenSCAD but I am never limited by my typing speed, much more so by thinking about what it is that I actually want to make.

dgacmu 12/4/2025||
I've used it a little for openscad with mixed results - sometimes it worked. But I'm a beginner at openscad and suspect if I were better it would have been faster to just code it. It took a lot of English to describe the shape - quite possibly more than it would have taken to just write in openscad. Saying "a cube 3cm wide by 5cm high by 2cm deep" vs cube([5, 3, 2]) ... and as you say, the hard part is before the openscad anyway.
jacquesm 12/4/2025||
OpenSCAD has a very steep learning curve. The big trick is not to think sequentially but to design the part 'whole'. That requires a mental switch. Instead of building something and then adding a chamfered edge (which is possible, but really tricky if the object is complex enough) you build it out of primitives that you've already chamfered (or beveled). A strategic 'hull' here and there to close the gaps helps a lot.

Another very useful trick is to think in terms of the vertices of your object rather than the primitives created by those vertices. You then put hulls over the vertices, and if you use little spheres for the vertices, the edges take care of themselves.

This is just about edges and chamfers, but the same kind of thinking applies to most of OpenSCAD. If I compare how productive I am with OpenSCAD vs using a traditional step-by-step UI driven cad tool it is incomparable. It's like exploratory programming, but for physical objects.

mjr00 12/3/2025||||
> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.

"There's not much there" is a totally valid critique of a lot of the current AI ecosystem. How many startups are simple prompt wrappers on top of ChatGPT? How many AI features in products are just "click here to ask Rovo/Dingo/Kingo/CutesyAnthropomorphizedNameOfAI" text boxes that end up spitting out wrong information?

There's certainly potential but a lot of the market is hot air right now.

> Either way, the market is going to punish them accordingly.

I doubt this, simply because the market has never really punished people for being less efficient at their jobs, especially software development. If it did, people proficient in vim would have been getting paid more than anyone else for the past 40 years.

afavour 12/4/2025|||
IMO if the market is going to punish anyone it’s the people who, today, find that AI is able to do all their coding for them.

The skeptics are the ones that have tried AI coding agents and come away unimpressed because it can’t do what they do. If you’re proudly proclaiming that AI can replace your work then you’re telling on yourself.

jacquesm 12/4/2025|||
> If you’re proudly proclaiming that AI can replace your work then you’re telling on yourself.

That's a very interesting observation. I think I'm safe for now ;)

bluesnowmonkey 12/4/2025|||
> it can’t do what they do

That's asking the wrong question, and I suspect coming from a place of defensiveness, looking to justify one's own existence. "Is there anything I can do that the machine can't?" is the wrong question. "How can I do more with the machine's help?" is the right one.

micromacrofoot 12/3/2025||||
What's "there" though is that despite being wrappers of ChatGPT, the product itself is so compelling that it's essentially got a grip on the entire American economy. That's why everyone's crabs-in-a-bucket about it; there's something real that everyone wants to hitch on to. People compare crypto or NFTs to this in terms of hype cycle, but it's not even close.
johnnyanmac 12/4/2025||
>there's something real that everyone wants to hitch on to.

Yeah, stock prices, unregulated consolidation, and a chance to replace the labor market. Next to penis enhancement, it's a CEO's wet dream. They will bet it all for that chance.

Granted, I think its hastiness will lead to a crash, so the CEOs played themselves short term.

micromacrofoot 12/4/2025||
Sure, but under it all there's something of value... that's why it's a much larger hype wave than dick pills
thewebguyd 12/3/2025||||
> simply because the market has never really punished people for being less efficient at their jobs

In fact, it tends to be the opposite. You being more efficient just means you get "rewarded" with more work, typically without an appropriate increase in pay to match the additional work either.

Especially true in large, non-tech companies/bureaucratic enterprises where you are much better off not making waves, and being deliberately mediocre (assuming you're not a ladder climber and aren't trying to get promoted out of an IC role).

In a big team/org, your personal efficiency is irrelevant. The work can only move as fast as the slowest part of the system.

YZF 12/4/2025|||
This is very true. So you can't just ask people to use AI and expect better output even if AI is all the hype. The bottlenecks are not how many lines of code you can produce in a typical big team/company.

I think this means a lot of big businesses are about to get "disrupted", because small teams can become more efficient; for them, the sheer generation of sometimes-boilerplate, low-quality code actually is a bottleneck.

akra 12/4/2025|||
Sadly, capitalism rewards scarcity at a macro level, which in some ways is the opposite of efficiency. It also grants "social status" to the scarce via more resources. As long as you aren't disrupted, and everyone in your industry does the same/colludes, restricting output and working less usually commands more money up to a certain point (prices are set more as a monopoly in these markets). It's just that scarcity was in the past correlated with difficulty, which made it "somewhat fair" -> AI changes that.

It's why unions, associations, professional bodies, etc. exist, for example. This whole thread is an example -> the value gained from efficiency in SWE jobs doesn't seem to be accruing to the people with SWE skills.

YZF 12/4/2025|||
I think part of this is that there is no one AI and there is no one point in time.

The other day Claude Code correctly debugged an issue for me, one seen in production in a large product. It found a bug a human wrote and a human reviewed, and fixed it. For those interested, the bug had to do with chunk decoding: the author incorrectly re-initialized the decoder inside the loop, for every chunk. So a single chunk works; more than one chunk fails.
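
The code isn't public, so here's a minimal Python sketch of the same bug pattern, with zlib standing in for whatever the actual chunk decoder was:

    import zlib

    def decode_buggy(chunks):
        out = b""
        for chunk in chunks:
            d = zlib.decompressobj()  # BUG: decoder state is reset for every chunk
            out += d.decompress(chunk)
        return out

    def decode_fixed(chunks):
        d = zlib.decompressobj()      # initialize once, keep state across chunks
        out = b""
        for chunk in chunks:
            out += d.decompress(chunk)
        return out + d.flush()

    data = zlib.compress(b"hello " * 1000)
    half = len(data) // 2

    decode_buggy([data])                        # works: one chunk is a whole stream
    decode_fixed([data[:half], data[half:]])    # works: state survives across chunks
    try:
        decode_buggy([data[:half], data[half:]])
    except zlib.error:
        print("second chunk starts mid-stream, so a fresh decoder chokes on it")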

I was not familiar with the code base. Developers who worked on the code base spent some time and didn't figure out what was going on. They also were not familiar with the specific code. But once Claude pointed this out that became pretty obvious and Claude rewrote the code correctly.

So when someone tells me "there's not much there" and when the evidence says the opposite I'm going to believe my own lying eyes. And yes, I could have done this myself but Claude did this much faster and correctly.

That said, it does not handle all tasks with the same consistency. Some things it can really mess up. So you need to learn what it does well and what it does less well and how and when to interact with it to get the results you want.

It is automation on steroids with near-human (let's say intern-level) capabilities. It makes mistakes, sometimes stupid ones, but so do humans.

johnnyanmac 12/4/2025||
>So when someone tells me "there's not much there" and when the evidence says the opposite I'm going to believe my own lying eyes. And yes, I could have done this myself but Claude did this much faster and correctly.

If the stories were more like this where AI was an aid (AKA a fancy auto complete), devs would probably be much more optimistic. I'd love more debugging tools.

Unfortunately, the lesson an executive here would take is "wow, AI is great! fire those engineers who didn't figure it out". Then it creeps to "okay, have AI make a better version of this chunk decoder". Which is wrong on multiple levels. Can you imagine if the response to using Intellisense for the first time was to slash your office in half? I'd hate autocomplete too.

handstitched 12/3/2025||||
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

I would argue that the "actual job" is simply to solve problems. The client / customer ultimately do not care what technology you use. Hell, they don't really care if there's technology at all.

And a lot of software engineers have found that using an LLM doesn't actually help solve problems, or the problems it does solve are offset by the new problems it creates.

senordevnyc 12/3/2025||
Again, AI isn’t the right tool for every job, but that’s not the same thing as a shallow dismissal.
bigstrat2003 12/3/2025|||
What you described isn't a shallow dismissal. They tried it, found it to not be useful in solving the problems they face, and moved on. That's what any reasonable professional should do if a tool isn't providing them value. Just because you and they disagree on whether the tool provides value doesn't mean that they are "failing at their job".
jacquesm 12/4/2025|||
It is however much less of a shallow dismissal of a tool than your shallow dismissal of a person, or in fact a large group of persons.
eschaton 12/3/2025||||
Or maybe it indicates that the person looking at the LLM and deciding there’s not much there knows more than you do about what they are and how they work, and you’re the one who’s wrong about their utility.
johnnyanmac 12/4/2025||||
>To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems

This feels like a mentality of "a solution trying to find a problem". There's enough actual problems to solve that I don't need to create more.

But sure, the extension of this is "Then they go home, research more usages, and see a kerfuffle of legal, community, and environmental concerns. Then they decide not to get involved in the politics."

>Either way, the market is going to punish them accordingly.

If you want to punish me because I gave evaluations you disagreed with, you're probably not a company I want to work for. I'm not a middle manager.

josephg 12/3/2025||||
It really depends on what you’re doing. AI models are great at kind of junior programming tasks. They have very broad but often shallow knowledge - so if your job involves jumping between 18 different tools and languages you don’t know very well, they’re a huge productivity boost. “I don’t write much sql, or much Python. Make a query using sqlalchemy which solves this problem. Here’s our schema …”
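
To make that concrete, the answer such a prompt gets back tends to look something like this hypothetical sketch (User and Order are made-up mapped classes, SQLAlchemy 2.0 style):

    from sqlalchemy import create_engine, select, func
    from sqlalchemy.orm import Session

    # hypothetical mapped classes, with Order.user_id referencing User.id
    from myapp.models import User, Order

    engine = create_engine("sqlite:///app.db")

    # total order value per user, biggest spenders first
    stmt = (
        select(User.name, func.sum(Order.amount).label("total"))
        .join(Order, Order.user_id == User.id)
        .group_by(User.name)
        .order_by(func.sum(Order.amount).desc())
    )

    with Session(engine) as session:
        for name, total in session.execute(stmt):
            print(name, total)

Perfectly serviceable junior-level code, and exactly the broad-but-shallow sweet spot.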

AI is terrible at anything it hasn’t seen 1000 times before on GitHub. It’s bad at complex algorithmic work. Ask it to implement an order statistic tree with internal run length encoding and it will barely be able to get off the starting line. And if it does, the code will be so broken that it’s faster to start from scratch. It’s bad at writing rust. ChatGPT just can’t get its head around lifetimes. It can’t deal with really big projects - there’s just not enough context. And its code is always a bit amateurish. I have 10+ years of experience in JS/TS. It writes code like someone with about 6-24 months experience in the language. For anything more complex than a react component, I just wouldn’t ship what it writes.

I use it sometimes. You clearly use it a lot. For some jobs it adds a lot of value. For others it’s worse than useless. If some people think it’s a waste of time for them, it’s possible they haven’t really tried it. It’s also possible their job is a bit different from your job and it doesn’t help them.

afavour 12/3/2025||||
> that kind of shallow dismissal indicates a closed mind, or perhaps a fear-based reaction

Or, and stay with me on this, it’s a reaction to the actual experience they had.

I’ve experimented with AI a bunch. When I’m doing something utterly formulaic it delivers (straightforward CRUD type stuff, or making a web page to display some data). But when I try to use it with the core parts of my job that actually require my specialist knowledge they fall apart. I spend more time correcting them than if I just write it myself.

Maybe you haven’t had that experience with work you do. But I have, and others have. So please don’t dismiss our reaction as “fear based” or whatever.

crote 12/4/2025||||
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

To me, any software engineer who tries Haskell, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

To me, any software engineer who tries Emacs, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

To me, any software engineer who tries FreeBSD, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job, which is using technology to solve problems.

We're getting paid to solve the problem, not to play with the shiniest newest tools. If it gets the job done, it gets the job done.

gishh 12/3/2025||||
> There are many valid critiques of AI, but “there’s not much there” isn’t one of them.

I have solved more problems with tools like sed and awk, you know, actual tools, than I've ever solved by entering tokens into an LLM.

Nobody seemed to give a fuck as long as the problem was solved.

This is getting out of hand.

Aeolun 12/3/2025|||
Just because you can solve problems with one class of tools doesn’t mean another class is pointless. A whole new class of problems just became solvable.
59nadir 12/4/2025||
> A whole new class of problems just became solvable.

This is almost by definition not really true. LLMs spit out whatever they were trained on, mashed up. The solutions they have access to are exactly the ones that already exist, and for the most part those solutions will have existed in droves to have any semblance of utility to the LLM.

If you're referring to "mass code output" as "a new class of problem", we've had code generators of differing input complexity for a very long time; it's hardly new.

So what do you really mean when you say that a new class of problems became solvable?

DonHopkins 12/3/2025|||
But sed and awk are problems.
justatdotin 12/4/2025||||
I would've thought that in 20 years you would have met other devs who do not think like you?

something I enjoy about our line of work is there are different ways to be good at it, and different ways to be useful. I really enjoy the way different types of people make a team that knows its strengths and weaknesses.

anyway, I know a few great engineers who shrug at the agents. I think different types of thinker find engagement with these complex tools to be a very different experience. these tools suit some but not all and that's ok

jacquesm 12/4/2025||
This is the correct viewpoint (in my opinion, of course). There are many ways that lead to a solution, some are better, some are worse, some are faster, some much slower. Different tools and different strokes for different folks and if it works for you then more power to you. That doesn't mean you get to discard everybody for whom it does not work in exactly the same way.

I think a big mistake junior managers make is that they think that their nominal subordinates should solve problems the way that they would solve them, without recognizing that there are multiple valid paths and that it doesn't so much matter which path is chosen as long as the problem is solved on time and within the allocated budget.

bluGill 12/3/2025||||
I use AI all the time, but the only gain it gives me is better spelling and grammar than mine; spelling and grammar have long been my weak point. I can write the same code it writes just as fast without it - typing has never been the bottleneck in writing code. The bottleneck is thinking, and I still need to understand the code AI writes since it is incorrect rather often, so it isn't saving any effort other than the time to look up the middle word of some long variable name.
andrei_says_ 12/4/2025||||
My dismissal, I think, indicates exhaustion from the additional work I’d need to do to make an LLM write my code, annoyance at its inaccuracies, and disgust at the massive scam and grift of the LLM influencers.

Writing code via an LLM feels like writing with a wet noodle. It’s much faster to write what I mean, myself, with the terse was and precision of my own thought.

jacquesm 12/4/2025||
> with the terse was and precision of my own thought

Hehe. So much for precision ;)

andrei_says_ 12/5/2025||
Autocorrected “terse-ness”
jacquesm 12/5/2025||
Autocorrect is my nemesis. And I suspect it has teamed up with email address completion.
awesome_dude 12/3/2025||||
I mean, this is the other extreme to the post being replied to (either you think it's useless and walk away, or you're failing at your job for not using it)

I personally use it, I find it helpful at times, but I also find that it gets in my way, so much so it can be a hindrance (think losing a day or so because it's taken a wrong turn and you have to undo everything)

FTR, the market is currently punishing people who DO use it (CVs are routinely being dumped at the merest hint of AI being used in their construction/presentation, interviewers dumping anyone they think is using AI for "help", code reviewers dumping any take-home assignments that have even COMMENTS massaged by AI)

spamizbad 12/3/2025||||
> To me, any software engineer who tries an LLM, shrugs and says “huh, that’s interesting” and then “gets back to work” is completely failing at their actual job,

I don't understand why people seem so impatient about AI adoption.

AI is the future, but many AI products aren't fully mature yet. That lack of maturity is probably what is dampening the adoption curve. To unseat incumbent tools and practices you either need to do so seamlessly OR be 5-10x better (only true for a subset of tasks). In areas where either of these cases applies, you'll see some really impressive AI adoption. In areas where AI's value requires more effort, you'll see far less adoption. This seems perfectly natural to me and isn't some conspiracy - AI needs to be a better product, and good products take time.

sensanaty 12/4/2025||
> I don't understand why people seem so impatient about AI adoption.

We're burning absurd, genuinely farcical amounts of money on these tools now, so of course they're impatient. There's Trillions (with a "T") riding on this massive hypewave, and the VCs and their ilk are getting nervous because they see people are waking up to the reality that it's at best a kinda useful tool in some situations and not the new God that we were promised that can do literally everything ever.

spamizbad 12/4/2025||
Well that's capital's problem. Don't make it mine!
ElijahLynn 12/3/2025|||
Well said!
pjmlp 12/3/2025|||
In European consulting agencies the trend now is to make AI part of each RFP reply: you won't get past the sales team if AI isn't crammed in as part of the solution being delivered, and we get evaluated on it.

This takes all the joy away; even traditional maintenance projects at big corps seem attractive nowadays.

mr_toad 12/3/2025||
I remember when everything had to have the word ‘digital’ in it. And I’m old enough to remember when ‘multimedia’ was a buzzword that was crammed in anywhere it would fit.
Fordec 12/3/2025|||
You know what, this clarifies something for me.

PC, Web and Smartphone hype was based on "we can now do [thing] never done before".

This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"

It all feels much more like a wealth grab for the corporations than a promise of improving a standard of living for end customers. Much closer to a Cloud or Server (replacing Mainframes) cycle.

burningChrome 12/3/2025|||
>> This time out it feels more like "we can do existing [thing], but reduce the cost of doing it by not employing people"

I was doing RPA (robotic process automation) 8 years ago. Nobody wanted it in their departments. Whenever we would do presentations, we were told to never, ever, ever talk about this technology replacing people - it only removes the mundane work so teams can focus more on the bigger scope stuff. In the end, we did dozens and dozens of presentations and only two teams asked us to do some automation work for them.

The other leaders had no desire to use this technology because they were not only fearful of it replacing people on their teams, they were fearful it would impact their budgets negatively so they just quietly turned us down.

Unfortunately, you're right because as soon as this stuff gets automated and you find out 1/3rd of your team is doing those mundane tasks, you learn very quickly you can indeed remove those people since there won't be enough "big" initiatives to keep everybody busy enough.

The caveat was even on some of the biggest automations we did, you still needed a subset of people on the team you were working with to make sure the automations were running correctly and not breaking down. And when they did crash, since a lot of these were moving time sensitive data, it was like someone just stole the crown jewels and suddenly you need two war rooms and now you're ordering in for lunch.

james_marks 12/3/2025||||
Yes and no. PC, Web, etc advancements were also about lowering cost. It’s not that no one could do some thing, it’s that it was too expensive for most people, e.g. having a mobile phone in the 80’s.

Or hiring a mathematician to calculate what is now done in a spreadsheet.

munificent 12/3/2025|||
100%.

"You should be using AI in your day to day job or you won't get promoted" is the 2025 equivalent of being forced to train the team that your job is being outsourced to.

Paianni 12/3/2025||||
or 'interactive' or 'cloud' (early 2010s).
pjmlp 12/3/2025|||
Same, doesn't make this hype phase more bearable though.
bwfan123 12/3/2025|||
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype product

I think there is a broader dichotomy between the people-persuasion plane and the real-world-facts plane. In the people-persuasion plane, it is all about convincing someone of something; hype plays here, and marketing, religion, and political persuasion too. In the real-world plane, it is all about tangible outcomes; working code or results play here, and gravity and electromagnetism too. Sometimes there is a feedback loop between the two. I chose the engineering career because what I produce is tangible, but I realize that a lot of my work is in the people plane.

balamatom 12/3/2025||
>a broader dichotomy between the people-persuasion plane and the real-world-facts plane

This right here is the real thing which AI is deployed to upset.

The Enlightenment values which brought us the Industrial Revolution imply that the disparity between the people-persuasion-plane and the real-world-facts-plane should naturally decrease.

The implicit expectation here is that as civilization as a whole learns more about how the universe works, people would naturally become more rational, and thus more persuadable by reality-compliant arguments and less persuadable by reality-denying ones.

That's... not really what I've been seeing. That's not really what most of us have been seeing. Like, devastatingly not so.

My guess is that something became... saturated? I'd place it sometime around the 1970s, same time Bretton Woods ended and the productivity/wages gap began to grow. Something pertaining to the shared-culture plane. Maybe there's only so much "informed" people can become before some sort of phase shift occurs and the driving force behind decisions becomes some vague, ethically unaccountable ingroup intuition ("vibes", yo), rather than the kind of explicit, systematic reasoning which actually is available to any human, except for the weird fact that nobody seems to trust it very much any more.

bwfan123 12/4/2025|||
> people would naturally become more rational, and thus more persuadable by reality-compliant arguments and less persuadable by reality-denying ones.

likely not. Our natural state tuned by evolution is one of an emotional creature persuaded by pleasing rhetoric - like a bird which responds to another bird's call.

balamatom 12/4/2025||
What's irrational about a bird responding to another bird's call, though?

I always figured, unlike human speech, bird song contained only truth - 100% real-time factual representation of reproductive fitness/compatibility, 0% fractal bullshitting (such as arguing about definitions of abstract notions, or endless rumination and reflection, or command hierarchies built to leak, or...).

Although who knows, really! I'm just guessing here. Maybe what we oughtta do is ask some actual ornithologists to ask an actual parrot to translate for us the songs of its distant relatives. Sounds crazy enough to work -- though probably not in captivity.

Overall I see your point, and I see many people sharing that perspective; personally, I find it rather disheartening. Tbh I'm not even sure what would be a convincing argument one way or the other.

saubeidl 12/3/2025|||
This sounds a lot like the Marxist concept of alienation: https://en.wikipedia.org/wiki/Marx%27s_theory_of_alienation
balamatom 12/4/2025||
Probably what it is, yeah. It's in the water.
hinkley 12/3/2025|||
I wonder if objectively Seattle got hit harder than SF in the last bust cycle. I don’t have a frame of comparison. But if the generational trauma was bigger then so too would the backlash against new bubbles.
bartread 12/3/2025|||
I've never worked at Microsoft. However, I do have some experience with the company.

I worked building tools within the Microsoft ecosystem, both on the SQL Server side, and on the .NET and developer tooling side, and I spent some time working with the NTVS team at Microsoft many years ago, as well as attending plenty of Microsoft conferences and events, working with VSIP contacts, etc. I also know plenty of people who've worked at or partnered with Microsoft.

And to me this all reads like classic Microsoft. I mean, the article even says it: whatever you're doing, it needs to align with whatever the current key strategic priority is. Today that priority is AI, 12 years ago it was Azure, and on and on. And, yes, I'd imagine having to align everything you do to a single priority regardless of how natural that alignment is (or not) gets pretty exhausting, and I'd bet it's pretty easy to burn out on it if you're in an area of the business where this is more of a drag and doesn't seem like it delivers a lot of value. And you'll have to dogfood everything (another longtime Microsoft pattern) core to that priority even if it's crap compared with whatever else might be out there.

But I don't think it's new: it's simply part and parcel of working at Microsoft. And the thing is, as a strategy it's often served them well: Windows[0], Xbox, SQL Server, Visual Studio, Azure, Sharepoint, Office, etc. Doesn't always work, of course: Windows Phone went really badly, but it's striking that this kind of swing and a miss is relatively rare in Microsoft's history.

And so now, of course, they're doing it with AI. And, of course, they're a massive company, so there will be plenty of people there who really aren't having a good time with it. But, although it's far from a foregone conclusion, it would not be a surprise for Microsoft to come from behind and win by repeating their usual strategy... again.

[0] Don't overread this: I'm not necessarily saying I'm a huge fan. In fact I do think Windows, at its core, is a decent operating system, and has been for a very long time. On the back end it works well, and I have no complaints. But I viscerally despise Windows 11 as a desktop operating system. That's right: DESPISE. VISCERALLY. AT A MOLECULAR LEVEL.

mips_avatar 12/3/2025|||
I do assume that; I legitimately think it's the most important thing happening in tech in the next decade. There's going to be an incredible amount of traditional software written to make this possible (new databases, frameworks, etc.), and I think people should be able to see the opportunity, but the awful cultures in places like Microsoft are hindering this.
jacquesm 12/3/2025|||
> But moving toward one pole moves you away from the other.

My assumption detector twigged at that line. I think this is just replacing the dichotomy with a continuum between two states. But the hype proponents always hope - and in some cases they are right - that those two poles overlap. People make and lose fortunes placing those bets, and you don't necessarily have to be right or wrong in an absolute sense, just right for long enough that someone else will take over your load, hopefully at a higher valuation.

Engineers are not usually the ones placing the bets, which is why they're trying to stay away from hype driven tech (to them it is neutral with respect to the outcome but in case of a failure they lose their job, so better to work on things that are not hyped, it is simply safer). But as soon as engineers are placing bets they are just as irrational as every other class of investor.

pico303 12/3/2025|||
This somewhat reflects my sentiment toward this article. It felt very condescending. The talk of "self-limiting beliefs" and the implication that Seattle engineers are lesser than San Francisco engineers because they haven't bought into AI... well, neither have all the SF engineers.

One interesting takeaway from the article and the discussion is that there seem to be two kinds of engineers: those that buy into the hype and call it "AI", and those that see it for the fancy search engine it is and call it an "LLM". I'm pretty sure these days when someone mentions "AI" to me I roll my eyes. But if they say "LLM", ok, let's have a discussion.

antonvs 12/3/2025|||
> often companies with real products will mix in tidbits of hype

The wealthiest person in the world relies entirely on his ability to convince people to accept hype that surpasses all reason.

layer8 12/3/2025|||
> So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.

I understood “they think they can’t” to refer to the engineers thinking that management won’t allow them to, not to a lack of confidence in their own abilities.

balamatom 12/3/2025|||
>So, no. I don't think "engineers don't try because they think they can't." I think engineers KNOW they CAN and resent being asked to look pretty and do nothing of value.

Spot. Fucking. On.

Thank you.

zzzeek 12/3/2025|||
the list of people who write code, use high quality LLM agents (not chatbots) like Claude, and report not just having success with the tools but watching the tools change how they think about programming, continues to grow. The sudden appearance of LLMs has had a really destabilizing effect on everything, and a vast portion of what LLMs can do and/or are being used for runs from intellectually stifling (using LLMs to write your term papers) to revolting (all kinds of non-artists/writers/musicians using LLMs to suddenly think they are "creators", displacing real artists, writers, and musicians) to utterly degenerate (political/sexual deepfakes of real people, generation of antivax propaganda, etc). Put on top of that the way corporate America is absolutely doing the very familiar "blockchain" dance on this, insisting everyone has to do AI all the time everywhere, and you have a huge problem that will hopefully shake out some in the coming years.

But despite all that, for writing, refactoring, and debugging computer code, LLM agents are still completely game changing. All of these things are true at the same time. There's no way someone who works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it might not be for them, but to claim it's "hype", that's not possible.

59nadir 12/4/2025|||
> There's no way someone who works with real code all day could spend an honest few weeks with a tool like Claude and come away calling it "hype". Someone might still not prefer it, or it might not be for them, but to claim it's "hype", that's not possible.

I've tried implementing features with Claude Code Max and if I had let that go on for a week instead of just a couple of days I would've lost a week's worth of work (it was pretty immediately obvious that it was too slow at doing pretty much everything, and even the slightest interaction with the LLM caused very long round-trips that would add additional time, over and over and over again). It's possible people simply don't do the kind of things I do. On the extreme end of that, had I spent my days making CRUD apps I probably would've thought it was magic and a "game changer"... But I don't.

I actually don't have a problem believing that there are people who basically only need to write 25% of their code now; if all you're doing for work is gluing together libraries and writing boilerplate then of course an LLM is going to help with that, you're probably the 1000th person that day to ask for the same thing.

The one part I would say LLMs seem to help me with is medium-depth questions about DirectX12. Not really how to use it, but parts of the API itself. MSDN is good for learning about it, but I would concede that LLMs have been useful for just getting more composite knowledge of DX12.

P.S.:

I have found that very short completions, 1-3 lines, are a lot more productive for me personally than any kind of "generate this feature", or even function-sized generation. The reason is likely that LLMs just suck at the things I do, but they can figure out that a pattern exists in the immediate context and just spit out that pattern with some context clues nearby. That remains my best experience with any and all LLM-assisted coding. I don't use it often because we don't allow LLMs at work, but I have a keybind for querying for a completion when I do side projects.

zzzeek 12/4/2025||
My current job/role combination has me working on a variety of projects which feature tasks to be done in Python/SQLAlchemy (which I maintain), Go, k8s, Ansible, Bash, Groovy, Java, TypeScript, JavaScript, etc. If I'm doing an architecture-intensive thing in SQLAlchemy, obviously I'm not going to say "Claude here go do this feature for me". I will have it do things like write change notes (where I'll write out the changelog in the convoluted and overly technical way I can do in 10 seconds, and it produces something presentable and readable from it), set up test cases, and sometimes I will give it very specific instructions for a large refactoring that has a predictable pattern (basically, instead of me figuring out a complex search-and-replace or doing it manually). For stuff I do in Ansible and especially Groovy (a horrible language which heavily resists being lintable), these are very simple declarative playbooks or Jenkins pipeline jobs, and I use Claude heavily to write out directives and such because it will do so without syntax errors and without me having to google every individual pattern or directive; it's much easier to check what it writes and debug from there. But I'm also not putting Claude in charge in these places; it's doing the boring stuff for me, and doing it a lot faster and without my having to spend cognitive overhead (which is at a premium when you're in your late 50s like me).

> The one part I would say LLMs seem to help me with is medium-depth questions about DirectX12. Not really how to use it, but parts of the API itself. MSDN is good for learning about it, but I would concede that LLMs have been useful for just getting more composite knowledge of DX12.

See, there you go - I have things like this to figure out many times per week, and so many of them are one-off things I really don't need to learn deeply at the moment (like TypeScript). It's also very helpful to bounce ideas off: when I need to achieve something in the Go/k8s realm, it can sanity-check how I'm approaching a problem and often suggest other ways that I would not have considered (which it knows because it's been trained on millions of tech blogs).

sensanaty 12/4/2025||||
> the list of people who write code, use high quality LLM agents (not chatbots) like Claude, and report not just having success with the tools but watching the tools change how they think about programming, continues to grow.

My company is basically writing blank cheques for "AI" (aka LLM; I hate the way we've poisoned AI as a term) tooling, so that people can use any and all tooling they want and see what works and what doesn't. This is a company with ~1500 engineers, ranging from hardware engineers building POS devices to junior frontenders building out our simplest UIs. There are also a whole lot more people who aren't technical, and they're also encouraged to use any and all AI tooling they can.

Despite the entire company trying to figure out how to use these effectively precisely because we're trying to look at things objectively and separate out the hype from the reality, the only people I've seen with any kind of praise so far (and this has been going on since the early ChatGPT days) have been people in Marketing and Sales, because for them it doesn't matter if the AI hallucinates some pure bullshit since that's 90% of their job anyway.

We have spent god knows how much time and resources trying to get these tools doing anything more useful than simple demos that get thrown out immediately, and it's just not there. No one is pushing 100x the code or features they were before, projects aren't finishing any faster than they were before, and nobody even bothers turning on the meeting transcription tools anymore because more often than not they'll interpret things said in the meeting just plain wrong or even make up entire discussion points that never happened.

Just recently, like last-week recently, we had some idiotic PR review bot from coderabbit or some other such company get activated. I've never seen so many people complain all at once on Slack; there was a thread with hundreds of individuals all saying how garbage it was and how much it was distracting from reviews. I didn't see a single person say they liked the tool - not one single person had anything good to say about it.

So as far as I'm concerned, it's just a MASSIVE fucking hype bubble that will ultimately spawn some tooling that is sorta useful for generating unimportant scripts, but little else.

zzzeek 12/4/2025||
Never give an LLM to your junior engineers. The LLM itself is mostly like a junior engineer and will make a complete mess of things if not guided by someone with a lot of experience.

Basically if people are producing code or documentation that looks like an LLM wrote it, that's not really what I see as the model that makes these tools useful.

senordevnyc 12/3/2025|||
The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions. It’s sad; I always thought of my fellow engineers as more open-minded.
blibble 12/3/2025|||
> The last few years have revealed the extent to which HN is packed with middle-aged, conservative engineers who are letting their fear and anxiety drive their engineering decisions.

so, people with experience?

senordevnyc 12/3/2025|||
Obviously. Turns out experience can be self-limiting in the face of paradigm-shifting innovation.

In hindsight it makes sense, I’m sure every major shift has played out the same way.

bigstrat2003 12/3/2025||
> Turns out experience can be self-limiting in the face of paradigm-shifting innovation.

It also turns out that experience can be what enables you to not waste time on trendy stuff which will never deliver on its promises. You are simply assuming that AI is a paradigm shift rather than a waste of time. Fine, but at least have the humility to acknowledge that reasonable people can disagree on this point instead of labeling everyone who disagrees with you as some out of touch fuddy-duddy.

senordevnyc 12/4/2025||
I'm not assuming anything, I'm relying on my own experience of being an engineer for two decades and building stuff for all kinds of organizations in all kinds of stacks and languages. AI has radically increased my velocity and quality, though it's got a steep learning curve of its own, and many frustrations to deal with. But it's pretty obviously a paradigm shift, and not "trendy stuff which will never deliver on its promises". Even if the current LLMs never improve at all from here, they're still incredibly useful tools.
zzzeek 12/3/2025|||
I've been programming for more than 40 years
nutjob2 12/4/2025|||
Do you mean people who have been through several hype cycles and know their nature and how new novel tech, no matter how useful, takes time to be integrated and understood?

Get over yourself, and try to tone down the bigotry and stereotyping.

binary132 12/3/2025|||
Bitcoin is at 93k so I don’t think it’s entirely accurate to say blockchain is insubstantive or without value
rpcope1 12/3/2025|||
There can be a bunch of crazy people trading each other various lumps of dog feces for increasing sums of cash, that doesn't mean dogshit is particularly valuable or substantive either.
triceratops 12/3/2025|||
I'd argue even dogshit has more practical use than Bitcoin, if no one paid money for Bitcoin. You can throw it for self-defence, compost it (under high heat to kill the germs), put it on your property to scare away raccoons (it works sometimes).
FrancisMoodie 12/4/2025||
Bitcoin and other crypto coins have a practical use: you can use them to buy whatever is being sold on the dark web, with the main product categories being drugs and guns. I honestly believe much of the valuation of crypto is tied to these marketplaces.
nutjob2 12/4/2025||
Don't forget scamming people out of billions of dollars of their hard earned life savings.
sodafountan 12/4/2025|||
And by "dog feces," I assume you mean fiat currency, correct?

Cryptocurrency solves the money-printing problem we've had around the world since we left the gold standard. If governments stopped making their currencies worthless, then bitcoin would go to zero.

skybrian 12/3/2025||||
This seems to be almost purely bandwagon value, like preferring Coca-Cola to some other drink. There are other blockchains that are better technically along a bunch of dimensions, but they don't have the mindshare.

Bitcoin is probably unkillable. Even if it were to crash, it won't be hard to round up enough true believers to boost it up again. But it's technically stagnant.

OvidNaso 12/3/2025|||
True, but then so is a lot of "tech". There were certainly at least equivalent social applications before and all throughout Facebook's dominance, but like Bitcoin, the network effect becomes primary after a minimum feature set.
skybrian 12/3/2025||
For Bitcoin, it doesn't exactly seem to be a network effect? It's not like choosing a chat app because that's what your friends use.

Many other cryptocurrencies are popular enough to be easily tradable and have features to make them work better for trade. Also, you can speculate on different cryptocurrencies than your friends do.

sodafountan 12/4/2025|||
Technically stagnant is a good thing; I'd prefer the term technically mature. It's accomplished what it set out to do, which is to be a decentralized, anonymous form of digital currency.

The only thing that MIGHT kill it is if governments stopped printing money.

venturecruelty 12/3/2025||||
Beanie Babies were trading pretty well, too, although it wasn't quite "solving sudokus for drugs", so I guess that's why they didn't have as much staying power.
mgaunard 12/3/2025||||
very little of the trading actually happens on the blockchain, it's only used to move assets between trading venues.

The values of bitcoin are:

- easy access to trading for everyone, without institutional or national barriers

- high leverage to effectively easily borrow a lot of money to trade with

- new derivative products that streamline the process and make speculation easier than ever

The blockchain plays very little part in this. If anything it makes borrowing harder.

blibble 12/3/2025||
I agree with "easy access to trading for everyone, without institutional or national barriers"

how on earth does bitcoin have anything to do with borrowing or derivatives?

in a way that wouldn't also work for beanie babies

mgaunard 12/4/2025||
Those are the main innovations tied to crypto trading. They do indeed have little to do with the blockchain or bitcoin itself, and do apply to any asset.

There are actually several startups whose pitch is to bring back those innovations to equities (note that this is different from tokenized equities).

airstrike 12/3/2025||||
If you can't point to real use cases at scale, it's hard to argue it has intrinsic value, even though it may have speculative value.
adastra22 12/3/2025||||
With almost zero fundamentals. That’s the part you are glossing over.
ejoso 12/3/2025||||
Uh… So the argument here is that anticipated future value == meaningful value today?

The whole cryptocurrency world requires evangelical buy-in. But there is no directly created functional value other than a historic record of transactions and hypothetical decentralization. It doesn’t directly create value. It’s a store of it - again, assuming enough people continue to buy into the narrative so that it doesn’t dramatically deflate when you need to recover your assets. States and other investors are helping make stability happen to maintain it as a value store, but that requires the story to keep propagating.

senordevnyc 12/3/2025|||
You’re wasting your breath. Bitcoin will be at a million in 2050 and you’ll still get downvoted here for suggesting it’s anything other than a stupid bubble that’s about to burst any day now.
nutjob2 12/4/2025|||
Quite the opposite: if you need to defend a technical idea with its price in a largely speculative market, you've already lost the argument.

That people are greedy and ignorant and bid up BTC doesn't prove anything about its value.

binary132 12/4/2025|||
It’s hard to understand how people can be so determined to ignore reality.
throwout4110 12/3/2025||
> There's a dichotomy in the software world between real products (which have customers and use cases and make money by giving people things they need) and hype products (which exist to get investors excited, so they'll fork over more money).

AI is not both of these things? There are no real AI products that have real customers and make money by giving people what they need?

> LLMs are a more substantive technology than blockchain ever was, but like blockchain, their potential has been greatly overstated.

What do you view as the potential that’s been stated?

mingus88 12/3/2025|||
Not OP but for starters LLMs != AI

LLMs are not an intelligence, and people who treat them as if they are infallible Oracles of wisdom are responsible for a lot of this fatigue with AI

pixl97 12/3/2025||
>Not OP but for starters LLMs != AI

Please don't do this; don't just make up your own definitions.

Pretty much anything and everything that uses neural nets is AI. Just because you don't like how the term has been defined since the beginning doesn't mean you get to reframe it.

In addition, if humans are not infallible oracles of wisdom, then by your definition they wouldn't be an intelligence either.

ponector 12/3/2025||
Why then is there an AI-powered dishwasher, but no AI car?
fedsocpuppet 12/3/2025||
https://www.tesla.com/fsd ?

I also don't understand the LLM ⊄ AI people. Nobody was whining about pathfinding in video games being called AI lol. And I have to say LLMs are a lot smarter than A*.
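For anyone who hasn't touched game AI: A* is just best-first search guided by an admissible heuristic. A minimal grid version, with a made-up map, for comparison's sake:

    # Textbook A* on a small grid; 0 = open, 1 = wall.
    # Manhattan distance is the (admissible) heuristic.
    import heapq

    def astar(grid, start, goal):
        def h(p):
            return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        frontier = [(h(start), 0, start)]  # (f = g + h, g, position)
        best = {start: 0}
        while frontier:
            _, g, pos = heapq.heappop(frontier)
            if pos == goal:
                return g  # length of a shortest path
            r, c = pos
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                    ng = g + 1
                    if ng < best.get((nr, nc), float("inf")):
                        best[(nr, nc)] = ng
                        heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
        return None  # goal unreachable

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))  # -> 6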

ponector 12/3/2025|||
Cannot find any mention of AI there.

Also it's funny how they add (supervised) everywhere. It looks like "Full self driving (not really)"

fedsocpuppet 12/3/2025||
Yes, one needs some awareness of the technology. Computer vision: unambiguously AI. Motion planning: there are classical algorithms, but I believe Tesla and Waymo both use NNs here too.

Look, I don't like the advertising of FSD, or Musk himself, but we without a doubt have cars using significant amounts of AI that work quite well.

nutjob2 12/4/2025||
None of those things contain actual intelligence. On that basis any software is "intelligent". AI is the granddaddy of hype terms, going back many decades, and has failed to deliver and LLMs will also fail to deliver.
bigstrat2003 12/3/2025|||
It's because nobody was trying to take video game behavior scripts and declare them the future of all things technology.
fedsocpuppet 12/3/2025||
Ok? I'm not going to change the definition of a 70-year-old field because people are annoyed at ChatGPT wrappers.
Fraterkes 12/3/2025|||
A way to state this point that you may find less uncharitable is that a lot of current LLM applications are just very thin shells around ChatGPT and the like.

In those cases the actual "new" technology (i.e., not necessarily the underlying AI) is not as substantive and novel (to me at least) as a product whose internals are not just an (existing) LLM.

(And I do want to clarify that, to me personally, this tendency towards 'thin-shell' products is kind of an inherent flaw with the current state of AI. Having a very flexible LLM with broad applications means that you can just put ChatGPT in a lot of stuff and have it more or less work. With the caveat that what you get is rarely a better UX than what you'd get if you'd just prompted an LLM yourself.

When someone isn't using LLMs, in my experience you get more bespoke engineering. The results might not be better than an LLM's, but obviously that bespoke code is much more interesting to me as a fellow programmer)

throwout4110 12/3/2025|||
Yes ok then I definitely agree
pydry 12/3/2025|||
Shells around chatgpt are fine if they provide value.

Way better than AI jammed into every crevice for no reason.

groos 12/3/2025||
It's not just that AI is being pushed onto employees by the tech giants - this is true - but that the hype of AI as a life-changing tech is not holding up, and people within the industry can easily see this. The only life-changing thing it's doing is eliminating jobs, in the tech industry and outside it, as a self-fulfilling prophecy driven by CEOs who have bet too much on AI. Everyone currently agrees that there is no return on all the money spent on AI. Some players may survive and do well in the future, but for the majority there is only the prospect of pain, and this is what all the negativity is about.
pnathan 12/3/2025|
As a layoff justification and a hurry-up tool, it is pretty loathsome. People depend on their jobs for their housing, food, etc.
elzbardico 12/3/2025|||
More than this, man. AI is making me re-appreciate part of the Marxist criticism of capitalism. The concept of worker alienation could easily be extended, in new forms, to the labor situation in an AI-based economy. FWIW, humans derive a lot of their self-evaluation as people from labor.
adastra22 12/3/2025||
Marx was correct in his identification of the problem (the communist manifesto still holds up today). Marx went off the rails with his solution.
venturecruelty 12/4/2025||
Getting everyone to even agree that this is a problem is impossible. I'm open to the universe of solutions, as long as it isn't "Anthropic and OpenAI get another $100 billion while we starve". We can probably start there.
alexashka 12/4/2025|||
It's a problem, it's just not the root problem.

The root problem is nepo babies.

Whether it's capitalism or communism or whatever China has currently - it's all people doing everything to give their own children every unfair advantage and lie about it.

Why did people flee to America from Europe? Because Europe was nepo baby land.

Now America is nepo baby land and very soon China will be nepo baby land.

It's all rather simple. Western 'culture' is convincing everyone the nepo babies running things are actually uber experts because they attended university. Lol.

voidhorse 12/4/2025|||
Yeah, unfortunately Marx was right about people not realizing the problem, too. The proletariat drowns in false consciousness :(

In reality, the US is finally waking up to the fact that the "golden age" of capitalism in the US was built upon the lite socialism of the New Deal, and that all the BS economic opinions the average American has subscribed to over the past few decades were just propaganda. Anyone with half a brain cell could see from miles away that since Reaganomics we've had nothing but a system that leads to gross accumulation at the top, and at the top alone, and that kind of single-variable maximization is a surefire way, in any complex system, to produce instability and eventual collapse.

adastra22 12/4/2025||
There's a false dichotomy in that conclusion.
int_19h 12/4/2025|||
> humans derive a lot of their self-evaluation as people from labor.

We're conditioned to do so, in large part because this kind of work ethic makes exploitation easier. Doesn't mean that's our natural state, or a desirable one for that matter.

"AI-based economy" is too broad a brush to be painting with. From the Marxist perspective, the question you should be asking is: who owns the robots? and who owns the wealth that they generate?

paxys 12/3/2025||
All big corporate employees hate AI because it is incessantly pushed on them by clueless leadership and mostly makes their job harder. Seattle just happens to have a much larger percent of big tech employees than most other cities (>50% work for Microsoft or Amazon alone). In places like SF this gloom is balanced by the wide eyed optimism of employees of OpenAI, Anthropic, Nvidia, Google etc. and the thousands of startups piggybacking off of them hoping to make it big.
artifaxx 12/3/2025||
Definitely - AI sentiment is positive among most people at the small startup I work at in the Seattle area. I do see the "AI fatigue" too; I bet the majority of it comes from AI being used as a repeated layoff rationalization. Personally, AI is a tool, one of the more useful ones (e.g. Claude and Gemini thinking models make quite helpful code reviewers once given a checklist). The hype often overshadows these benefits.
mips_avatar 12/3/2025||
That's probably the difference
r0m4n0 12/3/2025||
I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread.

I do believe that product leadership is shoehorning it into every nook and cranny of the world right now, and there are reasons to be annoyed by that, but there are also countless incredible use cases that are mind-blowing, that you can use it for every day.

I need to write about some absolutely life-changing scenarios, including: it got me thousands of dollars after drafting a legal letter quoting laws I knew nothing about; saved me countless hours troubleshooting an RV electrical problem; found bugs in code that I wrote that were missed by everyone around me; impressed my wife with a seemingly custom week-long meal plan that fit her short-term no-soy/no-dairy allergy diet; helped me solve an issue with my house that a trained professional completely missed the mark on; completely designed and wrote the code for a Halloween robot decoration I had been trying to build for years; and it saves my wife, an audiobook narrator, hundreds of hours by summarizing characters for her audiobooks so she doesn't have to read an entire book before she narrates the voices.

I'm worried about some of the problems LLMs will create for humanity in the future, but those are problems we can solve in the future too. Today it's quite amazing to have these tools at our disposal, and as we add them in smart ways to the systems that exist today, things will only get better.

Call me glass half full... but maybe it's because I don't live in Seattle

nutjob2 12/4/2025||
It's not about the tech; the negativity is due to the mismatch between hype and reality. LLMs are incredibly useful for certain things, like the ones you have found. Other things simply don't work.

Is it going to deliver on even 1% of the hype any time soon? Unlikely.

senordevnyc 12/4/2025|||
1% of what hype? AGI? Because other than AGI, I think it's delivered on most of the hype already.

I think our tooling is holding us back more than the actual models, and even if they never advance at all from here (unlikely), we'll still get years of improvement and innovation.

r0m4n0 12/4/2025|||
I'm mostly saying the hype is real on a lot of things today. Is it working perfectly for everything? Definitely not, but I'm of the opinion that given another 10 years it just might be. I'm among the many working to make it better, and all I see are a million possibilities for what can be done; we have only worked through a few of the issues. Did it change EVERYTHING overnight? No. It was a big breakthrough; the rest is still catching up.
nutjob2 12/5/2025||
Hype along the lines of people not having work anymore, "AGI" being around the corner, etc. - that's real?

Yes strong AI is always about 10 years off.

But yes any new tech takes time to work itself out. No question that LLMs are useful but they will wildly under-deliver by current hype standards. They have their own strengths and weaknesses like everything, but they can be very misleading, thus the hype.

Esophagus4 12/4/2025|||
> I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread

Yep.

I feel like actually, being negative on AI is the common view now, even though every other HN commenter thinks they’re the only contrarian in the world to see the light and surely the masses must be misguided for not seeing it their way.

The same way people love to think they’re cooler than the masses by hating [famous pop artist]. “But that’s not real music!” they cry.

And that’s fine. Frankly, most of my AI skeptic friends are missing out on a skill that’s helped me a fair bit in my day to day at work and casually. Their loss.

Like it or not, LLMs are here to stay. The same way social media boomed and was here to stay, the same way e-commerce boomed and was here to stay… there’s now a whole new vertical that didn’t exist before.

Of course there will be washouts over time as the hype subsides, but who cares? LLMs are still wicked cool to me.

I don’t even work in AI, I just think they’re fascinating. The same way it was fascinating to me when I made a computer say “Hello, world!” for the first time.

doom2 12/4/2025|||
I think the disconnect for me is that I want AI to do a bunch of mundane stuff in my job where it is likely to be discouraged so I can focus on my work. My employer's CEO just implemented an Elon-style "top 5" bi-weekly report. Would they find it acceptable for me to submit AI-generated writing? I just had to do my annual self and peer reviews. Is AI writing valid here? A company wanted to put me, a senior engineer, through a five stage interview process, including a software-graded Leetcode style assessment. Should I be able to use AI to complete it?

These aren't meant to be gotcha rhetorical questions, just parts of my professional life where AI _isn't_ desirable by those in power, even if they're some of the only real world use cases where I'd want to use it. As someone said upthread, I want AI to do my dishes and laundry so I can focus on leisure and creative pursuits (or, in my job, writing code). I don't want AI doing creative stuff for me so I can do dishes and laundry.

Capricorn2481 12/4/2025|||
> I feel like actually, being negative on AI is the common view now, even though every other HN commenter thinks they’re the only contrarian in the world to see the light and surely the masses must be misguided for not seeing it their way

I have mostly seen people on HN criticizing the few people in tech who have attached themselves to the hype and senselessly push it everywhere, not "the masses." The masses don't particularly like AI. It seems like it's only people hyping it that think everyone but Luddites are into it.

You're painting anti-AI sentiment as both a popular bandwagon that everyone joins to be cool, and as not actually that big, with everyone loving AI. Which is it?

latexr 12/4/2025||
> I feel like there is an absurd amount of negative rhetoric about how AI doesn't have any real world use cases in this comment thread.

What I feel is people are denouncing the problems and describing them as not being worth the tradeoff, not necessarily saying it has zero use cases. On the other end of the spectrum we have claims such as:

> countless incredible use cases that are mind blowing, that you can use it every day for.

Maybe those blow your mind, but not everyone’s mind is blown so easily.

For every one of your cases, I can give you a counter example where doing the same went horribly wrong. From cases being dismissed due to non-existent laws being quoted, to people being poisoned by following LLM instructions.

> I'm worried about some of the problems LLMs will create for humanity in the future but those are problems we can solve in the future too.

No, they are not! We can’t keep making climate change worse and fix it later. We can’t keep spreading misinformation at this rate and fix it later. We can’t keep increasing mass surveillance at this rate and fix it later. That “fix it later” attitude is frankly naive. You are falling for the narrative that got us into shit in the first place. Nothing will be “fixed later”, the powerful actors will just extract whatever they can and bolt.

> and as we add them in smart ways to systems that exist today, things will only get better.

No, they will not. Things are getting worse now, it’s absurd to think it’s inevitable they’ll get better.

r0m4n0 12/4/2025||
Yea, I do think you make a lot of valid points about the tradeoffs of these advances. Anything we do to progress humanity technologically will have negative outcomes on everything else; as humans make things better for ourselves, it will almost always rely on destroying something in nature in return. The capitalistic world we live in will almost always drive that to the extreme, quickly.

As for the other points: are the LLMs wrong sometimes? Yes. But so are humans, so it's not really a novel thing to point out. The question is, are they more correct than humans? I have seen that they can be more accurate, less biased, etc., and we are driving toward higher accuracy and other ways to make them right.

And the "fix it later" attitude isn't right for everything; I was referring to the accuracy issues that people often point out as evidence that AI is all hype. The things you mention are side effects, and those should be controlled, because the cat is out of the bag. You can spend your time yelling at the clouds or try to do something to make it better. I assure you, capitalism is a tough enemy. This is no different from another kind of combustion engine being invented, one with negative consequences for the environment in different ways.

I'm not disagreeing with you... mostly just saying: the hype is warranted

latexr 12/4/2025||
> are the LLMs wrong sometimes, yes. But so are humans so it's not really a novel thing to point out.

The thing with humans is that you can build trust. I know exactly who to ask if I have a question about music, or medicine, or a myriad of other topics. I know those people will know the answers and be able to assess their level of confidence in them. If they don’t know, they can figure it out. If they are mistaken, they’ll come back and correct themselves without me having to do anything. Comparing LLMs to random humans is the wrong methodology.

> This is no different than another type of combustable engine that was created that has negative consequences on the environment in different ways.

Combustion engines don’t make it easy to spy on people, lie to them, and undermine trust in democracy.

whoamiopa 12/3/2025||
Very new ex-MSFT here. I couldn’t relate more to your friend's experience. That’s exactly what happened. I left Microsoft about 5 weeks ago and it’s been really hard to detox from that culture.

AI pushed down everywhere. Sometimes shitty AI that needed to be proven out at all costs because it had to live up to the hype.

I was in one such AI org, and even there several teams felt the pressure from SLT and a culture drift toward a dysfunctional environment.

Such pressure to use AI at all costs, as other folks from Google mentioned, has been a secret ingredient of a bitter burnout. I’m in therapy and on medication now to recover from it.

DaiPlusPlus 12/4/2025||
(Fellow ex-msftie here too; but I left for a startup almost exactly 10 years ago, and I miss how that older culture is apparently gone).

What I don't understand is where the AI irrationality is coming from: the C-suite (still in B37?) are all incredibly smart people who must surely be aware of the damage this top-down policy is having on morale, product quality, and how the company is viewed by its own customers - and yet they press on.

I'm not going to pretend things were being run perfectly when I was at MS: there were plenty of slow-motion mistakes playing-out right in front of us all[1] - and as I look back, yes, I was definitely frustrated at these clear obvious mistakes and their resultant unimaginable waste of human effort and capital investment.

Actually, come to think of it... maybe things really haven't changed that much? Clearly something neurotoxic got into the Talking Rain cans sometime around 2010-2011 - it temporarily abated in 2014-2015, then came back twice as hard in 2022.

-------

[1]: Windows 8 and the Start Screen; the Surface RT; Visual Studio 2012 with SHOUTY MENUS and monochrome toolbar icons; the laggy and sluggish Office 2013; the crazy simultaneous development of entirely separate reimplementations of the Office apps for iOS, Android, WinRT, and the web, all while ignoring the clear market demand for a cloud-y version of Active Directory without on-prem DCs (instead we got Entra, then Intune).

mips_avatar 12/4/2025|||
Best of luck with this transition. I had a really hard time when I left Microsoft. It took longer than I expected to feel better, but I do now.
sph 12/4/2025||
Jesus, is it actually a thing to grieve after leaving a job? (doesn't sound like you were laid off)
valbaca 12/5/2025|||
Yes, absolutely
mips_avatar 12/4/2025|||
Yes
droopybuns 12/3/2025|||
Hey man - hang in there.

FWIW: I realized this year that there are whole cohorts of management people who have absolutely zero relationship with the words that they speak. Literal tabula rasas who convert their thoughts to new words with no attachment to past statements/goals.

Put another way: Liars exist and operate all around you in the top tier of the FAANGS rn.

ch_fr 12/4/2025||
Reminded me of: https://ludic.mataroa.blog/blog/brainwash-an-executive-today...
vunderba 12/3/2025|
From the article:

> I wanted her take on Wanderfugl, the AI-powered map I've been building full-time.

I can at least give you one piece of advice. Before you decide on a company or product name, take the time to speak it out loud so you can get a sense of how it sounds.

mips_avatar 12/3/2025||
I grew up in Norway, and there's this idea in Europe of someone who breaks from corporate culture and hikes and camps a lot (called Wandervogel in German). I also liked how, when pronounced in Norwegian or Swedish, it sounds like "wander full". I like the idea of someone who is full of wander.
59nadir 12/3/2025|||
In Swedish the G wouldn't be silent so it wouldn't really be all that much like "wonderful"; "vanderfugel" is the closest thing I could come up with for how I'd pronounce it with some leniency.
throw-qqqqq 12/3/2025|||
Same in Danish FWIW.

In English, I’d pronounce it very similar to “wonderful”.

nutjob2 12/4/2025|||
Are you a native speaker? Because said quickly in typical English it sounds like "wonderfukl" which isn't great.
throw-qqqqq 12/5/2025||
Not a native English speaker, no.
adastra22 12/3/2025|||
If OP dropped the g, it would be a MUCH better product name.
throw-qqqqq 12/3/2025|||
Solid advice. Seeing how many here would pronounce it differently, I totally agree hahah
isqueiros 12/4/2025||||
this would make it even closer to the dangerously similar travel planning app "wanderlog"
mips_avatar 12/3/2025|||
I actually own wanderfull.ai
adastra22 12/3/2025||
Dropping an l would be better, I think
Barrin92 12/4/2025|||
I think the more pressing advice here is, limit yourself to one name (https://wanderfugl.com/images/guides.png)

this must be one of the incredible AI innovations the folks in Seattle are missing out on

littlekey 12/3/2025|||
The weird thing is that half of the uses of the name on that landing page spell it as "Wanderfull". All of the mock-up screencaps use it, and at the bottom with "Be one of the first people shaping Wanderfull" etc.

So even the creator can't decide what to call it!

sleazebreeze 12/4/2025||
AI probably generated all of that and the OP didn't even review its output.
epolanski 12/3/2025|||
Also, do it assuming different linguistic backgrounds. It could sound dramatically different to people who speak English as a second language, who are going to be a whole lot of your users, even if the application is in English.
Ekaros 12/3/2025|||
If there is a g in there I will pronounce a g there. I have some standards and that is one. Pronouncing every single letter.
basscomm 12/3/2025|||
> Pronouncing every single letter.

Now I want to know how you pronounce words like: through, bivouac, and queue.

adastra22 12/3/2025||
You don’t pronounce all the letters?
4ggr0 12/4/2025||
no. ever heard of silent letters?
adastra22 12/4/2025||
I'm a native speaker of English, northern California dialect. I pronounce every one of those letters, to varying degrees. Some just affect the mouth shape by subtle amounts, but it is there.
basscomm 12/4/2025||
> I pronounce every one of those letters, to varying degrees

That must be fun any time you talk about Worcestershire (the sauce or the place).

adastra22 12/5/2025||
I was only talking about the examples given.
badc0ffee 12/3/2025||||
That's a gnarly standard you have there.
paddleon 12/3/2025|||
obviously not a native French speaker
mips_avatar 12/3/2025|||
It's pronounced wanderfull in Norwegian
epolanski 12/3/2025|||
And how many of your users are going to have Nordic backgrounds?

I personally thought it was wander _fughel_ or something.

Let alone how difficult it is to remember how to spell it and look it up on Google.

quickthrowman 12/3/2025||||
Just FYI, I would read it out loud in English as “wander fuggle”. I would assume most Americans would pronounce the ‘g’.

I thought ‘wanderfugl’ was a throwback to ~15 years ago when it was fashionable to use a word but leave out vowels for no reason, like Flickr/Tumblr/Scribd/Blendr.

thinkling 12/3/2025||||
The one current paying user of the app I've seen in this discussion called it "Wanderlog". FYI on the stickiness of the current name.
richiebful1 12/4/2025||
wanderlog is a separate web service

https://wanderlog.com/

hbosch 12/3/2025|||
"Wanderful" would be a better name.
adastra22 12/3/2025|||
And if you manage to say it out loud, say it to someone else and ask them to spell it. If they can’t spell it, they can’t type it into the URL bar.
efskap 12/3/2025|||
Maybe that's why they didn't go with the English cognate i.e. Wanderfowl, since being foul isn't great branding
svth 12/4/2025|||
What's wrong with wahn-der-fyoo-gull?
isomorphic 12/3/2025||
What? You don't want travel tips from an itinerant swinger? Or for itinerant swingers?