Posted by mips_avatar 12/3/2025

Everyone in Seattle hates AI (jonready.com)
967 points | 1065 comments
tasspeed 12/3/2025|
A textbook way NOT to roll out AI for your org. AI has genuine benefits for white-collar workers, but they are not trained for the use cases that would actually benefit them, nor in what the tech is actually good at. They are being punished for using the tools poorly (with no guidance on how to use them well), and when they use the tools well, they fear being laid off once an SOP for their AI workflows is written.
danieltanfh95 12/4/2025||
The reason is quite straightforward: LLMs excel at mapping tasks but suck at first-principles reasoning and validation.

When you are working on the AI map app, you are mapping your new idea to code.

When people are working with legacy code and fixing bugs, they are employing reasoning and validation.

The problem is management doesn't allow the engineers to discern which is which and just stuffs it down their throats.

QuiEgo 12/4/2025||
Making no statement about the value or lack of value in AI itself:

When people talk about it like this (this author is hardly the only example), they sound like evangelists proselytizing, and it feels so weird to me.

This piece could basically read: “People in Seattle don’t want to believe in God with me, but people in San Francisco have faith. I’m sad my friends in Seattle won’t be going to heaven.”

elendee 12/4/2025||
For decades, "AI" has been the word for frontier capability that is not fully developed yet. It is not a pitch for end users. Perhaps your product produces quality code. Perhaps it produces highly novel trip itineraries. Say that, but don't say "AI." The end user does not know the difference between a neural net and a for loop.
petterroea 12/4/2025||
Basically everyone I know in engineering shares this resentment in some way, and the AI industry has itself to blame.

People are fed up and burned out from being forced to try useless AI tools by non-technical leaders who understand neither how LLMs work nor where they fall short, and now they resent anything related to AI. But AI companies have a perverse incentive to push AI on people until it finally works, because the winner of the AI arms race won't be the company that waits until it has a perfect, polished product.

I have myself had "fun" trying to discuss LLMs with non-technical people, and met a complete wall trying to explain why LLMs aren't useful for programming - at least not yet. I argue the code is often of low quality, very unmaintainable, and usually not useful outside quick experimentation. They refuse to believe it, even though they do hit a wall with their vibe-coded projects after a few months, when Claude stops generating miracles - they lack the experience with code to understand they are hitting maintainability issues. Combine that with how every "wow!" LLM example is actually just the LLM regurgitating a very common tutorial topic, and people tend to overestimate its abilities.

I use Claude multiple times a week because, even though LLM-generated code is trash, I am open to trying new tools. But my general experience is that Claude is unable to do anything well that I couldn't have my non-technical partner do. It has given me a sort of superiority complex where I immediately disregard the opinion of any developer who thinks it's a wonder-tool, because clearly they don't have high standards for the work they were already doing.

I think most developers with any skill to their name agree. Looking at how Microsoft developers are handling the forced AI push, they do seem desperate: https://news.ycombinator.com/item?id=44050152 - even though they respond with the most "cope" answers I've ever read when confronted about how poorly it is going.

finaard 12/4/2025|
> and met a complete wall trying to explain why LLMs aren't useful for programming - at least not yet. I argue the code is often of low quality, very unmaintainable, and usually not useful outside quick experimentation.

There are quite a few things they can do reasonably well - but they are mostly useful to experienced programmers/architects as a time saver. Working with an LLM for that often reminds me of when I had many young, inexperienced juniors to work with - the LLM comes up with the same nonsense, lies, and excuses, but unlike the inexperienced humans I can insult it guilt-free, which also sometimes gets it back on track.

> They refuse to believe it, even though they do hit a wall with their vibe-coded projects after a few months, when Claude stops generating miracles - they lack the experience with code to understand they are hitting maintainability issues.

For having an LLM operate on a complete code base, there currently seems to be a hard limit of something like 10k-15k LOC, even with the models with the largest context windows. After that, if you want to continue using an LLM, you'll have to make it work only on a specific subsection of the project and manually provide the required context.

Now the "getting to 10k LOC" _can_ be sped up significantly by using a LLM. Ideally refactor stupid along the way already - which can be made a bit easier by building in sensible steps (which again requires experience). From my experiments once you've finished that initial step you'll then spend roughly 4-5 times the amount of time you just spent with the LLM to make the code base actually maintainable. For my test projects, I roughly spent one day building it up, rest of the week getting it maintainable. Fully manual would've taken me 2-3 weeks, so it saved time - but only because I do have experience with what I'm doing.

petterroea 12/4/2025||
I think there's a lot of sense in what you're saying. The 4-5x figure for making the codebase readable resonates.

If I really wanted to go 100% LLM as a challenge, I think I'd compartmentalize a lot, and maybe rely on OpenAPI and other API description languages to reduce the complexity of what the LLM has to deal with when working on its current "compartment" (i.e. the frontend or backend). Claude.md also helps a lot.
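As a purely hypothetical sketch of what that compartmentalized Claude.md might say - the layout and file names here are invented, not from any real project:

    # CLAUDE.md (hypothetical project)
    Work only inside the compartment named in the task; treat the others as read-only.
    - frontend/     React client; talks to the backend exclusively through openapi.yaml
    - backend/      API server; openapi.yaml is the source of truth for routes and schemas
    - openapi.yaml  shared contract; regenerate client and server stubs after editing it

The idea is that the contract file carries the cross-compartment knowledge, so the LLM never needs the other compartment's code in context.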

I do believe in some time saving, but at the same time, almost every line of code I write usually requires some deliberate thought, and if the LLM does that thinking, I often have to correct it. If I use English to explain exactly what I want, it is sometimes OK, but then that is basically the same effort. At least that's my empirical experience.

finaard 12/4/2025||
> almost every line of code I write usually requires some deliberate thought

That's probably the worst case for trying to use an LLM for coding.

A lot of the code it'll produce will be incorrect on the first try - so to avoid sitting through iterations of absolute garbage, you want the LLM to be able to compile the code. I typically provide a makefile which compiles the code and then runs a linter with a strict ruleset and warnings set to error, and I allow the LLM to run make without prompting - so the first version I get to see compiles and doesn't cause lint to have a stroke.

Then I typically make it write tests and include the tests in the build process - for "hey, add tests to this codebase" the LLM performs no worse than your average cheap code monkey.

Both with the linter and with the tests you'll still need to check what it's doing, though - just like the cheap code monkey, it may disable lint on specific lines of code with comments like "the linter is wrong", or may create stub tests - or even disable tests and then claim the tests were always failing and it wasn't due to the new code it wrote.
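To make the setup concrete, here is a minimal sketch of such a makefile. It assumes a C project built with gcc and linted with clang-tidy, with an invented src/ and tests/ layout - the comment above names none of these tools, so treat every name as a placeholder:

    # Hypothetical layout: library code in src/, a main() in src/main.c, tests in tests/.
    CC     = gcc
    CFLAGS = -Wall -Wextra -Werror          # warnings are hard errors, as described
    SRCS   = $(wildcard src/*.c)
    OBJS   = $(SRCS:.c=.o)

    all: app lint test check-suppressions   # one command for the LLM to iterate against

    app: $(OBJS)
        $(CC) $(CFLAGS) -o $@ $(OBJS)

    lint:
        clang-tidy --warnings-as-errors='*' $(SRCS) -- $(CFLAGS)

    test: app
        $(CC) $(CFLAGS) -o run_tests tests/*.c $(filter-out src/main.o,$(OBJS))
        ./run_tests

    # Guard against the failure mode above: silencing the linter with inline
    # NOLINT comments instead of fixing the code (fails if any are found).
    check-suppressions:
        ! grep -rn "NOLINT" src/ tests/

The point of wiring lint and tests into the default target is that the LLM gets a single command whose failures (compile errors, lint errors, failing tests, sneaky suppressions) it can iterate against before a human ever looks at the code.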

MobiusHorizons 12/4/2025||
Well written! I'm Seattle-based (although at Google), and I think the mood is only slightly better than what you describe. But the general feeling that the company has no interest in engineering innovation is alive and well. Everything needs to be standardized, and engineers get shuffled between products in a way that discourages domain knowledge.
mips_avatar 12/4/2025|
Thanks!
nice_byte 12/4/2025||
Reading some of these comments from fellow Seattleites, I'm really quite thankful for having the privilege of being able to completely ignore all of this noise.

There is zero push in my org to use any of these tools. I don't really use them at all, but I know some coworkers who do, and that's fine. It sounds like this is a rare and lucky arrangement.

thefz 12/4/2025||
> Every time I shared Wanderfugl with a Seattle engineer, I got the same reflexive, critical, negative response. This wasn't true in Bali, Tokyo, Paris, or San Francisco—people were curious, engaged, wanted to understand what I was building

Believe me, the same reflexive, critical, negative response is true for most of Europe too.

runarberg 12/3/2025||
I live in Seattle (well, a 20-minute ferry ride from Seattle) and I too hate AI. In fact, I have a Kanji learning app which I am trying to push onto people, and I brand it as AI-free. No AI was used to develop it, no AI was used to write content, and there is no AI there to "help you learn".

When I see apps like Wanderfugl, I get the same sense of disgust as OP's ex-coworker. I don't want to try this app, I don't want to see it, just get it away from me.
