Posted by mips_avatar 12/3/2025

Everyone in Seattle hates AI (jonready.com)
967 points | 1065 comments
0_____0 12/3/2025
My 2¢... LLMs are kind of amazing for structured text output like code. I have a completely different experience using LLMs for assistance writing code (as a relative novice) than I do in literally every other avenue of life.

Electrical engineering? Garbage.

Construction projects? Useless.

But code is code everywhere, and the immense amount of training data available in the form of working code, tutorials, and design and style guides means that the output for software development doesn't really resemble what anybody working in any other field sees. Even adjacent technical fields.

knollimar 12/3/2025
I'm experimenting with Gemini 3 and will try Opus 4.5 soon, but I've seen huge jumps doing EE for construction over the last batch of models.

I'm working on a harness, but I think it can do some basic Revit layouts with coaxing (which, with a good harness, should be really useful!)

Let me know what you've experienced. There aren't many construction EEs on HN.

alexeischiopu 12/6/2025
EE on HN and from construction!

I used to draft in AutoCAD and Revit before switching to software.

Saw your comment about using Gemini. I'd love to chat with you. I started building something for the build side of the electrical world, but eventually want to make the jump to the design side of the house.

knollimar 12/6/2025
I'm on the build side, too. I think the root of the problem is that the design side is very cautious about giving out their Revit models to the build side, so you get PDFs in order to protect a bit of IP.

hoppersoft 12/3/2025
This person crafts quite the straw man!

> This belief system—that AI is useless and that you're not good enough to work on it anyway—hurts three groups

I don't know anyone who thinks AI is useless. In fact, I've seen quite a few places where it can be quite useful. Instead, I think it's massively overhyped to its own detriment. This article presents the author as the person who has the One True Vision, and all us skeptics are just tragically undereducated.

I'm a crusty old engineer. In my career, I've seen RAD tooling, CASE tools, no/low-code tools, SGML/XML, and Web3 not live up to the lofty claims of the devotees and therefore become radioactive despite there being some useful bits in there. I suspect AI is headed down the same path and see (and hear of) more and more projects that start out looking really impressive and then crumble after a few promising milestones.

ben_w 12/3/2025
> I don't know anyone who thinks AI is useless.

By my reading, there are several people on this discussion thread right now who think it (in the form of LLMs) is useless?

kentm 12/3/2025
This person wrote a blog post admitting to tone-deafness in cheerleading AI and talking about all the ways AI hype has negatively impacted people's work environments. But then they wrap up by concluding that it's the anti-AI people who are the problem. That's a really weird conclusion to come to at the end of that blog post. My expectation was that the conclusion would be "We should be measured and mindful with our advocacy, read the room, and avoid aggressively pushing AI in ways that negatively impact people's lives."

keeda 12/3/2025
As I've said before: AI mandates, like RTO mandates, are just another way to "quiet fire" people, or at least "quiet renegotiate" their employment.

That said, AI resistance is real too. We see it on this forum. It's understandable because the hype is all about replacing people, which will naturally make them defensive, whereas the narrative should be about amplifying them.

A well-intentioned AI mandate would come with a) training and/or b) dedicated time to experiment and figure out what works well for you. Instead, what we're seeing across the industry is "You MUST use AI to do MORE with LESS while we lay off even more people and move jobs overseas."

My cynical take is that this is an intentional strategy to continue culling headcount, overindexing on people seen as unaligned with the AI future of the company.

charles_f 12/4/2025
> AI mandates, like RTO mandates, are just another way to "quiet fire" people

That's a recurring argument, and I don't believe it, especially for large tech companies. They have no problem doing multiple large non-quiet layoffs, so why would they need moustache-twirling schemes to get people to quit?

I don't believe companies to be well intentioned, but the simplest explanation is often the best:

1. RTO mandates are probably driven by people in power who either like being in the office, believe being in the office is the most efficient way to work (whether or not that's true), or have financial stakes in having people occupy said offices.

2. "AI" mandates are probably driven by people in power who either genuinely see value in AI, think it's the most efficient way to work (whether or not that's true), have FOMO about AI, or have financial stakes in having people use it.

keeda 12/4/2025
> They have no problem doing multiple large non-quiet lay-offs, why would they need moustache-twirling level schemes to get people to quit.

So the thing about all large layoffs is that there is actually some non-obvious calculus behind them.

One thing, for instance, is that in the period soon after layoffs there is typically some increased attrition among the surviving employees, for a multitude of reasons. So if you lay off X people, you actually end up with X + Y lower headcount shortly after. There are also considerations like regulations.

What this means is that planning layoffs has multiple moving parts:

1) The actual monetary amount to cut -- it all starts with $$$;

2) The absolute number of headcount that translates to;

3) The expected follow-on attrition rate;

4) The severance (if any) to offer;

5) The actual headcount to cut, in view of the expected attrition and severance;

6) Local labor regulations (e.g. WARN) and their impact, monetary or otherwise;

7) The impact on internal morale and future recruitment.

So it's a bit like tuning a dynamic system with several interacting variables at play.
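The interacting variables above can be sketched as a toy calculation. Everything here is an illustrative assumption (the `planned_layoff` name, the simple linear follow-on-attrition model, and all the numbers), not anything from an actual planning process:

```python
def planned_layoff(headcount, savings_target, avg_cost,
                   severance_per_head, follow_on_attrition):
    """Toy model: how many people to cut directly to hit a savings
    target, given that some survivors quit on their own afterwards."""
    # Net headcount reduction implied by the dollar target (items 1 -> 2)
    net_cut = savings_target / avg_cost
    # Survivors attrit at the follow-on rate (item 3), so the direct cut
    # (item 5) solves: net_cut = direct + (headcount - direct) * rate
    direct = (net_cut - headcount * follow_on_attrition) / (1 - follow_on_attrition)
    direct = max(0, round(direct))
    # The severance bill (item 4) is only paid for the direct cuts
    return direct, direct * severance_per_head

# 10,000 employees, $50M/yr savings target, $250k fully loaded cost,
# $50k severance, 1% expected follow-on attrition:
cut, severance = planned_layoff(10_000, 50_000_000, 250_000, 50_000, 0.01)
# cut == 101 direct layoffs instead of 200; severance bill == $5,050,000
```

If the expected follow-on attrition collapses toward zero, as described below, the direct cut climbs back to the full 200 and the severance bill nearly doubles, which is one way to read the "out of whack" tuning problem.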

Now the interesting bit of tea here is that in the past couple of years, the follow-on (and all other) attrition has absolutely plummeted, which has thrown the standard approaches all out of whack. So companies are struggling a bit to "tune" their layoffs and attrition.

I had an exec frankly tell me this after one of the earliest waves of layoffs a couple of years ago, and I heard from others that this was happening across the industry. Sure enough, there have been more and more seemingly haphazard waves of layoffs, along with the absolute toxicity this has introduced into corporate culture.

Due to all this and the overall economy and labor market, employee power has severely weakened, so things like morale and future recruitment are also lower priorities.

Given all this calculus, a company can actually save quite a bit of money (severance) and trouble if people quit on their own, with minimal negative repercussions.

Not quite moustache-twirling but not quite savory either.

jesse_dot_id 12/3/2025
AI is in the Radium phase of its world-changing-discovery life cycle. It's fun and novel, so every corporate grifter in the world is cramming it into every product they can, regardless of whether it makes sense. The companies being the most reckless will soon develop a cough, if they haven't already.

excalibur 12/3/2025
Always amazed to see people who don't hate AI.

1vuio0pswjnm7 12/3/2025
"But in San Francisco, people still believe they can change the world, so sometimes they actually do."

For the better, or for the worse?

gverrilla 12/3/2025
Tech professionals who depend on their work to survive and haven't been thinking about capital vs. labor are living in delulu land.

aviel 12/3/2025
False.

- The entire community @ https://seattlefoundations.org

mips_avatar 12/4/2025
Are they good?

6thbit 12/4/2025
I wonder how Satya is perceived internally.

He's past the 10-year mark, which is a long time for Fortune 500 CEOs.

Animats 12/3/2025
A sizable fraction of current AI results are wrong. The key to using AI successfully is imposing the costs of those errors on someone who can't fight back. Retail customers. Low-level employees. Non-paying users.

A key part of today's AI project plan is clearly identifying the dump site where the toxic waste ends up. Otherwise, it might be on top of you.
