Posted by mips_avatar 12/3/2025
Electrical engineering? Garbage.
Construction projects? Useless.
But code is code everywhere, and the immense amount of training data available in the form of working code, tutorials, and design and style guides means that LLM output for software development doesn't really resemble what anybody working in any other field sees. Even adjacent technical fields.
I'm working on a harness, but I think it can do some basic Revit layouts with coaxing (which, with a good harness, should be really useful!)
Let me know what you've experienced. Not many construction EEs on HN.
I used to draft in AutoCAD and Revit before switching to software.
Saw your comment around using Gemini. I’d love to chat with you. I started building something for the build side of the electrical world, but eventually want to make the jump to the design side of the house.
> This belief system—that AI is useless and that you're not good enough to work on it anyway—hurts three groups
I don't know anyone who thinks AI is useless. In fact, I've seen quite a few places where it can be quite useful. Instead, I think it's massively overhyped to its own detriment. This article presents the author as the person who has the One True Vision, and all us skeptics are just tragically undereducated.
I'm a crusty old engineer. In my career, I've seen RAD tooling, CASE tools, no/low-code tools, SGML/XML, and Web3 not live up to the lofty claims of the devotees and therefore become radioactive despite there being some useful bits in there. I suspect AI is headed down the same path and see (and hear of) more and more projects that start out looking really impressive and then crumble after a few promising milestones.
By my reading, there are several people on this discussion thread right now who think it (in the form of LLMs) is useless?
That said, AI resistance is real too. We see it on this forum. It's understandable because the hype is all about replacing people, which will naturally make them defensive, whereas the narrative should be about amplifying them.
A well-intentioned AI mandate would come with a) training and/or b) dedicated time to experiment and figure out what works well for you. Instead, what we're seeing across the industry is "You MUST use AI to do MORE with LESS while we lay off even more people and move jobs overseas."
My cynical take is, this is an intentional strategy to continue culling headcount, except overindexing on people seen as unaligned with the AI future of the company.
That's a recurring argument, and I don't believe it, especially for large tech companies. They have no problem doing multiple large, non-quiet layoffs, so why would they need moustache-twirling schemes to get people to quit?
I don't believe companies are well intentioned, but the simplest explanation is often the best:
1. RTO is probably driven by people in power who either like being in the office, believe being in the office is the most efficient way to work (whether that's true or not), or have financial stakes in having people occupy said offices.
2. The "AI" mandate is probably driven by people in power who either genuinely see value in AI, think it's the most efficient way to work (whether that's true or not), have FOMO about AI, or have financial stakes in having people use it.
So the thing about all large layoffs is that there is actually some non-obvious calculus behind them.
One thing, for instance, is that in the period soon after layoffs there is typically some increased attrition among the surviving employees, for a multitude of reasons. So if you lay off X people, you actually end up with X + Y lower headcount shortly after. There are also considerations like regulations.
What this means is that planning layoffs has multiple moving parts:
1) The actual monetary amount to cut -- it all starts with $$$;
2) The absolute number of headcount that translates to;
3) The expected follow-on attrition rate;
4) The severance (if any) to offer;
5) The actual headcount to cut with a view of the attrition and severance;
6) Local labor regulations (e.g. WARN) and their impact, monetary or otherwise;
7) Considerations like impact on internal morale and future recruitment.
So it's a bit like tuning a dynamic system with several interacting variables at play.
Now the interesting bit of tea here is that in the past couple of years, follow-on (and all other) attrition has absolutely plummeted, which has thrown the standard approaches out of whack. So companies are struggling a bit to "tune" their layoffs and attrition.
I had an exec frankly tell me this after one of the earliest waves of layoffs a couple years ago, and I heard from others that this was happening across the industry. Sure enough, there have been more and more seemingly haphazard waves of layoffs, along with the absolute toxicity this has introduced into corporate culture.
Due to all this and the overall economy and labor market, employee power has severely weakened, so things like morale and future recruitment are also lower priorities.
Given all this calculus, a company can actually save quite some money (severance) and trouble if people quit by themselves, with minimal negative repercussions.
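To make the calculus above concrete, here's a toy arithmetic sketch. Every number and the simple attrition model (each layoff triggers a proportional wave of voluntary quits) are illustrative assumptions, not data from any real company:

```python
def planned_cuts(target_savings, avg_fully_loaded_cost,
                 follow_on_attrition_rate, avg_severance):
    """Toy model of layoff planning: how many direct cuts are needed
    to hit a dollar target, given that some survivors quit on their own."""
    # Steps 1-2: translate the dollar target into total headcount reduction.
    total_reduction = target_savings / avg_fully_loaded_cost
    # Steps 3 and 5: if laying off X triggers roughly r*X voluntary quits,
    # then X * (1 + r) = total_reduction, so the direct cut is smaller.
    direct_cuts = total_reduction / (1 + follow_on_attrition_rate)
    voluntary_quits = total_reduction - direct_cuts
    # Step 4: severance is only paid to the people cut directly --
    # voluntary quits are "free", which is the savings the parent describes.
    severance_bill = direct_cuts * avg_severance
    return direct_cuts, voluntary_quits, severance_bill

# Illustrative run: $100M target, $250k/head, 25% follow-on attrition,
# $50k average severance.
direct, quits, severance = planned_cuts(100_000_000, 250_000, 0.25, 50_000)
print(f"direct cuts: {direct:.0f}, voluntary quits: {quits:.0f}, "
      f"severance: ${severance:,.0f}")
```

With these made-up numbers, 400 total heads need to go, but only 320 are laid off directly; the other 80 quitting on their own saves $4M in severance. When the attrition rate plummets, as described below, that second term vanishes and the model mis-tunes.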
Not quite moustache-twirling but not quite savory either.
For the better, or for the worse?
- The entire community @ https://seattlefoundations.org
He's above the 10-year mark, which is a long time for Fortune 500 CEOs.
A key part of today's AI project plan is clearly identifying the dump site where the toxic waste ends up. Otherwise, it might be on top of you.