Posted by mips_avatar 11 hours ago
That said, AI resistance is real too. We see it on this forum. It's understandable because the hype is all about replacing people, which will naturally make them defensive, whereas the narrative should be about amplifying them.
A well-intentioned AI mandate would come with a) training and/or b) dedicated time to experiment and figure out what works well for you. Instead, what we're seeing across the industry is "You MUST use AI to do MORE with LESS while we lay off even more people and move jobs overseas."
My cynical take is, this is an intentional strategy to continue culling headcount, except overindexing on people seen as unaligned with the AI future of the company.
That's a recurring argument, and I don't believe it, especially at large tech companies. They have no problem doing multiple large, non-quiet layoffs, so why would they need moustache-twirling-level schemes to get people to quit?
I don't believe companies are well-intentioned, but the simplest explanation is often the best:
1. RTO is probably driven by people in power who either like being in the office, believe being in the office is the most efficient way to work (whether or not that's true), or have financial stakes in having people occupy said offices.
2. The "AI" mandate is probably driven by people in power who either genuinely see value in AI, think it's the most efficient way to work (whether or not that's true), have FOMO about AI, or have financial stakes in having people use it.
So the thing about all large layoffs is that there is actually some non-obvious calculus behind them.
One thing, for instance, is that in the period soon after layoffs there is typically some increased attrition among the surviving employees, for a multitude of reasons. So if you lay off X people, you actually end up with X + Y lower headcount shortly after. There are also considerations like regulations.
What this means is that planning layoffs has multiple moving parts:
1) The actual monetary amount to cut -- it all starts with $$$;
2) The absolute number of headcount that translates to;
3) The expected follow-on attrition rate;
4) The severance (if any) to offer;
5) The actual headcount to cut with a view of the attrition and severance;
6) Local labor regulations (e.g. WARN) and their impact, monetary or otherwise;
7) The impact on internal morale and future recruitment.
So it's a bit like tuning a dynamic system with several interacting variables at play.
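The interplay between the direct cuts and the follow-on attrition can be sketched as a toy calculation. All numbers here are made up for illustration, and the single-rate attrition model is my own simplification, not anything the comment claims companies actually use:

```python
# Toy model: how many people must be cut directly to hit a target total
# reduction, given that survivors also leave at some expected rate?
# Solves: cuts + attrition_rate * (headcount - cuts) = target_reduction

def direct_cuts_needed(target_reduction: float,
                       attrition_rate: float,
                       headcount: float) -> float:
    """Direct cuts required once expected follow-on attrition is factored in."""
    return (target_reduction - attrition_rate * headcount) / (1 - attrition_rate)

headcount = 10_000        # hypothetical company size
target = 1_200            # total headcount reduction the budget requires
attrition = 0.05          # expected voluntary attrition among survivors

cuts = direct_cuts_needed(target, attrition, headcount)
total_reduction = cuts + attrition * (headcount - cuts)
print(f"direct cuts: {cuts:.0f}, total reduction: {total_reduction:.0f}")
```

Note that if the expected attrition rate plummets (as described below), the same formula demands far more direct cuts to hit the same dollar target, which is one way the "tuning" gets thrown out of whack.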
Now the interesting bit of tea here is that in the past couple of years, the follow-on (and all other) attrition has absolutely plummeted, which has thrown the standard approaches all out of whack. So companies are struggling a bit to "tune" their layoffs and attrition.
An exec frankly told me this after one of the earliest waves of layoffs a couple of years ago, and I heard from others that it was happening across the industry. Sure enough, there have been more and more seemingly haphazard waves of layoffs, along with the absolute toxicity they have introduced into corporate culture.
Due to all this and the overall economy and labor market, employee power has weakened severely, so things like morale and future recruitment are also lower priorities.
Given all this calculus, a company can actually save quite a bit of money (severance) and trouble if people quit on their own, with minimal negative repercussions.
Not quite moustache-twirling but not quite savory either.
- The entire community @ https://seattlefoundations.org
I think some of the reasons that they gave were bullshit, but in fairness I have grown pretty tired of how much low-effort AI slop has been ruining YouTube. I use ChatGPT all the time, but I am growing more than a little frustrated how much shit on the internet is clearly just generated text with no actual human contribution. I don’t inherently have an issue with “vibe coding”, but it is getting increasingly irritating having to dig through several-thousand-line pull requests of obviously-AI-generated code.
I’m conflicted. I think AI is very cool, but it is so perfectly designed to exploit natural human laziness. It’s a tool that can do tremendous good, but like most things, it requires that people use it with effort, and that seems to be the exception rather than the rule.
[1] basically the hall of shame for bad threads.
For the better, or for the worse?
Electrical engineering? Garbage.
Construction projects? Useless.
But code is code everywhere, and the immense amount of training data available in the form of working code, tutorials, and design and style guides means that the output for software development doesn't really resemble what anybody working in any other field sees. Even adjacent technical fields.
I'm working on a harness, but I think it can do some basic Revit layouts with coaxing (which, with a good harness, should be really useful!).
Let me know what you've experienced. There aren't many construction EEs on HN.
It might just be an ESL issue on my end, but I feel a huge dissonance between the explanation of how the tech was made the main KPI, used to justify layoffs, and forced on people in ways that hinder productivity, and the conclusion, which seems to say the real issue is that the people complaining just don't believe in AI.
I don't understand this article, it seems to explain all the reasons people in Seattle might have grievances, and then completely dismisses those to adopt the usual "you're using it wrong".
Is this article just a way to advertise Wanderfugl? Because this reads like the usual "Okay, your grievances are fine and all, but consider the following: it lets me build a SaaS really fast!" that I've become accustomed to seeing in HN discussions.