Posted by milkglass 6 hours ago
The problem is a management pattern: removing people and organizational slack because they don’t generate immediate profit, and then expecting the knowledge to still be there when it’s needed.
Short-term cost cutting leads to fewer junior hires and removes the slack that experienced engineers need in order to teach. As a result, tacit knowledge stops being transferred.
What remains is documentation and automation.
But documentation is not the same as field experience. Automation is not the same as judgment. Without people who have actually worked with the system, you end up with a loss of tacit knowledge—and eventually, declining productivity.
AI is following the same pattern.
What AI is being sold as right now is not really productivity. In many domains, productivity is already sufficient. What’s being sold is workforce reduction.
The West has seen this before, especially in the case of General Electric.
GE pursued aggressive short-term financial optimization, cutting costs, focusing on quarterly results, and maximizing shareholder returns. In the process, it hollowed out its own long-term capabilities. It effectively traded its future for short-term gains.
The same mindset is visible today.
The core problem is that decision-makers, often far removed from actual engineering work, believe that tacit knowledge can be replaced with documentation, tools, and processes. It cannot.
Tacit knowledge comes from direct experience with real systems over time. If you remove the people and the learning pipeline, that knowledge does not stay in the organization. It disappears.
I am not so certain:
For example, I think that a lot of my knowledge about the system that I work on could be documented, and based on this documentation someone new could take over the system.
The problem rather is: the volume of documentation I would have to write would be insane; I'd consider tens of thousands of dense DIN A4 pages to be realistic - and this is a rather small system.
So, a new person who could take over this system would have to cram and understand basically all the details of this documentation insanely well.
This insane effort (writing the documentation, then having new workers on the project cram and understand every detail of this incredibly bulky documentation) is something that no employer wants to spend money on: in my experience, this is the real reason why it isn't done.
I don't think so: the problem is that there exist lots of parts in the system that are quite complicated but which one very rarely has to touch - except in the rare (but real) case that something deep in such a part goes wrong or a requirement for this part pops up.
If you "learned by doing" instead of reading, you are suddenly confronted with a very subtle and complicated subsystem.
In other words: there mostly exist two kinds of tasks:
- easy, regular adjustments
- deep changes that require a really good understanding of the system
You are spot on w.r.t every assertion you've made. When bean-counters took over the ecosystem they optimised immediate profitability over everything else. Which in turn means, in their mind, every part of the system needs to be firing at 100% all the time. There's no room for experimentation, repair, or anything else.
I've commented about lack of slack several times here on HN because when I notice a broken system nowadays, 90% of the time it is due to lack of slack in the system to absorb short-term shocks.
Few care if you have a lifetime warranty and excellent service or replacement parts if the majority will upgrade in a few years! Mature technologies increasingly become cheaply available as services, eg. laundry, food, transportation. That further reduces demand on production, as many can get by with the bare minimum and don't need the highest quality, longest lasting appliances. Software is even more ephemeral and specialized.
Developing education and training pipelines is wasting money if the skills you need are constantly changing! There is plenty of "slack" in the workforce so this works just fine in most cases - somebody will learn what they need to get paid. There are very few fields where qualified worker shortages are a real problem.
R&D can be outsourced or bought and subsidized by the government in universities, so why do everything yourself? Open source software has even further muddied the waters. Applications have only a limited lifetime before being replicated and becoming free products (this has only been intensified by the introduction of AI), so companies develop services instead.
Technology and knowledge deepening and rapidly becoming more specialized makes the monolithic corporation much less practical, so companies also need to specialize in order to effectively compete. Going too far in the name of efficiency can destroy core competencies, but moving away from the old model was necessary and rational.
Because some problems that many companies in very specialized industries work on are so special that hardly anyone outside the industry has even heard of them.
Additionally, many problems companies have where research would make sense are not the kind of problems that are a good fit for universities.
This is only fair, because they themselves are firing at 100% all the time IYKWIM ;)
Bell Labs' greatest work came out when AT&T was a monopoly. Once they were broken up (1984), they started feeling the pain.
When the Lucent spinoff took place, the new entities had no Monopoly money to fund unconstrained research while management's behaviour never changed.
I don't know how BL fared under Alcatel and now Nokia, but haven't heard of anything interesting for years.
Per Wikipedia:
IBM employees have garnered six Nobel Prizes, seven Turing Awards,
20 inductees into the U.S. National Inventors Hall of Fame, 19 National Medals of Technology,
five National Medals of Science and three Kavli Prizes. As of 2018,
the company had generated more patents than any other business in each of 25 consecutive years.
A couple of things about those patents, from a former IBMer who earned quite a few in his time there.
First, not all patents are created equal. Most of those IBM patents are software-related, and for pretty trivial stuff.
Second, most of those patents are generated by the rank and file employees, not research scientists. The IBM patent process is a well-oiled machine but they ain't exactly patenting transistor-level breakthroughs thousands of times a year.
We did that at Meta and Amazon too (for polycarbonate puzzle pieces, with no monetary award at all!). Every now and then something meaningful came out of it.
Patents do, but in most cases it's trivial patents or patents for a "mutually assured destruction" portfolio (aka, you keep them in hand should someone ever decide to sue you).
That's a fundamental problem with how the Western sphere prioritizes and funds R&D. Either it has direct and massive ROI promises (that's how most pharma R&D works), some sort of government backing (that's how we got mRNA - pharma corps weren't interested, or how we got the Internet, lasers, radar and microwaves) or some uber wealthy billionaire (that's how we got Tesla and SpaceX, although government aids certainly helped).
And all the while we are cutting back government R&D funding in the pursuit of "austerity", China just floods the system with money. And they are winning the war.
A Nobel in 2026 doesn't carry the same weight as a Nobel in 1955.
The beancounters have cut all the corners on physical products that they could find. Now even design and manufacturing is outsourced to the lowest bidder, a bunch of monkeys paid peanuts to do a job they're woefully unqualified for.
And the end result is just a market for lemons. Nobody trusts products to be good anymore, so they just buy the cheapest garbage.
Which, inevitably, is the stuff sold directly by Chinese manufacturers. And so the beancounters are hoisted by their own petard.
We've seen it happen to small electronics and general goods.
We're seeing it happen right now to cars. Manufacturers clinging on to combustion engines and cutting corners. Why spend twice the money on a Western brand when its quality is rapidly declining toward that of BYD models at half the price?
---
And we're seeing it happen to software. It was already kind of happening before AI; So much of software was enshittifying rapidly. But AI is just taking a sledgehammer to quality. (Setting aside whether this is an AI problem or a "beancounters push everyone into vibecoding" problem)
E.g. Desktop Linux has always been kind of a joke. It hasn't gotten better, the problems are all still there. Windows is just going down in flames. People are jumping ship now.
SaaS is quickly going that way as well. If it's all garbage, why pay for it? Either stop using it or just slop something together yourself.
---
And in the background of this something ominous: Companies can't just pivot back to higher quality after they've destroyed all their inhouse knowledge. So much manufacturing knowledge is just gone, starting a new manufacturing firm in the west is a staffing nightmare. Same story with cars, China has the EV knowledge. And software's going the same way. These beancounters are all chomping at the bit to fire all their devs and replace them with teenagers in the developing world spitting out prompts. They can't move back upmarket after that's done.
Even when the knowledge still lives, when the people with the required skills have simply moved to other industries and jobs, who's going to come back? Why leave your established job for your former field, when all it takes is the management or executive in charge being replaced by another dipshit beancounter for everyone to be laid off again?
Desktop Linux has gotten better, though much of the improvement happened decades ago. I believe the first person to prematurely declare "the year of Linux on the desktop" was Dirk Hohndel in 1999: https://www.linux.com/news/23-years-terrible-linux-predictio...
And speaking as someone who was running desktop Linux in 1999, I remember just how bad it was. Xfce, XFree86 config files, and endless messing around with everything. The most impressive Linux video game of 2000 was Tux Racer.
But over the next 10 years, Gnome and KDE matured, X learned how to auto-detect most hardware, and more-and-more installs started working out of the box.
By the mid-2010s, I could go to Dell's Ubuntu Linux page and buy a Linux laptop that Just Worked, and that came with next day on-site support. I went through a couple of those machines, and they were nearly hassle free over their entire operational life. (I think one needed an afternoon of work after an Ubuntu LTS upgrade.)
The big recent improvement has been largely thanks to Valve, and especially the Steam Deck. Valve has been pushing Proton, and they're encouraging Steam Deck support. So the big change in recent years is that more and more new game releases Just Work on Linux.
Is it perfect? No. Desktop Linux is still kind of shit. For example, Chrome sometimes loses the ability to use hardware acceleration for WebGPU-style features. But I also have a Mac sitting on my desk, and that Mac also has plenty of weird interactions with Chrome, ones where audio or video just stops working. The Mac is slightly less shit, but not magically so.
And yet I run it every day, and it's by FAR the most enjoyable platform and tooling to use (for me).
This is a blind spot for many. People working on entrepreneurial projects need to build a lot. They start with nothing. They need (for example) features. There's a lot to do.
Most firms are not that. Visa, Salesforce, LinkedIn or whatnot. They have a product. They have features. They have been at it for a while. They also have resources. They are very often in a position of finding nails for a "write more software" hammer.
It's unintuitive because they all have big wishlists and to-do lists and A/B testing systems to pour software into, but...
If there were known "make more software, make more money" opportunities available, they would have already done them.
Actual growth and new demand needs to come from arenas outside of this. E.g. companies that suck at software (either making or acquiring it) might be able to get the job done.
The Problem, bringing this back to the article, is fungibility. A lot of this "human capital" stuff cannot be easily repackaged. It's a "living" thing. Talent and skills pipelines can be cut off, and vanish.
A danger in AI coding (and other fields) is that it leverages preexisting human capital and doesn't generate any for later.
Sometimes they're available, but not palatable, when the opportunity could threaten their existing investments or patterns. That might mean "self-cannibalism", or changing the ecology so that the main product niche is threatened.
Then those opportunities are ignored, or actively worked-against via lobbying, embrace-extend-extinguish, etc.
Whether the reason is strategic (like your example), internal politics, insufficient knowledge... the point is that there is a local equilibrium, and most mature firms are at this equilibrium.
More resources via AI, at first order, go after that diminishing-returns part of the curve... which is a cliff, especially for the highly resourced firms topping the S&P 500.
A lot of AI optimists' mental models of the economy do not account for this stuff at all.
"Save time/money" outcomes are not similar at all to "make more stuff" outcomes. Firing employees does free up labour... but reutilizing this labour is non-trivial... as this article demonstrates quite well.
The problem is wider than management; it is understanding the extended ramifications of action, understanding the larger systems one is a member of, and then identifying with them and protecting them, because you and all your peers understand their extended foundational need.
That type of tacit knowledge, critical analysis and secondary considerations, is developed through effective communications training, which is an entire perspective, a way of seeing the world. It can be gained by reading a wide diversity of literature of Nobel Literature quality; the reason being that such literature consists of first-person accounts of institutions crushing individuals, and of individuals finding the power within themselves to defeat the institutions. That personal transformation is practically a Nobel trope, but it teaches the reader how to have such insight and perseverance. Read a half dozen or more such novels, and you are materially a different person: a better, deeper-considering person with a longer perspective horizon. We need this civilization-wide.
The thought crossed my mind the other day — if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.
It’s not just in coding, it’s everything. With ChatGPT always available in your pocket, what social interactions is it replacing?
The thing that gets me is, we are meant to fundamentally be social creatures, yet we have come to streamline away socialisation any chance we get.
I’m guilty of this too — I much prefer Doordash to having to call up the restaurant like in the old days, for example.
Increasingly we have people join who tell us they've been struggling with a problem "for days". Per routine, we ask for their configuration, and it turns out they've been asking ChatGPT, Claude or some other LLM for assistance and their configuration is a total mess.
Something about this feels really broken, when a channel full of domain experts is willing to lend a hand (within reason) for free. But instead, people increasingly turn to the machines, which are well known to hallucinate. They just don't think the machine will hallucinate for them.
In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves then they seem more willing to believe it is accurate, where in fact they should be even more careful.
A lot of the passersby nowadays feel like trolls. They come in copy-pasting ChatGPT responses, spamming that they need help, instead of chit-chatting and asking questions. We fix their problems; they don't trust us or understand at all. Or worse, we tell them their situation is unreasonably bad and they should start over, and they scream at us about how some unimaginably bad code passes tests and compiles just fine and how we are dumb.
They tell us we don't need to exist anymore, in one way or another. They try to show off terrible code; we try to offer real suggestions to improve it; they don't care. Then they leave the community once their vibe/agentic coding moves past that part of their code base. Complete waste of time: they learned nothing, contributed nothing, no fun was had, no ah-has, just grimy interactions.
Importantly, you're removing a signal: if I'm not asked things anymore, I don't know which aspects of our domain are causing the most confusion and misunderstanding, and whose boundaries would therefore benefit most from simplification.
At the end of the day chatgpt won't be there to hold our hands in the hospital, have a laugh over failing to pick up a date, get invited to a bbq, groan over the state of the code in utils.c, or recommend us for our next job/promotion. They say software is social for a different reason than most of these examples.
It's good to be efficient, whatever that means, but there are no metrics on the gains that get made by talking to people. In a lot of ways those gains are what life is about.
Are you sure it won't?
Also, when companies grow big enough, "business" becomes the main business of the company. By that I mean everything unrelated to the actual original domain, such as playing in the financial markets, doing stock buybacks, lobbying, cheating, etc. When your CEO is an MBA and your real market is Wall Street, any actual product R&D and support is an annoying cost that just cuts into the profits, and thus into the exec compensation.
Vesting schedules, conditional grants, contractual equity ownership requirements
In those filthy low-margin industries that HN loves to see regulated across the oceans, out of sight and out of mind, capital investments have service lives measured in decades.
Worse, it might not generate a return. If you have enough profits, you just buy anyone who successfully produced something innovative. Let them take the risks. As Cisco used to say, "Silicon Valley is our R&D lab."
It is a very difficult mindset to argue against.
In an ideal world (where we don't live):
* Corporation - optimizes for mid-to-short-term profits (remove slack, run everything thin)
* Government - optimizes for long-term profits (introduce regulations to keep the slack time, keep and attract talent so the state gets better)
* Individual - optimizes for their lifetime (career, family) and tries to leverage market conditions to learn skills and get more opportunities from the existing pool
In the West, government is optimizing for "loads and loads of moooney", because of lobby groups, and because the MBAs controlling the corporations push these ideas through those lobbies.
Even if it were, creating good documentation, or assessing its quality, requires experience in using good and bad documentation. And how would juniors build up that experience if they are using AI for everything?
It’s called The Beer Game[1].
One of the funny things about it is even people that have played and discussed it before _still_ make the same fundamental mistakes next time.
Short-termism is the death of companies.
The point of the beer game is that buffering in the supply chain makes the bullwhip effect worse.
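To make that concrete, here is a minimal sketch in Python (invented parameters and a naive ordering policy of my own, not the official Beer Game rules) of how order swings amplify upstream when every stage buffers against a demand step under a shipping delay:

    # Toy bullwhip-effect simulation (hypothetical parameters): four stages,
    # each ordering to cover observed demand plus an inventory correction,
    # with a two-period shipping delay. Negative inventory = backlog.

    STAGES = 4          # retailer -> wholesaler -> distributor -> factory
    TARGET = 20         # desired inventory per stage
    WEEKS = 20

    inventory = [TARGET] * STAGES
    pipeline = [[4, 4] for _ in range(STAGES)]   # shipments in transit (FIFO)
    history = [[] for _ in range(STAGES)]

    for week in range(WEEKS):
        demand = 4 if week < 4 else 8            # one-time step in consumer demand
        for s in range(STAGES):
            inventory[s] += pipeline[s].pop(0)   # receive the delayed shipment
            inventory[s] -= demand               # ship downstream (may go negative)
            # naive policy: reorder what was demanded, plus close the gap to TARGET
            order = max(0, demand + (TARGET - inventory[s]))
            history[s].append(order)
            pipeline[s].append(order)            # assume upstream fills it after the delay
            demand = order                       # this stage's order is the next stage's demand

    for s, h in enumerate(history):
        print(f"stage {s}: peak order {max(h)} (consumer demand never exceeds 8)")

Consumer demand merely doubles, yet each stage's peak order grows the further it sits from the customer; that amplification is the bullwhip.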
Absolutely agree with this. Most MBAs are taught to optimize and reduce the slack.
It works fine with machinery and materials, but not with humans.
When machinery is optimized and run thin and one of the machines breaks, you can get the exact same one in a couple of days (you usually prepare for it earlier). But with humans, they train their brains, and the next person is different from the first person.
Humans also break in different ways:
* They stop caring - you won't notice it immediately; they will still close tickets, but give them the bare minimum of thought
* The communal brain is not trained when there is not enough room for experiments and learning - which eventually reduces innovation
This is exactly why it is difficult for US companies to compete with Chinese companies in manufacturing: their communal brain has already been trained and has produced very good talent.
Next is knowledge: the more you outsource, the more of it you lose.
Any exec using AI to pay fewer people lacks imagination.
No idea how this should take form, though, and if it’s even realistic. But it seems like due to AI, formal specs and all kinds of “old school” techniques are having a renaissance while we figure out how to distribute load between people and AI.
There are three legs to the stool: specification, implementation, and verification. Implementation and verification both take low-level knowledge and sophisticated knowledge of how things break.
This is the same with compilers. Most of the time a programmer needs to know only the high-level language used for writing the program. Nevertheless, when there is a subtle bug, or the desired performance cannot be reached, a programmer who also understands the machine language of the processor has a great advantage, being able to solve a bug or performance problem that without such knowledge would take far longer to solve, or never be solved at all.
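A toy illustration of that kind of subtle bug (my own example, in Python for brevity rather than machine language): floating-point addition is not associative, and only the underlying IEEE 754 representation explains why the "same" sum differs:

    # Summation order changes the result: near 1e16 the spacing between
    # adjacent doubles is 2.0, so adding 1.0 to 1e16 rounds away entirely.

    big, tiny = 1e16, 1.0

    print(big + tiny + tiny - big)   # 0.0 -- each tiny is rounded away into big
    print(tiny + tiny + big - big)   # 2.0 -- the tinies accumulate first

    # math.fsum tracks the lost low-order bits and recovers the exact sum:
    import math
    print(math.fsum([big, tiny, tiny, -big]))   # 2.0

Nothing in the high-level language hints at this; the explanation lives one level down, which is the point.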
2) Unfortunately, LLMs, by their very nature (not having a model of what they do), are prone to introducing subtle bugs; i.e., it is like programming in a high-level language whose compiler likes to wing it.
I am sure programmers cherish every case where they can do micro-optimization, but in retrospect, the high-level cuts are what made the system fit the perf or memory budget.
It's always seemed to me that the problem is corporate profit and personal profit above all. 'Management' is a subset of this, and so is pretty much everything else, including the current drive for AI.
It's the Western, perhaps American, approach to business, emphasized by MBAs and the media: lowering costs, driving share price, dividends and corporate profit.
This race over the past few decades has hollowed out most Western companies.
Listen to any entrepreneur podcast, or read any website, and it's all about 'how quickly can I get to exit', i.e. personal profit.
Capitalism is the worst form of economic system, apart from all the rest.
I think the striking thing is how US companies tend to have no idea how to be wealthy. Record profits, so the CEOs use all of their tricks to get rich quick? They are already rich! Don't fix what isn't broken. Not every company needs to expand into 10 new markets, or have 5% layoffs, or double in revenue. Some of this is investor pressure, but often it's not. Some guy who made it to the top is bored, doesn't feel like he is obviously doing enough, so he keeps making decisions to justify his position.
This isn't to incite flames, but the European companies I worked for knew how to be wealthy! The market took a downturn from COVID; they ate the cost to keep their people. Some flashy new vertical is trending; they decided it's not for them, since they have a brand and customers they should focus on while everyone else works out the kinks. The company decides: why go public at all, we are successful and don't need anyone else's influence over us.
People say "you cannot project beyond one quarter". This is true in terms of catastrophe or gambler's luck. But it's not true in general: if you act in Q1 like there will be a Q2, or even five years from now, or heaven forbid a second or third generation, you make different moves. You value different things.
Plus, of course, each European country has to support their own defense industry, so each one of them needs to have their own howitzer/tank/whatever and they can't agree on common approach that would actually allow for the economy of scale.
https://time.com/archive/6731121/how-clinton-decided-on-nato...
I think that's still a symptom. The real problem is ideology: the monomaniacal focus on profit-making business, which infects everyone from our political leaders, down to capitalists and business leaders, down to the indoctrinated rank-and-file. Towards the end of the Cold War the last constraints on it were abolished, and the victory over the Soviet Union made it unquestioned.
The Chinese don't have that ideological problem. Their government appears not to give a shit about how much profit individual businesses make; they care about building out supply chains and capabilities. They will bury the West, so long as the West remains in the thrall of libertarian business ideology.
In general productive economic activity generates a surplus and that surplus allows for slack. Human beings intuitively understand this. Hobbies are frequently de facto training for things that aren't currently happening but might later. Family-owned and operated businesses are much less likely to try to outsource their core competency for the sake of quarterly profits.
But regulatory capture and market consolidation causes the surplus to go to the corporate bureaucracies capturing the regulators instead of human beings with self-determination and goals other than number go up, and then the system optimizes for capturing the government rather than satisfying the people. "When you legislate buying and selling the first things to be bought and sold are the legislators." You throw away the competitive market and subject yourselves to the unaccountable bureaucracy, and then try to pretend it's not the same thing because this time the central planners are wearing business suits.
You just described Lucent.
Vision for the future is limited to grandiose fantasies straight out of 1950s pulps and the "heroic" creation of narcissistic corporations that are cynically extractive and treat employees and customers with equal contempt.
The differences which used to provide a convincing cover story - no single Great Leader, a functional consumer economy, votes that appear to make a difference - are being dismantled now.
What's left are the same mechanisms of total monitoring (updated with modern tech) and reality-denying totalitarian oppression, run for the exclusive benefit of a tiny oligarchy which self-selects the very worst people in the system.
This is only an illusion created by the fact that the communists were careful to rename all important things, to fool the weaker minds that the renamed things are something else than what they really are.
In reality, the "socialist" economies were more capitalist than the capitalist economies of USA and Western Europe. They behaved exactly like the final stage of capitalism, where monopolies control every market and there is no longer any competition.
Unfortunately, after a huge sequence of mergers and acquisitions started in the late nineties of the last century, the economies of USA and of the EU states resemble more and more every year the former socialist economies, instead of resembling the US and W. European economies of a few decades ago.
Witness the people who keep proposing to solve market consolidation with higher taxes. Higher taxes go to the government, and therefore the interests that have captured the government. Are we going to solve it by taking money from Warren Buffet and giving it to Larry Ellison? Do we benefit from increased funding for Palantir? No, you have to break up the consolidated markets through some combination of antitrust enforcement and peeling back the regulatory capture that prevents new competitors from entering the market.
There is at least a chance for it to be redistributed, unlike private wealth.
This is a very complex problem that needs to be tackled from all sides simultaneously; the entrenched interests are already well set up to defend themselves.
An unequal society produces an unequal economy (and vice versa), which is the economy of any developing country: few rich, a minuscule middle class, and lots of poor people in slums and poverty.
China: We need to build this useful thing and then later let’s try to make profits, too.
Russia has no need for Eastern Europe (they have enough land and resources; why saddle yourself with a hostile population?), as long as said Eastern Europe is not threatening them with NATO bases and missiles (the US has repeatedly shown that it does not hesitate to use its muscle if it thinks it can get away with it, so Russia's paranoia is not entirely unfounded).
Even if Russia somehow took over Eastern Europe (most likely way: learning from the US how to do a soft "regime change"), they would have no chance against China (China is just so much bigger and better organized; the population's mentality also matters a lot). China and Russia are rather complementary; there is no reason for confrontation between them.
But you are correct, what US is doing is really totally stupid ... although it seems designed by Netanyahu, not Putin.
If NATO expansion is the reason for the war in Ukraine (not imperialism) then why has the war not stopped now we know Ukraine will never join NATO?
This tracks my experience throughout my career, in all sorts of companies: from established body-shop consulting, to minor early-stage startups, to FAANG, and everything in between.
Essentially everywhere I worked, you would benefit from switching jobs. Companies would at times make quite an effort to hire you, but wouldn't try anything to keep you around.
This always sounded bonkers to me, but as I directly benefited with a rapidly increasing salary when I job-hopped, my response was a vague shrug. "Those who care don't know and those who know don't care".
The thing is, in every place, you are typically at your least useful when you have just joined. It takes months, sometimes years, to learn the intricacies of the business: the knowledge that informs your skills so you can make better decisions, better designs, better implementations, better initiatives.
This is, of course, just one facet of a larger trend of how things are typically mismanaged. The article touches on it when it talks about how governments in the US and Europe had to scramble to get 50-year-old manufacturing going anywhere.
This is why I laugh whenever I hear someone talking about "governments should be administered like a business". Bitch, businesses are typically mismanaged due to terrible incentive loops, institutional blindness and corporate rot. That anything seemingly works is more a result of inertia and conformity than a sign that things are well managed.
In shootings, technically the guns are not the issue, since they don't fire on their own... they do enable the ability to shoot, though.
And on shorter timescales you aren't really predicting anything of consequence. You're just ensuring that all the effort spent trying to predict Apple's next move (for example) keeps Apple itself alive in the public debate, whether they do the thing or not; they'll have missteps, but our 24/7 fetishizing of what they'll do next, overall, just distracts us from our own lives and boosting the lives of the mega rich
You really don't seem to have a grasp of how gamified and propagandized you are
So you’re saying we are being distracted from boosting the lives of the mega rich, which we should get back to doing
And workforce reduction is a noble goal. In fact, I think it's one of the most important things humanity should focus on. We should strive for a workforce of zero. Humans currently waste an enormous amount of their lives working instead of pursuing more worthwhile things.
I despise the rhetoric around this, we didn't "lose jobs" over AI, we saved ourselves a lot of work. What it does do is highlight a problem in our current society: the link between labour and the access to resources (e.g. money).
I don't think that AI is the ultimate answer to the problem of work, but it can contribute to it.
And uh, healthcare. Among other things.
The company manufactures special computers. The initial owner/founder ordered CPU modules and memory cards always looking at the price break. His question was always "how many do we have to buy to get the best price?". So he sometimes ordered 200-300 more parts than immediately needed. Then the follow-up order came and he emptied the storage. The new manager always orders the EXACT number of memory cards as computers ordered. Price is a secondary concern; the most important thing is to work without a warehouse and get things delivered just in time. Which hasn't worked at all for a while now. The high prices from buying small quantities are eating up the profit, so people are getting fired to save costs. It is pure greed dominating the Western world. Everything is done to make the accounting look nice at any cost, and to collect the whole bonus despite ruining the company long term. I see this pattern very often recently.
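For illustration only, a back-of-the-envelope comparison in Python with invented prices and quantities (the comment gives none): covering two production runs by buying up to a price break and warehousing the surplus, versus strict just-in-time orders.

    # Toy price-break comparison (all prices, quantities and holding costs
    # are hypothetical). Two production runs of 180 units each.

    PRICE_BREAKS = [(1, 50.0), (100, 42.0), (250, 35.0)]  # min qty -> unit price
    HOLDING_COST = 2.0   # assumed cost to warehouse one surplus unit between runs
    NEED = 180

    def unit_price(qty):
        """Best unit price available at a given order quantity."""
        return min(price for min_qty, price in PRICE_BREAKS if qty >= min_qty)

    def order_cost(qty):
        return qty * unit_price(qty)

    # just-in-time: order exactly what each run needs
    jit = 2 * order_cost(NEED)

    # founder's way: buy up to the 250-unit break, carry the surplus,
    # then top up only the remainder for the second run
    bulk = order_cost(250) + (250 - NEED) * HOLDING_COST + order_cost(2 * NEED - 250)

    print(f"just-in-time: {jit:,.0f}")   # 15,120
    print(f"price break : {bulk:,.0f}")  # 13,510

Even with warehousing priced in, the bulk order comes out ahead here; exact just-in-time ordering only wins when holding costs are very high or future demand is genuinely uncertain.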
My main point against using AI is that I do not want to depend on basically anything when I'm in front of the screen (obviously not counting documentation, books, SO and the like).
I see, up close, people who are 100% dependent on AI for literally everything, even the most trivial daily tasks, and I find that truly scary, because it means that brain effort drops dramatically to a minimum level. Having your mental effort stolen is not a minor thing.
Giving that away, at least for me, means becoming a dependent zombie. Knowledge comes basically from manual trial and error, almost daily.
Technology being technology, it has shown us, if anything, that we can be pushed and manipulated in every single conceivable way. And in my opinion, depending on AI is the ultimate way for companies to penetrate and manipulate a very delicate ability of a human being: to think and wonder about things.
Helps me keep sane tbh. And keeps the edge sharp.
What amazing breakthroughs were achieved thanks to brain juice freed by AI usage? What great works of art were created?
I find myself thinking more, and my thinking is of higher quality. And I have 30 years of fucked-up-project experience, so I know all the rakes I could step on.
Every little thing that used to be too much effort before, I can now easily get the info or data for with a prompt. Data analysis that might otherwise have taken hours to figure out: I can just have AI write scripts for everything, which lets me see more data about everything that was previously out of reach. Now you will probably ask, "how do I know the data is accurate?" I can still cross-reference things, and it is still far faster, because even if I had spent hours trying to access that data before, there wouldn't have been any comparable guarantee that it was accurate.
I am thinking so much more about things now that I couldn't possibly have had time to think about before, because they were so far out of reach, or even unimaginable to do in my lifetime. Now I'm thinking about automating everything, having perfect visualizations, data about everything, being able to study and learn everything quickly, etc.
It doesn't seem to me like something I could suddenly forget.
Without AI I would feel frustrated that I'm now much slower, but ultimately it's just describing logic. So I'm a bit skeptical of the claim.
My brain effort also goes to other things now, such as how to orchestrate guardrails, how to build pipelines that enable multiple agents to work on the same thing at the same time, how to understand their weaknesses and strengths, how to automate all of that. So there's definitely a lot of mental effort going into those things.
You could maybe forget how a certain lib or framework worked, or things like that; more likely, you just wouldn't be up to date with all the new ones. But ultimately code can be represented as just functions with inputs and outputs, and that's all there is to it.
As in how could I possibly forget what loops, conditionals or functions are?
I haven't written code myself for over a year (because AI does it), but I feel like I have forgotten absolutely nothing. In fact, I feel like I have learned more about coding, because I see what patterns AI uses versus what I or other people did, and I am able to witness different patterns either work out or not work out much faster, right in front of my eyes.
Now writing is something totally different. In some cases writing ability is not about writing, it's about your thoughts and understanding of life and human nature.
You could simply become a better writer without writing anything, just by observing.
If you are using an LLM to write, what is the purpose of that? Are you writing news articles or are you writing a story reflecting your observations of human nature with novel insights? In the latter case you couldn't utilize AI in the first place as you'd have to convey what you are trying to say within your own words, as AI would just "average" your prompt or meaning, which takes away from the initial point.
With code, predictability is desired; good writing is supposed to be something unexpectedly insightful. It's completely different.
To become a better X you must do more of X. There are few worthwhile shortcuts.
Although we were discussing the decay of skill in something. While in some things the decay is super clear (as in running: pace, not technique), I think there are many areas with no clear decay, where other activities will actually significantly boost the skill, and where any decay that does occur will be undone in just a few days of practice or remembering.
There are many more ways to evaluate a writer's skill, in terms of what they are doing, than there are for coding. Coding can be creative, but in most cases you are not evaluating coding as writing, unless it's technical writing, which is still different from coding.
So you may remember all your high school math, but not doing it every day means you are slower than some of the students. Your knowledge of programming will be there, but you will be slower, because you no longer have the reflexes that come from doing things over and over.
There are also plenty of things that I have kept for life just by having practiced them as a child. Everyone keeps bicycling, I think, but there are also handstands, walking on hands, etc., which I learned as a kid over a few years, and I can still do them even if I only do them once a year. In my view code is exactly the same, and maybe even more straightforward: it's easier than obscure math, since you don't have to memorize any formulas to solve it easily. Although I think a lot of math is great precisely because you don't have to memorize formulas in the first place; you just have to internalize or figure out the logic or idea behind it, and then you have it. I think repetition in math is specifically the wrong way to go about it: it's about understanding, not repetition.
But if I didn't need those things, and there was a simple pseudolang syntax which acted exactly the same in all versions, didn't have any breaking changes, I would argue I'd be much better at it now.
The internet, search, etc. are needed to understand how to set up libs/frameworks/APIs, but logic itself isn't something that I could possibly forget. AI helps me get those setups done quicker without having to search, but arguably it's all throwaway information that will go out of date and that I really don't need to know. I don't need to know off the top of my head what the perfect modern tsconfig setup looks like, or what the best monorepo framework is and how to set it up so that it scalably supports all the different coding languages for different purposes.
The irony is how difficult it is to read this obviously AI-generated article due to its unnatural prose and choppy flow full of LLM-isms. The ability to write is also a skill that atrophies.
Even when AI is understandably used due to language fluency, I’d prefer to read an AI translation over a generated article.
If you don’t care enough to write it, why should I care enough to read it?
Note: My comment is not specific to this comment. I just wanted to express myself at somewhere and this is where I think it may be suitable.
The only purpose of the written word is to be read.
That’s the problem.
What you read here are bots and those invested in AI and an occasional retired person who uses AI as a crutch.
What’s really happening is that we are all forgetting how to think
The distinction between junior, mid, senior, lead is a facade. It is a soft gradient that spans multiple areas, but is tainted and skewed by the technology du jour.
Technically you don't have to be an employed developer to become a senior developer. It boils down to your personal willingness to learn and invest time building.
What companies seek these days are people with experience of (dysfunctional) organizational structures and of working around the shortcomings of an organization's communication and funding patterns, nothing more.
Does that really make you senior or just politically versed?
The pattern shows up the most whenever failing software pokes holes in perception.
There's the kind that, when given a problem, will jump in, learn what they need to learn to solve the parts they don't fully understand yet, deliver meaningful iterative results, talk to people as needed, keep you posted on their progress, loop in other team members and offer/request help to/from them, take initiative on the obvious missing parts that would benefit the project as a whole, etc.
And then there's the rest.
Within the first few years of someone's career, you can quickly tell which kind they are. It's almost impossible to turn someone from the latter group into the former.
Yes, everything else is a façade. You can be a "senior" developer with 30 years of experience and still be in the latter group. And you can be fresh out of college and be in the former.
Now some people are extremely good at other skills (politics, interpersonal communication, bullshit, whatever you want to call it) and will be able to seem to be in the first group to the people who matter (managers, execs, etc) while actually being in the second group. But then we're not talking about actual software-making skills anymore.
You can also totally be in the first group and be underpaid, never promoted, etc. There's little correlation with actual career success.
This is depressing and seems right. And yet this is something I desperately want to be ignorant of. I don’t want to peel apart my brain for anyone. Working within these kinds of problems is pure pain.
That's incredibly unlikely. Do you need to be an employed surgeon to become a senior (or whatever they call it) surgeon??
I very much doubt you can be senior without having actually spent years doing it professionally. The experience is everything; no book will give you the sort of understanding you need. That's unfortunately human nature: we are not capable of learning and internalizing things simply by reading or watching others do them. We absolutely need to do things ourselves to truly learn. Didactic books always have exercises for this reason.
You can learn facts and techniques from books, obviously. But it's not the case that just because you've read a book about Michelin restaurants you can now be a Michelin chef.
That is, and has always been, true. Currently, however, the narrative that is sold (and unfortunately accepted by so many of the senior developers who post here) is that the experience of telling someone else to do something is just as valuable.
There’s plenty of people in this world who are expert programmers without following any traditional path.
“Oh yeah, like who”, you say.
Con Kolivas, an anaesthetist, worked on kernel schedulers, including the Staircase Deadline (RSDL) scheduler, which was a precursor to the Completely Fair Scheduler in Linux, as well as the Brain Fuck Scheduler and the ck patchset.
AI code generators are trolls. They confidently produce plausible content which is partly wrong. Then humans try to find their errors.
This is not fun. It has no flow.
I can do that too. Most programmers can.
That's because it requires less skill! Critiquing something is always easier than doing it.
I can literally keep an LLM fixing things forever by just saying things like "This is not scalable", or "this is not maintainable", or "this is not flexible" or "this is not robust", ... etc ad nausem.
That doesn't take skill at the level needed to actually write the software. For the market hoping to switch to mostly LLM coding, the prize being eyed is skill devaluation, not just, as many think, productivity gains.
They have no reason to double output, but they'd sure love to first halve the number of people employed, then halve the salaries of those who remain (supply/demand plus a glut of programmers on the market), and then halve salaries again because almost no skill is necessary...
No, it was always the other way around. Mediocre programmers always wanted to rewrite everything because reading and understanding an existing codebase was always harder than writing some greenfield thing with a “modern language” or “modern libraries” or “modern idioms.” So they’d go and do that and end up with 100x the bugs.
You are comparing writing something with rewriting something. You don't know what the difference is?
There is a very valid reason why the creator of Erlang said, back in the day, something along the lines of "you need to iteratively remake your software, improving it each time".
As your knowledge about a topic grows, your initial mistaken implementation may become more and more obvious, and it may even mean a full rewrite.
But yes, a person who instantly says "rewrite" before they understand the software is likely very inexperienced, and has probably only worked on greenfield projects with few contributors (likely only themselves) before.
I find the real way to review other people's code is to program with it; then I start seeing where the problems are much more clearly. I would do a review and spot nothing important, then start working on my own follow-on change and immediately run into issues.
This is a whole different discussion, but I just see it as part of the job that I'm getting paid for, I don't need to enjoy it to do it.
Functional testing is a must now that writing tests is also being automated away by LLMs, as it gives you a better sense of whether the code does what it says on the box. But there will still be a lot of hidden gotchas if you're not even looking at the code.
Plenty of LLM-written code runs excellently until it doesn't, though we see this with human-written code too; so it's more about investing more time in the hope of spotting problems before they become problems.
Well, there you go. Letting AI write the tests is a mistake IMO. When I'm working with other people, I write tests too, and when I see their tests I know what they're missing, because I know the system and the existing tests. Sometimes I see the problem in their tests while I'm working on some of my own. If you absent yourself from that process then ....
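As a contrived Python sketch (the helper and the business rule are invented, not from this thread), this is the kind of test that encodes system knowledge rather than restating the code:

    # Hypothetical example: the edge case matters because a human remembers
    # the business rule, not because it is visible in the code under test.

    def billable_days(start_day, end_day):
        """Invented helper: days to bill, end day exclusive per contract."""
        return max(0, end_day - start_day)

    def test_same_day_cancellation_is_free():
        # Assumed domain rule: cancelling on the start day bills zero days.
        # A test generator that only reads the code has no reason to single
        # this case out; someone who knows the contract does.
        assert billable_days(start_day=10, end_day=10) == 0

    def test_never_bills_negative_days():
        assert billable_days(start_day=12, end_day=10) == 0

A generated suite will happily cover the lines; it's the choice of which cases matter that comes from knowing the system.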
I think it becomes a chore when there are too many trivial mistakes, and you feel like your time would have been better spent writing it yourself. As models and agent frameworks improve I see this happening less and less.
Most people don't spend nearly enough time going through a code review. They certainly don't think as hard as needed to question the implementation or come up with all the edge cases. It's active vs passive thinking.
I, for one, have found numerous issues in other people's code that makes me wonder, "would they have ever made such a mistake if they hand coded this?"
Btw, a side effect is that nobody really understands the codebase. People just leave it to AI to explain what the code does. Which is of course helpful for onboarding, but concerning for complex issues or long-term maintenance.
I would not be surprised if many open source projects will outright stop taking PRs. I have had the same feeling several times - if I'm communicating with an LLM through the GitHub PR interface, I'd rather just directly talk to an LLM myself.
But ending PRs is going to be painful for acquiring new contributors and training more junior people. Hopefully the tooling will evolve. E.g. I'd love to have a system where someone has to open an issue with a plan first, and by approving it you could give them a "ticket" to open a single PR for that issue. Though I would be surprised if GitHub and others created features that are essentially there to rein in Copilot etc.
>In defense, the substitute was the peace dividend. In software, it’s AI.
Before it was AI, the cheaper alternative was remote contract dev teams in Eastern Europe, right?
Also, over here east of 15°E, we were fired all the same.
I believe the plan is to quite simply "do less overall unless it's about AI", but everyone was waiting for others to start layoffs first.
I spent six months working part time and the decision makers made it clear that this is preferable for them long term. Beats getting fired, but I couldn't sustain this lifestyle - I'm frugal but not that frugal.
They really, really do not want to spend money. Especially not on Americans and their health insurance.
It's really strange how we're just letting them get away with this. They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.
Choosing to pay less is what almost all people do, and it is consistent with almost all of human history.
> They're on a fast trajectory toward putting Americans completely out of work and without aid, even though they're American companies first and foremost.
When push comes to shove, i.e. paying lower prices to consume more goods and services or paying higher prices to ensure your countrymen can buy more goods and services, almost everyone will choose to pay lower prices. See political unpopularity of sufficient tariffs to stop imports.
“American” is a nebulous term, and Americans had been choosing lower prices for many decades before the current crop of employees at the global big tech companies chose lower prices. It is no different than when someone picks up lower-priced workers waiting outside Home Depot, who are there because they do not have legal work authorization in the US.
What does make a difference is the company they work for. Large hourly "body shops" give you coders whose quality tends to be lower, regardless of whether we are talking about an Indian firm or an American firm. Direct hires of independent individuals tend to be higher quality. But there is always individual variation.
You see people from India more, sure. There are more of them. Over a billion of them, to be precise. Anyone who dismisses a billion people as "always the same" is not being clever, they are being racist. And you know that, otherwise you wouldn't have pre-empted this response with "everyone who is ready to accept it."
Say that there are communication gaps to overcome. Say there are cultural differences. Say that those cultural differences change the assumed business expectations and the mechanisms by which people express their thoughts and opinions. Those things are all true. My recommendation to anyone who has an urge to dismiss an entire population is to instead get to know them: Step up and learn how your teammates think and work. It will make for a better team, better communication, and better results.
Yeah. Companies didn't want to train new employees any more as that costs money (both for paying the trainees and the teachers) so they shifted to requiring academic degrees. That in turn shifted the cost to students (via student loans) and governments.
People call it a red flag for scams if you are supposed to pay your employer for training or whatever as a condition of getting employed... but the degree mill system is conveniently ignored.
No lender would have been stupid enough to give 18 to 22 year olds $200k for bullshit degrees and sports facilities.
The onus would have remained on employers and government to pay for education, rather than a certification, because they would have been the ones paying.
My current pet peeve is using a period instead of a comma, as in:
> My people lived the other side of this equation. Not the factory floor. The receiving end.
Ostensibly this is supposed to add gravitas, but it's very often done in places where that gravitas isn't needed, and it comes off as if I'm reading the script for an action movie trailer.
Quite paradoxical: when it's a person's native language we can spot it a mile away, yet there's no shortage of engineers who claim how good the code output is.
Whatever the reason for the default tone of AI English, it's still there when generating code. It makes me think that the senior engineers who claim it produces awesome output just don't understand the specific programming language the way someone who thinks in it almost natively does.
The text has few of the obvious AI tells. The only thing that, to me, looks characteristic of LLM-generated text is the short and terse sentence structure, but this has been a "prestigious" way to write in English since Hemingway.
This article is clearly LLM-generated, even the title. A key indicator is that it almost makes sense: we forgot how to manufacture because that got sent to a different nation. The coding thing isn’t getting sent anywhere, so humanity is forgetting how to code. The distinction undermines a lot of the emotional baggage about offshoring that the article wants you to bring along.
The most obvious patterns here are: antithesis constructions; word choices and their distribution; an attempt at profundity in every paragraph that instead yields runs of text that don't say anything; and even the perfect use of compound hyphenation. I can see, and appreciate, that there was definitely an attempt at personalization and guidance to make it less LLM-y and not just a default prompt, but it's still kind of obvious. You could use a detector tool too, of course.
Hemingway writes simple sentences with a kind of detachment to make the emotional flow of his stories as transparent as possible.
LLM slop reads more like slide bullet points extrapolated to prose-length text
Find some pre-2020 examples that are, and you'd have a point.