Posted by napolux 1/11/2026
Agreed but it's not an easy charge to fulfill.
In today's corporate environment, 70% of the costs are in management and admin muddlement. Do we really think these "people skills" translate into anything useful in an AI economy?
The junior devs have far more hope.
Middle managers output exactly what LLMs do: chats, documents, summaries. Particularly when working remotely. They don't even generate tickets/requirements – that's pushed to engineers and product people.
Now it's expecting senior engineers to "orchestrate" 10 coding agents; before, it was expecting them to orchestrate 10 cheap developers on the other side of the world. Then the reckoning came: those offshore developers realised that if they produced code as good as that of a "1st world" engineer, they could ask for a similar salary too, and the offshoring clients who didn't want to pay up were left with the contractors who weren't good enough to demand it. This time, it will be agent pricing approaching the true costs. Both times, the breaking point is when managers realise that writing code was never the bottleneck in the first place.
In my opinion, we always needed to be versatile to stand any chance of being comfortable in these insanely rapidly changing times.
There's an implicit assumption in the article that the coding models are here to stay in development. It's possible that assumption is incorrect for multiple reasons.
Maybe (as some research indicates) the models are as good as they are going to get. They're always going to be a cross between a chipper stochastic parrot and that ego-inflated junior dev who refuses to admit a mistake. Maybe when the real (non-subsidized) economics present themselves, the benefit isn't there.
Perhaps the industry segments itself to a degree. There's a big difference in tolerance for errors between a cat fart app and a nuclear cooling system. I can see a role for certified 100% AI-free development. Maybe vibe coders go in one direction, with lower-quality output but rapid time to market, while a segment of more highly skilled developers focuses on AI-free development.
I also think it's possible that over time the AI hyper-productivity stuff is revealed to be mostly a mirage. My personal experience and a few studies seem to indicate this. The purported productivity boost is a result of confirmation bias and ridiculous metrics (like LOC generated) that have little to do with actual value creation. When the mirage fades, companies realize they are stuck with heaps of AI slop and no technical talent able to deal with it. A bitter lesson indeed.
Since we're reading tea leaves, I think the most likely outcome is that the massive central models for code generation fade due to enormous costs and increased endpoint device capabilities. The past 50 years have shown us clearly that computing always distributes: centralized mainframe-style compute gets pushed down to powerful local devices.
I think it settles at an improved intellisense running locally. The real value of the "better search engine" that LLMs hold today diminishes as hard economics drive up subscription fees and sponsors manipulate content (the same thing that happened to Google search results).
For end users, I think the models get shoved into a box to do things they're really good at, like giving a much more intuitive human-computer interface, but the structured data from that is handed off to a human developer to reason about. MCP will expand and become the glue.
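To make that handoff concrete, here's a minimal sketch assuming the official MCP Python SDK's FastMCP interface; the `file_bug_report` tool and its fields are made up purely for illustration:

```python
# Minimal sketch of the structured handoff: an MCP server exposing one tool.
# Assumes the official MCP Python SDK ("pip install mcp"); the tool name and
# its fields are hypothetical, chosen only to illustrate the shape of the data.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("handoff-demo")

@mcp.tool()
def file_bug_report(title: str, severity: int, steps: str) -> dict:
    """Turn a fuzzy natural-language complaint into a structured ticket."""
    # The model fills in these arguments from conversation; the structured
    # result is what a human developer (or downstream system) reasons about.
    return {"title": title, "severity": severity, "steps": steps}

if __name__ == "__main__":
    mcp.run()
```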
I think that over time market forces will balance between AI- and human-created content, with a premium placed on the latter. McDonald's vs. a 5-star steakhouse.
I'd put my money on this. From my understanding of LLMs, they are basically mashing words together via Markov chains, with a little bit of subject classification added via attention, a little bit of short-term memory, and enough grammar to lay things out correctly. They don't understand anything they are saying, they are not learning facts and trying to build connections between them, and they are not learning from their conversations with people. They aren't even running the equivalent of a game loop where they can think about things. I would expect something we're trying to call an AI to call you up sometimes and ask you questions. Trillions of dollars have got us this far; how much further can they actually take us?
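For reference, here's a toy Python sketch of the word-mashing-via-Markov-chains baseline the comment describes (real LLMs replace this count table with learned attention over long contexts, so this is only the starting point of the analogy):

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: pick the next word based only on the
# current word, weighted by how often each pair appeared in the corpus.
def train(text: str) -> dict:
    chain = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)  # duplicate entries act as frequency weights
    return chain

def generate(chain: dict, start: str, length: int = 10) -> str:
    out = [start]
    for _ in range(length - 1):
        choices = chain.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))  # no memory beyond one word back
    return " ".join(out)

corpus = "the model writes code the model writes docs the model writes summaries"
print(generate(train(corpus), "the"))
```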
I want my actual AI personal assistant that I have to somehow coerce into doing something for me, like an emo teen.
A few key fallacies at play here.
- Assuming a closed world: we'll do the same amount of work but with fewer people. This has never been true. As soon as you meaningfully drop the price of a unit of software (pick your favorite), demand goes up and we'll need more of them. It also opens the door to building software that would previously have been too expensive. That's why the number of software engineers has consistently increased over the years, despite a lot of stuff getting a lot easier over time.
- Assuming the type of work always stays the same. This too has never been true. Stuff changes over time. New tools, new frameworks, new types of software, new jobs to do. And the old ones fade away. Being a software engineer is a life of learning. Very few of us get to do the same things for decades on end.
- Assuming people know what to ask for. AIs do as you ask, which isn't necessarily what you want. The quality of what you get correlates strongly with your ability to ask for it. The notion that you get a coherent bit of software in response to poorly articulated, incoherent prompts is about as realistic as getting a customer to produce coherent requirements. That never happened either. Converting customer wishes into maintainable/valuable software is still a bit of a dark art.
The bottom line: many companies don't have a lot of in house software development capacity or competence. AI doesn't really help these companies to fix that in exactly the same way that Visual Basic didn't magically turn them into software driven companies either. They'll use third party companies to get the software they need because they lack the in house competence to even ask for the right things.
Lowering the cost just means they'll raise the ambition level and ask for more/better software. The companies that deliver it will be staffed with people using AI tools to build this stuff for them. You might call these people software engineers. Demand for senior SEs will go through the roof because they deliver the best AI-generated software: they know what good software looks like and what to ask for. That creates a lot of room for enterprising juniors to skill up and join the club because, as ever, there simply aren't enough seniors around. And thanks to AI, skilling up is easier than ever.
The distinction between junior and senior was always fairly shallow. I know people who got labeled senior in their twenties, barely out of college, maybe on their second or third job. It was always a bit of a vanity title that, because of the high demand for SEs of any kind, got awarded early. AI changes nothing here. It just creates more opportunities for people to use tools to work themselves up to senior level quicker. And of course there are lots of examples of smart young people who managed to code pretty significant things and create successful startups. If you are ambitious, now is a good time to be alive.
You say that belongs in a trade school? I might agree, if you think trade schools and not universities should teach electrical engineering, mechanical engineering, and chemical engineering.
But if chemical engineering belongs at a university, so does software engineering.
Glad I did CS, since SE looked like it consisted mostly of group projects writing 40 pages of UML diagrams before implementing a CRUD app.
The bigger problem when I was there was undergrads (me very much included) not understanding the difference at all when signing up.
To be clear, I did not go into CS. But I do live in this world.
DevOps isn't even a thing; it's just a philosophy for doing ops. Ops is mostly state management, observability, and designing resilient systems, and we learned about those in 1987 too. Admittedly there has been a lot of progress in distributed systems theory since then, but a CS degree is still where you'll find it.
School is typically the only time in your life that you’ll have the luxury of focusing on learning the fundamentals full time. After that, it’s a lot slower and has to be fit into the gaps.
For example, the primitives of cloud computing are largely explained by papers published by Amazon, Google, and others in the mid-'00s (Dynamo, Bigtable, etc.). If you want to explore massively parallel computation or container orchestration, it would be natural to do that using a public cloud, although of course many of the platform-specific details are incidental.
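To give a flavour of those primitives, here's a toy consistent-hash ring in Python, the partitioning idea from the Dynamo paper (a sketch only; the real design layers virtual nodes and replication on top):

```python
import bisect
import hashlib

# Toy consistent-hash ring: map each node to a point on a hash circle,
# then assign each key to the first node clockwise from its own hash.
class Ring:
    def __init__(self, nodes):
        points = sorted((self._hash(n), n) for n in nodes)
        self._hashes = [h for h, _ in points]
        self._nodes = [n for _, n in points]

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def owner(self, key: str) -> str:
        # First node at or after the key's position, wrapping around the ring.
        i = bisect.bisect(self._hashes, self._hash(key)) % len(self._nodes)
        return self._nodes[i]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.owner("user:42"))  # removing a node remaps only its neighbours' keys
```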
Part of the story here is that the scale of computing has expanded enormously. The DB class I took in grad school was missing lots of interesting puzzle pieces around replication, consistency, storage formats, etc. There was a heavy focus on relational algebra and normal forms, which is just... far from a complete treatment of the necessary topics.
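One of those missing replication pieces is plain arithmetic: with N replicas, reads from R of them, and writes acknowledged by W of them, R + W > N guarantees every read quorum overlaps the most recent write quorum. A minimal sketch:

```python
# Quorum overlap rule: with N replicas, reads of size R and writes of size W,
# R + W > N means any read quorum must intersect the latest write quorum,
# so a read always sees at least one up-to-date copy.
def quorums_overlap(n: int, r: int, w: int) -> bool:
    return r + w > n

print(quorums_overlap(3, 2, 2))  # True: the classic N=3, R=W=2 configuration
print(quorums_overlap(3, 1, 1))  # False: fast, but only eventually consistent
```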
We need to extend our curricula beyond the theory required to execute binaries on individual desktops.
However, in the decades since these curricula were established, it's clear that the foundation has expanded. Understanding how containerization works, how k8s and friends work, etc., is just as important today.
Take doctors, for example: you learn a bit of everything, but then if you want to specialise, you choose one.