
Posted by dinvlad 3 days ago

The Eternal Promise: A History of Attempts to Eliminate Programmers(www.ivanturkovic.com)
222 points | 156 comments
antonvs 9 hours ago|
This topic always reminds me of "The Last One", https://en.wikipedia.org/wiki/The_Last_One_(software) :

> "The name derived from the idea that The Last One was the last program that would ever need writing, as it could be used to generate all subsequent software."

That was released in 1981. Spoiler alert: it was not, in fact, the last one.

miljanm 7 hours ago||
what's wrong with eliminating programmers?
pixelsort 9 hours ago||
> There is every reason to believe that those who invest in deep understanding will continue to be valuable, regardless of what tools emerge.

I don't take issue with this, except that it's a false comfort when you consider that demand will naturally ebb and individual workload will naturally escalate. In that light, I find it downright dishonest, because the rewards for attaining deep knowledge will continue to evaporate, necessitating AI assistance.

The reason it is different this time around is that the capabilities of LLMs have incentivized the professional class to betray the institutions that enabled their specializations. I am talking about the amazing minds at Adobe, Figma, and the FAANGs who are bridging agentic reasoners and diffusion models with the domain-specific needs of their respective professional users.

Humans are a class of beings, and the humans accelerating the advance of AI in creative tools are the reason that things are different this time. We have class traitors among us, and they're "just doing their jobs". For most, willful disbelief isn't even a factor. They think they're helping, while each PR just brings them closer to unemployment.

nz 9 hours ago||
Most of these "class traitors" live in high cost of living areas, and for them, the choice is "become unemployed within two weeks for not complying", or "become unemployed within a few years for complying". They are being betrayed by the shareholder class, and they in turn are betraying their customers and their species.

The only thing that we can do is to not make it worth their time in the long run. Don't let greed and fear slide. Don't hate someone for choosing their family and comfort over your own, hate the system that forces them to make that choice. Hold them accountable, but attack the system, instead of its hostages and victims.

pixelsort 5 hours ago||
The level of compliance and enthusiasm varies. Some believe they are making the world a better place. Some feel they're adding value but suspect they are trapped within a cycle they refuse to examine. Some are more connected to the truth, and comply willingly but resentfully.

Where you fall depends on where you work and what you work on.

You make a great point about the chain of accountability. But, in my opinion, working professionals are the only agents in the system with the potential to realize their own culpability and change course.

Perhaps, it isn't fair to point to them and call them traitors. Still, they are the only ones with enough agency to potentially organize and collectively push for the kind of ethics that could save us all.

zozbot234 8 hours ago||
Bridging software with domain-specific needs of its professional users is nothing new: that is how domain-specific professional software gets built. What is new is that the people doing this are being referred to hysterically as "class traitors", when the improvements they're working on will bring massive and widely available benefits to professionals the world over.
pixelsort 5 hours ago||
While the desire is not new, advancements in LLMs and diffusion models have made this sort of bridging effective and attractive to an unprecedented degree.

Those massive and widely available benefits will continue to deflate the value of human intelligence until even most of the innovators currently working on them lose their seats at the table too.

nsjdjdkdz 13 hours ago||
[flagged]
prsheetraj 13 hours ago|
Same phenomenon noticed here at IBM Mumbai, sir.
bananaflag 16 hours ago||
Yeah but this time it's for real.

All the other attempts failed because they were just mindless conversions of formal languages to formal languages. Basically glorified compilers. Either the formal language wasn't capable enough to express all situations, or it was capable and thus it was as complex as the one thing it was designed to replace.

AI is different. You tell it in natural language, which can be ambiguous and not cover all the bases. And people are familiar with natural language. And it can fill in the missing details and disambiguate the others.

This has been known to be possible for decades, since (simplifying a bit) the non-technical manager can tell the engineer in natural, ambiguous language what to do, and the engineer will do it. Now the AI takes the place of the engineer.

Also, I personally never believed before AI that programming will disappear, so the argument that "this has been hyped before" doesn't touch my soul.

I have no idea why this is so hard to understand. I'd like people to reply to me in addition to downvoting.

danhau 15 hours ago||
Programmers have enjoyed an occupation with solid stability and growing opportunities. AI challenging this virtually overnight is a tough pill to swallow. Naturally, many subscribe to the hope that it will fail.

How far AI will succeed in replacing programmers remains to be seen. Personally I think many jobs will disappear, especially in the largest domains (web). But I think this will only be a fraction and not a majority. For now, AI is simply most useful when paired with a programmer.

aleph_minus_one 13 hours ago|||
> Programmers have enjoyed an occupation with solid stability and growing opportunities.

This is not the case:

- Before the 90s, programming was rather a job for people who were insanely passionate about technology, and working as a programmer was not that well-regarded (so no "growing opportunities").

- After the burst of the first dotcom bubble, a lot of programmers were unemployed.

- Every older programmer can tell you how fast their skills can become, and have become, irrelevant.

Over the last decade, the stability and opportunities for programmers were more like a series of boom-bust cycles.

danhau 4 hours ago|||
Thanks for chiming in. I appreciate your comments on my young views.

What do you make of AI?

aleph_minus_one 3 hours ago||
> What do you make of AI?

Let me put it this way: I do have my opinion on this topic, but this whole topic is insanely multi-faceted, and some claims that I am rather certain about are more at the boundary of the Overton window of HN, so I won't post it here.

But the article which the whole discussion is about

> https://www.ivanturkovic.com/2026/01/22/history-software-sim...

offers, in my opinion, a rather balanced perspective regarding using AI for coding (which does not mean that this article is close to my opinion).

I will just give some less controversial thoughts and advice concerning AI:

- A huge problem when discussing AI is that the whole topic is a hodgepodge of various very diverse topics.

- The (current) AI industry has invested a lot of marketing efforts to re-define what AI stood for in the past (it basically convinced the mass of people that "AI = what we are offering")

- I cannot say whether AI will be capable of replacing lots of people in office jobs or not (I have serious doubts). Media loves to disseminate this topic, but in my opinion it does not really matter: the agenda is rather to spread fear among employees to make them more obedient.

- Even if AI turns out to be capable of replacing only a few office workers (a scenario that I rather believe in), it does not mean that management will not use "AI"/"replace by AI" as a very convenient excuse to get rid of lots of employees. The dismissed workers will then mostly vent their spleen on the AI companies instead of the management; in other words: AI is a very convenient scapegoat for inconvenient management decisions. And yes, I consider it possible that some event that leads to mass layoffs might happen in a few years (but this is speculative).

- While I cannot say how much quality improvement is possible for current AI models (i.e. I don't know whether there exists a technological barrier), the signs are clear that as of today AI companies have hit some soft "cost barriers". I don't know whether these are easily solvable or not, but be aware of their existence.

- So, my advice is: if an AI model is of use for some project that you have (e.g. generating graphics/content for your web platform; using it as a tool for developing the next scientific breakthrough; ...), do it now. Don't assume that the models will do this nearly for free for you in the future (it may be that this stays possible, but be cautious).

aleph_minus_one 11 hours ago|||
Correction: "Over the last decade" -> "Over the last decades [plural]".
cafebabbe 14 hours ago||||
AI is useful when paired with an experienced programmer.

Experienced through old-school (pre-LLM) practice.

I don't clearly see a good endgame for this.

duggan 14 hours ago|||
Motivated novices will just learn differently, and produce different kinds of systems for different audiences with different expectations.

Some will dig into obscurities that LLMs don't or can't touch, others will orchestrate the tools, Gastown-style, into some as-yet-unknown form.

People will vibe themselves into a corner and either start learning or flame out.

citrin_ru 13 hours ago|||
The endgame is to produce AI which will not need any supervision by the time the current generation of experienced developers retires, or even sooner. I don't know if it will happen, but many bet on this, and models are still improving; flattening is not yet seen.
ajshahH 12 hours ago||
This implies programming is done and there will be no other advancements.

And flattening is being seen, no? Recent advancements are mostly from RL’ing, which has limitations (and tradeoffs) too. Are there more tricks after that?

Verdex 11 hours ago||
Yeah, even the AI CEOs are admitting that training scaling is over. They claim that we can keep the party going with post training scaling, which I personally find hard to believe but I'm not really up to speed on those techs.

I mean, maybe you can just keep an eye on what people are using the tools for and then monkey patch your way to sufficiently agi. I'll believe it when we're all begging outside the data centers for bread.

[Based on other history of science and technology advancements since the stone ages, I would place agi at 200-500 years out at least. You have to wait decades after a new toy is released for everyone to realize everything they knew was wrong and then the academics get to work then everyone gets complacent then new accidental discovery produces a new toy etc.]

Tanjreeve 3 hours ago|||
For a brief blip in time over the last few years, it was possible to jump from a code camp to a decent-paying job and vaguely disappear for a while, like Milton from Office Space. The current period, brought on by a bad economy, is more of a reversion to the mean.
t_mahmood 13 hours ago|||
A manager is not going to handle all the nitty-gritty details that an engineer knows. Fine, say they can ask an LLM to make a web portal.

Does he know about SQL injection? XSS?

Maybe he knows a little about security and asks the LLM to make a secure site with all the protection needed. But how does the manager know it works at all? If you only discover there's an issue with a critical part of the software after your users' data is stolen, how bad is the fallout going to be?
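To make the SQL injection point concrete, here's a minimal sketch (hypothetical `users` table and payload, using Python's sqlite3) of the kind of bug a non-engineer would never spot in generated code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: string interpolation lets the payload rewrite the query,
# so the WHERE clause becomes: name = '' OR '1'='1'  (matches every row)
vulnerable = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query treats the payload as plain data
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print(len(vulnerable))  # 1 -- the injection matched the whole table
print(len(safe))        # 0 -- no user is literally named "' OR '1'='1"
```

Both versions look equally "working" in a happy-path demo; only someone who knows to try hostile input sees the difference.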

How good a tool is also depends on who's using it. Managers are obviously not engineers, unless they were engineers before becoming managers, but you are saying engineers are not needed. So where is the engineering-minded manager going to come from? I'm sure we're not growing them on engineering trees.

edgyquant 11 hours ago|||
There are already companies that exist to audit the security of codebases programmatically so this will just be part of the flow
skydhash 13 hours ago|||
It's like saying "I want a bridge" and then expecting steel beams and cables (or planks and ropes) to appear, and that's all you need. The user's needs are usually clear enough (they need a way to cross that body of water or that chasm), but the how is the real catch.

In the real world, the materials are visible, so people have a partial understanding of how it gets done. But most of the software world is invisible and has no material constraints other than the hardware (you can't use RAM that isn't there). If the hardware is like a blank canvas, a standard web framework is like a paint-by-numbers book (but one with the lines drawn in pencil, so you can erase them easily). Asking the user to code with an LLM is like asking a blind person to paint the Mona Lisa with a brick.

ajshahH 12 hours ago|||
> And it can fill in the missing details and disambiguate the others.

Are you suggesting “And Claude, make no mistakes” works?

Because otherwise you need an expert operating the thing. Yes, it can answer questions, but you need to know what exactly to ask.

> This has been known to be possible for decades, as (simplifying a bit) the (non-technical) manager can order the engineer in natural, ambiguous language what to do and they will do it

I have yet to see vibe coding work like this. Even expert devs with LLMs get incorrect output. Anytime you have to correct your prompt, that's where your argument fails.

mexicocitinluez 12 hours ago||
I truly believe that people who see entire, non-trivial applications being built without serious human intervention have not, in fact, worked on non-trivial applications.

And while these tools can be invaluable in some cases, I still don't know how we get from "Hazy requirements where the user doesn't know what they even want" to "Production-ready apps built at the finger-tips of the PM".

Another really important detail people keep missing is that we have to make thousands of micro-decisions along the way to build up a cohesive experience for the user. LLMs haven't really shown they're great at not building assumptions into code. In fact, they're really bad at it.

Lastly, do people not realize how easy it is to convince an LLM of something that isn't true, or vice versa? I love these tools, but even I find myself trying to steer them in the direction that makes sense to me, not the direction that makes sense generally.

mexicocitinluez 12 hours ago|||
> All the other attempts failed because they were just mindless conversions of formal languages to formal languages.

This is just categorically false.

No-code tools didn't fail because they were "mindless conversions of formal languages to formal languages". They failed because the people who were supposed to benefit the most (non-developers) neither had the time nor desire to build stuff in the first place.

quotemstr 14 hours ago|||
The thing about talking to computers is less the formality and more the specificity. People don't know what they want. To use an LLM effectively, you need to think about what you want with enough clarity to ask for it and check that you're getting it. That LLMs accept your wishes in the form of natural language instead of something with a LALR(1) grammar doesn't magically obviate the need for specificity and clarity in communication.
medi8r 11 hours ago|||
There are a lot of people who can't program but can do specificity. Researchers and lawyers, for a start. It does widen the pool, and there might be surprising people who never coded who can now build. Maybe people previously dismissed as not academic, or "blue collar".

Paradoxically, this may mean there are more jobs for programmers and programmer-likes alike as new cottage industries are born. AI for dentists is coming.

bananaflag 14 hours ago|||
Agree that one needs clarity, but how does that differ from my example with the manager and the engineer? The manager also (ideally) learns over time that when they are clearer, the engineer does the work better.
elasticeel 13 hours ago|||
Do they, though? Or do they learn that having a good engineer means they can assign ambiguous tasks, and the software developer can reason through good decision-making and follow up with clarifying questions?

LLMs need to get better at asking clarifying questions and at flagging that the initial solution might not work. Even when they get better at that, this article's point is that managers not capable of thinking through the answers well enough will fall short, and this is the space that developers live in.

skydhash 13 hours ago|||
TL;DR: Clarity in software engineering means detailing all the constraints, which no user (apart from lawyers and engineers) usually does, as the real world has constraints that software does not.

The hardware offers so few guarantees that the whole job of the OS is to provide them. All layers are formal, but usefulness doesn't come from that. Usefulness comes from a consistent model that embodies a domain. So you have the hardware, which has capabilities but no model. Then you add the OS kernel, which imposes a model on the hardware; then the system libraries, which further restrict it to certain domains. Then you have the general libraries, which are more useful because they present another perspective. And then you have the application that uses this last model according to a certain need.

A good example is that you go from the sound card, to the sound subsystem, to the ALSA libraries, to PipeWire, to an audio player or a media framework like the one in the browser. Dozens of engineers have contributed to this particular tower, and most developers only deal with the last layers, but the lesson is that the user's perspective differs from the building blocks we have in hand. Software engineering is about reconciling the two.

So people may know how things should look or behave on their end, but they have no idea what the building blocks are on the other end. It's all abstract. The only things real are the hardware and the energy powering it. Everything else needs to be specified with code. And in the world that forms the middle layer, there are a lot of rules to follow to make something good, but few laws that prevent something bad. It's not like physical engineering, where there are things you just cannot do.

Just as on a canvas you can draw anything as long as it's inside the boundary of the canvas, you can do anything in software as long as it's inside the boundary of the hardware. The OS on personal computers adds a few more restrictions, but not many. It's basically Fantasia in there.

empath75 13 hours ago||
I spent the last two weeks at work building a whole system to deploy automated Claude Code agents in response to events, and even before I finished it was already doing useful work. Now it automatically handles Jira tickets and makes PRs.
Havoc 15 hours ago|
Reviewing history is not a great way to approach groundbreaking tech.
elcapitan 15 hours ago||
"Not learning from history because the present is the present" is a pretty accurate description of the world in 2026, at least.
g947o 12 hours ago|||
You are not going to stop people from reading into history, ever. If anything, people need to learn more about what happened in the past.
forgetfreeman 15 hours ago||
We have yet to invent groundbreaking tech that transcends either human nature or the banal depravity that stems from the profit motive at scale. The prior history of major tech innovations therefore may have some insight to offer regarding the expected outcomes of the current hype wave around AI. The notion that technology so cleanly breaks from underlying social paradigms as to be wholly unpredictable is one of the tech industry's most persistently naive and destructive mythologies.