Posted by gearnode 1/12/2026
I can understand the source of concern but I wouldn’t expect innovation to stop. The world isn’t going to pause because of a knowledge cutoff date.
I really like Zig; I wish it had appeared several years earlier. But rewriting everything in Zig may soon no longer make practical sense.
However, there is still a strong argument to be made for the protections and safety guarantees that languages can provide.
e.g. would you expect a model (assuming it had the same expertise in each language) to make more mistakes in ASM, C, Zig, or Rust?
I imagine most would agree that ASM/C would likely have the most mistakes, simply because fewer constraints are enforced the closer you get to the metal.
So, while we might not care about how easy it is for a human to read/write, there will still be a purpose for innovation in programming languages. But those innovations, IMO, will be more focused on how to make languages easier for AI.
"assuming it had the same expertise in each language" is the most important part here, because AI's expertise with these languages varies a lot. And, honestly, I'd bet on C here: its code base is the largest, the language itself is the easiest to reason about, and we have a lot of excellent tooling that helps mitigate where it falls short.
> I imagine most would agree that ASM/C would be likely to have the most mistakes simply because fewer constraints are enforced as you go closer to the metal.
We need these constraints because we can't reliably track all the necessary details. But AI might be much more capable (read: scalable) at that, so all the complexity we have to accumulate in a programming language might be something it simply knows by virtue of how it's built.
> "assuming it had the same expertise in each language" is the most important part here, because the expertise of AI with these languages is very different.
You are correct, but I am trying to illustrate that, assuming some ideal system with equal expertise, languages with more safety would win out over those with less on productivity and bug counts.
That is to say, it could be worth investing further in safer programming languages because AI would benefit from them.
> We need these constraints because we can't reliably track all the necessary details.
AI cannot reliably track the details either (yet, though I am sure it can be done). Even if it could, it would be a complete waste of resources (tokens).
Why have an AI determine the type of a variable when it could be done in a deterministic manner with a compiler or linter?
To me these arguments closely mirror the arguments about statically vs. dynamically typed languages for human programmers. Static type systems eliminate certain kinds of errors and can produce higher-quality programs. AI systems will benefit in the same way, if not more, by getting instant feedback on the validity of their programs.
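The deterministic-check point can be sketched with a toy example. This is a hypothetical illustration (the `Lit`/`Add`/`infer` names are invented for the sketch, and the strict no-mixing rule is a deliberate simplification, not how real languages behave): a few lines of Python that infer an expression's type by structural rules alone, giving the same answer every run with zero tokens spent.

```python
# Toy deterministic type checker for a tiny expression language
# (int/float literals plus addition). Purely illustrative names.
from dataclasses import dataclass
from typing import Union

@dataclass
class Lit:
    value: Union[int, float]

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

Expr = Union[Lit, Add]

def infer(e: Expr) -> type:
    """Return the expression's type, raising TypeError on a mismatch."""
    if isinstance(e, Lit):
        return type(e.value)
    lt, rt = infer(e.left), infer(e.right)
    if lt is not rt:
        # Strict toy rule: no implicit int/float mixing.
        raise TypeError(f"cannot add {lt.__name__} and {rt.__name__}")
    return lt

print(infer(Add(Lit(1), Lit(2))).__name__)  # prints "int"
```

A compiler or linter does exactly this kind of mechanical bookkeeping, which is why it seems wasteful to burn model inference on it.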
The thing about programming languages is that, for both their creators and their advocates, a significant part of the motivation driving them is emotional, not rational necessity alone. Learning a new programming language along with its ecosystem is an investment of time and effort, something our brains mark as important and therefore worth protecting (I'm looking at Rust). Once AI is writing all the code, that emotional part might eventually dissolve and attach to something else, leaving the choice of programming language much less relevant. Like the list of choices Claude Code shows you in planning mode: "do you wish to use SQLite, PostgreSQL or MySQL as a database for your project?" (*picks the "Recommended" option)
That said, I hope that Zig makes it to version 1.0 before AI turns the tables and sweeps many things away. It might be my bias; if I'm wrong and overestimating the irrational part, I'll be glad to admit my mistake.
So even if they don't get to train much on some technology, all you need is some guidance docs in AGENTS.md.
There's a plus to being fresh, too: LLMs aren't going to be heavily trained on outdated tutorials and docs, the way they are for React, for example.
I use it constantly, and it never occurred to me that someone might think there was a problem to be solved there.