Posted by ColinWright 13 hours ago
If you're someone with a background in Computer Science, you should know that we have formal languages for a reason, and that natural language is not as precise as a programming language.
But anyway, we're at peak AI hype; hitting the top of HN is worth more than a reasonable take. Reasonableness doesn't sell, after all.
So here we're seeing yet another text about how the world of software has been solved by AI and being a developer is a relic of the past.
Right? At least on HN, there's a critical mass of people loudly ignoring this these days, but no one has explained to me how replacing a formal language with an English-language-specialized chatbot - or even multiple independent chatbots (aka "an agent") - is a good tradeoff to make.
English, or any other natural language, can of course be concise, but brevity leaves much to the imagination. Adding verbosity allows for greater precision, but that is exactly what formal languages are for, just as you said.
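To make the ambiguity point concrete, here's a toy sketch (my own hypothetical example, not anything from the thread): even a request as simple as "sort the users by name" leaves decisions to the imagination that code has to pin down explicitly.

```python
# The English request "sort the users by name" doesn't say whether
# the sort is case-sensitive. A formal language forces the choice.

users = ["alice", "Bob", "carol", "Bob"]

# Reading 1: case-sensitive sort -- uppercase letters order first
# because of their ASCII code points.
case_sensitive = sorted(users)

# Reading 2: case-insensitive sort -- probably what was "meant",
# made explicit via a key function.
case_insensitive = sorted(users, key=str.lower)

print(case_sensitive)    # ['Bob', 'Bob', 'alice', 'carol']
print(case_insensitive)  # ['alice', 'Bob', 'Bob', 'carol']
```

Both readings are defensible interpretations of the same sentence; the program admits exactly one.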
Although, I think it's worth contemplating whether modern programming languages and environments have fallen short in other ways: whether they're too verbose at times, whether IDEs should be databases first and language parsers second, and whether a strongly typed language would let us offer recommendations using far simpler but stricter patterns.
My current gripes are auto imports STILL not working properly in most popular IDEs, or an IDE failing to find a referenced entity in a file that isn't currently open... LLMs sometimes help with that, but they are extremely slow compared to local cache resolution.
Long term, I think there's more value in directly improving the above, but we shall see. AI will stick around too, of course, but how much relevance it'll have in 10 years' time is anybody's guess. I think it'll become a commodity, the bubble will burst, and after a while we'll only use it where it makes sense. At least until the next generation of AI architectures arrives.
I'll miss it, not because the activity becomes obsolete, but because it's much more interesting than sitting till 2am trying to convince an LLM to find and fix the bug for me.
We'll still be sitting till 2am.
> They can write code better than you or I can, and if you don’t believe me, wait six months.
I've been hearing this for the last two years. And yet LLMs, given an abstract description of the problem, still write worse code than I do.
Or did you mean type code? Because in that case, yes, I'd agree. They type better.
And for the record, I'm impressed by the issues LLMs can diagnose. Being able to query multiple data sources in parallel and detect anomalies, they can sometimes find the root cause of an incident in a distributed system in a matter of minutes. I have many examples of LLMs finding bugs in existing code when tasked with writing unit tests (usually around edge cases).
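The kind of bug a test-writing pass tends to surface looks something like this (a hypothetical illustration I made up, not a real incident): a helper that works on the happy path but has an off-by-one at chunk boundaries, caught the moment someone writes a boundary test.

```python
def chunk_buggy(items, size):
    """Split items into consecutive chunks of at most `size` elements.
    Bug: the slice end uses `size - 1`, silently dropping the last
    element of every chunk."""
    return [items[i:i + size - 1] for i in range(0, len(items), size)]

def chunk_fixed(items, size):
    """Correct version: slice end is `i + size`."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# An edge-case unit test around chunk boundaries exposes the bug:
print(chunk_buggy([1, 2, 3, 4, 5], 2))  # [[1], [3], [5]] -- data lost
print(chunk_fixed([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

Nothing deep, but exactly the sort of mechanical edge case that's tedious for a human to enumerate and cheap for an LLM to probe.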
But complex issues that stem from an ambiguous domain are often out of reach. By the time I'm able to convey all the intricacies of the domain to an LLM in plain English, I'm usually able to find the issue myself.
And that's my point: I'd rather run the code under a debugger till 2am than push an LLM to debug for me (which can easily take till 2am too, but with less confidence I'd succeed at all).
I might be mistaken, but I bet they said the same when Visual Basic came out.
After ten years of professional coding, LLMs have made my work more fun. Not easier in the sense of being less demanding, but more engaging. I am involved in more decisions, deeper reviews, broader systems, and tighter feedback loops than before. The cognitive load did not disappear. It shifted.
My habits have changed. I stopped grinding algorithm puzzles because they started to feel like practicing celestial navigation in the age of GPS. It is a beautiful skill, but the world has moved on. The fastest path to a solution has always been to absorb existing knowledge. The difference now is that the knowledge base is interactive. It answers back and adapts to my confusion.
Syntax was never the job. Modeling reality was. When generation is free, judgment becomes priceless.
We have lost something, of course. There is less friction now, which means we lose the suffering we often mistook for depth. But I would rather trade that suffering for time spent on design, tradeoffs, and problems that used to be out of reach.
This doesn't feel like a funeral. It feels like the moment we traded a sextant for a GPS. The ocean is just as dangerous and just as vast, but now we can look up at the stars for wonder, rather than just for coordinates.
It's also important to step back and realize that it goes way beyond coding. Coding is just the deepest tooth of the jagged frontier. In 3 years there will be blog posts lamenting the "death of law firms" and the "death of telemedicine". Maybe in 10 years it will be the death of everything. We're all in the same boat, and this boat is taking us to a world where everyone is more empowered, not less. But still, there will be that cutting edge in any field that will require real ingenuity to push forward.
I agree that it's very destabilizing. It's sort of like inflation for expertise. You spend all this time and effort saving up expertise, and then those savings rapidly lose value. At the same time, your ability to acquire new expertise has accelerated (because LLMs are often excellent private tutors), which is analogous to an inflation-adjusted wage increase.
There are a ton of variables. Will hallucinations ever become negligible? My money is on "no" as long as the architecture is basically just transformers. How will the cost of compiling training data evolve over time? My money is on "it will get more expensive". How will legislators react? I sure hope not by suppressing competition. As long as markets and VC are functioning properly, it should only become easier to become a founder, so outsized corporate profits will be harder to lock down.
from other sources: 6-12 months, by end of 2025, ChatGPT 7.
It's concern trolling and astroturfing at its best.
One camp of fellow coders saying their productivity grew 100x but we are all doomed; another camp of AI enthusiasts who gained the ability to deliver products and truly believe in their newly acquired superiority.
It's all either true or false, and if in six months it turns out to be true, we'll all know it, each one of us.
However, if it's all BS, and in six months an LLM can write Windows 95 but real code still requires organic intelligence, there won't be any accountability, and that's sad.
My headspace is now firmly in "great, I'm beginning to understand the properties and affordances of this new medium; how do I maximise my value from it?" Hopefully there's more than a few people who share this perspective. I'd love to talk with you about the challenges you experience; I know I have mine, and maybe we have answers to each other's problems :)
I assume the current set of properties can change, but it seems like some things are going to be easier than others. For example, multimodal reasoning still seems to be a challenge, and I'm trying to work out whether that's just hard and will take a while, or whether we're not far from a good solution.