Posted by ColinWright 13 hours ago

We mourn our craft (nolanlawson.com)
405 points | 535 comments
lp4v4n 12 hours ago|
People have to stop talking as if LLMs have solved programming.

If you're someone with a background in Computer Science, you should know that we have formal languages for a reason, and that natural language is not as precise as a programming language.
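
A tiny illustration of that imprecision (hypothetical, not from the commenter; TypeScript chosen arbitrarily): "sort the users by name" leaves several decisions implicit that a formal language forces you to make:

    // "Sort the users by name" - ascending or descending? Case-sensitive?
    // Locale-aware? The formal version pins every choice down explicitly.
    const users = ["bob", "Alice", "carol"];
    users.sort((a, b) => a.localeCompare(b, "en", { sensitivity: "base" }));
    console.log(users); // ["Alice", "bob", "carol"]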

But anyway, we're at peak AI hype; hitting the top on HN is worth more than a reasonable take, since reasonableness doesn't sell after all.

So here we're seeing yet another piece about how the world of software has been solved by AI and being a developer is an artifact of the past.

12_throw_away 12 hours ago||
> we have formal languages for a reason

Right? At least on HN, there's a critical mass of people loudly ignoring this these days, but no one has explained to me how replacing formal language with an English-language-specialized chatbot - or even multiple independent chatbots (aka "an agent") - is a good tradeoff to make.

zeroonetwothree 11 hours ago||
It's "good" from the standpoint of businesses achieving their objectives more quickly. That may not be what we think of as objectively good in some higher sense, but it's what matters most in terms of what actually happens in the world.
stoneforger 7 hours ago||
Should it be what matters most? Idiots leading idiots in a circle.
knaeckeKami 7 hours ago|||
Yes, but the people who talk to me as a software engineer about what to build also talk to me only in natural language, not a formal language.
sgsjchs 10 hours ago|||
Does it really matter that English is not as precise if the agent can make a consistent and plausible guess at what my intention is? And when it occasionally guesses incorrectly, I can always clarify.
zeroonetwothree 11 hours ago|||
You're right, of course, but you should consider that all formal language starts as an informal language idea in the mind of someone. Why shouldn't that "mind" be an LLM vs. a human?
flowerbreeze 11 hours ago||
I think mostly because an LLM is not a "mind". I'm sure there'll be an algorithm that could be considered a "mind" in the future, but a present-day LLM is not it. Not yet.
flowerbreeze 11 hours ago||
This is, in my opinion, the greatest weakness of everything LLM-related. If I care about the application I'm writing, and I believe I should if I bother doing it at all, it seems to me that I should want to be precise and concise in describing it. In a way, the code itself serves as a verification mechanism for my thoughts and for whether I understand the domain sufficiently.

English or any other natural language can of course be concise enough, but when being brief it leaves much to the imagination. Adding verbosity allows for greater precision, but I also think that is exactly what formal languages are for, just as you said.

Although, I think it's worth contemplating whether modern programming languages and environments have been insufficient in other ways: whether they're too verbose at times, whether IDEs should be databases first and language parsers second, and whether we could add recommendations using far simpler but stricter patterns, given a strongly typed language.
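
A minimal sketch of that last idea (hypothetical names, assuming TypeScript as the strongly typed language): with a closed union type, a recommendation can be a compiler guarantee rather than a statistical guess:

    // With a closed union, tooling knows the exact set of valid cases.
    type State = "loading" | "error" | "done";

    function render(state: State): string {
      switch (state) {
        case "loading": return "spinner";
        case "error":   return "retry button";
        case "done":    return "content";
        default: {
          // Exhaustiveness check: adding a new State variant makes this
          // line fail to compile, so nothing is left to the imagination.
          const unreachable: never = state;
          return unreachable;
        }
      }
    }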

My current gripes are auto-imports STILL not working properly in the most popular IDEs, or an IDE failing to find a referenced entity in a file that isn't currently open... LLMs sometimes help with that, but they are extremely slow compared to local cache resolution.

Long term, I think there will be more value in directly improving the above, but we shall see. AI will stick around too, of course, but how much relevance it'll have in 10 years' time is anybody's guess. I think it'll become a commodity, the bubble will burst, and after a while we'll only use it when sensible. At least until the next generation of AI architecture arrives.

oytis 12 hours ago||
Write a blog post promoting the inevitability of AI in software development while acknowledging the feelings of experienced software engineers.
koiueo 11 hours ago||
> We’ll miss the sleepless wrangling of some odd bug that eventually relents to the debugger at 2 AM.

I'll miss it not because the activity becomes obsolete, but because it's much more interesting than sitting till 2 AM trying to convince an LLM to find and fix the bug for me.

We'll still be sitting till 2 AM.

> They can write code better than you or I can, and if you don’t believe me, wait six months.

I've been hearing this for the last two years. And yet LLMs, given an abstract description of the problem, still write worse code than I do.

Or did you mean type code? Because in that case, yes, I'd agree. They type better.

cmiles74 9 hours ago|
I am not confident that AI tooling can diagnose or fix this kind of bug. I’ve pointed Claude Opus at bugs that puzzle me (with only one code base involved) and, so far, it has only introduced more bugs in other places.
koiueo 8 hours ago||
I'm not saying it can, btw. I'm arguing the opposite.

And for the record, I'm impressed by the issues it can diagnose. Being able to query multiple data sources in parallel and detect anomalies, it can sometimes find the root cause of an incident in a distributed system in a matter of minutes. I have many examples of LLMs finding bugs in existing code when tasked with writing unit tests (usually around edge cases).
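
As a hypothetical illustration of that last pattern (the chunk helper and the test are invented, in TypeScript): a generated test around a partial final chunk is exactly the kind of edge case that surfaces a latent bug:

    import assert from "node:assert";

    // Buggy helper: integer division silently drops the trailing partial chunk.
    function chunk<T>(items: T[], size: number): T[][] {
      const out: T[][] = [];
      for (let i = 0; i < Math.floor(items.length / size); i++) {
        out.push(items.slice(i * size, (i + 1) * size));
      }
      return out;
    }

    // The edge-case test an LLM might write, which fails and exposes the bug:
    assert.deepStrictEqual(chunk([1, 2, 3, 4, 5], 2), [[1, 2], [3, 4], [5]]);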

But complex issues that stem from an ambiguous domain are often out of reach. By the time I'm able to convey to an LLM all the intricacies of the domain in plain English, I'm usually able to find the issue myself.

And that's my point: I'd be more eager to run the code under a debugger till 2 AM than to push an LLM to debug for me (which can easily take till 2 AM anyway, with less confidence I'd succeed at all).

etamponi 9 hours ago||
> So as a senior, you could abstain. But then your junior colleagues will eventually code circles around you, because they’re wearing bazooka-powered jetpacks and you’re still riding around on a fixie bike. Eventually your boss will start asking why you’re getting paid twice your zoomer colleagues’ salary to produce a tenth of the code.

I might be mistaken, but I bet they said the same when Visual Basic came out.

peheje 11 hours ago||
I get the grief about AI, but I don't share it.

After ten years of professional coding, LLMs have made my work more fun. Not easier in the sense of being less demanding, but more engaging. I am involved in more decisions, deeper reviews, broader systems, and tighter feedback loops than before. The cognitive load did not disappear. It shifted.

My habits have changed. I stopped grinding algorithm puzzles because they started to feel like practicing celestial navigation in the age of GPS. It is a beautiful skill, but the world has moved on. The fastest path to a solution has always been to absorb existing knowledge. The difference now is that the knowledge base is interactive. It answers back and adapts to my confusion.

Syntax was never the job. Modeling reality was. When generation is free, judgment becomes priceless.

We have lost something, of course. There is less friction now, which means we lose the suffering we often mistook for depth. But I would rather trade that suffering for time spent on design, tradeoffs, and problems that used to be out of reach.

This doesn't feel like a funeral. It feels like the moment we traded a sextant for a GPS. The ocean is just as dangerous and just as vast, but now we can look up at the stars for wonder, rather than just for coordinates.

bloppe 12 hours ago||
The acceleration of AI has thrown into sharp relief that we have long lumped all sorts of highly distinct practices under this giant umbrella called "coding". I use CC extensively, and yet I still find myself constantly editing by hand. Turns out CC is really bad at writing Kubernetes operators. I'd bet it's equally bad at things like database engines or most cutting-edge systems design problems. Maybe it will get better at these specific things with time, but it seems like there will always be a cutting edge that requires plenty of human thought to get right. But if you're doing something that's basically already been done thousands of times in slightly different ways, CC will totally do it with 95% reliability. I'm ok with that.

It's also important to step back and realize that it goes way beyond coding. Coding is just the deepest tooth of the jagged frontier. In 3 years there will be blog posts lamenting the "death of law firms" and the "death of telemedicine". Maybe in 10 years it will be the death of everything. We're all in the same boat, and this boat is taking us to a world where everyone is more empowered, not less. But still, there will be that cutting edge in any field that will require real ingenuity to push forward.

zeroonetwothree 11 hours ago|
I think there's clearly a difference in opinion based on what you work on. Some people were working on things that pre-CC models couldn't handle but CC could, and it changed their opinions quickly. I expect (but cannot prove, of course) that the same will happen with the area you are describing. And then your opinion may change.
bloppe 11 hours ago||
I expect it to, eventually. But then the cutting edge will have simply moved to something else.

I agree that it's very destabilizing. It's sort of like inflation for expertise. You spend all this time and effort saving up expertise, and then those savings rapidly lose value. At the same time, your ability to acquire new expertise has accelerated (because LLMs are often excellent private tutors), which is analogous to an inflation-adjusted wage increase.

There are a ton of variables. Will hallucinations ever become negligible? My money is on "no" as long as the architecture is basically just transformers. How will compiling training data evolve with time? My money is on "it will get more expensive". How will legislators react? I sure hope not by suppressing competition. As long as markets and VC are functioning properly, it should only become easier to become a founder, so outsized corporate profits will be harder to lock down.

bigstrat2003 8 hours ago||
I've been hearing "the LLM can write better code than a human, and if you don't believe me, wait six months" for years now. Such predictions haven't been true before and I don't believe they are true now.
oxag3n 8 hours ago||
> wait six months

from other sources: 6-12 months, by end of 2025, ChatGPT 7.

It's concern trolling and astroturfing at its best.

One camp of fellow coders saying their productivity grew 100x but we are all doomed; another camp of AI enthusiasts who gained the ability to deliver products and truly believe in their newly acquired superiority.

It's all either true or false, but if in six months it becomes true, we'll know it, each one of us.

However, if it's all BS and in six months there's a Windows 95 written by an LLM but real code still requires organic intelligence, there won't be any accountability, and that's sad.

Folcon 11 hours ago||
I suspect my comment will not be well received; however, I notice in myself that I've passed the event horizon of being a believer, am past the honeymoon period, and am beginning to think about engineering.

My headspace is now firmly in "great, I'm beginning to understand the properties and affordances of this new medium; how do I maximise the value I get from it?" Hopefully there's more than a few people who share this perspective. I'd love to talk with you about the challenges you experience; I know I have mine, and maybe we have answers to each other's problems :)

I assume that the current set of properties can change; however, it seems like some things are going to be easier than others. For example, multimodal reasoning still seems to be a challenge, and I'm trying to work out whether that's just hard and will take a while, or whether we're not far from a good solution.

d357r0y3r 12 hours ago|
I thought I'd miss all the typing and syntax, but I really don't. Everyone has their own relationship with coding, but for me, I get satisfaction out of the end product and putting it in front of someone. To the extent that I cared about the code, it mainly had to do with how much it allowed the end product to shine.
zeroonetwothree 11 hours ago|
Yes, there's clearly a big split in the community where perhaps ~50% are like OP and the other ~50% are like you. But I think we should still respect the views of the other side and try to empathize.