Posted by petesergeant 5 days ago
Broad-based agency = [Semi-]Autonomous Agent = DISO (data in -> side-effects out)
I'm not sure how to reconcile these two statements. Seems to me the former makes the latter moot?
- Add a 'flags' argument to constructors of classes inherited from Record.
- BOOM! Here are 25 edits for you to review.
- Now add "IsCaseSensitive" flag and update callers based on the string comparison they use.
- BOOM! Another batch of mind-numbing work done in seconds.
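To make that concrete, here's a minimal sketch of the kind of mechanical edit being delegated. Everything in it (the Record base class, the flag values, the call site) is hypothetical, and TypeScript stands in for what sounds like a C# codebase:

    // Hypothetical flag set; IsCaseSensitive is the flag from the second request.
    enum RecordFlags {
      None = 0,
      IsCaseSensitive = 1 << 0,
    }

    // Hypothetical base class that all record types inherit from.
    class Record {
      constructor(protected readonly flags: RecordFlags = RecordFlags.None) {}
    }

    // One of the ~25 call sites the LLM would update in a single batch.
    class UserRecord extends Record {
      constructor(private readonly name: string, flags: RecordFlags = RecordFlags.None) {
        super(flags);
      }

      // "Update callers based on the string comparison they use": callers that
      // compared exactly pass IsCaseSensitive; everyone else gets case folding.
      matches(other: string): boolean {
        if (this.flags & RecordFlags.IsCaseSensitive) {
          return this.name === other;
        }
        return this.name.toLowerCase() === other.toLowerCase();
      }
    }

The point isn't any single edit; it's that all the near-identical edits land at once and your job reduces to reviewing them.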
If you get the hang of it and start giving your LLMs small, well-scoped chunks of work and validating the results, it's just less mentally draining than doing it by hand. You start thinking in much higher-level terms, like interfaces, abstraction layers, and mini-tests, and the AI breezes through the boring work of deciding whether it should be a "for", "while" or "foreach".
But no, don't treat it as another human capable of making decisions. It cannot. It's fancy machinery for applying known patterns of human knowledge to the places you point at based on a vague hint, but it is not a replacement for your judgement.
Isn’t that the proper programming state of mind anyway? I think about keywords about as much as a pianist thinks about the keys while playing. Especially with vim, where I can edit larger units reliably, so I don’t have to follow the cursor with my eyes and can navigate using my mental map.
But I'm 90% confident that you will gain something from LLM-based coding. You can do a lot with our code editing tools, but there are almost certainly going to be times when you need to do a sequence of seven things to get the outcome you want, and you can ask the computer to prepare that for you.
Firstly, "very few" still means "a large number of" considering how many of us there are.
Compared to "zero" for LLMs, that's a pretty significant difference.
Secondly, humans have a much larger context window, and it is not clear how LLMs in their current incarnation can catch up.
Thirdly, maybe more of us invent new ideas of significance than the world will ever know about. How will you be able to tell if some plumber deep in West Africa comes up with a better way to seal pipes at the joins? From what I've seen of people, this sort of "do a trivial thing in a new way" happens all the time.
jquery was the high point.
Frontend is in such a terrible state that whatever shit code LLM spits out is valid? Give me a break.
Frontend is very fun when you're starting a new project though.
> Frontend is in such a terrible state that whatever shit code LLM spits out is valid? Give me a break.
Anyone admitting in public they use LLM output straight up without careful thought wouldn't get hired by me. But at the same time not everyone is building useful tools that people use... or is a professional.
But still, in general I agree with the sentiment of any backend dev who avoids modern frontend. The frontend world is the one that created this problem, and it keeps doubling down on JS/React-everything and isolating frontends from backends, for little benefit besides minor DX gains (i.e. benefiting only the frontend dev, not the product or users).
The front end is already in the hands of the enemy; the back end is not. If it falls into the hands of the enemy then it is game over.

> Security-wise, it is clearly acceptable for the front end to be of lower quality than the back end.
While I don't think that f/end should be of a lower quality than the rest of the stack, I also think that:
1. f/end gets the most churn (i.e. rewritten), so it's kinda pointless if you're spending an extra $FOO months for a quality output when it is going to be significantly rewritten in ($FOO * 2) months.
2. It really is more fault tolerant - an error in the backend stack could lead to widespread data corruption. An error on the f/end results in, typically, misaligned elements that are hard to see/find.
For instance, I do research into multi-robot systems for a living. One of the most amazing uses of LLMs I've found is that I can ask them to generate visualizations for debugging the planners I'm writing. If I were to visualize these things myself, I'd spend hours trying to learn the details and quirks of the visualization library, and quite frankly that isn't very relevant to my personal goal of writing a multi-agent planner.
I presume your focus is backend development. It's convenient to have something that can quickly spit out UIs. The reason you use an LLM is precisely because front-end development is hard.
The article doesn't offer much value. It's just saying that you shouldn't use an LLM as the business logic engine because it's not nearly as predictable as a program that will always output the same thing given the same input. Anyone who has any experience with ChatGPT and programming should already know this is true as of 2025.
Just get the LLM to implement the business logic, check it, have it write unit tests, review the unit tests, test the hell out of it.
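As a sketch of what that workflow might leave you with (the function, the numbers, and the Jest-style tests are all hypothetical):

    // discount.ts -- LLM-drafted business logic: deterministic and reviewable.
    export function applyDiscount(totalCents: number, loyaltyYears: number): number {
      if (totalCents < 0) throw new Error("total must be non-negative");
      const rate = Math.min(loyaltyYears * 0.01, 0.15); // cap the discount at 15%
      return Math.round(totalCents * (1 - rate));
    }

    // discount.test.ts -- LLM-drafted, human-reviewed tests.
    import { applyDiscount } from "./discount";

    test("caps the discount at 15%", () => {
      expect(applyDiscount(10_000, 50)).toBe(8_500); // 15% off 10000
    });

    test("rejects negative totals", () => {
      expect(() => applyDiscount(-1, 0)).toThrow();
    });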
The post has a catchy title and a (in my opinion) clear message about using models as API callers and fuzzy interfaces in production instead of as complex program simulators. It's not about using models to write code.
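For what it's worth, that "API caller / fuzzy interface" pattern might look something like this sketch, where callModel is a hypothetical stand-in for whatever LLM API is in play:

    // Hypothetical LLM call: fuzzy natural language in, structured intent out.
    declare function callModel(
      message: string,
    ): Promise<{ intent: string; args: { [key: string]: string } }>;

    // The business logic stays in ordinary, deterministic, testable code.
    const handlers: { [intent: string]: (args: { [key: string]: string }) => string } = {
      refund: (a) => `Refund issued for order ${a.orderId}`,
      trackOrder: (a) => `Order ${a.orderId} is in transit`,
    };

    async function handleSupportMessage(message: string): Promise<string> {
      const { intent, args } = await callModel(message); // model as fuzzy interface
      const handler = handlers[intent];
      if (!handler) return "Sorry, I can't help with that."; // don't trust model output blindly
      return handler(args); // the program, not the model, executes the logic
    }

The model only ever chooses between actions the program already defines; it never simulates the program itself.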
Social media upvotes are less frustrating, imo, if you see them as a measurement of attention, not a funneling of value. Yes, people like things that give them value, but they also like reading things with a good title.
> The post has a catchy title and a (in my opinion) clear message about using models as API callers and fuzzy interfaces in production instead of as complex program simulators. It's not about using models to write code.
I mean, the message is wrong as well. LLMs can provide customer support. In that case, it's the business logic.

So you're saying that ChatGPT helped you write the business logic, but it didn't write 100% of it?
Is that your insight?
Or that it didn't help you write any business logic at all and we shouldn't allow it to help us write business logic as well? Is that what you're trying to tell us?
ChatGPT didn't write any business logic, and I'm really struggling to see how you got there from reading the article. The message is: don't use LLMs to execute any logic.
> The message is: don't use LLMs to implement any logic.
Too late. I've already asked it to implement logic and that code is in production used by millions of people. Seems to have worked fine.

I disagree with your conclusion and I don't understand why people upvoted this article to the top of HN.
The intro to your article is also very confusing.
> Don’t let an LLM make decisions or implement business logic: they suck at that. I build NPCs for an online game, and I get asked a lot “How did you get ChatGPT to do that?” The answer is invariably: “I didn’t, and also you shouldn’t”.
I assumed that people were asking you how you got ChatGPT to code the NPC for you. Why would people ask you how ChatGPT is powering the NPC? ChatGPT does not have an API. OpenAI has APIs. ChatGPT is just an interface to their models. How can ChatGPT power your NPCs for an online game? Made no sense.
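Concretely, the app-vs-API distinction being drawn here: a minimal sketch using the openai Node SDK, where the model name and the prompts are placeholders:

    import OpenAI from "openai";

    // ChatGPT is a consumer app; programs talk to the models via the OpenAI API.
    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    async function npcReply(playerLine: string): Promise<string> {
      const response = await client.chat.completions.create({
        model: "gpt-4o", // placeholder; any chat model works here
        messages: [
          { role: "system", content: "You are a gruff innkeeper NPC." },
          { role: "user", content: playerLine },
        ],
      });
      return response.choices[0].message.content ?? "";
    }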
> Why would people ask you how ChatGPT is powering the NPC?

Because they think LLM and ChatGPT are synonymous.
> Because they think LLM and ChatGPT are synonymous.
I still find it weird. 99.9999% of NPCs in video games are not LLMs. So why would people ask that question?