Posted by petesergeant 4/1/2025

Don’t let an LLM make decisions or execute business logic(sgnt.ai)
325 points | 169 comments | page 3
nivertech 4/1/2025|
Narrow-based agency = Tool = Decision Support System = DIDO (data in -> data out)

Broad-based agency = [Semi-]Autonomous Agent = DISO (data in -> side-effects out)
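The distinction might be sketched like this (a hypothetical Python illustration; all names are invented):

```python
from dataclasses import dataclass

# DIDO (data in -> data out): a decision-support tool. Pure function,
# easy to inspect, test, and override before anything happens.
@dataclass
class RefundRecommendation:
    approve: bool
    reason: str

def recommend_refund(total: float, days_since_purchase: int) -> RefundRecommendation:
    ok = days_since_purchase <= 30 and total < 500
    return RefundRecommendation(ok, "within policy" if ok else "outside policy")

# DISO (data in -> side-effects out): a [semi-]autonomous agent. The side
# effect happens inside the call, so auditing means capturing effects.
class RefundAgent:
    def __init__(self, payments_api):
        self.payments_api = payments_api

    def handle(self, order_id: str, total: float, days_since_purchase: int) -> None:
        rec = recommend_refund(total, days_since_purchase)
        if rec.approve:
            self.payments_api.issue_refund(order_id)  # the side effect
```

The DIDO half can be unit-tested exhaustively; the DISO half has to be tested against a fake of the system it acts on.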

unethical_ban 4/1/2025||
Title should not have been altered.
wseqyrku 4/2/2025||
Unfortunately, this is the only way to get the maximum performance.
tqi 4/1/2025||
> It’s impossible to reason about and debug why the LLM made a given decision, which means it’s very hard to change how it makes those decisions if you need to tweak them... The LLM is good at figuring out what the hell the user is trying to do and routing it to the right part of your system.

I'm not sure how to reconcile these two statements. Seems to me the former makes the latter moot?

sysmax 4/1/2025||
LLMs are a glorified regex engine with fuzzy input. They are brilliant at doing boring, repetitive tasks with a known outcome.

- Add a 'flags' argument to constructors of classes inherited from Record.

- BOOM! Here are 25 edits for you to review.

- Now add "IsCaseSensitive" flag and update callers based on the string comparison they use.

- BOOM! Another batch of mind-numbing work done in seconds.

If you get the hang of it and start giving your LLM small, well-scoped chunks of work and validating the results, it's just less mentally draining than doing it by hand. You start thinking in much higher-level terms, like interfaces, abstraction layers, and mini-tests, and the AI breezes through the boring question of whether it should be a "for", "while" or "foreach".

But no, don't treat it as another human capable of making decisions. It cannot. It's fancy machinery for applying known patterns of human knowledge wherever you point it, based on a vague hint, but it's not a replacement for your judgement.
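The Record/flags edits described above might look something like this (a Python stand-in for the C#-flavored example; the class names and flag are hypothetical):

```python
# "Add a 'flags' argument to constructors of classes inherited from Record,
# then an IsCaseSensitive flag, and update callers by the comparison they use."

IS_CASE_SENSITIVE = 1 << 0

class Record:
    def __init__(self, name: str, flags: int = 0):
        self.name = name
        self.flags = flags  # the newly threaded-through argument

class UserRecord(Record):
    # Before the LLM pass this was: def __init__(self, name): super().__init__(name)
    def __init__(self, name: str, flags: int = 0):
        super().__init__(name, flags)

def find(records, key):
    # Callers that compared r.name == key keep exact matching via the flag;
    # callers that lowercased both sides keep case-insensitive matching.
    return [r for r in records
            if (r.name == key if r.flags & IS_CASE_SENSITIVE
                else r.name.lower() == key.lower())]
```

Each individual edit is trivial; the value is that the model repeats it across 25 call sites while you only review the diff.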

razodactyl 4/1/2025||
I hate that I understand the internals of LLM technology enough to be both insulted and in agreement with your statement.
noduerme 4/1/2025||
why is it insulting? It's an incredible piece of machinery for refracting natural language into other language. That itself accounts for a majority of orders people pass on to other people before something actually gets done.
skydhash 4/1/2025|||
> If you get the hang of it and start giving your LLMs small, sizable chunks of work, and validating the results, it's just less mentally draining than to do it by hand. You start thinking in much higher-level terms, like interfaces, abstraction layers, and mini-tests, and the AI breeze through the boring work of whether it should be a "for", "while" or "foreach".

Isn’t that the proper programming state of mind? I think about keywords about as much as a pianist thinks about the keys when playing. Especially with vim, where I can edit larger units reliably, so I don’t have to follow the cursor with my eyes and can navigate using my mental map.

squiggleblaz 4/1/2025||
Ultimately, yes, programming with LLMs is exactly the sort of programming we've always tried to do. It gets rid of the boring stuff and lets you focus on the algorithm at the level you need to - just like we try to do with functions and LSP and IDE tools. People needn't be scared of LLMs: they aren't going to take our jobs or drain the fun out of programming.

But I'm 90% confident that you will gain something from LLM-based coding. You can do a lot with our code-editing tools, but there are almost certainly going to be times when you need to do a sequence of seven things to get the outcome you want, and you can ask the computer to prepare that for you.

atomicnature 4/1/2025||
If I may ask - how are humans in general different? Very few of us invent new ideas of significance - correct?
lelanthran 4/1/2025|||
> If I may ask - how are humans in general different? Very few of us invent new ideas of significance - correct?

Firstly, "very few" still means "a large number of" considering how many of us there are.

Compared to "zero" for LLMs, that's a pretty significant difference.

Secondly, humans have a much larger context window, and it is not clear how LLMs in their current incarnation can catch up.

Thirdly, maybe more of us invent new ideas of significance that the world will just never know. How will you be able to tell if some plumber deep in West Africa comes up with a better way to seal pipes at joins? From what I've seen of people, this sort of "do trivial thing in a new way" happens all the time.

Pmop 4/1/2025||
Not only is "our context window" larger, but we can add to and remove from it on the fly, or rely on somebody else who, for that very specific problem, has a far better informed "context window", which BTW they're adding to and removing from on the fly as well.
bawolff 4/1/2025|||
I think if we fully understood this (both what exactly human consciousness is and how an LLM differs, not just experimentally but theoretically), we would then be able to truly create human-level AI.
DarkForge 4/1/2025||
Great insights, this is very helpful.
ilrwbwrkhv 4/1/2025||
Yep, this is the way. The way I use LLMs is also just for the front-end code. Front-end is completely messed up anyway because of JavaScript developers, so whatever the LLM shits out is fine and it looks good. For actual programming and business logic, I write all of the code; the only time I use LLMs is maybe to understand some section of the code, and I manually paste it into different LLMs instead of having it in the editor. Having it in the editor is a horrible crutch and will create distance between you and the code.
gchamonlive 4/1/2025||
If I had to give you one piece of unsolicited advice, I'd tell you to seek some therapy so that you can overcome whatever trauma you had with front-end development that's clearly clouding your judgement. That is, if I were giving you that advice. Since I'm not, I'll only say that that's extremely disrespectful to everyone doing good work on user-facing applications.
Trowter 4/1/2025|||
He's got a point, though: front-end development is in a completely ridiculous state right now.
senordevnyc 4/1/2025|||
And has been for over a decade now.

jquery was the high point.

gchamonlive 4/1/2025|||
When you are disrespectful and arrogant, whatever point you are trying to make, no matter how valid, immediately becomes tangential to what you are actually doing. Venting? Bashing? Ranting? Anything but valid criticism.

Frontend is in such a terrible state that whatever shit code LLM spits out is valid? Give me a break.

dmix 4/1/2025|||
No, it really is like that. "Frontend", aka jamming everything into an all-consuming React/Vue mega-project, really isn't the most fun. It's very powerful, sometimes necessary (in fewer than 50% of the cases where it's chosen), and the tooling is constantly evolving. But it's not a fun experience when it comes to maintaining and growing a large JS codebase, which is why they usually get reinvented every 3 years. The server side is generally the opposite experience: it stays stable for a decade-plus without touching it, and having a much closer relationship to the database makes for better code IMO, with fewer layers and less duplication.

Frontend is very fun when you're starting a new project though.

gchamonlive 4/1/2025||
Will copy from an answer I gave below:

  Frontend is in such a terrible state that whatever shit code LLM spits out is valid? Give me a break.
dmix 4/1/2025||
I was replying to your comment about the state of frontend, not OP about using AI, just like the other replies you got.

Anyone admitting in public that they use LLM output straight up without careful thought wouldn't get hired by me. But at the same time, not everyone is building useful tools that people use... or is a professional.

But still, in general I agree with the sentiment of any backend dev who avoids modern frontend. The frontend world is the one that created this problem and continues doubling down on JS/React-everything and on isolating frontends from backends, for little benefit besides minor DX gains (aka benefiting only the frontend dev, not the product or users).

lmiller1990 4/1/2025|||
Why is it acceptable for front end code to be of lower quality than the rest? Your software is only as good as the lowest quality part.
abraae 4/1/2025|||
The front end is in the hands of the enemy. They can do what they want with it.

The back end is not. If it falls into the hands of the enemy then it is game over.

Security-wise, it is clearly acceptable for the front end to be of lower quality than the back end.

lelanthran 4/1/2025||||
> Why is it acceptable for front end code to be of lower quality than the rest?

While I don't think that f/end should be of a lower quality than the rest of the stack, I also think that:

1. f/end gets the most churn (i.e. rewritten), so it's kinda pointless if you're spending an extra $FOO months for a quality output when it is going to be significantly rewritten in ($FOO * 2) months.

2. It really is more fault tolerant - an error in the backend stack could lead to widespread data corruption. An error on the f/end results in, typically, misaligned elements that are hard to see/find.

MathMonkeyMan 4/1/2025||||
"It's just the UI" is a prevalent misconception in my experience.
bttrpll 4/1/2025|||
My favorite is these "vibe coding" situations that leave SQL injection and auth vulns because someone copy-pasted from ChatGPT. Never change.
alabastervlog 4/1/2025||
Far from making me fear for my job, LLMs have me more confident than ever that I'll always be able to find some kind of paying programming work, even if it's all short-term contracts (as I get even older).
cheevly 4/1/2025||
So objectively false that I don’t even know where to begin.
alabastervlog 4/1/2025||
You sharing that crystal ball?
accurrent 4/1/2025|||
I think there are ways of wording what you said without hurting front-end devs. LLMs can be excellent tools while coding to deal with the parts you don't want to sink your own time into.

For instance, I do research into multi-robot systems for a living. One of the most amazing uses of LLMs I've found is that I can ask them to generate visualizations for debugging the planners I'm writing. If I were to visualize these things myself, I'd spend hours trying to learn the details and quirks of the visualization library, and quite frankly that isn't very relevant to my personal goal of writing a multi-agent planner.

I presume your focus is backend development. It's convenient to have something that can quickly spit out UIs. The reason you use an LLM is precisely because front-end development is hard.

mpalmer 4/1/2025||
"other people's bad work makes it pointless for me to do good work"
aurareturn 4/1/2025||
This went straight to the top of HN. I don't understand.

The article doesn't offer much value. It's just saying that you shouldn't use an LLM as the business logic engine because it's not nearly as predictable as a program that will always output the same thing given the same input. Anyone who has any experience with ChatGPT and programming should already know this is true as of 2025.

Just get the LLM to implement the business logic, check it, have it write unit tests, review the unit tests, test the hell out of it.

christianqchung 4/1/2025||
Why do you think top-upvoted posts have to correlate 1:1 with value? If you look at the most-watched videos on YouTube, the most popular movies, or subreddits sorted by top of all time, the only correlation is that people liked them the most.

The post has a catchy title and a (in my opinion) clear message about using models as API callers and fuzzy interfaces in production instead of as complex program simulators. It's not about using models to write code.

Social media upvotes are less frustrating imo if you see it as a measurement of attention, not a funneling of value. Yes people like things that give them value but they also like reading things with a good title.

aurareturn 4/1/2025||

  The post has a catchy title and a (in my opinion) clear message about using models as API callers and fuzzy interfaces in production instead of as complex program simulators. It's not about using models to write code.
I mean, the message is wrong as well. LLMs can provide customer support. In that case, it's the business logic.
petesergeant 4/1/2025|||
Yep, that's exactly what it's saying. I wrote it because people kept asking me how I was getting ChatGPT to do things, and the answer is: I'm not. Not everything is obvious to everyone. As to why it went straight to the top, I think people resonate with the title, and dislike the buzziness around everything being described as an agent.
aurareturn 4/1/2025||
Honestly, I still don't understand the message you're conveying.

So you're saying that ChatGPT helped you write the business logic, but it didn't write 100% of it?

Is that your insight?

Or that it didn't help you write any business logic at all and we shouldn't allow it to help us write business logic as well? Is that what you're trying to tell us?

petesergeant 4/1/2025||
> So you're saying that ChatGPT helped you write the business logic, but it didn't write 100% of it?

ChatGPT didn't write any business logic, and I'm really struggling to see how you got there from reading the article. The message is: don't use LLMs to execute any logic.
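That split might be sketched like this (the model call is stubbed out so the sketch runs offline; every function and intent name here is invented for illustration):

```python
# The LLM's only job: map free-form user text to one known intent label.
# Deterministic, debuggable code then executes the actual logic.

VALID_INTENTS = ("check_balance", "transfer", "unknown")

def classify_intent(user_text: str, llm=None) -> str:
    if llm is None:  # offline stub standing in for a real model call
        text = user_text.lower()
        if "balance" in text:
            return "check_balance"
        if "send" in text or "transfer" in text:
            return "transfer"
        return "unknown"
    label = llm(f"Pick one of {VALID_INTENTS} for: {user_text}").strip()
    return label if label in VALID_INTENTS else "unknown"  # never trust raw output

def handle(intent: str, account: dict) -> str:
    # Business logic: plain, unit-testable code the LLM never touches.
    if intent == "check_balance":
        return f"Balance: {account['balance']}"
    if intent == "transfer":
        return "Transfers need a confirmation step."
    return "Sorry, I didn't understand that."
```

Note the guard on the model's output: anything outside the known label set is coerced to "unknown", so a misbehaving model can route badly but can never execute anything new.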

aurareturn 4/1/2025||

  The message is: don't use LLMs to implement any logic.
Too late. I've already asked it to implement logic and that code is in production used by millions of people. Seems to have worked fine.

I disagree with your conclusion and I don't understand why people upvoted this article to the top of HN.

petesergeant 4/1/2025||
I changed the word to 'execute' in what I wrote to try and make it clearer to you.
aurareturn 4/1/2025||
That is much clearer.

The intro to your article is also very confusing.

  Don’t let an LLM make decisions or implement business logic: they suck at that. I build NPCs for an online game, and I get asked a lot “How did you get ChatGPT to do that?” The answer is invariably: “I didn’t, and also you shouldn’t”.
I assumed that people are asking you how you got ChatGPT to code the NPC for you. Why would people ask you how ChatGPT is powering the NPC? ChatGPT does not have an API. OpenAI has APIs. ChatGPT is just an interface to their models. How can ChatGPT power your NPCs for an online game? Made no sense.
petesergeant 4/1/2025||
I changed the word "implement" to "execute" on the blog post. Thank you for your feedback. As to:

> Why would people ask you how ChatGPT is powering the NPC?

Because they think LLM and ChatGPT are synonymous.

aurareturn 4/1/2025||
I think changing the word from "execute" to "inference" is even clearer to be honest. Though it's much better than the original word choice.

  Because they think LLM and ChatGPT are synonymous.
I still find it weird. 99.9999% of NPCs in video games are not LLMs. So why would people ask that question?
senordevnyc 4/1/2025||
This article is not about the how, it’s about the why.
jr-ai-interview 4/1/2025||
[flagged]
aboardRat4 4/1/2025|
Everyone daring to comment on LLMs should first read "Shadows of the Mind" by Roger Penrose.