Posted by samrolken 3 days ago
Why not try it out, and if it doesn't work for you or creates more work for you, then ditch it. All these AI assist tools are just tools.
But pretty consistently, such claims are met with accusations of not having tried it correctly, or not having tried it with the best/newest AI model, or not having tried it for long enough.
Thus, it seems that if you don't agree with maximum usage of AI, you must be wrong and/or stupid. I can understand how that fosters the need to criticize AI rather than just opt out.
(And if I enjoyed being gaslit, I'd just be using the LLMs in the first place.)
I believe everyone has to deal with that, AI or not. There are bad human coders.
I've done integration work for several years. There are integrations built with tools like Dell Boomi (no-code/low-code) that work but are hard to maintain, like you said. But what can you do? Your employer picked that tool and runs with it until it can't anymore, since most no-code/low-code tools can get you to your goal most of the time. But when there's no "connector", when a third-party connector costs an arm and a leg, or when hiring a Dell Boomi specialist to code that last mile would also cost an arm and a leg, then you turn to your own IT team to come up with a solution.
It's all part of IT life. When you're not the decision-maker, that's what you have to deal with. I'm not going to blame Dell Boomi for making my work extra hard or whatnot. It's just a tool they picked.
I am just saying that a tool is a tool. You can see many real-life examples where you'll be pressured into using a tool and maintaining something created by it, and not just in IT but in every field.
Even if LLMs do get 10x as fast, that's not even remotely enough. They are on the order of 1e9 times as compute-intensive.
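A very rough back-of-envelope, where every number is an assumption, just to show where a figure like that can come from:

    # All numbers here are rough assumptions, not measurements.
    params = 1e11        # weights in a large model
    tokens = 1e3         # tokens needed to emit one HTML page
    llm_flops = 2 * params * tokens   # ~2 FLOPs per weight per generated token
    conventional_ops = 1e5            # an indexed query plus string formatting
    print(llm_flops / conventional_ops)   # ~2e9

Tweak any of these by an order of magnitude and the gap is still crushing.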
The Internet took something that used to be slow, cumbersome, expensive and made it fast, efficient, cheap.
Now we are doing it again.
I am a big proponent of AI. To me, this experiment mostly shows how not to use AI.
I agree this way seems "wrong", but try putting on your engineering hat and ask: what would you change to make it right?
I think that is a very interesting thread to tug on.
If the coder is smart, she'll write down the query and note where the values go. She'll keep a checklist: load the UI for the database, paste the query, hit run, copy/paste the output into her HTML. She'll use a standard HTML template. Later she can glue these steps together with some code, so that a program takes those values, puts them into the SQL query, drops the results into the HTML, and sends that HTML to the browser... Oh look, she's made a program, a tool! And when she gets an assignment to read/write some values, she can do it in 1 minute instead of 5. Wow, custom-made programs save time, who could've guessed?
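For concreteness, a minimal sketch of that glued-up tool, assuming SQLite and a made-up accounts table (the query, table, and names are all illustrative):

    import sqlite3
    from html import escape

    TEMPLATE = "<html><body><table>{rows}</table></body></html>"

    def report(db_path, user_id):
        # Run the written-down query with the given value, render as HTML.
        conn = sqlite3.connect(db_path)
        try:
            cur = conn.execute(
                "SELECT name, balance FROM accounts WHERE user_id = ?",
                (user_id,),
            )
            rows = "".join(
                "<tr><td>%s</td><td>%s</td></tr>" % (escape(str(n)), escape(str(b)))
                for n, b in cur.fetchall()
            )
        finally:
            conn.close()
        return TEMPLATE.format(rows=rows)

The five-step checklist is now one function call.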
I agree that spending inference compute every time on the same LLM task is wasteful, and the results would be less desirable.
But I don't think the thought experiment should end with that. We can continue to engineer and problem solve the shortcomings of the approach, IMHO.
You provided a good example of an optimization - tool creation.
Trying to keep my mind maximally open: one could imagine "design time" happening at runtime, where the user interacting with the system describes what they want the first time, and the system assembles the tool (much like we do now with AI-assisted coding, but perhaps without even seeing the code).
Once that piece of the system is working, it is persisted as code so no more inference is required: a tool that saves time. I am thinking of this as essentially memoizing a function body, i.e. generating the code once and persisting it.
There could even be some process overseeing the generated code/tool to make sure the quality meets some standard, providing automated iteration, testing, etc. if needed.
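A sketch of what that generate-test-persist loop could look like. This is not any real API: generate_code (your LLM call) and run_tests (the quality gate) are caller-supplied stand-ins, and the run() convention is invented for illustration:

    import hashlib, importlib.util, os

    CACHE_DIR = "generated_tools"

    def get_tool(task_description, generate_code, run_tests):
        # generate_code(desc) -> source string, run_tests(source) -> bool.
        key = hashlib.sha256(task_description.encode()).hexdigest()[:16]
        path = os.path.join(CACHE_DIR, "tool_%s.py" % key)
        if not os.path.exists(path):
            os.makedirs(CACHE_DIR, exist_ok=True)
            for attempt in range(3):  # automated iteration until the gate passes
                source = generate_code(task_description)
                if run_tests(source):
                    with open(path, "w") as f:
                        f.write(source)
                    break
            else:
                raise RuntimeError("generator never converged on a passing tool")
        # Load the persisted tool; from here on, no inference is needed.
        spec = importlib.util.spec_from_file_location("tool_" + key, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        return module.run  # convention: generated tools expose run()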
A big problem is if the LLM never converges to the "right" solution on its own (e.g. the right tool to generate the HTML from the SQL query, without any hallucination). But I am willing to momentarily punt on that problem as having more to do with determinism and the quality of the result. The issue isn't the non-deterministic results of an LLM per se; it's whether the quality of the result is fit for purpose for the use case.
I think it's difficult but possible to go further with the thought experiment. A system that "builds itself" at runtime, but persists what it builds, based on user interaction and prompting when the result is satisfactory...
I remember one of the first computer science things I learned: the program that could print out its own source code. Even then we believed that systems could build themselves and grow themselves.
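For anyone who hasn't seen one, a classic two-line quine in Python:

    s = 's = %r\nprint(s %% s)'
    print(s % s)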
So my ask would be to look beyond the initial challenge of the first-time cost of generating the tool/code, and solve it by persisting a suitable result.
What challenge or problem comes next in this idea?
My brain went to the concept of memoization that we use to speed up function calls for common cases.
If you had a proxy that sat in front of the LLM and cached deterministic responses for inputs, with some way to give feedback when a response is satisfactory... this could be a building block for a runtime design mode or something like that.
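A minimal sketch of such a proxy, assuming a file-backed cache and a hypothetical call_llm stand-in for the model API (none of this is a real library):

    import hashlib, json, os

    CACHE_FILE = "llm_cache.json"

    def _load():
        if not os.path.exists(CACHE_FILE):
            return {}
        with open(CACHE_FILE) as f:
            return json.load(f)

    def _key(prompt):
        return hashlib.sha256(prompt.encode()).hexdigest()

    def ask(prompt, call_llm):
        # Serve a pinned response deterministically; otherwise pay for inference.
        cache = _load()
        k = _key(prompt)
        if k in cache:
            return cache[k]
        return call_llm(prompt)

    def pin(prompt, response):
        # User feedback: this response was satisfactory, serve it from now on.
        cache = _load()
        cache[_key(prompt)] = response
        with open(CACHE_FILE, "w") as f:
            json.dump(cache, f)

The pin() call is the feedback hook: once a user marks a response as good, that input/output pair becomes part of the "built" system.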