Posted by samrolken 3 days ago
This project could use something like that. Perhaps ask the LLM to implement a way to store/cache the snapshots of its previous answers. That way, the more you use it, the faster it becomes.
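Something like this, maybe (just a sketch; `llm`, `snapshots`, and `handleRequest` are names I made up, not anything from the actual project):

    // Sketch only: cache snapshots of previous answers per (method, path) so
    // repeat requests skip the model. `llm` is a placeholder, not the real call.
    const snapshots = new Map<string, string>();

    async function llm(prompt: string): Promise<string> {
      // Placeholder for the real (slow) model call.
      return `<!-- generated response for: ${prompt} -->`;
    }

    async function handleRequest(method: string, path: string): Promise<string> {
      const key = `${method} ${path}`;

      // Serve the stored snapshot if we've answered this exact request before.
      const cached = snapshots.get(key);
      if (cached !== undefined) return cached;

      // Otherwise ask the model once and remember its answer.
      const response = await llm(`Handle this HTTP request: ${key}`);
      snapshots.set(key, response);
      return response;
    }

Invalidating those snapshots when the underlying data changes is the obvious hard part, but that's the general shape.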
I’ve been reading through all the comments and the range of responses is really great, and I’m so thankful to everyone for taking the time to comment... from “this is completely impractical” to “but what if we cached the generated code?” to “why would anyone want non-deterministic behavior?” All valid! Though I think some folks are critiquing this as if I were trying to build something production-ready, when really I was trying to build something that would break in instructive ways.
Like, the whole point was to eliminate ALL the normal architectural layers... routes, controllers, business logic, everything, and see what happens. What happens is: it’s slow, expensive, and inconsistent. But it also works, which is the weird part. The LLM designed reasonable database schemas on first request, generated working forms from nothing but URL paths, returned proper JSON from API endpoints. It just took forever to do it. I kept the implementation pure on purpose because I wanted to see the raw capabilities and limitations without any optimizations hiding the problems.
And honestly? I came away thinking this is closer to viable than it should be. Not viable TODAY. Today it’s ridiculous. But the trajectory is interesting. I think we’re going to look back at this moment and realize we were closer to a real shift than we thought. Or maybe not! Maybe code wins forever. Either way, it was a fun weekend. If anyone wants to discuss this or work on projects that respond faster than 30 seconds per request, I’m available for full stack staff engineer or tech co-founder work: sam@samrolken.org or x.com/samrolken
I guess there are many of us out there with these same thoughts/ideas and you've done an awesome job articulating and implementing them, congrats!
1. If code generation eventually works without human intervention, and every Google search could theoretically produce a real-time, custom-generated page, does that mean we no longer need people to build websites at all? At that point, “web development” becomes more like intent-shaping than coding.
2. I’m also not convinced that chat is the ideal interface for users. Natural language feels flexible, but it can also be slow, ambiguous, and cognitively heavier than clicking a button. Maybe LLM-driven systems will need new UI models that blend conversation with more structured interaction, instead of assuming chat = the future.
Curious how others here think about those two points.
The only value of an LLM generating a realistic HTML page as an answer is to make it appear as though the answer was found on a preexisting page, lending the answer some level of validity.
If users really are fine with the LLM just generating the answer on the fly, doing so in HTML is completely unnecessary. Just give the user answers in text form.
Now what if you ask it to optimize itself? Instead of just:
prompt: `Handle this HTTP request: ${method} ${path}`,
Append some simple generic instructions to the prompt: it should create a code path for the request if one doesn't already exist, and list all the functions it has already created along with the total number of times each one has been called, or something like that. Even better, have it create HTTP routings automatically to bypass the LLM entirely once they exist. Or do exponential backoff -- the first few times a request hits a route that already exists, still have the LLM verify that the results are correct, but decrease the frequency as long as verifications keep passing.
I think something like this would allow you to create a version that might then be performant after a while...?
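Roughly what I have in mind, as a sketch (these function names and the backoff policy are made up for illustration, not anything from the repo):

    // Sketch of the self-optimizing loop: generate a code path per route,
    // bypass the LLM once it exists, and re-verify with exponential backoff.
    type Handler = (method: string, path: string) => Promise<string>;

    interface Route {
      handler: Handler;       // code the LLM wrote for this path
      calls: number;          // total times this route has been hit
      verifiedPasses: number; // consecutive verifications that passed
    }

    const routes = new Map<string, Route>();

    // Placeholders for the real model calls.
    async function generateHandler(method: string, path: string): Promise<Handler> {
      return async () => `<!-- generated handler for ${method} ${path} -->`;
    }
    async function verifyWithLLM(output: string): Promise<boolean> {
      return output.length > 0;
    }

    async function handle(method: string, path: string): Promise<string> {
      const key = `${method} ${path}`;
      let route = routes.get(key);

      // First hit: ask the LLM to create a code path for this request.
      if (!route) {
        route = { handler: await generateHandler(method, path), calls: 0, verifiedPasses: 0 };
        routes.set(key, route);
      }

      route.calls++;
      const output = await route.handler(method, path);

      // Exponential backoff: verify every call at first, then every 2nd,
      // 4th, 8th... call, as long as verifications keep passing.
      if (route.calls % 2 ** route.verifiedPasses === 0) {
        if (await verifyWithLLM(output)) {
          route.verifiedPasses++;
        } else {
          // Verification failed: regenerate the code path and start over.
          route.handler = await generateHandler(method, path);
          route.verifiedPasses = 0;
        }
      }
      return output;
    }

The call counts in `routes` are also exactly the usage list you'd feed back into the prompt.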
In fact, this thought has been percolating in the back of my mind but I don't know how to process it:
If LLMs were perfectly deterministic - i.e. the same input always gives the same output - and we actually started memoizing results by materializing them for each input set - what would that start to resemble?
I feel as though such a thing might start to resemble the source information the model was trained on. The fact that the model compresses all the possibilities into a limited space is exactly what makes it more valuable - instead of having to store every input, function body, and output that an LLM could generate, it just stores the model.
But this blows my mind somehow because if we DID store all the "working" pathways, what would that knowledgebase effectively represent and how would intellectual property work anymore in that case?
Thinking about functional programming, there's potential in treating the LLM as the "anything" function, where a deterministic seed and input always produce the same output, with a knowledgebase of pregenerated outputs to speed up retrieval of acceptable results for a given seed and set of inputs... I can't put my finger on it... is it basically just a search engine then?
Let me try another way...
If I ask an LLM to generate a function for "what color is the fruit @fruit?", where fruit is the variable, and I memoize that @fruit = banana + seed 3 is "yellow", then the set of prompt, input @fruit = banana, seed = 3, output = "yellow" is now a fact that I could just memoize.
Would that be faster to retrieve the memoized result than calculating the result via the LLM?
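To make that concrete, here's roughly what I mean (a sketch; `runLLM` and the explicit seed parameter are assumptions, since this only works if the model really is deterministic per seed):

    // Sketch: memoize outputs keyed by (prompt template, input, seed).
    const memo = new Map<string, string>();

    async function runLLM(prompt: string, seed: number): Promise<string> {
      // Placeholder for the real (slow, expensive) model call.
      return `model output for "${prompt}" with seed ${seed}`;
    }

    async function askMemoized(template: string, input: string, seed: number): Promise<string> {
      const key = JSON.stringify([template, input, seed]);

      // If this (prompt, input, seed) combination was already materialized,
      // answering is just a lookup with no model call at all.
      const hit = memo.get(key);
      if (hit !== undefined) return hit;

      const output = await runLLM(template.replace("@fruit", input), seed);
      memo.set(key, output);
      return output;
    }

    // askMemoized("what color is the fruit @fruit?", "banana", 3) hits the
    // model once; every later call with the same arguments is a lookup, and
    // with a truly deterministic model that stored fact would be "yellow".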
And, what do we do with the thought that that set of information is "always true" with regards to intellectual property?
I honestly don't know yet.
So I guess kind of like v0