
Posted by signa11 19 hours ago

A sufficiently detailed spec is code (haskellforall.com)
574 points | 315 comments
gethwhunter34 13 hours ago||
the comments here are better than the article lol
duesabati 13 hours ago|
You mean as arguments that prove the author's PoV? Yeah, definitely
randyrand 17 hours ago||
True. That's why I only write assembly. Imagine a piece of software deciding register spill for you! Unhinged!
ur-whale 13 hours ago||
> A sufficiently detailed spec is code

Posting something like this in 2026: they must not have heard of LLMs.

And also: this is such a typical thing a functional programmer would say: for them, code is indeed a specification, with strictly no clue (or a vague high-level idea at most) as to how the whole effing machine it runs on will actually carry out the work.

This is not what code is, for real folks with real problems to solve outside of academic circles: code is explicit instructions on how to perform a task, not a bunch of constraints thrown together and damned be how the system will sort it out.

And to this day, they still wonder why functional programming almost never gets picked up in real-world applications.
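(Editor's note: the contrast the commenter draws can be made concrete. A minimal sketch, with hypothetical function names, of the same computation written declaratively — stating what the result is — versus imperatively — spelling out how the machine should produce it:)

```python
# Hypothetical illustration: both functions compute the sum of squares,
# but they differ in how much of the "how" is spelled out.

def total_declarative(xs):
    # Constraint-style description: "the sum of the squares of xs".
    # How iteration and accumulation happen is left to the language runtime.
    return sum(x * x for x in xs)

def total_imperative(xs):
    # Explicit instructions: initialize an accumulator, loop, add, return.
    acc = 0
    for x in xs:
        acc += x * x
    return acc
```

Both return the same value; the disagreement in the thread is about which style better reflects what "code" is for.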

Gabriel439 6 hours ago|
Author here: not only have I heard of LLMs but I built a domain-specific programming language for prompt engineering: https://github.com/Gabriella439/grace
charcircuit 18 hours ago||
This article ignores that AI agents have intelligence, which means they can figure out unspecified parts of the spec on their own. There is a lot of the design of software that I don't care about, and I'm fine letting AI pick a reasonable approach.
abcde666777 17 hours ago||
These algorithms don't have intelligence, they just regurgitate human intelligence that was in their training data. That also goes the other way - they can't produce intelligence that wasn't represented in their training input.
half-kh-hacker 15 hours ago|||
How does post-training via reinforcement learning factor in? Does every evaluated judgement count as 'the training data' ?
abcde666777 15 hours ago|||
I guess I'd place both within a broader umbrella: human generated input. So it still holds that they're regurgitating the decisions made by humans.
internet_points 13 hours ago|||
yes
charcircuit 14 hours ago|||
Firstly, it doesn't really matter if they can produce novel designs or not. 99% of what is being done is not novel: it is manipulating data in ways computers have been manipulating data for decades. The design of what is implemented is also going to be derivative of what already exists in the world. Being too novel makes for a bad product, since users will not easily understand how to use it or adapt their existing knowledge of how other things work.

Secondly, they are able to produce intelligence that wasn't represented in their training input. As a simple example, take a look at chess AI. The top chess engines have more intelligence about the game of chess than the top humans; they have surpassed human understanding of chess. Similarly with LLMs: they train on synthetic data that other LLMs have made and are able to find ways to get better and better on their own. Humans learn from the knowledge of other humans and it compounds. The same thing applies to AI: it is able to generate information, try things, and then later reference what it tried when doing something else.

abcde666777 12 hours ago||
Chess AI isn't trained in the same way. Things like AlphaZero partly worked by playing against themselves recursively, meaning they actually did generate novel data in the process.

That was partly possible because chess is a constrained domain: rigid rules and board states.

But LLM land is not like that. LLM land was trained on pre-existing text written by humans. They do discover patterns within said data but the point stands that the data and patterns within are not actually novel.
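(Editor's note: a toy sketch of why "constrained domain: rigid rules" matters. This uses brute-force game-tree search rather than AlphaZero-style self-play reinforcement learning, but it makes the same point: in a fully specified game like Nim, a program can derive winning knowledge purely by exploring the rules, with no human game records as input. All names here are hypothetical.)

```python
# Nim variant: players alternately take 1-3 stones from a pile;
# whoever takes the last stone wins. The program "discovers" the
# well-known strategy (leave the opponent a multiple of 4) by
# exhaustively exploring the rules -- no human training data involved.
from functools import lru_cache

MOVES = (1, 2, 3)

@lru_cache(maxsize=None)
def can_win(pile):
    # A position is winning if some legal move leaves the opponent
    # in a losing position. An empty pile means the mover already lost.
    return any(m <= pile and not can_win(pile - m) for m in MOVES)

def best_move(pile):
    # Return a move that leaves the opponent in a losing position, if any.
    for m in MOVES:
        if m <= pile and not can_win(pile - m):
            return m
    return min(MOVES)  # no winning move exists; play anything
```

The search works only because every board state and rule is rigidly enumerable, which is exactly the property the parent says LLM land lacks.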

charcircuit 12 hours ago||
>LLM land was trained on pre-existing text written by humans.

Some of the pretraining. Other pretraining is on text written by AI. Human training data is only a subset of what these models train on; there is a ton of synthetic training data now.

systemsweird 18 hours ago|||
Exactly. The real speed-up from AI will come when we can under-specify a system and the AI uses its intelligence to make good choices on the parts we left out. If you have to spec something out with zero ambiguity, you're basically just coding in English. I suspect current ideas around formal/detailed spec-driven development will be abandoned in a couple of years when models are significantly better.
majormajor 16 hours ago|||
This is what humans have traditionally done with greenfield systems. No choices have been made yet; they're all cheap decisions.

The difficulty has always arisen when the lines of code pile up AND users start requesting other things AND it is important not to break the "unintended behavior" parts of the system that arose from those initial decisions.

It would take either a sea-change in how agents work (think absorbing the whole codebase in the context window and understanding it at the level required to anticipate any surprising edge case consequences of a change, instead of doing think-search-read-think-search-read loops) or several more orders of magnitude of speed (to exhaustively chase down the huge number of combinations of logic paths+state that systems end up playing with) to get around that problem.

So yeah, hobby projects are a million times easier, as is bootstrapping larger projects. But for business work, deterministic behavior and consistent specs are important.

bigstrat2003 16 hours ago||||
> in a couple years when models are significantly better.

They aren't significantly better now than a couple of years ago. So it doesn't seem likely they will be significantly better in a couple of years than they are now.

charcircuit 12 hours ago||
A couple of years ago we didn't even have thinking models. AI could barely complete the line of code you were working on, and now they are capable of long-running tasks like building an entire C compiler.
macinjosh 17 hours ago|||
For now I would be happy if it just explored the problem space, identified the choices to be made, and filtered them down to the non-obvious and/or more opinionated ones. Bundle these together, ask me all at once, and then it's off to the races.
ozozozd 15 hours ago|||
Here is an easy example:

Say you and I both wrote the same spec that under-specifies the same parts. But we both expect different behavior, and trust that the LLM will make the _reasonable_ choices. Hint: "the choice that I would have made."

Btw, by definition, when we under-specify we leave some decisions to the LLM unknowingly.

And absent our looks or age as input, the LLM will make some _reasonable_ choices based on our spec. But will those choices be closer to yours or mine? My assumption: neither.
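(Editor's note: a minimal sketch of the ambiguity the commenter describes, with hypothetical function names. The one-line spec "remove duplicates from a list" is met by both implementations below, yet two readers — or two model runs — expecting different behavior would each call only one of them "reasonable":)

```python
# Spec: "remove duplicates from a list". Both functions satisfy it,
# but they return observably different results.

def dedupe_keep_order(xs):
    # Reading 1: keep the first occurrence, preserve input order.
    seen = set()
    out = []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

def dedupe_sorted(xs):
    # Reading 2: a set of unique values, returned in sorted order.
    return sorted(set(xs))
```

Any caller that depends on ordering gets different behavior from the two, even though both "meet the spec".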

charcircuit 14 hours ago||
And I'll say it doesn't matter. If it goes with your approach, then I'll say that approach works too. And if it goes with my approach, I hope you would recognize that my approach is also good enough for what we need. As long as the software meets the requirements, it's okay if it isn't implemented the exact way it would have been if it were done by hand.
bigfishrunning 6 hours ago||
> As long as the software meets the requirements

"Requirements" are a spec. If your requirements lack detail, you probably won't get what you expected -- vague "requirements" can be met without solving your problem in many cases.

charcircuit 5 hours ago||
It's okay if you don't get what you expect as long as it works. That's the point. There's a ton of different valid permutations and letting a LLM pick a good one ends up working in practice.
hrmtst93837 13 hours ago|||
Works until the first weird bug shows up and now nobody knows what the code is doing. "I don't care how this part works" turns into tracing some "reasonable" choice through the repo until you find the API edge case and the config key that only exists because the model improvised.
TheRoque 17 hours ago||
Exactly, I find that type of article too dismissive. Like, we know we don't have to write the full syntax of a loop when we write the spec "find the object in the list", and we might not even write that spec because the part is obvious to any human (hence to an LLM too).
sjeiuhvdiidi 16 hours ago||
Absolute nonsense. A sufficiently detailed "spec" is the code. What is wrong with you people ? Pure nonsense, all they have to offer.
tomhow 13 hours ago|
Please don't fulminate on HN. The guidelines make it clear we're trying for something better here. https://news.ycombinator.com/newsguidelines.html
kenjackson 16 hours ago|
Code is usually over-specified. I recently used AI to build an app for some HS kids. It built what I spec'd and it was great. Is it what I would've coded? Definitely not. In code I have to make a bunch of decisions that I don't care about, and some of those decisions will seem important to some people but not to others. For example, it built a web page whereas I would've built a native app. I didn't care either way and it doesn't matter either way. But those sorts of things matter when coding and often don't matter at all for the goal of the implementation.