
Posted by todsacerdoti 5 hours ago

Verified Spec-Driven Development (VSDD)(gist.github.com)
118 points | 58 comments
_pdp_ 4 hours ago|
Everything in this post stems from the assumption that you already know what you're doing, which is probably true for things you've built before. But I hope we can agree that you can't spec out something you have no clue how to build, let alone write the tests before you've even explored the boundaries of the problem space. That's completely unreasonable.

My second point is that this approach is fundamentally wrong for AI-first development. If the cost of writing code is approaching zero, there's no point investing resources to perfect a system in one shot. What matters more is how fast you can explore the edges. You can now spin up five agents to implement five different versions of the thing you're building and simply pick the best one.

In our shop, we have hundreds of agents working on various problems at any given time. Most of the code gets discarded. What we accept to merge are the good parts.

virgilp 3 hours ago||
Nothing of what you write here matches my experience with AI.

Specification is worth writing (and spending a lot more time on than implementation) because it's the part that you can still control, fully read, understand etc. Once it gets into the code, reviewing it will be a lot harder, and if you insist on reviewing everything it'll slow things down to your speed.

> If the cost of writing code is approaching zero, there's no point investing resources to perfect a system in one shot.

The AI won't get the perfect system in one shot, far from it! And especially not from sloppy initial requirements that leave a lot of edge (or not-so-edge) cases unaddressed. But if you have a good requirement to start with, you have a chance to correct the AI, keep it on track; you have something to go back to and ask another AI, "is this implementation conforming to the spec or did it miss things?"

> five different versions of the thing you're building and simply pick the best one.

Problem is, what if the best one is still not good enough? Then what? You do 50? They might all be bad. You need a way to iterate to convergence

manmal 32 minutes ago||
This. Waterfall never worked for a reason. Humans and agents both need to develop a first draft, then re-evaluate with the lessons learned and the structure that has evolved. It's very, very time-consuming to plan a complex, working system up front. NASA has done it, for the moon landing. But we don't have those resources, so we plan, build, evaluate, and repeat.
zozbot234 16 minutes ago||
That "first draft" still has to start with a spec. Your only real choice is whether the spec is an actual part of project documentation with a human in the loop, or it's improvised on the spot within the AI's hidden thinking tokens. One of these choices is preferable to the other.
theptip 2 hours ago|||
There’s a real tension here.

If you are vibe-coding, this approach is definitely going to kill your buzz and lose all the rapid iteration benefits.

But if you are working in an existing large system, vibe coding is hard to bring into the core. So I think something more formal like OP is needed to reap major benefits from AI.

zozbot234 20 minutes ago||
This is just AI-written slop, but even if you're vibe coding and want to go for rapid iteration, you still benefit by having the AI write out a broad plan of what it's going to do and looking it over before telling it to implement it. One-shot vibe coding is totally worthless, but the more you're aware of what the AI is thinking about and ready to revise its plans, the better it can potentially do.
noosphr 1 hour ago|||
If the price of code is zero, then changing the spec also costs zero in terms of code. This was always the problem with specs before: you'd write one, run it through the prover, write the code, then have to throw out the whole thing because there was a business case you didn't account for.

Now the bottom 98% can be given to a robot with a clear success signal other than 'it looks about right'.

baq 1 hour ago|||
code is orthogonal to spec. you can iterate on the code and iterate on the spec. the spec is not meant to be constant, it's a form of ECC for the artifacts of the coding pipeline.
giancarlostoro 3 hours ago|||
That's why I have AI do a write up about the system I want to build, and I then review it all. If it looks good, I use it as my prompt.
DaylitMagic 4 hours ago|||
If you don't mind the question with regard to your second point, couldn't what you've done in your shop be also used here? There's no reason why 'try to develop it five different ways and pick the best parts out of each' is incompatible with the 'VSDD' concept; seems like it could be included?
_pdp_ 43 minutes ago|||
A lot of interesting replies below this comment that I won't be able to respond to individually.

I'll just leave this here:

https://en.wikipedia.org/wiki/P_versus_NP_problem

robot-wrangler 1 minute ago||
That seems barely related and settles nothing. The bottom line is simple: saying "you can't spec out something you have no clue how to build" is like saying you cannot desire coldness unless you understand how to build a refrigerator. It's just the difference between what and how. If you don't know the difference between implementation and specification, just try a whole day of answering "what" and "why" questions with "how" answers and see how it goes.
zppln 2 hours ago|||
> But I hope we can agree that you can't spec out something you have no clue how to build

Eh, of course you can. You can specify anything as long as you know what you want it to do. This is like systems engineering 101 and people do it successfully all the time.

tikhonj 4 hours ago|||
> you can't spec out something you have no clue how to build

Ideally—and at least somewhat in practice—a specification language is as much a tool for design as it is for correctness. Writing the specification lets you explore the design space of your problem quickly with feedback from the specification language itself, even before you get to implementing anything. A high-level spec lets you pin down which properties of the system actually matter, automatically finds inconsistencies, and forces you to resolve them explicitly. (This is especially important for using AI because an AI model will silently resolve inconsistencies in ways that don't always make sense but are also easy to miss!)

Then, when you do start implementing the system and inevitably find issues you missed, the specification language gives you a clear place to update your design to match your understanding. You get a concrete artifact that captures your understanding of the problem and the solution, and you can use that to keep the overall complexity of the system from getting beyond practical human comprehension.

A key insight is that formal specification absolutely does not have to be a totally up-front tool. If anything, it's a tool that makes iterating on the design of the system easier.

Traditionally, formal specifications have been hard to use as design tools partly because of incidental complexity in the spec systems themselves, but mostly because of the overhead needed to not only implement the spec but also maintain a connection between the spec and the implementation. The tools that have been practical outside of specific niches are the ones that solve this connection problem. Type systems are a lightweight sort of formal verification, and the reason they took off more than other approaches is that typechecking automatically maintains the connection between the types and the rest of the code.

LLMs help smooth out the learning curve for using specification languages, and make it much easier to generate and check that implementations match the spec. There are still a lot of rough edges to work out but, to me, this absolutely seems to be the most promising direction for AI-supported system design and development in the future.
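To make the "check that implementations match the spec" idea concrete, here is a minimal sketch (all names hypothetical) where the spec is an executable oracle: a slow, obviously-correct reference that candidate implementations are checked against, so the connection between spec and code is maintained mechanically, much like typechecking maintains the connection between types and code:

```python
# Spec-as-oracle sketch: the reference function pins down the behaviour,
# and conformance is checked automatically over a set of cases.

def spec_sorted(xs):
    """Reference behaviour: a permutation of xs in non-decreasing order."""
    return sorted(xs)

def impl_sorted(xs):
    """Candidate implementation (here an insertion sort) to verify."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def conforms(impl, spec, cases):
    """Did the implementation miss anything the spec pins down, on these cases?"""
    return all(impl(c) == spec(c) for c in cases)

print(conforms(impl_sorted, spec_sorted, [[], [3, 1, 2], [5, 5, 1]]))  # True
```

This is exactly the "is this implementation conforming to the spec?" question posed upthread, phrased so either a human or another AI can answer it mechanically.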

politician 4 hours ago||
"Most of the code gets discarded." If you don't mind sharing, what's your signal-to-token ratio?
kvdveer 3 hours ago||
How do you propose we measure signal? Lines of code is renowned for being a very bad measure of anything, and I really can't come up with anything better.
politician 2 hours ago||
The OP said that they kept what they liked and discarded the rest. I think that's a reasonable definition for signal; so, the signal-to-token ratio would be a simple ratio of (tokens committed)/(tokens purchased). You could argue that any tokens spent exploring options or refining things could be signal and I would agree, but that's harder to measure after the fact. We could give them a flat 10x multiplier to capture this part if you want.
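As a back-of-the-envelope sketch (the function name, numbers, and the flat 10x exploration allowance are all hypothetical, taken from the suggestion above):

```python
# Rough sketch of the proposed signal-to-token ratio:
# (tokens committed) / (tokens purchased), with an optional flat
# multiplier to credit tokens spent exploring options.

def signal_to_token_ratio(tokens_committed, tokens_purchased, exploration_multiplier=1):
    if tokens_purchased == 0:
        raise ValueError("no tokens purchased")
    # Cap at 1.0 so the exploration allowance can't push the ratio past 100%.
    return min(1.0, tokens_committed * exploration_multiplier / tokens_purchased)

# Example: 50k tokens merged out of 10M purchased, with the 10x allowance.
ratio = signal_to_token_ratio(50_000, 10_000_000, exploration_multiplier=10)
print(f"{ratio:.2%}")  # 5.00%
```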
mirekrusin 34 minutes ago||
I'm going to call it out as bullshit, you can't dig out "what you like" from "hundreds of agents running all the time".
WestN 2 hours ago||
Short take: replace TDD with BDD, and maybe add DDD as a spice. Otherwise this is a fairly good article.

Why not TDD? Because a lot of developers use LLMs to create tests today, and a lot of the training data shows how to do this, making it something the LLM can either figure out by itself or cheat at. Both are equally bad.

A somewhat controversial take is that you should simply avoid writing tests which the LLM can produce by itself, similar to how we removed the agents.md file in the last week.
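To illustrate the BDD flavour suggested here, a hedged sketch (all names and the discount rule are hypothetical): tests phrased as behaviour, in given/when/then terms, rather than mirroring implementation details an LLM could regurgitate:

```python
# BDD-style test sketch: the assertion speaks in domain terms
# ("a cart", "checkout with a discount"), not implementation terms.

def given_a_cart_with(items):
    return {"items": list(items)}

def when_checkout(cart, discount=0.0):
    # Apply a single discount fraction to the cart total.
    total = sum(price for _name, price in cart["items"])
    return round(total * (1 - discount), 2)

def test_discount_applies_to_whole_cart():
    cart = given_a_cart_with([("book", 10.0), ("pen", 2.0)])
    assert when_checkout(cart, discount=0.5) == 6.0

test_discount_applies_to_whole_cart()
print("ok")
```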

chrisweekly 1 hour ago|
could you say more about removing agents.md?
supermdguy 12 minutes ago||
Probably referencing this: https://news.ycombinator.com/item?id=47034087
Robdel12 3 hours ago||
I’ve gotten the absolute best results from LLMs just acting like the software engineer I’ve aspired to be the past 15 years.

Normal dev things. Scope the ticket properly, break it down. Test well. Write the correct docs.

LLM specific things are going to be gone next week

twodave 1 hour ago|
I've been feeling this very recently. I just pretend the agent is a junior dev and tell them to do things I don't have time for. Reviewing the changes at my convenience is a lot like checking in on a junior dev, too. On the other hand, I do feel like I get better results with the same teeing up that a junior dev requires, so I try to remove as many unknowns/dependencies as possible (or else explicitly tell it to leave some things as stubs) before sending him off to do something.
SirensOfTitan 3 hours ago||
LLM-assisted development feels a lot like trend-driven development. When dealing with varying techniques and heterogeneous prompts and goals, it's easy to fall into something of a gambler's fallacy with respect to a particular technique.

Spec-driven development feels pretty questionable to me. I’m sure it works fine for feature work that is predictable or has been done before, but then I wonder why you’d waste your time with it.

Prior to LLMs, the whole vibe was to iterate rapidly toward a working thing so you can see what works and what doesn’t. Why would we abandon that strategy as an industry when the cost of writing code is ostensibly getting cheaper?

If I’m using LLMs at all, I’m using them to do a breadth search of prior art or ideas, then I’m doing what I might call a prototype onion: successive clean room attempts at a novel problem, accumulating what I learn at each attempt in each successive prompt. I usually then take the prototype and write the final version myself so I’m properly internalizing the idea.

Ultimately a lot of this prompt work feels like procrastination. It is not about understanding where these tools are useful and where they are not, but about trying to have them consume every aspect of the work.

getnormality 3 hours ago|
Or maybe people who like talking much more than they like code are now very excited about the possibility that talking has eaten software development.

This is exactly backwards. For many tasks, formal languages are better, more real, more beautiful than English. No matter how many millions of tokens you have, you will never talk the formulas of Fermat, Euler, and Gauss into irrelevance. And the same is true of good code.

Of course, a lot of code is ugly and utilitarian too, and maybe talking will make it less painful to write that stuff.

skydhash 1 hour ago||
> a lot of code is ugly and utilitarian too,

And as everyone who can abstract well knows: ugly code that has staying power has a reason to be ugly. And the best of it will be annotated with HACK and NOTE comments. Anything else can be refactored and abstracted into a much better form. But that requires a sense of craftsmanship (to know better) and a time allowance for such activity.

Animats 10 minutes ago||
Is the posting a description of a real system, or just imagination? Is there a link to something that makes this real?
zozbot234 3 minutes ago|
Imagination? More like hallucination - the AI-generated kind.
melvinroest 2 hours ago||
I've been doing something less formal. I stumbled upon Riaan Zoetmulder's free course on deep learning and medical image analysis [1] and found his article on spec-driven development [2]. He adapts the V-Model by specifying three things upfront: requirements, system design and architecture. The rest gets generated. He mentioned a study showing that LLM assistance slowed down experienced open source devs on large codebases: the model doesn't know the implicit context. And to me that's the thing! An LLM should have an index of some sort.

So I vibe coded my own static analysis program where I just track my own function calls. It outputs a call graph of all my self-defined functions and shows the name (and Python type hints) of what it is calling (excluding standard library function, also only self-defined stuff). Running that program and sending the diff from time to time seems to have helped a lot already.

[1] https://www.riaanzoetmulder.com/courses/deep-learning-medica...

[2] https://www.riaanzoetmulder.com/articles/ai-assisted-program...
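The call-graph idea described above can be sketched with just the standard library's `ast` module (the function name and overall design here are my own guesses, not the commenter's actual tool): record which of your own functions call which others, ignoring stdlib and third-party calls:

```python
# Minimal static call-graph extractor for self-defined functions only.
import ast

def self_call_graph(source: str) -> dict[str, list[str]]:
    tree = ast.parse(source)
    # Names of functions defined in this file.
    defined = {n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)}
    graph = {}
    for fn in ast.walk(tree):
        if not isinstance(fn, ast.FunctionDef):
            continue
        calls = []
        for node in ast.walk(fn):
            # Plain-name calls only; method calls (x.split()) are Attribute nodes.
            if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                if node.func.id in defined:  # skip len(), print(), etc.
                    calls.append(node.func.id)
        graph[fn.name] = calls
    return graph

code = """
def parse(raw): return raw.split()

def run(raw):
    tokens = parse(raw)
    return len(tokens)
"""
print(self_call_graph(code))  # {'parse': [], 'run': ['parse']}
```

Diffing that output over time gives the LLM a cheap "index" of how your own functions wire together, without shipping the whole codebase as context.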

choeger 2 hours ago||
If I am not mistaken, the verification is problematic here. It's run too late.

A piece of code that satisfies a single test will most likely not be provable to adhere to the spec.

Worse, the whole spec can only be correctly implemented in total. You cannot work iteratively by satisfying one constraint after the other. The same holds for the test cases. That means that satisfying the last test or fulfilling the last constraint will take much more work than the first. The number of tests passed is not a good metric for completion of the implementation.

FrankRay78 18 minutes ago||
Not much different from what I did manually when my employer outsourced development to India.
DaylitMagic 4 hours ago||
Some random (hopefully additive and helpful) thoughts:

Many companies have older code bases / databases that can be somewhat well defined (and somewhat not). If things have been slowly iterating over 35 years, there's a lot of undocumented edge behavior that may occur; it may be beneficial to have a step before the Edge Case Catalog with some kind of prompting to catalogue how the inputs and outputs work, and then confirm that Input A really produces Output A as expected. (Legacy systems often have weird orchestration that nobody remembers.)

(Sub-note: This is somewhat part of the provable properties catalog; while this step could be placed there, it would require a re-run of edge case catalog build potentially, which isn't a bad thing.)

A small note that I personally think is a good idea: better code commenting than has been outlined here. The spec itself should be woven into the code, even slightly over-commenting each aspect, because otherwise the spec gets lost from the code. The code itself should serve as context, especially in the TDD stage.

I think it's implicit but may be worth overtly stating that for the Code Quality check in Phase 3 that it also checks on a zero-trust basis, and doesn't include things like hardcoded keys.

I'm not sure what Chainlink is (sorry!) but I like the ideas outlined around the decomposition - but it misses stringing everything together end-to-end in the way outlined here (it asks to create each part, but never actually weaves the whole together).

Something not covered - is sequencing work and decomposition of work. A spec can create multiple dependencies within itself, requiring things to be worked on in a specific order.

pron 3 hours ago|
If you come up with a strategy that seems to "solve programming", then you know for certain there must be a flaw in it, and you need to identify where it is that corners must be cut and how.

Computer science is an introspective discipline because it studies the essential difficulty of problems regardless of the process taken to solve them, and programming itself (i.e. the problem of producing a correct, or correct-enough, program) is such a problem that can be, and has been, studied. The question of learning whether a program X satisfies some correctness property P is known as the model-checking problem, and we know that answering it with certainty is intractable. For example, some properties that are true for some program would take no less than 10 minutes to verify (regardless of how that verification is done), others will take no less than 10 hours, others no less than 10 months, others no less than 10 years, and so on, and we don't know ahead of time whether the property is true, and if it is, where on this spectrum it falls.

So suppose you decide some property must be proven with full certainty, the question becomes, how long do you wait before giving up waiting for the validation and what do you do when you give up? If you then decide that you’re okay with less than 100% confidence, what approach do you take and how much confidence do you actually have? The problem with that is that the answer to that question often requires a deep understanding of the implementation. I.e. if you have two programs, X and Y, that compute the same function, one less-than-perfect approach would give you 99% confidence with one of them, but only 10% confidence with another.

twoodfin 3 hours ago|
More, “If your LLM comes up with a strategy to ‘solve programming’ …”