Sorry, this is a bit off-topic, but I have to call this out.
The area absolutely does change; you can see this in the trivial example from the first to the second step in https://yagmin.com/blog/content/images/2026/02/blocks_cuttin...
The corners are literally cut away.
What doesn't change is the length of the edges, which is a kind of Manhattan distance.
The straight line's length is the claimed limit of the edge length, but the edge length never actually approaches it; it stays constant at every iteration.
The area, however, absolutely does approach its limit: each iteration removes half of the remaining area.
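A quick numeric check of both claims; a minimal sketch assuming the usual unit-square staircase construction, since the linked image is truncated:

```typescript
// Staircase approximation of a unit square's diagonal.
// With k axis-aligned steps, the path length stays at 2 forever,
// while the area between the staircase and the diagonal is 1 / (2k),
// so each corner-cutting iteration (doubling k) halves the remaining area.
for (let iter = 0, k = 1; iter <= 5; iter++, k *= 2) {
  const pathLength = k * (1 / k + 1 / k); // k steps, each going right 1/k then up 1/k
  const area = k * 0.5 * (1 / k) ** 2;    // k right triangles with legs of length 1/k
  console.log(`iteration ${iter}: length = ${pathLength}, area above diagonal = ${area}`);
}
// length is always 2 and never approaches the true diagonal Math.SQRT2 ≈ 1.414;
// the area halves every iteration (1/2, 1/4, 1/8, ...) and converges to 0.
```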
It’s not like it changes our industry’s overall flavour.
How many SaaS apps are Excel spreadsheets made production-grade?
It’s like every engineer forgets that humans have been building a Tower of Babel for 300,000 years. And somehow there is always work to do.
People like vibe coding and will do more of it. You can then make money fixing the problems the world will still have when you wake up in the morning.
The current solution is to simply reroll the whole project and let the LLM rebuild everything with new knowledge. This is fine until you have real data, users and processes built on top of your project.
Maybe you can get away with doing that for a while, but tech debt needs to be paid down one way or another. Either someone makes sense of the code, or you build so much natural language scaffolding to keep the ship afloat that you end up putting in more human effort than just having someone codify it.
We are definitely headed toward a future where we have lots of these Frankenstein projects in the wild, pulling down millions in ARR but teetering in the breeze. You can definitely do this, but "a codebase always pays its debts."
Yea, the more things change, the more they stay the same. This latest AI hype cycle seems to be no different, which I think will become more widely accepted over the next couple of years as creating deployable, production-ready, maintainable, sellable, profitable software remains difficult for all the reasons besides the hands-to-keyboard writing of code.
Let's see where it goes!
My intuition from reading what you wrote is... nobody is gonna want to write PTLs and ISLs.
The generation of these meta documents will also be handled by AI, not humans.
* We need a deterministic input language
* The LLM generates machine code
Isn't that just a compiler? Why do we need the LLM at that point?
Only one or two of those questions are actually related to programming. (Even though most developers wear multiple hats.) If an organization has the resources to hold a six-person meeting for adding dark mode, I'd sure hope at least one of them is a designer knowledgeable about UX, because most of those questions are ones they should bring up and have an answer for.
I like the idea that 'code is truth' (as opposed to 'correct'). An AI should be able to use this truth and mutate it according to a specification. If the output of an LLM is incorrect, it is unclear whether the specification is incorrect or if the model itself is incapable (training issue, biases). This is something that 'process engineering' simply cannot solve.
Though I have to push back on the idea of "code as truth". Thinking about all the layers of abstraction and indirection... haven't data and the database layer typically been the source of truth?
Maybe I'm missing something in this iteration of the industry where code becomes something other than what it's always been: an intermediary between business and data.
That’s why you should always make your agent write tests first and run them to make sure they fail and aren’t testing that 1+1=2. Then and only then do you let it write the code that makes them pass.
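For example, a minimal red-first test in that spirit (vitest-style; `formatPrice` is a hypothetical function, not something from this thread):

```typescript
// Red-first: the tests are written and run BEFORE the real implementation.
import { describe, expect, it } from "vitest";

// Hypothetical function under test. The stub guarantees the suite
// starts out failing for the right reason, not because of a typo.
function formatPrice(cents: number): string {
  throw new Error("not implemented");
}

describe("formatPrice", () => {
  it("formats cents as a dollar string", () => {
    expect(formatPrice(1999)).toBe("$19.99");
  });

  it("rejects negative amounts", () => {
    expect(() => formatPrice(-1)).toThrow(RangeError);
  });
});
// Run the suite and confirm both tests fail; only then let the agent
// write the implementation that turns them green.
```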
This test-first loop creates an enormous library of documented micro-decisions that agents can read later.
Your codebase ends up with executable documentation of how things are meant to work. Agents can do archaeology on it and demystify pretty much anything in the codebase. It makes agents’ amnesia a non-issue.
I find this far more impactful than keeping specs around. It detects regressions and doesn’t drift.
I'm working on a tool that uses structured specs as a single source of truth for automated documentation and code generation. Think the good parts of SpecKit + Beads + Obsidian (it's actually vault-compatible) + Backstage, in a reasonably sized TypeScript codebase that leverages existing tools. The interface is almost finalized; I'm polishing the CLI, squashing bugs, and getting good docs ready for a real launch, but if anyone's curious they can poke around the GitHub in the meantime.
One neat trick I'm leveraging to keep the nice human ergonomics of folders + markdown while enforcing structure and type safety: a CUE intermediate representation that serializes to a folder of markdown files, with all object attributes besides name and description going into front matter. It's the same pattern Obsidian vaults use; you can even open the vaults it creates in Obsidian if you want.
This structure lets you lint your specs and, via template pattern matching, generate code + tests + docs from your project vault, so you have one source of truth that's very human-accessible.
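A minimal sketch of that serialization pattern (the shapes and field names here are my assumptions, not the tool's actual API):

```typescript
// Everything except name and description goes into YAML front matter;
// the description becomes the markdown body, as in an Obsidian vault.
import { stringify } from "yaml";

interface SpecNode {
  name: string;
  description: string;
  [attr: string]: unknown; // validated upstream by the CUE schema
}

function toMarkdown({ name, description, ...frontMatter }: SpecNode): string {
  return `---\n${stringify(frontMatter)}---\n\n# ${name}\n\n${description}\n`;
}

console.log(
  toMarkdown({
    name: "checkout-service",
    description: "Handles cart totals and payment intents.",
    owner: "payments-team",
    status: "draft",
  }),
);
```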
I've been exploring spec driven workflows, but from a different angle.
I've been thinking about how to describe systems in a standard format, recursively. Instead of one 10-page doc, you might get ten one-pagers, starting from the highest level of abstraction and recursing down into the parts and subsystems, all following the same format. Building out this graph of domains provides reusable nodes/bits of context.
This then extends to any given bit of software, which is a system in itself composed of the intersections of a lot of different domains and subsystems.
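One possible shape for such a recursive format, as a sketch (field names are illustrative; this is my reading of the idea, not a defined spec):

```typescript
// A system description that recurses: every subsystem gets the
// same one-page format, and shared domains become reusable nodes.
interface SystemDoc {
  name: string;
  summary: string;          // the one-pager: what this system is for
  interfaces: string[];     // what it exposes to its parent and siblings
  domains: string[];        // reusable nodes shared across the graph
  subsystems: SystemDoc[];  // recurse down into the parts
}
```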
CUE looks interesting and is something I'll be digging into more.
I'm not sure I agree, you don't need to vibe document at all.
What I do in general is:

- write two separate markdown files: business requirements first, implementation later
- keep refining both as the work progresses and stakeholders provide feedback
Before merging, I have /docs updated based on the requirements and implementation files. New business logic gets included in the business docs (what and why), and new rules/patterns get merged into the architecture/code docs.
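Concretely, the layout might look something like this (file names are illustrative, not prescriptive):

```
project/
  requirements.md     # business: what and why, refined with stakeholder feedback
  implementation.md   # technical: how, updated as the work progresses
  docs/
    business/         # merged from requirements.md before each PR
    architecture/     # rules and patterns merged from implementation.md
```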
Works great, and gets better with every new PR and iteration.