Posted by lubujackson 2 days ago

The Missing Layer (yagmin.com)
55 points | 53 comments
xnorswap 1 day ago|
> but no matter how small you make the steps, the area never changes

Sorry, this is a bit off-topic, but I have to call this out.

The area absolutely does change, you can see this in the trivial example from the first to second step in https://yagmin.com/blog/content/images/2026/02/blocks_cuttin...

The corners are literally cut away.

What doesn't change is the length of the edges, which is a kind of Manhattan distance.

The edge itself has the straight line as its limit, but the length of the edge never approaches the length of that limit; it stays fixed.

The area, however, absolutely does approach the limit: in fact you remove half of the "remaining" area at each iteration.
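
A quick numeric sketch (my own, assuming the standard staircase construction over a unit square) makes the distinction concrete: the perimeter never moves, while the leftover area halves every step.

    # Staircase approximation of the diagonal of a unit square.
    # At step n the staircase has 2**n equal treads.
    for n in range(1, 8):
        steps = 2 ** n
        perimeter = steps * (1 / steps + 1 / steps)    # horizontal + vertical runs
        leftover_area = steps * (1 / steps) ** 2 / 2   # little triangles above the diagonal
        print(n, perimeter, leftover_area)
    # perimeter stays 2.0 (the Manhattan distance) forever,
    # while the leftover area halves each iteration and tends to 0.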

sunir 1 day ago||
I find these throes of passionate despondency similar to the 1980s personal computing revolution. Oh dear. Giving mere mortals the power of computing?! How many people would abandon their computers or phones?

It’s not like it changes our industry’s overall flavour.

How many SaaS apps are excel spreadsheets made production grade?

It’s like every engineer forgets that humans have been building a Tower of Babel for 300,000 years. And somehow there is always work to do.

People like vibe coding and will do more of it. Then make money fixing the problems the world will still have when you wake up in the morning.

lubujackson 1 day ago||
I am not against vibe coding at all, I just don't think people understand how shaky the foundation is. Software wants to be modified. With enough modifications, the disconnect between the code as it is imagined and the code as it actually exists becomes too great a distance to bridge.

The current solution is to simply reroll the whole project and let the LLM rebuild everything with new knowledge. This is fine until you have real data, users and processes built on top of your project.

Maybe you can get away with doing that for a while, but tech debt needs to be paid down one way or another. Either someone makes sense of the code, or you build so much natural language scaffolding to keep the ship afloat that you end up putting in more human effort than just having someone codify it.

We are definitely headed toward a future where we have lots of these Frankenstein projects in the wild, pulling down millions in ARR but teetering in the breeze. You can definitely do this, but "a codebase always pays its debts."

computerex 1 day ago||
This hasn't been my experience at all working on production codebases with LLMs. What you are describing is more like how it was in the GPT-3.5 era.
lubujackson 1 day ago||
I'm not talking about using LLMs, but about using them without ever looking at the code.
cootsnuck 1 day ago||
But this time is different! For reasons!

Yeah, the more things change the more they stay the same. This latest AI hype cycle seems to be no different, which I think will become more widely accepted over the next couple of years as creating deployable, production-ready, maintainable, sellable, profitable software remains difficult for all the reasons besides the hands-to-keyboard writing of code.

aditgupta 1 day ago||
Jim nailed the core problem. I've been building exactly this "missing layer" for the past few months. The challenge isn't just connecting product decisions to code. It's that product context lives in a format that's optimized for human communication, not machine consumption. When engineers feed this to LLMs, they spend massive effort "re-contextualizing" what stakeholders already decided.

I built TypMo (https://typmo.com) around two structured formats that serve as this context layer:

- PTL (Product Thinking Language): structures product decisions (personas, objectives, constraints, requirements) in a format both humans can read/edit and LLMs can parse precisely. Think YAML for product thinking.

- ISL (Interface Structure Language): defines wireframes and component hierarchies in structured syntax that compiles into visual mockups and production-ready prompts.

LLMs don't need more context, they need structured context. The workflow Jim describes (stakeholder meeting → manager aggregates → engineer re-contextualizes for LLM) becomes: stakeholder meeting → PTL compilation → IA generation → production prompts.
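
Roughly, the flavor of "structured product context" I mean, sketched here in plain Python for illustration only (this is not TypMo's actual PTL syntax):

    # Illustrative only: structured product context as plain data.
    # The real PTL format is TypMo's own and may look nothing like this.
    dark_mode_context = {
        "personas": ["late-night reader", "accessibility-focused user"],
        "objectives": ["reduce eye strain", "respect the OS-level theme preference"],
        "constraints": [
            "blog and FAQ run on a separate frontend",
            "third-party widget has a static white background",
        ],
        "requirements": [
            {"id": "DM-1", "text": "toggle lives in account settings"},
            {"id": "DM-2", "text": "default follows the OS preference"},
        ],
    }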

Let's see where it goes!

4b11b4 1 day ago|
Nice, and I'm thinking along similar lines, but not DSLs.

My intuition from reading what you wrote is... nobody is gonna want to write PTLs and ISLs.

aditgupta 1 day ago||
Exactly right, and that's the core point. Users don't write PTL or ISLs. Let's say you have customer interactions (fetched from Zoom) or product/research notes. The AI structures that into PTL automatically. You see clean, editable notes and visual wireframes + high-fidelity prototypes and prompts. The structured formats exist in the background for token efficiency and interoperability.
4b11b4 1 day ago||
Ah I see. When I say write... I also mean review and revise.
SeriousM 23 hours ago||
... and understand. Whenever I vibe code something and there is an issue with it, I either have to step in myself and learn what was generated, or hand it back to the AI to fix it.

The generation of these meta documents will also be fixed by AI, not humans.

asim 1 day ago||
We need a language and a transpiler. Honestly, the LLM has many uses. Agents have many uses. And we are narrowing down how to make them deterministic and predictable for programming machines and software. But that also means we need something beyond natural language for the actual implementation.

Yes, we’ve moved a level up, but engineers are not product managers, so as much as we can define the scope and outline a project like a two-week sprint using scrum or kanban, the reality is that deterministic input for deterministic output is still the way to go. Just as compilers and higher-level languages opened the doors to the next phase, the LLM manages this translation and compilation, but it’s missing a sort of intermediary language, a format that can be processed and compiled much more directly down to machine code. We’re essentially talking about an LLVM. Why are we asking LLMs to write Go or Python code, when we could much better translate an intermediary language into something far more efficient and performant? So I think there’s still work to be done.
wtetzner 1 day ago|
Am I understanding what you're saying correctly?

* We need a deterministic input language

* The LLM generates machine code

Isn't that just a compiler? Why do we need the LLM at that point?

CuriouslyC 1 day ago||
If the compiler only gets you 80% of the way there, but what it does is sufficient to put the LLM on rails, like programming language mad libs, I'd say that's a win.
4b11b4 1 day ago|||
Yup, that's the idea. Mad libs are still constrained
wtetzner 1 day ago|||
I feel like I'm still not understanding something. How does making the output from the LLM lower level help?
CuriouslyC 1 day ago||
Concrete example: Next/Turborepo. These tools make your life easier if you drink some kool aid. Rather than have the agent scaffold the app, you have the agent use a tool that scaffolds. Agents write specs to manage tools, and those tools scaffold the code; then the agents just sprinkle in business logic that is too bespoke for codegen.
helloplanets 1 day ago||
> Let's say your organization wants to add "dark mode" to your site. How does that happen? A site-wide feature usually requires several people to hash out the concerns and explore costs vs. benefits. Does the UI theming support dark mode already? Where will users go to toggle dark mode? What should the default be? If we change the background color we will need to swap the font colors. What about borders and dividers? What about images? What about the company blog, and the FAQ area, which look integrated but run on a different frontend? What about that third-party widget with a static white background?

Only one or two of those questions are actually related to programming. (Even though most developers wear multiple hats.) If an organization has the resources to hold a six-person meeting for adding dark mode, I'd sure hope at least one of them is a designer and knowledgeable about UX, because most of those questions are ones that they should bring up and have an answer for.

einrealist 1 day ago||
I am curious to know what he has in mind. This 'process engineering' could be a solution to problems that BPM and COBOL are trying to solve. He might end up with another formalized layer of indirection (with rules and constraints for everyone to learn) that integrates better with LLM interactions (which are also evolving rapidly).

I like the idea that 'code is truth' (as opposed to 'correct'). An AI should be able to use this truth and mutate it according to a specification. If the output of an LLM is incorrect, it is unclear whether the specification is incorrect or if the model itself is incapable (training issue, biases). This is something that 'process engineering' simply cannot solve.

reg_dunlop 1 day ago|
I'm also curious about what a process engineering abstraction layer looks like. Though the final section does hint at it: more integration of more stakeholders, closer to the construction of code.

Though I have to push back on the idea of "code as truth". Thinking about all the layers of abstraction and indirection... haven't data and the database layer typically been the source of truth?

Maybe I'm missing something in this iteration of the industry where code becomes something other than what it's always been: an intermediary between business and data.

einrealist 1 day ago||
Yes, the database layer and the data itself are also sources of truth. Code (including code run inside the database, such as SQL, triggers, stored procedures and other native modules) defines behaviour. The data influences behaviour. This is why we can only test code with data that is as close to reality as possible, or even production data.
cadamsdotcom 22 hours ago||
The devil is always in the details.

That’s why you should always make your agent write tests first and run them to make sure they fail and aren’t testing that 1+1=2. Then and only then do you let it write the code that makes them pass.
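
A minimal sketch of what "fails first, for the right reason" looks like (assuming pytest, and a slugify helper that is purely hypothetical here):

    # test_slugify.py -- written and run before the implementation exists,
    # so the first run fails with an ImportError rather than passing vacuously.
    from myapp.text import slugify  # hypothetical module the agent writes next


    def test_slugify_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"


    def test_slugify_strips_punctuation():
        assert slugify("Hello, World!") == "hello-world"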

That creates an enormous library of documentation of micro decisions that can be read by the agent later.

Your codebase ends up with executable documentation of how things are meant to work. Agents can do archaeology on it and demystify pretty much anything in the codebase. It makes agents’ amnesia a non-issue.

I find this far more impactful than keeping specs around. It detects regressions and doesn’t drift.

CuriouslyC 1 day ago||
Spec-driven development is great in theory, but it has a lot of issues; I rant about them here: https://sibylline.dev/articles/2026-01-28-problems-with-spec...

I'm working on a tool that uses structured specs as a single source of truth for automated documentation and code generation. Think the good parts of SpecKit + Beads + Obsidian (it's actually vault compatible) + Backstage, in a reasonably sized TypeScript codebase that leverages existing tools. The interface is almost finalized; I'm working on polishing up the CLI, squashing bugs, and getting good docs ready for a real launch, but if anyone's curious they can poke around the GitHub in the meantime.

One neat trick I'm leveraging to keep the nice human ergonomics of folders + markdown while enforcing structure and type safety is to have a CUE intermediate representation, then serialize to a folder of markdown files, with all object attributes besides name and description being thrown in front matter. It's the same pattern used by Obsidian vaults; you can even open the vaults it creates in Obsidian if you want.
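
The serialization step is the boring part; roughly this shape (my own sketch in Python, not the tool's actual code):

    # Sketch of the pattern, not the tool's actual code: everything except
    # name/description goes into YAML front matter, the description is the body.
    def entity_to_markdown(entity: dict) -> str:
        front_matter = {k: v for k, v in entity.items()
                        if k not in ("name", "description")}
        lines = ["---"]
        lines += [f"{key}: {value}" for key, value in front_matter.items()]
        lines += ["---", "", f"# {entity['name']}", "", entity["description"]]
        return "\n".join(lines)

    print(entity_to_markdown({
        "name": "auth-service",
        "description": "Handles login and session tokens.",
        "status": "draft",
        "owner": "platform-team",
    }))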

This structure lets you lint your specs, and do code generation via template pattern matching to automatically generate code + tests + docs from your project vault, so you have one source of truth that's very human accessible.

Jarwain 1 day ago||
This sounds quite interesting!

I've been exploring spec driven workflows, but from a different angle.

I've been thinking about how to describe systems with a standard format, recursively. Instead of one 10-page doc, you might get ten one-pagers, starting from the highest level of abstraction and recursing down into the parts and subsystems, all following the same format. Building out this graph of domains provides certain reusable nodes/bits of context.

This then extends to any given bit of software, which is a system in itself composed of the intersections of a lot of different domains and subsystems.
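
As a purely hypothetical sketch of that recursive shape (the names and fields are mine, not a proposed standard):

    # Every system is described the same way; its parts are just more of the same node.
    from dataclasses import dataclass, field


    @dataclass
    class SystemSpec:
        name: str
        purpose: str
        constraints: list[str] = field(default_factory=list)
        parts: list["SystemSpec"] = field(default_factory=list)


    checkout = SystemSpec(
        name="checkout",
        purpose="Turn a cart into a paid order.",
        parts=[
            SystemSpec(name="payments", purpose="Charge the customer."),
            SystemSpec(name="inventory-hold", purpose="Reserve stock during payment."),
        ],
    )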

CUE looks interesting and is something I'll be digging into more.

4b11b4 8 hours ago|||
Hierarchical specs? Sounds hard to understand where they overlap. Or are they independent?
CuriouslyC 1 day ago|||
This is my approach. I use the C4 software model; it's pretty general. Entities can be represented either by markdown files, or by folders with README.md files (sort of like an index.js or __init__.py). The folder hierarchy gives you a basic project object model.
4b11b4 1 day ago|||
Arbiter is looking nice. I like the composability. Pretty tough for anyone except a developer to author, though, which was maybe an accepted tradeoff.
CuriouslyC 1 day ago||
Thanks. My philosophy with it is to be minimal and adaptable to people's existing workflows; I went through a lot of iterations to land on something that was both expressive and "human." The Obsidian compatibility was a sign for me that I was on the right track.
4b11b4 8 hours ago||
My gut says it's too far downstream
shuss 1 day ago||
There are many impediments to scaling vibe coding. We built a tool to internally scale vibe coding to vibe engineering: https://mfbt.ai/blog/vibe-coding-vs-vibe-engineering/
epolanski 1 day ago|
> Documentation is hard to maintain because it has no connection to the code. Having an LLM tweak the documentation after every merge is "vibe documenting."

I'm not sure I agree, you don't need to vibe document at all.

What I do in general is:

- write two separate markdown files, one for business requirements and, later, one for implementation

- keep refining both as the work progresses and stakeholders provide feedback

Before merging, I have /docs updated based on the requirements and implementation files. New business logic gets included in the business docs (what and why); new rules/patterns get merged into the architectural/code docs.

Works great, and gets better with every new PR and iteration.
