Posted by larelli 2 hours ago
I used to believe this, but after working at a successful SaaS company I have come to believe that correctness and unambiguity are not strictly necessary for a successful software product.
It was a very sad realization that systems can be flaky as long as there are enough support people to handle the edge cases; that features can be shipped while breaking other features, as long as few enough users land in the middle of that Venn diagram; etc.
The fact is, it always comes down to economics: your software can afford to be exactly as broken and unpredictable as your users will tolerate while still paying money for it.
In my opinion, a system that has been stable for years isn't 'mature' in a good sense. An exceptional system is one that can still change after many years in production.
I believe this is almost impossible to achieve for enterprise software, because nobody has an incentive to make the (huge) investment in long-term maintainability and changeability.
For me, consistent systematic naming, and prefixes/suffixes that make names unique, are a hint that a person is thinking about this or has experience maintaining old systems. It has a huge effect on how well you can search, analyze, find usages, understand, replace, and change things.
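To make the searchability point concrete, here is a small sketch (the function names and snippets are hypothetical, just for illustration): a globally unique, prefixed name can be tracked down with a plain text search, while a generic name matches unrelated definitions and call sites.

```python
import re

# Hypothetical snippets from two codebases, to contrast searchability.
generic = """
def process(order): ...
def process(payment): ...   # same generic name in another module
result = process(data)
"""

systematic = """
def billing_invoice_finalize(order): ...
def payments_refund_process(payment): ...
result = billing_invoice_finalize(data)
"""

def find_usages(source: str, name: str) -> int:
    """Count occurrences of an identifier with a plain whole-word search."""
    return len(re.findall(rf"\b{re.escape(name)}\b", source))

# A generic name pulls in two unrelated definitions plus the call site...
print(find_usages(generic, "process"))
# ...while a unique, prefixed name matches exactly its definition and usages.
print(find_usages(systematic, "billing_invoice_finalize"))
```

The same property is what makes mass renames and impact analysis tractable years later: if a name is unique across the whole system, "find usages" degrades gracefully to grep even when no IDE understands the code.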
Obviously a lot of this you can piece together today; in fact, Snowflake itself does a lot of it. But the other part of the article makes me think they understand the even harder part of the problem in modern enterprises, which is that nobody has a clear view of the model they're operating under and how it interacts with parts of the business. It takes insane foresight and discipline to keep these things coherent, and the moment you are trying to integrate new acquisitions with different models, you're in a world of pain. If you can create a layer to make all of this explicit - the models, the responsibilities, the interactions, and the incompatibilities that may already exist - then mediate the chaos with some sort of AI handholding layer (because domain experts and disciplined engineers aren't always going to be around to resolve ambiguities), then you solve not just a huge technical problem but a much more complicated ecological one.
Anyway, whatever they're working on, I think this is the exact area you should focus on if you want to transform modern enterprise data stacks. Throwing AI at existing heterogeneous systems and complex tech stacks might work, but building from scratch on a system that enforces cohesion while maintaining agility feels like it's going to win out in the end. Excited to see what they come up with!
In practice, most of the complexity comes exactly from what’s described here: every system has a rich internal model, but the moment data crosses a boundary, everything degrades into strings, schemas, and implicit contracts.
You end up rebuilding semantics over and over again (validation, mapping, enrichment), and a lot of failures only show up at runtime.
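A minimal sketch of that degradation, with hypothetical `Currency`/`Price` types: inside one system the model makes invalid states unrepresentable, but at the boundary it flattens into JSON strings and numbers, so the receiver rebuilds the semantics by hand and errors surface only at runtime.

```python
import json
from dataclasses import dataclass
from enum import Enum

# Inside the system: a rich model where invalid states can't be constructed.
class Currency(Enum):
    USD = "USD"
    EUR = "EUR"

@dataclass
class Price:
    amount_cents: int
    currency: Currency

# At the boundary, the model degrades into plain strings and numbers.
wire = json.dumps({"amount_cents": 1999, "currency": "USD"})

# The receiving side rebuilds the semantics by hand: parse, validate, map.
def parse_price(raw: str) -> Price:
    data = json.loads(raw)
    return Price(int(data["amount_cents"]), Currency(data["currency"]))

price = parse_price(wire)  # round-trips fine

# A mistake a type checker would catch in-process only fails at runtime here.
try:
    parse_price(json.dumps({"amount_cents": 1999, "currency": "usd"}))
except ValueError as e:
    print("runtime failure:", e)
```

Multiply that `parse_price` boilerplate by every pair of services that exchange data and you get the repeated validation/mapping/enrichment work described above.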
I’m skeptical about “one model to rule them all”, but I strongly agree that losing semantics at system boundaries is the core problem.
I think die-hard fans of static typing mostly fail to acknowledge this reality and its implications. Every time they encounter the problem, they approach it as if nobody had thought of it before and developed reliable abstractions for working productively in these environments.
Uh...
> Implementing it is more than I can do alone, which is why my cofounders, Daniel Mills and Skylar Cook, and I are starting Cambra. We are developing a new kind of programming system that rethinks the traditional internet software stack on the basis of a new model.