Posted by r4um 20 hours ago
>> If you ignore a dependency and try to fix it later, it will be more expensive. More time, more effort, more thinking. And it will require the same level of coordination that you tried to avoid initially.
Would add that, if you only address these dependencies one by one as they manifest, i.e. continue in the evolutionary way, you risk resolving parts of your Big System into local minima; over time, you go from lots of little presumed-independent bubbles to an intermediate stage with fewer but larger medium-sized bubbles. When those get into conflict, the pain will be correspondingly greater.

There are core differences in software engineering compared to construction work:
- making changes is often cheaper
- we might not know beforehand everything that needs to be built, especially the unknown unknowns
I would still agree that the truth is somewhere in between, but I would argue that, for software, it's closer to the evolutionary approach.
Incidentally this highlights a problem when using chatbots to build large software projects that are intended to be used for a long period of time.
The key is not how much code you can add but how little you can get away with.
Chatbots' only solution is ever to ADD code. They're not good at NOT writing code, or even at deleting it, because after all the training set for lines of code that do not exist is an empty set. Therefore it's impossible to train a robot not to write code.
What's better than generating 10kloc really fast? Not having it in the first place.
In short: the tension described in "systems thinking" is the same as the tension between "spec-driven" and "iterative prompting".
> you lay out a huge specification that would fully work through all of the complexity in advance, then build it.
I have tried this a couple of times even for small projects (a few sprints), and it never worked out. I'd argue it never works out if you are doing non-system programming projects, has only a theoretical non-zero possibility of working out for system programming projects, and perhaps a 5-10% chance of working out for very critical, no-patch-possible projects (like a moon landing).

Because requirements always change. Humans always change. That's it. No need to elaborate.
Nah, I’m good. I’ve watched system architecture framework views be developed. Years of prep and planning. The system is released, half the employees who had requirements no longer work there, and the business has already pivoted to a new industry focus.
There’s a reason we went this way in software development a quarter century ago.
Software is not a skyscraper.
Everything W3C does. Go is evolving through specs first. Probably every other programming language these days.
People already do that for humankind-scale projects where there have to be multiple implementations that can talk to each other. Iteration is inevitable for anything that gains traction, but it still can be iteration on specs first rather than on code.
IMO the problem isn't discussing the spec per se. It's that the spec doesn't talk back the way actual working code does. On a "big upfront design" project, there is a high chance you're spending a lot of time on moot issues and irrelevant features.
Making a good spec is much harder than making working software, because the spec may not be right AND the spec may not describe the right thing.
I suppose it's primarily a matter of experience. And as the article alludes to, it's very important to deeply understand the subject matter. I highly value some of my non-programmer colleagues responsible for documentation, but I can't put my finger on what exactly they brought to the table that made their prose exceptionally good (clear, concise, spot on)...