Posted by r4um 21 hours ago
- you create a large number of small, working apps.
- you create a spec from these apps.
- you create a huge app.
- you make a DSL to make it extensible.
- you extend the DSL to fit what you need in the future.
- you optimize the DSL, removing obvious N+1 stuff (see the sketch below).
The hard part is throwing away the code at each step. Both management and devs can't stomach the reality that the code is useless at every stage prior to the DSL. They can't molt and discard the shell, and hence the project dies.
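As a rough illustration of the last two steps, here is a minimal, hypothetical sketch in Python. The names (`Query`, `with_`, the fake `ORDERS`/`CUSTOMERS` tables, the two runner functions) are invented for this example and not taken from any real project; the point is only to show the obvious N+1 shape and the batched rewrite an optimization pass would aim for.

```python
# Minimal sketch (hypothetical names) of an internal query DSL,
# contrasting the naive N+1 shape with a batched version.
from dataclasses import dataclass
from typing import Dict, List

# Fake "tables" standing in for whatever backend the DSL wraps.
ORDERS = [{"id": i, "customer_id": i % 3} for i in range(6)]
CUSTOMERS = {i: {"id": i, "name": f"customer-{i}"} for i in range(3)}


@dataclass
class Query:
    """One DSL node: a table plus the associations to load."""
    table: str
    include: List[str]

    def with_(self, *assocs: str) -> "Query":
        return Query(self.table, self.include + list(assocs))


def run_naive(q: Query) -> List[dict]:
    """N+1 shape: one extra lookup per row for each included association."""
    rows = [dict(r) for r in ORDERS]
    if "customer" in q.include:
        for row in rows:  # N extra lookups
            row["customer"] = CUSTOMERS[row["customer_id"]]
    return rows


def run_batched(q: Query) -> List[dict]:
    """Optimized shape: collect the keys first, fetch the association once."""
    rows = [dict(r) for r in ORDERS]
    if "customer" in q.include:
        wanted = {r["customer_id"] for r in rows}
        batch: Dict[int, dict] = {k: CUSTOMERS[k] for k in wanted}  # 1 lookup
        for row in rows:
            row["customer"] = batch[row["customer_id"]]
    return rows


if __name__ == "__main__":
    q = Query("orders", []).with_("customer")
    assert run_naive(q) == run_batched(q)  # same result, fewer lookups
    print(run_batched(q)[0])
```

Both runners return the same result; the DSL is what gives you the seam where the batching optimization can be applied without touching callers.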
When they turn out to be based on false assumptions of simplicity, the fallout is that the whole thing can't go forward because of a single detail.
Evolutionary systems at least always work to some degree, even if you can look back after the fact and see that there's a lot of redundancy. Ideally you would then refactor the most troublesome pieces.
AI assistance would seem to favor the engineering approach, as the friction of teams and personalities is reduced in favor of quick feasibility testing and complete planning.
Software has zero construction cost, but what it does have is extremely complicated behavior.
Take a bridge, for example: the use case is being able to walk or drive or ride a train across it. It essentially provides a surface to travel on. The complications of providing this depend on the terrain, the length, etc., and are not to be dismissed, but there's relatively little doubt about what a bridge is expected to do. We don't iterate bridge design because we don't need to learn much from the users of the bridge (does it fulfill their needs, is it "easy to use", and so on), AND because construction of a bridge is extremely expensive, so iteration is also incredibly costly. We do not, however, build all bridges the same; people develop styles over time which they repeat for successive bridges, and we iterate that way.
In essence, iterating is about discovering more accurately what is wanted, because it is so often the case that we don't know precisely at the start. It lets one be far more efficient, because one changes the requirements as one learns.
In fact, this is how you build an aerospace program, a satellite, and more.
It is even possible to develop software with agile processes in such a framework, even though strictly speaking it’s not fully agile.
That feels like a straw man to me. This is not a binary question. For each small design decision you have a choice about how much uncertainty you accept.
There are no "two schools". There is at least a spectrum between two extremes and no real project was ever at either of the very ends of it. Actually, I don't think spectrum is a proper word even because this is not just a single dimension. For example, speed and risk often correlate but they are also somewhat independent and sometimes they anti-correlate.
It didn't work, even for the "large" systems of that time, and Booch had worked on more than a few. The kind of "system" the OP is describing is vastly larger, and vastly more complex. Even if you could successfully apply the waterfall model to a system built over two or three years, you certainly can't for a system of systems built over 50 years: the needs of the enterprise are evolving, the software environment is evolving, the hardware platform is evolving.
What you can do, if you're willing to pay for it, is ruthlessly attack technical debt across your system of systems as a disciplined, on-going activity. Good luck with that.
The 1970 Royce paper was about how waterfall didn't work, and most "Agile" is a subset of DSDM, with each flavor missing a necessary thing or two, whether working in large systems or growing them greenfield from nothing. But DSDM wasn't "little-a" agile (and SAFe just isn't). There is a middle way.
If you like applying this stuff (e.g. you've chatted with Gene Kim, follow Will Larson, whatever, sure, but you've deliberately iterated your approach based on culture and outcome observability), feel free to drop me a note at user at Google's thing.