Posted by r4um 21 hours ago

Systems Thinking (theprogrammersparadox.blogspot.com)
258 points | 115 comments
elisharobinson 13 hours ago|
Something I see pop up in large orgs and software solutions is as follows:

- You create a large number of small working apps.

- You create a spec from these apps.

- You create one huge app.

- You make a DSL to make it extensible.

- You extend the DSL to fit what you need in the future.

- You optimize the DSL and remove the obvious N+1 stuff (see the sketch below).

The hard part is throwing away the code at each step. Both management and devs can't stomach the reality that the code is useless at each stage prior to the DSL. They can't molt and discard the shell, and hence the project dies.
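
To make the N+1 point concrete, here is a minimal sketch (the fetch_* helpers and the data are hypothetical, not taken from any real system): the naive evaluation a DSL might produce does one lookup per item, while the optimized one batches them into a single call.

    # Hypothetical illustration of removing an N+1 access pattern.
    def fetch_order_ids():            # stand-in for a real data source: 1 query
        return [1, 2, 3]

    def fetch_customer(order_id):     # one round trip per call (the "+1", N times)
        return {"order": order_id, "customer": f"cust-{order_id}"}

    def fetch_customers(order_ids):   # one batched round trip for all ids
        return {oid: f"cust-{oid}" for oid in order_ids}

    def run_naive():
        # N+1: one query for the ids, then one query per id
        return [fetch_customer(oid) for oid in fetch_order_ids()]

    def run_batched():
        # 2 queries total, regardless of N
        ids = fetch_order_ids()
        customers = fetch_customers(ids)
        return [{"order": oid, "customer": customers[oid]} for oid in ids]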

t43562 14 hours ago||
Big upfront designs are obviously based on big upfront knowledge which nobody has.

When they turn out to be based on false assumptions of simplicity, the fallout is that the whole thing can't go forward because of one of the details.

Evolutionary systems at least always work to some degree, even if you can look back after the fact and decide that there's a lot of redundancy. Ideally you would then refactor the most troublesome pieces.

bluGill 12 hours ago|
Big upfront design always tries to design too many things that should be implementation details. Meanwhile the things that are really important are often ignored - because you don't even realize they are important at the time.
internet_points 16 hours ago||
https://en.wikipedia.org/wiki/Conway%27s_law is very relevant here
ArchieScrivener 20 hours ago||
The Evolution method outlined also seems born from the Continuous Delivery paradigm that was required for subscription business models. I would argue Engineering is the superior approach, as the Lean/Agile methods of production were born from physical engineering projects whose end result was complete. Evolution seems to be even more chaotic because an improper paradigm of 'dev ops' was imposed rather than emerging organically, as one would expect with an evolving method.

AI assistance would seem to favor the engineering approach, as the friction of teams and personalities is reduced in favor of quick feasibility testing and complete planning.

t43562 14 hours ago|
I think that a comparison with Engineering is not that helpful for software.

Software has zero construction cost, but what it does have is extremely complicated behavior.

Take a bridge, for example: the use case is being able to walk or drive or ride a train across it. It essentially provides a surface to travel on. The complications of providing this depend on the terrain, length, etc., and are not to be dismissed, but there's relatively little doubt about what a bridge is expected to do. We don't iterate bridge design because we don't need to know much from the users of the bridge (does it fulfill their needs, is it "easy to use", etc.) AND because construction of a bridge is extremely expensive, so iteration is also incredibly costly. We do not, however, build all bridges the same, and people develop styles over time which they repeat for successive bridges, and we iterate that way.

In essence, cycling is about discovering more accurately what is wanted because it is so often the case that we don't know precisely at the start. It allows one to be far more efficient because one changes the requirements as one learns.

prpl 7 hours ago||
you don’t build specifications, you build models like you would in model-based systems engineering, and then you follow the V-model process.

In fact, this is how you build an aerospace program, a satellite, and more.

It is even possible to develop software with agile processes in such a framework, even though strictly speaking it’s not fully agile.

wvlia5 10 hours ago||
Let's point out the elephant in the room: you won't be able to create any relevant system by specializing in "systems thinking".
qznc 18 hours ago||
> There are two main schools of thought in software development about how to build really big, complicated stuff.

That feels like a straw man to me. This is not a binary question. For each small design decision you have a choice about how much uncertainty you accept.

There are no "two schools". There is at least a spectrum between two extremes, and no real project was ever at either of the very ends of it. Actually, I don't think "spectrum" is even the proper word, because this is not just a single dimension. For example, speed and risk often correlate, but they are also somewhat independent and sometimes they anti-correlate.

zingar 13 hours ago||
It's strange that engineering is considered to be in opposition to evolution when we get concepts like prototypes and working models from engineering.
wduquette 13 hours ago||
Grady Booch said that any large system that works is invariably found to have evolved from a smaller system that worked. I've seen this cited as Gall's Law, from John Gall's 1975 book Systemantics, but I read it in a book by Booch back in the late 80's/early 90's. At that time the "waterfall model" was the conventional wisdom: to the extent possible, gather all the requirements, then do all the design, then do all the coding, then do all the testing, doing the minimum of rework at each step.

It didn't work, even for the "large" systems of that time, and Booch had worked on more than a few. The kind of "system" the OP is describing is vastly larger, and vastly more complex. Even if you could successfully apply the waterfall model to a system built over two or three years, you certainly can't for a system of systems built over 50 years: the needs of the enterprise are evolving, the software environment is evolving, the hardware platform is evolving.

What you can do, if you're willing to pay for it, is ruthlessly attack technical debt across your system of systems as a disciplined, ongoing activity. Good luck with that.

Terretta 12 hours ago|
The paradox in the post is resolved by limiting the planning to Russian-doll-like nested timeframes and scopes: upgrading from endless 2-week sprints and quarterly or annual "planning" to cycles within cycles, scoped to human magnitudes of time, and JIT re-planned at the Nyquist interval of each cycle by those at the corresponding level of the enterprise org chart, who must also be domain leads with mastery at that level, who have retained the ability to probe two levels below, and who are practiced at bringing along at least one level up.
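
To make "re-planned at the Nyquist interval of each cycle" concrete, here is a toy sketch (the cycle names and lengths are illustrative assumptions, not from the comment): sampling a cycle at its Nyquist rate means revisiting the plan at least twice per cycle, i.e. at most every half cycle length.

    # Hypothetical nested planning horizons, in days.
    CYCLES_DAYS = {
        "multi-year bet": 3 * 365,
        "annual plan": 365,
        "quarter": 90,
        "sprint": 14,
    }

    for name, period in CYCLES_DAYS.items():
        nyquist_interval = period / 2  # re-plan at least twice per cycle
        print(f"{name}: cycle of {period} days, re-plan every <= {nyquist_interval:.0f} days")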

The 1970 Royce paper was about how waterfall didn't work, and most "Agile" is a subset of DSDM, each flavor missing a necessary thing or two whether working in large systems or growing them greenfield from nothing. But DSDM wasn't "little a" agile (and SAFe just isn't). There is a middle way.

If you like applying this stuff (e.g. you've chatted with Gene Kim, follow Will Larson, whatever, sure, but you've deliberately iterated your approach based on culture and outcome observability), feel free to drop me a note to user at Google's thing.
