The way I learned "systems thinking" explicitly includes the perspectives this article offers to refute it: a system model is useful but only a model; it is better used to understand an existing system than to design a new one; assume the system will react to resist intervention. I've found this definition of systems thinking extremely useful as a way to look reductively at a complex system - e.g. we keep investing in quality but having more outages anyway, so maybe something is optimizing for the wrong goal - and to intervene to shift behaviour without tearing down the whole thing, something this article dismisses as impossible.
The author and I would agree on Gall's Law. But the author's conclusion to "start with a simple system that works" commits the same hubris that the article, and Gall, warn against - how do you know the "simple" system you design will work, or will be simple? You can't know either of those things just by being clever. You have to see the system working in reality, and you have to see if the simplicity you imagined actually corresponds to how it works in reality. Gall's Law isn't saying "if you start simple it will work", it's saying "if it doesn't work then adding complexity won't fix it".
This article reads a bit like the author has encountered resistance in the past from people who cited "systems thinking" as the reason for their resistance, and so the author wants to discredit that term. Maybe the term means different things to different people, or it's been used in bad faith. But what the article attacks isn't systems thinking as I know it; it's more like high modernism. The author and systems thinking might get along quite well if they ever actually met.
I think there's plenty to agree with in the article's descriptions of failure and hubris. What the critical commenters are taking issue with is that the article blames those symptoms on a straw man. It's a persuasive article, not a historical review, so it's reasonable to debate its conclusion and reasoning as well as its supporting evidence.
> claims to refute the whole discipline based on a single incorrect prediction
I'm not so sure about "incorrect" even. The retrospectives have been generally positive. [0][1] Citing economic growth as a counterexample is pretty silly, because in the Limits models many parameters look great right up until the collapse. [2]
I would encourage everyone to see how the original authors describe their findings [2][3], rather than (potentially motivated) retellings.
[0] https://www.livescience.com/collapse-human-society-limits-to...
[1] https://www.theguardian.com/commentisfree/2014/sep/02/limits...
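To make the "looks great right up until the collapse" point concrete, here is a minimal toy sketch of overshoot-and-collapse dynamics. It is emphatically not the World3 model; the resource stock, growth rate, and thresholds below are made-up illustrative numbers.

```python
# Toy overshoot-and-collapse dynamics (illustrative only, not World3).
# Output grows while extraction keeps up; extraction gets harder as the
# nonrenewable resource stock is drawn down, and then output collapses.

def simulate(years=200, resource=1000.0, output=1.0, growth=0.03):
    history = []
    for year in range(years):
        efficiency = resource / 1000.0          # harder to extract as stock shrinks
        extracted = min(output * efficiency, resource)
        resource -= extracted
        if extracted >= 0.5 * output:           # extraction still keeps up
            output *= 1 + growth
        else:                                   # the resource constraint bites
            output *= 0.85
        history.append((year, output, resource))
    return history

for year, output, resource in simulate()[::20]:
    print(f"year {year:3d}  output {output:8.2f}  resource {resource:7.1f}")
```

Run it and output rises smoothly for roughly a century before dropping sharply once extraction can no longer keep up, which is why a snapshot taken before the turning point can look like a refutation.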
Articles debunking them are always full of fundamental misunderstandings about the discipline. (The ones supporting them are obviously wrong.) And people focusing on understanding the discipline never actually refer to them in any way.
It's got all the essential elements of Factorio that make it so interesting and compelling, which apply to so many other fields from VLSI design to networking to cloud computing.
But you mine shapes and colors and combine them into progressively more complex patterns!
All valid criticisms, but somehow it sounds exactly like something a member of an inept bureaucracy would say.
When an inept bureaucracy is put in the spotlight, usually someone pops up to defend how much important work they are doing and how the things they deal with are just so complicated. And how the criticisms are unfair and unfounded.
Systems don't do that. Only constituents who fear particular consequences do.
Systems also don't care about levels of complexity. Especially since it's insanely hard to actually break systems that are held together only by the "what the fuck is going on, let's look into that" kind of effort. Hours, days, weeks later, things run again. BILLIONS lost. Oh, we wish ...
At the end of the day, the term Systems Thinking is overloaded with all the parts that have been invented by so-called economists and "the financial industry", which makes me chuckle every time now that it's 2025: oil-rich countries have been in development for decades, the advertisement industry is factory-farming content creators, and economists and multi-billionaires want more tikktoccc and instagwam to get into the back of teens' heads.
If you are an SWE, systems architect or anything in that sphere, please ... act like you care about the people you are building for ... take some time off if you can and take care of what must be taken care of ... it's just systems, after all.
These are part of a system. Ignoring these components gives you an incomplete model.
(All models are incomplete, by definition, but ignoring constituents that have a major influence greatly reduces the effectiveness of your model)
> there is no such thing as an isolated system.
Very true.
Look no further than evolutionary biology: you see this all the time, where extinctions occur because the environment changes such that the system is no longer optimal.
What if we looked at the extinct species as constituents that have been removed because they were obsolete in the system? That way, the system remains optimal, without resisting change.
The system of humanity requires a lot. We used to say "survival of the fittest", which really meant survival of the fittest and the "most aware": being able to distinguish which survival strategy is the most viable for a given organism.
Fight, flight, freeze, dominance, independence, submission, DIY, DOBUY; the latter are especially interesting given how reduced information about the requirements and the sensitivities of the individual body can cripple your organs to a point that is more beneficial for some interest group than it is to you. In other words: someone can make sure you are stupid enough to be abused for some specific task until you can be discarded. At this point we don't know if the system will survive more than one period because of the interest group, or suffer within one or more periods because of that interest group.
In evolutionary biology, more symbiotic organisms and systems survived a lot longer than those that were less symbiotic, on scales that modern humans can't put into adequate numbers yet.
Isolated systems do exist. They can be isolated, and they can self-isolate, for various reasons and by various means. This happens even in species/systems we mostly consider unconscious, while definitely sentient and aware.
Wear and tear and maintenance, leeching and seeding, putting info and questions into words and lurking; none of these really attach one system to another by default, by design, or via behavior, reward and punishment. The rules go beyond that and stretch across longer time frames than we account for.
Thinking out loud here, btw.
> Systems don't do that. Only constituents who fear particular consequences do.
For example, the human body is pretty decent at maintaining a fixed internal temperature.
Cities supposedly maintain a fairly stable transit time even as transit infrastructure improves.
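A minimal sketch of the negative-feedback loop behind both examples (the setpoint, gain, and disturbance below are arbitrary numbers, not a physiological model): any push away from the setpoint generates a correction that partly cancels it, which is exactly what makes the system look like it is resisting intervention.

```python
# Toy negative-feedback loop (thermostat-style); all numbers are arbitrary.
# A corrective response proportional to the error pulls the state back
# toward the setpoint, partly cancelling outside interventions.

SETPOINT = 37.0   # e.g. core body temperature in degrees C
GAIN = 0.5        # strength of the corrective response

def step(state, disturbance):
    error = SETPOINT - state
    correction = GAIN * error
    return state + correction + disturbance

state = 37.0
for t in range(10):
    disturbance = 2.0 if t == 3 else 0.0   # a one-off push at t = 3
    state = step(state, disturbance)
    print(f"t={t}  temperature={state:.2f}")
```

The one-off push at t = 3 raises the temperature, and every step afterwards halves the remaining deviation, so the intervention fades out rather than sticking.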
The deeper question is why we create models of a reality in which all models are wrong, but some extract value long enough to create both ecological collapse and poverty. These are the end states, or even the goals, of models in a universe where resources are limited to the surfaces of planets.
Each optimization is designed to create dystopic conditions. This is obvious.
[1] https://en.wikipedia.org/wiki/Reflexivity_(social_theory)
(You will never get them all right. You will never even be able to list them in their entirety. But you have to be able to predict the order of magnitude of a few of them.)
The things the author complains about seem to be "parts of systems thinking they aren't aware of". The field is still developing.
I think it's worth considering that the theories you're familiar with are incredibly niche and have never gained any foothold in mainstream discussions of system dynamics, and that it's not wrong for people to be unaware of them (or to choose not to mention them) in a post addressed to a general audience.
Further, you just missed the opportunity to explain these concepts to a broader HN audience and maybe make sure that the next time someone writes about it, they are aware of this work.
I don't think commenters should be expected to provide full overviews of topics just to inform others. Parent gave plenty of pointers beyond metacybernetics, all of which are certainly discoverable. If you are curious, read about it. It's not the responsibility of random strangers to educate you.
Would https://en.m.wikipedia.org/wiki/Warren_Sturgis_McCulloch be what you mean?
And if not, can you give the right pointer?
Training a nonlinear system to behave in a way you want is the raison d’etre of optimal control theory.
But I wouldn’t say it’s the birthing place of neural networks, personally.
You missed the opportunity to ask a simple question - what is metacybernetics? - and decided everything on that list was just as niche.
There were some hard-to-follow explanations in it, but the author tries to connect the history and goals of cybernetics to modern problems like being unable to get support from a company.
https://en.wikipedia.org/wiki/American_Society_for_Cyberneti...
I found it much better to take the first step and progress from there, even when the full solution is not known. Maybe it's a testament to the limits of my own context window. Having said that, I'm not advocating abandoning architecture or engineering principles. I like the idea of "Growing software" [0]. It's perhaps a more holistic metaphor.
In terms of short-circuiting large bureaucracies, I found the "Fighter Mafia" [1] to be an interesting example of this. A group of military officials/contractors managed to influence aircraft design somewhat outside of the "official" channels. The outcome was better than if it had gone through the normal ones.
In this version, the manufacturer sees all the inventories, and all the middle layers pass all stock to the next layer. (The game also has a trivial demand function, so the only challenge is to detect or predict the single step change in demand rate, and then calculate 3 weeks ahead to smooth out the supply chain.)
https://forio.com/app/showcase/near-beer-game/
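A rough sketch of the strategy being described, under assumed numbers (three stages of transit, a 3-week lead time, and a demand step from 4 to 8 cases/week; this is not the actual near-beer-game code): middle layers simply pass everything downstream, and the factory, seeing demand, backlog, and every inventory, brews just enough to keep the pipeline at the target level.

```python
# Toy sketch of the full-information strategy (assumed numbers; this is
# not the actual near-beer-game implementation).

LEAD_TIME = 3  # weeks for beer to travel from the factory to the retailer

def simulate(weeks=15):
    # Middle layers pass everything downstream each week, so the chain is
    # just a delay line: pipeline[-1] reaches the retailer this week.
    pipeline = [4, 4, 4]   # steady state at a demand of 4 cases/week
    backlog = 0
    for week in range(weeks):
        demand = 4 if week < 5 else 8          # the single step change
        arriving = pipeline.pop()              # oldest shipment arrives
        shipped = min(arriving, demand + backlog)
        backlog += demand - shipped
        # The factory sees demand, backlog and every inventory, so it
        # brews just enough to restore LEAD_TIME weeks of cover.
        in_transit = sum(pipeline)
        brewed = max(demand * LEAD_TIME + backlog - in_transit, 0)
        pipeline.insert(0, brewed)
        print(f"week {week:2d}  demand {demand}  shipped {shipped:2d}  "
              f"backlog {backlog:2d}  brewed {brewed:2d}")

simulate()
```

With full information the only transient is a single larger brew right after the step change, after which the chain settles at the new rate instead of producing the oscillating bullwhip the classic game is famous for.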
The game was played for 35 years before you demonstrated that it was a broken over-complication of a trivial game?
Or did you break the game by coordinating with your teammates on strategy (or, equivalently, all players computing the perfect Hofstadterian superrational strategy, https://en.m.wikipedia.org/wiki/Superrationality), when the game was meant to simulate the general human tendency for hyperlocal optimization, and the problem of dealing with chaotic incompetent peers?
If you use knowledge of the deck, you can obviously pre-solve things, but that was not an assumption here - our thing works without knowledge of the order deck.
The "new beer game" looks totally different, honestly.
I'd encourage people to look into soft systems methodology, critical systems theory, and second-order cybernetics, all of which are pretty explicitly concerned with the problem of the "system fighting back". The article is good, as Works in Progress articles usually are, but the initial premise and resulting coverage are shallow as far as the intellectual depth and lineage here go.
Then, “Meltdown” and finally “The Fifth Discipline”
Given the current state of the world, we need more systems thinking and research, done with quality and sophistication, and in public. Not less. This article does not help and spreads the wrong ideas.
summary and smackdown by Opus 4.1: https://claude.ai/share/1e8fec5e-ec6a-4c6f-a3d7-b74b0801e5b9
[0] https://cdn.factorio.com/assets/blog-sync/fff-420-line-art.p...