
Posted by epb_hn 11 hours ago

Magical systems thinking (worksinprogress.co)
249 points | 78 comments | page 3
mallowdram 7 hours ago|
Wait until "world models" fail to work along these lines. Models are always wrong. They're useful only as interpretations; they can never reproduce, reference, or mimic the events in question.

Another great example is Tansley's ecological systems model, which he worked on over many years with influence from Forrester, only for the Odums to develop models, attempt to reproduce them in controlled environments, and watch them fail miserably.

The cybernetic, computational, systems, and world models are all illusions. AI has the same limitations, simply because the infinity of tasks can never be modeled or automated.

Most of the ideas in the article can be seen, very clearly and cleverly narrated, in Adam Curtis's best series "All Watched Over By Machines of Loving Grace", particularly episode 2.

luluthefirst 9 hours ago||
start small or fail big
cindyllm 9 hours ago|
[dead]
api 10 hours ago||
I studied biology in college and this has always been obvious to me, and it shocks me that people with backgrounds in e.g. ecology don't understand that living systems are unpredictable auto-adaptive machines full of feedback loops. How a bunch of ecologists could take doomerism based on "world models" seriously enough to cause a public panic about it (e.g. Paul Ehrlich) baffles me.

Human cultural systems are even worse than non-human living systems: they actively fight you. They are adversarial with regard to predictions made within them. If you're considered a credible source on economics and you say a recession is coming, you change the odds of a recession by causing the system to price in your pronouncement. This is part of why market contrarianism kind of works, but only if the contrarians are actually the minority! If contrarianism becomes popular, it stops being contrarian and stops working.

So... predicting doom and gloom from overpopulation would obviously reduce the future population if people take it seriously.

Tangentially, everything in economics is a paradox. A classic example is the paradox of thrift: if everyone is saving, nobody can save, because for one to save another must spend. Pricing paradoxes are another example. When you're selling your labor as an employee you want high wages, high benefits, job security, etc., but when you go shopping you want low wages, low benefits, and a fluid job market... at least if you shop by comparing on price. If you are both a buyer and a seller of labor, you are your own adversary in a two-party pricing game.
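
To make the thrift paradox concrete, here's a minimal sketch with my own toy numbers (not anything from the article): in the textbook Keynesian income model, income Y = C + I with C = c*Y, so aggregate saving always equals investment, and raising the saving rate only shrinks income.

    # Toy Keynesian income model: income Y = C + I, consumption C = c * Y,
    # so Y = I / (1 - c) and aggregate saving S = Y - C = (1 - c) * Y = I.
    def equilibrium(invest, c):
        income = invest / (1 - c)
        saving = income - c * income
        return income, saving

    for c in (0.9, 0.8):  # everyone tries to save more: c falls from 0.9 to 0.8
        y, s = equilibrium(invest=100, c=c)
        print(f"c={c}: income={y:.0f}, aggregate saving={s:.0f}")
    # Income falls from 1000 to 500, but total saving stays pinned at 100:
    # the attempt to save more doesn't raise saving, it just shrinks income.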

I personally hold the view that the arrow of time goes in one direction and the future of non-linear computationally irreducible systems cannot be predicted from their current state (unless you are literally God and have access to the full quantum-level state of the whole system and infinite computational power). I don't mean predicting them is hard, but that it's "impossible like perpetual motion" impossible.
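
As a concrete (and hedged) illustration of what "impossible like perpetual motion" means here: Wolfram's Rule 30 cellular automaton is a standard example of a system believed to be computationally irreducible; as far as anyone knows, the only way to learn its state at step n is to actually run all n steps. A minimal sketch:

    # Rule 30: each new cell = left XOR (center OR right). No known closed-form
    # shortcut predicts row n without simulating rows 1..n-1.
    def rule30_step(cells):
        n = len(cells)
        return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

    cells = [0] * 31
    cells[15] = 1  # single seed cell in the middle
    for _ in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = rule30_step(cells)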

I also wonder if we are being fooled by randomness when we think we see a person or a technique that yields good predictions. Are good prophets just luck plus survivorship bias? Obviously we forget all the bad prophets. All lottery winners are lucky, therefore lucky people should play the lottery. But who is lucky? The only way to find out is to play the lottery. Anyone who wins should have played, and anyone who loses should not have played.

sinuhe69 35 minutes ago||
That’s why we often model dynamic systems with feedback loops using control theory and when uncertainty is involved, with stochastic control theory and probabilistic equations. This way, we can account for the system’s possible reactions and transitions to new states, or put differently, we can even model how the system might fight back.
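
For a flavor of what that looks like in the simplest possible case, here is a minimal sketch (my own toy, with made-up numbers): a proportional feedback controller steering a state toward a target while random disturbances, the system "fighting back", push it around each step.

    import random

    # Discrete-time feedback loop: the controller never predicts the disturbance,
    # it only corrects the currently observed error, yet the state stays near target.
    random.seed(0)
    target, state, gain = 10.0, 0.0, 0.5
    for t in range(30):
        disturbance = random.gauss(0, 0.5)        # uncertainty the model cannot foresee
        state = state + gain * (target - state) + disturbance
        print(f"t={t:2d}  state={state:6.2f}")
    # On average the state converges to the target even though every step is noisy.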
devenson 4 hours ago||
> non-linear computationally irreducible systems cannot be predicted

How confident are you of your ability to identify such systems?

Many systems are not like that, and are therefore easy to predict.

lanstin 4 hours ago||
Not many important systems are predictable.
photochemsyn 6 hours ago||
Sounds like the Club of Rome was enamored of Isaac Asimov's Foundation series:

> "The Club of Rome asked an even more intricate question: how would social and economic forces interact in the coming decades? Where were the bottlenecks and feedback mechanisms? Could economic growth continue, or would the world enter a new phase of equilibrium or decline?"

The problem is, as systems grow more complex they often start to demonstrate sensitive dependence on conditions, eg with tiny variations in inputs to one node of the system resulting in wild swings in outputs from that node. Equally problematic, nodes in a complex system can change their connectivity to other nodes if conditions change enough (think of a breakdown in trade between nations due to wars, natural disasters, diseases etc).
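
A minimal sketch of that sensitivity (my illustration, not from the comment): the logistic map at r = 4, where two inputs differing by one part in a billion end up nowhere near each other within a few dozen iterations.

    # Two trajectories of the logistic map x -> 4x(1-x), started a hair apart.
    def logistic(x, r=4.0):
        return r * x * (1 - x)

    a, b = 0.2, 0.2 + 1e-9
    for step in range(1, 51):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  gap={abs(a - b):.2e}")
    # The gap grows roughly exponentially until the two runs are completely unrelated.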

The ideal systems to depend on are stable (not hypersensitive to small forcings, with predictable behavior) and have consistent structure. They can still be complicated but should fail gracefully back to simpler structures under stress, eg an emergency power supply for electricity at a hospital that normally relies on the grid.

From this perspective, our electrical grids are well-designed systems - not given to huge power fluctuations - that will nevertheless need major expansions and improvements if electricity demand keeps rising with data centers, EVs, etc. However, expanding the grid isn't adding fundamental instabilities; it's just modular addition in the same pattern as the existing system.

In contrast, the USA's current financial-monetary system is not that stable, predictable, or reliable. All kinds of fundamental instabilities exist, and wild swings in behavior under pressure are expected - and since everything else relies on it, eg you can't update the electrical grid without capital input, you risk avalanching catastrophes by relying on such an unstable system.

crdrost 9 hours ago||
I like this. The author is somewhat needlessly hopeless about the prospects of changing a complex system.

The basic summary is that once you get more than a handful of feedback loops, the author cautions (through many examples) that maps of the system become more like physical maps—necessarily oversimplified. When you have four feedback loops under the right control of management, it's still a diagnostic aid, but add everything in the US healthcare system, say—fuggetaboudit! And because differences at the small scale add up to long-term outcomes, the map doesn't let you forecast the long term and it doesn't let you predict what to optimize; in fact, the only value the author finds in a systems map for a sufficiently complex system is as a rhetorical prop to show people why we need to reinvent the whole system. The author thinks this works very well, but only if the new system is grown organically, as it were, rather than imposed structurally.

The first criticism is that this complaint about being unable to change a system is actually too amorphous and wibbly-wobbly to stand. Here's what I mean: the author gives the example of the ICBM project in US military contracting as a success of the "reinvent method", but if you try to poke at that belief, it doesn't "push back" at you. Did we invent a whole new government to solve the ICBM project? I mean, we invented other layers of bureaucracy—but they were embedded in the existing government and its bureaucracy. What actually happened was that a complex system existed which contained two subsystems that were, while not entirely decoupled, still operating with substantial independence. Somewhere up the chain, they both folded into the same bureaucracy with the same president, but that bureaucracy minimized a lot of its usual red tape.

This is actually the conceit of Theory of Constraints folks, although I don't usually see them being bold about it. The claim is that all of those hacks that you do in order to ship something? “Colleague gave me a 400 line diff, eh fuckitapprove, we'll do it live” ... that sort of thing? Actually, say ToC folks, that is your system running well, not poorly. The complex system is being pinned to an achievable output goal and it is being allowed to reorganize itself to achieve that goal. This is ultimately the point of the whole ToC ‘finding the bottlenecks’ jargon. “But the safeties are off and someone will get hurt,” you say. And they say somewhat unhelpfully, “That’s for the system to deal with.” Yes, the old configuration had these mechanisms to keep things safe, but you need a new system with new mechanisms. And that's precisely what you see in these new examples, there actually is top-down systems engineering, but around how do we maintain our quality standards, how do we keep the system accountable.

If the first criticism is that "organically grow a new system to take its place" is airy-fairy, the second criticism is just that the hopelessness is unnecessarily pessimistic. Yes, complex systems with lots of feedback loops do maintain a homeostasis and revert back to it as you poke and prod them. Yes, it is really frustrating how, to change one thing, you must change everything. Yes, it is doubly frustrating that systems that are nominally about providing and promoting X turn out to provide and promote Y while actually being X-neutral (think, for instance, of anything you do that ultimately just lets your manager cover their ass—it is never described as CYA, just acknowledged silently that way in hallway conversation).

But, we know complex systems that find new homeostatic equilibriums. You, reading this, probably know someone (maybe a friend, maybe a friend of a friend) who kicked drugs. You also know somebody who managed to “lose the weight and keep it off.” You know a player who became a family man, and you yourself remember instances where you were a dumb kid reliving the same shitty day over and over when you could have just done this one damn thing differently—you know it now!—and your days would have gotten steadily better and better rather than the same old rut. So you know that these inscrutably complex things do change. Sometimes it's pinning the result, like someone who drops the pounds because “I just resolved to live like my friend Derek, he agreed to take me a week through everything in his life, I wrote down what he eats for breakfast, when he hits the gym, how much does he talk with friends and family, then I forced myself to live on this schedule for a month and finally I got the hang of it.” Sometimes it's literally changing everything, “Yeah I lost the pounds because I went to live in the Netherlands and school was a 50 minute bike ride from my apartment either way and then I didn't have any friends so I joined the university's competitive ultimate frisbee team, so like my dinner most days was bought that day after practice in a 5 minute trip through the grocery—a raw bell pepper, a ball of mozzarella, maybe some bread in olive oil—I didn't have time to cook anything big.” Or sometimes it was imposed top-down but with good motivation, “yeah, I really wanted to get a role as an orphan in this musical, so I dieted and dieted with the idea of ‘I can binge once I get the part, but I have to sell scrawny orphan when auditions come round soon’ and like it sucked for two weeks but then I got used to the lifestyle and I no longer wanted to binge, funny how that worked out.”

There are so many different stories, and yes, they never look like what we imagine success to look like, but being pessimistic about the existence of a solution in general, because there's nothing in common across the success stories, seems to throw the baby out with the bathwater. There is hope; it's just that when people look at the systems map, they get into a rut where they're looking for one thing to change, when really everything on that map needs to change. What you've created is a big networked dependency graph of the places you need to interrogate to figure out whether they can cope with the new way of doing things and, if not, whether they are going to dig in their heels and try to block the change. There's still use in it; you just need to view the whole graph holistically.

DonHopkins 8 hours ago||
Will Wright credits Jay Forrester for inspiring him to simulate a city on the computer:

https://www.gamedeveloper.com/business/the-replay-interviews...

>How did the leap from Raid's world editor, to SimCity with its urban design theories, happen?

>WW: First, it was just a toy for me. I was just making my editor more and more elaborate. I thought it would be cool to have the world come to life. So I started researching books on urban dynamics, and traffic, and things like that. I came across the work of Jay Forrester, who was kind of the father of system dynamics. He was actually one of the first people I found that actually simulated a city on a computer. Except in his simulation, there was no map; it was just numbers. It was like population level, number of jobs -- it was kind of a spreadsheet model.

>So I took his approach to it, and then applied a lot of the cellular automata stuff that I had learned earlier, and get these emergent dynamics that he wasn't getting in his model. I found when I was reading all these theories about urban dynamics and city behavior, that when I had a toy simulated version on the computer, it made the subject much more interesting than reading a book -- because I could go to my computer model and start experimenting.

>That just brought the whole subject to life for me and then, more and more, I started thinking, "Other people might enjoy this." But even then I never thought SimCity would have a broad appeal. I thought it might appeal to a few architects and city planner types, but not average people.

But Will's goal was to make a game that was fun to play, not to accurately simulate reality or make predictions. Intentionally inspiring magical systems thinking for entertainment, education, and storytelling!
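
As a rough sketch of the contrast Wright describes above (my own toy, not Wright's or Forrester's actual code): a Forrester-style model tracks a handful of aggregate numbers, while a cellular-automaton model gives every map cell its own state and lets spatial pattern emerge from purely local rules.

    GRID = 9
    pop = [[0.0] * GRID for _ in range(GRID)]
    pop[4][4] = 100.0  # a founding settlement in the center of the map

    def step(grid):
        """One tick: each cell relaxes toward the average of its neighborhood."""
        new = [[0.0] * GRID for _ in range(GRID)]
        for i in range(GRID):
            for j in range(GRID):
                nbrs = [grid[i + di][j + dj]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if 0 <= i + di < GRID and 0 <= j + dj < GRID]
                new[i][j] = 0.8 * grid[i][j] + 0.2 * sum(nbrs) / len(nbrs)
        return new

    for _ in range(20):
        pop = step(pop)

    print(f"aggregate population: {sum(map(sum, pop)):.0f}")  # the 'spreadsheet' view
    for row in pop:  # the spatial view an aggregate-only model never shows
        print(" ".join(f"{v:4.1f}" for v in row))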

Chaim Gingold's SimCity Reverse Diagrams:

https://smalltalkzoo.computerhistory.org/users/Dan/uploads/S...

>These reverse diagrams map and translate the rules of a complex simulation program into a form that is more easily digested, embedded, disseminated, and discussed (Latour 1986).

>The technique is inspired by the game designer Stone Librande’s one page game design documents (Librande 2010).

>If we merge the reverse diagram with an interactive approach—e.g. Bret Victor’s Nile Visualization (Victor 2013), such diagrams could be used generatively, to describe programs, and interactively, to allow rich introspection and manipulation of software.

Will Wright on Designing User Interfaces to Simulation Games (1996) (2023 Video Update):

https://news.ycombinator.com/item?id=34573406

https://donhopkins.medium.com/designing-user-interfaces-to-s...

>Some muckety-muck architecture magazine was interviewing Will Wright about SimCity, and they asked him a question something like “which ontological urban paradigm most influenced your design of the simulator, the Exo-Hamiltonian Pattern Language Movement, or the Intra-Urban Deconstructionist Sub-Culture Hypothesis?” He replied, “I just kind of optimized for game play.”

https://news.ycombinator.com/item?id=22062590

>DonHopkins on Jan 16, 2020 | parent | context | favorite | on: Reverse engineering course

Will Wright defined the "Simulator Effect" as how game players imagine a simulation is vastly more detailed, deep, rich, and complex than it actually is: a magical misunderstanding that you shouldn’t talk them out of. He designs games to run on two computers at once: the electronic one on the player’s desk, running his shallow tame simulation, and the biological one in the player’s head, running their deep wild imagination. "Reverse Over-Engineering" is a desirable outcome of the Simulator Effect: what game players (and game developers trying to clone the game) do when they use their imagination to extrapolate how a game works, and totally overestimate how much work and modeling the simulator is actually doing, because they filled in the gaps with their imagination and preconceptions and assumptions, instead of realizing how many simplifications and shortcuts and illusions it actually used.

https://www.masterclass.com/classes/will-wright-teaches-game...

>There's a name for what Wright calls "the simulator effect" in the video: apophenia. There's a good GDC video on YouTube where Tynan Sylvester (the creator of RimWorld) talks about using this effect in game design.

https://en.wikipedia.org/wiki/Apophenia

>Apophenia (/æpoʊˈfiːniə/) is the tendency to mistakenly perceive connections and meaning between unrelated things. The term (German: Apophänie) was coined by psychiatrist Klaus Conrad in his 1958 publication on the beginning stages of schizophrenia. He defined it as "unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness". He described the early stages of delusional thought as self-referential, over-interpretations of actual sensory perceptions, as opposed to hallucinations.

RimWorld: Contrarian, Ridiculous, and Impossible Game Design Methods

https://www.youtube.com/watch?v=VdqhHKjepiE

5 game design tips from Sims creator Will Wright

https://www.youtube.com/watch?v=scS3f_YSYO0

>Tip 5: On world building. As you know by now, Will's approach to creating games is all about building a coherent and compelling player experience. His games are comprised of layered systems that engage players creatively, and lead to personalized, sometimes unexpected outcomes. In these types of games, players will often assume that the underlying system is smarter than it actually is. This happens because there's a strong mental model in place, guiding the game design, and enhancing the player's ability to imagine a coherent context that explains all the myriad details and dynamics happening within that game experience.

>Now let's apply this to your project: What mental model are you building, and what story are you causing to unfold between your player's ears? And how does the feature set in your game or product support that story? Once you start approaching your product design that way, you'll be set up to get your customers to buy into the microworld that you're building, and start to imagine that it's richer and more detailed than it actually is.

cantor_S_drug 10 hours ago||
Modernizing software systems takes time because of inherent corruption in the procurement process or in the workings of the consulting company involved. Those problems could be solved much faster and more cheaply if a knowledgeable tech person were involved.

Hertz vs. Accenture: In 2019, car rental company Hertz sued Accenture for $32 million in fees plus additional damages over a failed website and mobile app project. Hertz claimed Accenture failed to deliver a functional product, missed multiple deadlines, and built a system that did not meet the agreed-upon requirements.

Marin County vs. Deloitte: In 2010, California's Marin County sued Deloitte Consulting for $30 million over a failed SAP ERP implementation. The county alleged Deloitte misrepresented its skills and used the county as a "training ground" for inexperienced consultants.

wvlia5 9 hours ago|
[flagged]
tomhow 4 hours ago||
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

Please don't fulminate. Please don't sneer...

Eschew flamebait. Avoid generic tangents. Omit internet tropes.

Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

Please don't use Hacker News for political or ideological battle. It tramples curiosity.

https://news.ycombinator.com/newsguidelines.html

DonHopkins 8 hours ago|||
So what does that make you, for "feeling" and "trolling" by projecting your ignorant political ideology onto something you know nothing about, instead of "thinking" and "experimenting" and "publishing" and "collaborating"? Exactly what profession are you role playing yourself?
voidhorse 9 hours ago||
You clearly haven't read much in the field of systems thinking, then. Many of the practitioners and most of its pioneers are in fact actual mathematicians, biologists, or computer scientists (Wiener, von Foerster, Banathy, etc.).
wvlia5 9 hours ago||
Could you quote a non-trivial "systems thinking" theorem or tool such that, by knowing it, I will be able to solve a problem I couldn't solve before?
whatever1 8 hours ago|||
The entire field of process design & automatic process control. This is literally the O.G. job description of a chemical engineer. The field of grid design and balancing: again, the job description of an electrical engineer.
wvlia5 8 hours ago||
Yes, but in all your examples it's the same: you learn the specific subject and you implicitly learn "systems thinking" along the way; it's not like you learn "systems thinking" first as the hard part, and then learn electronic components as an implementation detail to become an electrical engineer.
whatever1 7 hours ago||
I see your point. Indeed, we do learn it bottom-up. But why do you think the opposite is impossible? It seems like a transferable skill across domains.
voidhorse 9 hours ago||||
This is totally orthogonal to your original claim that systems thinkers are "liberal" philosophers but OK.

McCulloch and Pitts, early cyberneticians, literally invented neural networks. See the Wikipedia page on neural nets.

Another really simple one: Law of Requisite Variety. If that's too simple, I'd encourage you to bear in mind that Norbert Wiener, beyond his direct contributions to mathematics in the form of signal processing filters, is also responsible for the view of control as communication, which motivates much of the approach to control and stability in digital systems.
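
For what it's worth, here's a minimal sketch of the Requisite Variety idea (my own toy model, not from the comment): a regulator observing a disturbance can only pin the outcome down to a single value if it has at least as many distinct responses as there are distinct disturbances.

    from itertools import product

    disturbances = [0, 1, 2, 3]        # four things the environment can do
    TARGET_MOD = 4

    def outcome(d, r):
        return (d + r) % TARGET_MOD    # the quantity we'd like pinned to one value

    def fewest_outcomes(num_responses):
        """Best a regulator can do when limited to `num_responses` responses."""
        best = None
        for table in product(range(num_responses), repeat=len(disturbances)):
            outs = {outcome(d, table[i]) for i, d in enumerate(disturbances)}
            if best is None or len(outs) < best:
                best = len(outs)
        return best

    for k in (1, 2, 4):
        print(f"{k} responses -> at best {fewest_outcomes(k)} distinct outcomes")
    # 1 response -> 4 outcomes, 2 -> 2, 4 -> 1: variety in the regulator is what
    # absorbs variety in the disturbances (Ashby's "only variety destroys variety").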

DonHopkins 8 hours ago|||
Here's one: reading the most basic Wikipedia page about a subject before you make up your mind based on your political ideology instead of any actual knowledge.