Posted by Gooblebrai 2 days ago
What the author did was use the Physical Church-Turing thesis, and Kleene's second recursion theorem, to show that: (1) If a universe’s dynamics are computable (PCT), and (2) the universe can implement universal computation (RPCT), then (3) the universe can simulate itself, including the computer doing the simulating.
That's basically all. And thus "there would be two identical instances of us, both equally 'real'." (Two numerically distinct processes are empirically identical if they are indistinguishable. You might remember this sort of thing from late 20th c. philosophy coursework.)
He also uses Rice’s theorem (old) to show that there is no uniform measure over the set of "possible universes."
It's all very interesting, but it's more a review article than a "new mathematical framework." The notion of a mathematical/simulated universe is as old as Pythagoras (~550 BC), and the Church-Turing thesis, Kleene's recursion theorem, and Rice's theorem all date from the 1930s to the early 1950s.
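Re the recursion-theorem step: it's essentially the quine trick, i.e. a program can be constructed that contains and uses its own complete description. A throwaway Python illustration (just the textbook construction, nothing from the paper itself):

    # A quine: the string s is a template for the program, so the running code
    # effectively carries its own full description (Kleene's fixed-point trick).
    # The two non-comment lines print themselves exactly.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

The paper's move is the same idea one level up: if the universe's dynamics are computable and it can host a universal computer, that computer can run a description that includes the computer itself.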
Maybe the problem is axiomatic deduction, and we need a new inference-ology?
Maybe this is out of my depth, but if you want to simulate Universe X plus the computer Y that simulates X, then you'd need at least one extra bit of memory (likely far more) to hold the simulation plus the computation running it (X+Y). The computer running the simulation is by definition not part of the simulation, so how can it truly simulate itself?
By thinking of memory usage, you’re restricting yourself to our perceived physical limits within our perceived reality.
But what if the things running the simulation did not have those limits? E.g. maybe data could be stored in an infinite number of multiverses outside of the infinite simulations being discussed. Any of the simulations could potentially simulate universes like ours while still allowing those simulations to contain others, to be contained by others, to have references to others, to have reflective references, etc. This makes anything and everything possible without necessarily removing the limits we have in our own simulation. It just depends on what’s running the simulation.
If a compressor can compress every input of length N bits into fewer than N bits, then at least 2 of the 2^N possible inputs must share the same output, since there are only 2^0 + ... + 2^(N-1) = 2^N - 1 strings shorter than N bits. Thus there cannot exist a universal compressor.
Modify as desired for fractional bits. The essential argument is the same.
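A quick sanity check of the counting in Python, if anyone wants to see the numbers (nothing deep, just the pigeonhole):

    # There are 2^N inputs of length N bits, but only 2^0 + ... + 2^(N-1) = 2^N - 1
    # strings shorter than N bits, so a "compress everything" map must collide.
    for n in range(1, 11):
        inputs = 2 ** n
        shorter_outputs = 2 ** n - 1
        assert shorter_outputs < inputs
        print(f"N={n}: {inputs} inputs vs {shorter_outputs} possible shorter outputs")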
The big riddle of the Universe is how all that matter loves to organize itself: from elementary particles to atoms, simple molecules, structured molecules, things, and finally life. Probably unsolvable, but that doesn't mean we shouldn't research and ask questions...
Isn't that 'just' the laws of nature + the 2nd law of thermodynamics? Life is the ultimate increaser of entropy, because for all the order we create we just create more disorder.
Conway's game of life has very simple rules (laws of nature) and it ends up very complex. The universe doing the same thing with much more complicated rules seems pretty natural.
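If anyone wants to see how small "very simple rules" actually is, here's a throwaway Python sketch (the wrapping 20x20 grid and the glider seed are my choices, nothing canonical):

    from collections import Counter

    # One Game of Life step on a wrapping w x h grid: a cell is live next step
    # if it has exactly 3 live neighbours, or it is live now and has exactly 2.
    def step(live, w, h):
        counts = Counter(
            ((x + dx) % w, (y + dy) % h)
            for (x, y) in live
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # a glider
    for _ in range(8):
        cells = step(cells, 20, 20)
    print(sorted(cells))  # the same glider shape, shifted diagonally by (2, 2)

That's the entire rule set, and on a big enough grid it's already enough for gliders, guns, and universal computation.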
Roughly, I'd call a system conscious when:
(1) It maintains a persisting internal model of an environment, updated from ongoing input.
(2) It maintains a persisting internal model of its own body or vehicle as bounded and situated in that environment.
(3) It possesses a memory that binds past and present into a single temporally extended self-model.
(4) It uses these models with self-derived agency to generate and evaluate counterfactuals: Predictions of alternative futures under alternative actions. (i.e. a general predictive function.)
(5) It has control channels through which those evaluations shape its future trajectories in ways that are not trivially reducible to a fixed reflex table.
This would also indicate that Boltzmann Brains are not conscious -- so it's no surprise that we're not Boltzmann Brains, which would otherwise be very surprising -- and that P-Zombies are impossible by definition. I've been working on a book about this for the past three years...
How do you define these terms without begging the question?
What this requires is a persistent internal model of: (A) what counts as its own body/actuators/sensors (a maintained self–world boundary), (B) what counts as its history in time (a sense of temporal continuity), and (C) what actions it can take (degrees of freedom, i.e. the future branch space), all of which are continuously used to regulate behavior under genuine epistemic uncertainty. When (C) is robust, abstraction and generalization fall out naturally. This is, in essence, sapience.
By "not trivially reducible," I don't mean "not representable in principle." I mean that, at the system's own operative state/action abstraction, its behavior is not equivalent to executing a fixed policy or static lookup table. It must actually perform predictive modeling and counterfactual evaluation; collapsing it to a reflex table would destroy the very capacities above. (It's true that with an astronomically large table you can "look up" anything -- but that move makes the notion of explanation vacuous.)
Many robots and AIs implement pieces of this pipeline (state estimation, planning, world models), but current deployed systems generally lack a robust, continuously updated self-model with temporally deep, globally integrated counterfactual control in this sense.
If you want to simplify it a bit, you could just say that you need a robust and bounded spatial-temporal sense, coupled to the ability to generalize from that sense.
I think I agree you've excluded them from the definition, but I don't see why that has an impact on likelihood.
"P-zombie" is not a coherent leftover possibility once you fix the full physical structure. If a system has the full self-model (temporal-spatial sense) / world-model / memory binding / counterfactual evaluator / control loop, then that structure is what having experience amounts to (no extra ingredient need be added or subtracted).
I hope I don't later get accused of plagiarizing myself, but let's embark on a thought experiment. Imagine a bitter, toxic alkaloid that does not taste bitter. Suppose ingestion produces no distinctive local sensation at all – no taste, no burn, no nausea. The only "response" is some silent parameter in the nervous system adjusting itself, without crossing the threshold of conscious salience. There are such cases: Damaged nociception, anosmia, people congenitally insensitive to pain. In every such case, genetic fitness is slashed. The organism does not reliably avoid harm.
Now imagine a different design. You are a posthuman entity whose organic surface has been gradually replaced. Instead of a tongue, you carry an in‑line sensor which performs a spectral analysis of whatever you take in. When something toxic is detected, a red symbol flashes in your field of vision: “TOXIC -- DO NOT INGEST.” That visual event is a quale. It has a minimally structured phenomenal character -- colored, localized, bound to alarm -- and it stands in for what once was bitterness.
We can push this further. Instead of a visual alert, perhaps your motor system simply locks your arm; perhaps your global workspace is flooded with a gray, oppressive feeling; perhaps a sharp auditory tone sounds in your private inner ear. Each variant is still a mode of felt response to sensory information. Here's what I'm getting at with this: There is no way for a conscious creature to register and use risky input without some structure of "what it is like" coming along for the ride.
The usual framing offers only two positions:
1. Qualia exist as something separate from functional structure (so p-zombies are conceivable)
2. Qualia don't exist at all (Dennett-style eliminativism)
But I say that there is a third position: Qualia exist, but they are the internal presentation of a sufficiently complex self-model/world-model structure. They're not an additional ingredient that could be present or absent while the functional organization stays fixed.
To return to the posthuman thought experiment, I'm not saying the posthuman has no qualia, I'm saying the red "TOXIC" warning is qualia. It has phenomenal character. The point is that any system that satisfies certain criteria and registers information must do so as some phenomenal presentation or other. The structure doesn't generate qualia as a separate byproduct; the structure operating is the experience.
A p-zombie is only conceivable if qualia are ontologically detachable, but they're not. You can't have a physicalism which stands on its own two feet and have p-zombies at the same time.
Also, it's a fundamentally silly and childish notion. "What if everything behaves exactly as if conscious -- and is functionally analogous to a conscious agent -- but secretly isn't?" is hardly different from "couldn't something be H2O without being water?," "what if the universe was created last Thursday with false memories?," or "what if only I'm real?" These are dead-end questions. Like 14-year-old-stoner philosophy: "what if your red is ackshuallly my blue?!" The so-called "hard problem" either evaporates in the light of a rigorous structural physicalism, or it's just another silly dead-end.
I’ve heard a similar thought experiment to your bitterness one from Keith Frankish: You have the choice between two anesthetics. The first one suppresses your pain quale, meaning that you won’t _feel_ any pain at all. But it won’t suppress your external response: you will scream, kick, shout, and do whatever you would have done without any anesthetic. The second one is the opposite: it suppresses all the external symptoms of pain. You won’t budge, you’ll be sitting quiet and still as some hypothetical highly painful surgical procedure is performed on you. But you will feel the pain quale completely, it will all still be there.
I like it because it highlights the tension in the supposed platonic essence of qualia. We can’t possibly imagine how either of these two drugs could be manufactured, or what it would feel like.
Would you classify your view as some version of materialism? Is it reductionist? I’m still trying to grasp all the terminology, sometimes it feels there’s more labels than actual perspectives.
It all gets filtered through consciousness.
"Objectivity" really means a collection of organisms having (mostly) the same subjective experiences, and building the same models, given the same stimuli.
Given that less intelligent organisms build simpler models with poorer abstractions and less predictive power, it's very naive to assume that our model-making systems aren't similarly crippled in ways we can't understand.
Or imagine.
That's the question that prevents me from being an atheist and shifts me to agnosticism.
A lot of people are more interested in the Why of the Universe than the How, though.
How is an implementation detail, Why is "profound". At least that's how I think most people look at it.
Ha
A universe is simply a function, and a function can be called multiple times with the same/different arguments, and there can be different functions taking the same or different arguments.
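In that spirit, a toy sketch in Python (the rules and numbers here are invented, just to make the metaphor concrete):

    # A "universe" as a pure function: the same rule applied to the same
    # arguments always yields the same history; different arguments or a
    # different rule yield a different universe.
    def universe(rule, state, steps):
        history = [state]
        for _ in range(steps):
            state = rule(state)
            history.append(state)
        return history

    rule_a = lambda x: (3 * x + 1) % 17   # arbitrary toy dynamics
    rule_b = lambda x: (x * x + 2) % 17

    assert universe(rule_a, 5, 10) == universe(rule_a, 5, 10)  # same call, identical history
    print(universe(rule_a, 7, 10))   # same function, different argument
    print(universe(rule_b, 5, 10))   # different function, same argument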
There's no guarantee their logic is the same as our logic. It needs to be able to simulate our logic, but that doesn't mean it's defined or bound by it.
> He also uses Rice’s theorem (old) to show that there is no uniform measure over the set of "possible universes."
I assume this means a finite uniform measure? Presumably the counting measure |·| is a uniform measure over the set of "possible universes".
Anyway, if I understood that correctly, this is not that surprising? There isn't a finite uniform measure over the real line. If you only consider the possible universes of two particles at any distance from each other, this models the real line and therefore has no finite uniform measure.
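For concreteness, the standard one-liner (ordinary measure theory, not anything from the paper): if "uniform" means translation-invariant, every unit interval gets the same mass c, so

    \mu(\mathbb{R}) \;=\; \sum_{n \in \mathbb{Z}} \mu\big([n, n+1)\big) \;=\; \sum_{n \in \mathbb{Z}} c \;\in\; \{0, \infty\}

and it can never be normalized to a finite (let alone probability) measure.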
If the universe isn't "real" in the materialist sense, that does not imply that there's a "real" universe outside of the one we perceive, nor does it imply that we're being "simulated" by other intelligences.
The path of minimal assumptions from reality not being "real" is idealism. We're not simulated, we're manifesting.
> Tegmark's MUH is the hypothesis that our external physical reality is a mathematical structure. That is, the physical universe is not merely described by mathematics, but is mathematics — specifically, a mathematical structure. Mathematical existence equals physical existence, and all structures that exist mathematically exist physically as well. Observers, including humans, are "self-aware substructures (SASs)". In any mathematical structure complex enough to contain such substructures, they "will subjectively perceive themselves as existing in a physically 'real' world".
https://en.wikipedia.org/wiki/Mathematical_universe_hypothes...
The classical, naive way of obtaining a consistent causal chain is to lay the links down one after another, following the order defined by simulation time.
The more fun question is: can it be done another way? With the advance of generative AI and things like diffusion models, it's theoretically possible (universal distribution approximation). It's not so much simulating a timeline as sampling the whole timeline while enforcing its physics-law self-consistency from both directions of the causal graph.
In toy models like the Game of Life we can even have recursive simulation: https://news.ycombinator.com/item?id=33978978 (unlike section 7.3 of this paper, where the computers of the lower simulations are started in ordered time).
In other toy models you can use a diffusion model to learn and map the chaotic distribution of all possible three-body trajectories.
Although sampling can be simulated, doing it efficiently requires exploring all the possible universes simultaneously, as in QM (which we can approximate by exploring only a finite number of them, bounding the neighboring-universe region according to the question we are trying to answer via the Lipschitz continuity property).
Sampling lets you bound maximum computational usage and be sure of reaching your end-time target, at the risk of not being perfectly physically consistent. Simulating, on the other hand, risks the lower simulations siphoning off computational resources and preventing simulation time from ever reaching its end-time target, but whatever you do compute is guaranteed consistent.
Sampled bottled universes are ideal for answering questions like how many years a universe needs before life can emerge, while simulated bottled universes are like a box of chocolates: you never know what you're going to get.
The question being: can you tell which bottle you are currently in, and which bottle would you rather be in?
Does the potential cause current? No, they coexist.
How is this consistent with the second law of thermodynamics? If there is one universe containing an infinite number of simulations (some of which simulate the base universe) wouldn’t there be a limit to how much computation could be contained? By its very nature a chain of simulations would grow exponentially with time, rapidly accelerating heat death. That may not require the simulations to degrade but it puts a hard limit on how many could be created.
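Toy bookkeeping for that worry, with an invented branching factor b (each universe hosting b child simulations):

    # If every universe hosts b child simulations, depth d contains b**d
    # universes, and the base universe ultimately has to pay for all of them.
    b = 3
    for d in range(6):
        total = sum(b ** k for k in range(d + 1))
        print(f"depth {d}: {b ** d} universes at this level, {total} to run in total")

So even before thermodynamics, the base layer's resources cap how deep and how wide the chain can go.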