Posted by nooks 12/3/2025
The thread: > Replacing ECCA1 with a version that takes the step after the direction change could save something like 1% of the ECCA1 bit size. Having ECCA1 compile an agnosticized program instead of a fixed-lane program could save something like 1% as well (just guesses). Building a smaller ECCA1 would shorten the binary portion, but it would hardly be seen in the ship size.
> Using an agnosticized recipe in the fuse portion would definitely reduce its size. A better cordership seed and a better salvo for gpse90 would help…
Dear lord I had no idea there’s this much jargon in the game of life community. Gonna be reading the wiki for hours
https://codegolf.stackexchange.com/questions/11880/build-a-w...
I would make the case that, zoomed out far enough, nothing at all is meaningful, so you might as well make beautiful things, and this is a delightfully beautiful thing.
There's something exceedingly interesting about how you can model complexity with something extremely simple. Brainfuck is fun because it forces you to think extremely low level, because ultimately it is basically just a raw implementation of a Turing machine. I wouldn't want to write a big program in it, but it is fun to think about how you might express a complicated algorithm with it.
Similarly with CGOL, it is really interesting to see how far you can stretch really simple rules into something really complex.
I've written CGOL dozens of times; it's a common project that I do to "break in" a language I've learned, since it's not completely trivial but simple enough not to be frustrating, and I completely understand why math/computability-theory folks find it something to dedicate brain power to.
Honestly, as an educational tool, the only thing wrong with it is the name!
Primarily because of the note on the "calculating pi" example program:
> Richard Mitton supplies this amazing program which calculates an approximation of pi... literally by dividing a circular area by the radius twice.
> Naturally, a more accurate value can be obtained by using a bigger program.
This was recreated from memory. I think it is close but I may have a bounding bug.
```
import random

def pi(count):
    inside = 0
    for i in range(count):
        test_x = random.random()
        test_y = random.random()
        if test_x ** 2 + test_y ** 2 < 1:
            inside += 1
    return inside / count * 4  # above is a quarter circle

print(pi(2 ** 30))
```
```
$ cat << EOF > pi.py
state = [0, 0, 2**8, 2**12]; _ = [print(f'\rRun {state.__setitem__(0, state[0] + 1) or state[0]}/{state[3]} | Last \u03c0: {current_pi:.6f} | *Average \u03c0: {(state.__setitem__(1, state[1] + current_pi) or state[1]) / state[0]:.6f}*', end='', flush=True) for current_pi in [(4 * sum([1 for _ in range(state[2]) if __import__("random").random()**2 + __import__("random").random()**2 < 1]) / state[2]) for _ in range(state[3])]]; print()
EOF
$ time python3 pi.py
Run 4096/4096 | Last π: 3.140625 | *Average π: 3.143051*
python3 pi.py  0.41s user 0.01s system 99% cpu 0.429 total
```
Play around with the `2**8` and `2**12` values in the state; they control the number of samples drawn per round and the number of rounds, respectively.
I would probably need to find a similar language with a different name though.
You don’t really get any of that with brainfuck. You have a theoretical tape and counters and that’s basically it.
"What if we had the lambda calculus without the lambda forms?" asked no one.
Writing a naive CGOL is fun and quick. But writing a _fast_ one can get arbitrarily complicated.
https://en.wikipedia.org/wiki/Hashlife is one particular example of where you can go for a faster-than-naive CGOL.
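For reference, the naive version really is just a few lines. Here's a sketch in Python (my own, nothing canonical; it assumes live cells are stored as a set of (x, y) pairs):

```
from collections import Counter

def step(live):
    # Count live neighbours for every cell adjacent to at least one live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # B3/S23: a cell is born with exactly 3 live neighbours,
    # and survives with 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same glider, shifted one cell diagonally
```

Hashlife, by contrast, memoizes the evolution of whole sub-quadrants of the board across time steps, which is where the complexity explodes.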
It goes far beyond just cellular automata; the thousand or so pages all seem to drive home the same few points:
- "I, Stephen Wolfram, am an unprecedented genius" (not my favorite part of the book) - Simple rules lead to complexity when iterated upon - The invention of field of computation is as big and important of an invention as the field of mathematics
The last one is less explicit, but it's what I took away from it. Computation is of course part of mathematics, but it is a kind of "live" mathematics. Executable mathematics.
Super cool book and absolutely worth reading if you're into this kind of thing.
I say something-like-Turing completeness because it requires a very specially prepared tape to work, which makes it a bit borderline. (But please look it up properly, this is all from memory.)
Having said all that, the result is a nice optimisation / upper bound on how little you need in terms of CA to get Turing completeness, but I agree that philosophically nothing much changes compared to having to use a slightly more complicated CA to get to Turing completeness.
You could do it on an analog computer but then you'd be into the noise very quickly.
In theory you can, but in practice this is super hard to do.
Btw, quantum mechanics is both linear and stable--and even deterministic. Admittedly it's a bit of a mystery how the observed chaotic nature of eg Newtonian billiard balls emerges from quantum mechanics.
'Stable' in this case means that small perturbations in the input only lead to small perturbations in the output. You can insert your favourite epsilon-delta formalisation of that concept, if you wish.
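One way to spell that out, as a sketch (my notation; \Phi_t is the map taking an initial state to the state after time t):

```
% Stability as continuous dependence on initial conditions (sketch):
\forall t \;\; \forall \varepsilon > 0 \;\; \exists \delta > 0 : \qquad
\lVert x - x' \rVert < \delta \implies \lVert \Phi_t(x) - \Phi_t(x') \rVert < \varepsilon
```

Chaos is then the regime where the required \delta shrinks exponentially as t grows, which is what makes long-horizon prediction hopeless in practice.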
To get back to the meat of your comment:
You can simulate such a stable system 'lazily'. Ie you simulate it with any given fixed precision at first, and (only) when someone zooms in to have a closer look at a specific part, you increase the precision of the numbers in your simulation. (Thanks to the finite speed of light, you might even get away with only re-simulating that part of your system with higher fidelity. But I'm not quite sure.)
Remember those fractal explorers like Fractint that used to be all the rage: they were digital at heart---obviously---but you could zoom in arbitrarily as if they had infinite continuous precision.
Sure, but that 'If' isn't true for all but the simplest analog systems. Non-linearities are present in the most unexpected places and just about every system can be made to oscillate.
That's the whole reason digital won out: not because we can't make analog computers but because it is impossible to make analog computers beyond a certain level of complexity if you want deterministic behavior. Of course with LLMs we're throwing all of that gain overboard again but the basic premise still holds: if you don't quantize you drown in an accumulation of noise.
Quantum mechanics is linear and stable. Quantum mechanics is behind all systems (analog or otherwise), unless they become big enough that gravity becomes important.
> That's the whole reason digital won out: not because we can't make analog computers but because it is impossible to make analog computers beyond a certain level of complexity if you want deterministic behavior.
It's more to do with precision: analog computers have tolerances. It's easier and cheaper to get to high precision with digital computers. Digital computers are also much easier to make programmable. And in the case of analog vs digital electronic computers: digital uses less energy than analog.
Or even just lots and lots of variation and some process selecting which one we focus our attention on. Compare the anthropic principle.
In our current algorithm-obsessed era, this is reminiscent of procedural generation (but down/up the scale of complexity, not "No Man's Sky"-style PG).
However, we also have a long track record of seeing the world as nails for our latest hammer. The idea of an algorithm, or even computation in general, could be in reality conceptually closer to "pointy stone tool" than "ultimate substrate".
That's a tempting thing to say, but quantum mechanics suggests that we don't have infinite layers at the bottom. The arguments are mostly thermodynamic ones combined with quantum mechanics. See eg https://en.wikipedia.org/wiki/Bekenstein_bound about the amount of information that can even in theory be contained in a specific volume of spacetime.
> the maximum amount of information that is required to perfectly describe a given physical system _down to the quantum level_
(emphasis added by me)
It looks like it makes predictions for the quantum layer and above.
--
Historically, we humans have a long proven track record of missing layers at the bottom that were unknown but now are known.
(If you don't recognize that use of "shipping", don't google it at work.)
That's kind of amazing. I wish someone unpacked the units of abstraction/compilation that must surely exist here.
Surely they aren't developing this with 1 or 0 as the abstraction level!
It’s also a relatively sparse line, as the number of live cells is less than a hundredth of the line’s extent: https://conwaylife.com/wiki/Unidimensional_spaceship_1
> The third and fourth arms are extreme compression construction arms "ecca", where a programming language interpreter is created and individual incoming letters are interpreted as instructions specifying which phase (mod 2) and line of glider to emit.
Almost 10 years of development.
Given that it starts as a single line, it is symmetric about the axis defined by that line, and hence can't possibly move diagonally or perpendicular to the line. So it must move in the direction of the line.
I was a bit confused by that wiki page because it says "Direction Orthogonal" but like you said that can't be.
We all know how to do that, but that's not why we're here.
What we’re less capable of—and the reason we look to each other here instead—is distinguishing where the LLM’s errors or misinterpretations lie. The gross mistakes are often easy enough to spot, but the subtle misstatements get masked by its overconfidence.
Luckily for us, a lot of the same people actually doing the work on the stuff we care about tend to hang out around here. And often, they’re kind enough to duck in and share.
Thank you in any case for being upfront about it. It’s just that it’d be a shame and a real loss if the slop noise came to drown out the signal here.
Your SO is likely only enthused to the degree that it affects your mood. "So this RISC architecture isn't compliant with ADA-1056 after all? And you were right all along? Wow, that's great, honey!"
I picture entities playing with our universe, "it starts slow but check it out at the 13.8B mark"
Of course, there are other variants (see qntm's https://qntm.org/responsibility) where the simulation _is_ a simulation of the world outside. And we have GoL in GoL :-)
1. What is the behavior of Conway's Game of Life when the initial position is random? Paraphrasing Boris Bukh's comment on the post linked below, the Game of Life supports self-replication and is Turing-complete, and therefore can support arbitrarily intelligent programs. So, will a random initial position (tend to) be filled with super-intelligent life forms, or will the chaos reign?
There exist uncountably infinitely many initial configurations out of which a random one may be drawn, which makes this more difficult. (A particular infinite grid configuration can be represented as the binary digits (fractional part) of a real number, spiraling outwards from a given center cell: 0.0000... represents an empty infinite grid, 0.1111... a fully alive one; see the sketch after these questions.)
https://mathoverflow.net/questions/132402/conways-game-of-li...
2. Relatedly, does a superstable configuration exist? One that continues to exist despite any possible external interference pattern on its border? Perhaps even an expanding one?
https://mathoverflow.net/questions/132687/is-there-any-super...
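To make the spiral encoding from point 1 concrete, here's a sketch (my own coordinate convention; nothing about it is canonical). Digit i of the binary fraction gives the state of the i-th cell along an outward square spiral:

```
import itertools

def spiral():
    # Yield cell coordinates in an outward square spiral from the centre.
    x = y = 0
    yield (x, y)
    step = 1
    while True:
        for dx, dy, n in ((1, 0, step), (0, 1, step),
                          (-1, 0, step + 1), (0, -1, step + 1)):
            for _ in range(n):
                x, y = x + dx, y + dy
                yield (x, y)
        step += 2

# The real number 0.101000... (binary) as a grid: digits 0 and 2 are alive.
digits = [1, 0, 1, 0, 0, 0]
live = {c for c, d in zip(itertools.islice(spiral(), len(digits)), digits) if d}
print(live)  # {(0, 0), (1, 1)}
```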
One of the chapters asks "what is life?". It considers (and rejects) various options, and finally settles upon a definition based on Von Neumann-style self-replicating machines using blueprints and universal constructors, and explains why this is the most (only?) meaningful definition of life.
Later, it talks about how one would go about creating such a machine in Conway's Game of Life. When the book was written in 1984, no one had actually created one (they need to be very large, and computers weren't really powerful enough then). But in 2010 Andrew J. Wade created Gemini, the first successful self-replicating machine in GoL, which I believe meets the criteria - and hence is "alive" according to that definition (but only in the sense that, say, a simple bacterium is alive). And I think it works somewhat like how it was sketched out in the book.
Another chapter estimated how big (and how densely populated) a randomly-initialized hypothetical GoL universe would need to be in order for "life" (as defined earlier) to appear by chance. I don't recall the details - but the answer was mind-bogglingly big, and also very sparsely populated.
All that only gives you life though, not intelligence. But life (by this definition) has the potential to evolve through a process of natural selection to achieve higher levels of complexity and eventually intelligence, at least in theory.
You might have better luck with other variants. Reversible cellular automata have a sort of 'conservation of mass' where cells act more like particles. Continuous cellular automata (like Lenia) have less chaotic behavior. Neural cellular automata can be trained with gradient descent.
I think people will disagree about whether “Turing-complete” is powerful enough for supporting intelligence but let’s assume it does.
> So, will a random initial position (tend to) be filled with super-intelligent life forms, or will the chaos reign?
Even if it doesn't, it might take only one intelligent life form for the space to (eventually) get filled with it (the Game of Life doesn't have energy constraints that make it hard to travel over long distances, so I don't see a reason why it wouldn't; on the other hand, maybe my assumption that all intelligent life would want to expand is wrong), and in an infinite plane, it's likely (¿certain?) that one will exist.
On the other hand it’s likely more than one exists, and they might be able to exterminate each other.
It wouldn't need to be intelligent to do this; it could be a self-replicating machine with no intelligence at all - which is orders of magnitude simpler and therefore more likely.
Chaotic initial state -> self-replicating machine -> intelligence is much more likely than chaotic initial state -> intelligence.
(See my other reply to the GP comment about The Recursive Universe, where all this is discussed.)
Here's an excerpt from a comment of Daniel Harlow (a prof at MIT):
> In order for us to be having this discussion at all, the laws of physics need to have the ability to generate interesting complex structures in a reasonable amount of time starting from a simple initial state. Now I know that as a computer scientist you are trained to think that is a trivial problem because of Turing completeness, universality, blah blah blah, but really I don’t think it is so simple. Why should the laws of physics allow a Turing machine to be built? And even if a Turing machine is possible, why should one exist? I think the CS intuition that “most things are universal” comes with baked-in assumptions about the stability of matter and the existence of low-entropy objects, and I think it is not so easy to achieve these with arbitrary laws of physics.
Scott replies:
> Multiple people made the case to me that it’s far from obvious how well
(1) stable matter,
(2) complex chemistry,
(3) Lorentzian and other continuous symmetries,
(4) robustness against small perturbations,
(5) complex structures
being not just possible but likely from “generic” initial data,…can actually be achieved in simple Turing-universal classical cellular automaton models.
See comments 225 and 261
Does that you exist any less fully because it's not currently in the memory of a computer being evaluated?
This has not been proven yet: https://math.stackexchange.com/a/216348/575868
(or more generally: https://en.wikipedia.org/wiki/Disjunctive_sequence)
Depending on the infinite-grid filling scheme, even these properties may not be sufficient to guarantee that every two-dimensional pattern is initially generated, because the grid is two-dimensional but the number property is "one-dimensional". A spiral pattern, for example, may always line up in a way such that certain 2D patterns are never generated.
Starting at the origin, mark off rows of squares. The Nth row would contain 2^(N^2) squares of size N x N, one for every possible N x N pattern. Each square would be filled in left-to-right reading order with successive binary numbers, with the most significant digit at the top left.
Somewhere in that pattern is the physics simulation of you reading this comment :)
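A sketch of that enumeration as I read it (my reconstruction; row N holds one square for each of the 2^(N^2) possible N x N bit patterns):

```
def squares(N):
    # All 2**(N*N) possible N x N binary squares, as successive binary
    # numbers zero-padded to N*N digits, most significant bit at top-left.
    for k in range(2 ** (N * N)):
        bits = format(k, f'0{N * N}b')
        yield [bits[r * N:(r + 1) * N] for r in range(N)]

for sq in squares(2):  # the 16 squares in row N=2
    print(*sq, sep='\n', end='\n\n')
```

Since every finite pattern fits in some N x N square, every finite pattern (including any given simulation) appears somewhere in the layout.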
This is a linear sequence of bits, which when interpreted as a Game of Life board, "prints" an exact copy of itself 2 pixels to the right (leaving no trace of the original).
I suppose its job would be easier if it only had to construct a copy of itself rather than "moving" itself, but I enjoy the interpretation that it's a linear "tape" of bits which prints its own code translated by 2 pixels, and takes an unfathomable amount of time and space to do so. Beautiful.
Actually more than one true replicator has been constructed. The 0E0P metacell
https://conwaylife.com/wiki/0E0P_metacell
can be programmed to self-replicate in any number of ways, but it's so big that it's very hard to simulate it through a full cycle. By contrast, Pavel Grankovskiy's "DOGun SaGaQR"
https://conwaylife.com/forums/viewtopic.php?&p=138191#p13819...
only has one pattern of replication, but it's much simpler than the 0E0P metacell and can easily be run through many cycles in Golly.
In the book he also talks about GoL, but one of the fascinating experiments he did is a "computronium" of initially random Brainfuck programs (which obviously don't do anything interesting in the beginning) that mutate (by flipping random bits) and merge (by taking two random programs and sticking them together); eventually, after a sudden phase transition, they start to self-replicate by producing gradually better copies of themselves!
He also argues that symbiogenesis (merging replicators into a whole that does more than its parts) is the main driving force of evolution instead of just random mutations, because the random Brainfuck computronium eventually produces replicators even without the random bit flips.
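For a feel of the setup, here's a rough structural sketch (my own simplification, not his code; `run_bf` is stubbed out, where a real version would interpret the tape as self-modifying Brainfuck-like code):

```
import random

SOUP_SIZE, TAPE_LEN = 1024, 64

def run_bf(tape: bytes) -> bytes:
    # Stub: a real interpreter would execute the tape as code that can
    # read and rewrite its own bytes, then return the modified tape.
    return tape

def mutate(tape: bytes) -> bytes:
    b = bytearray(tape)
    b[random.randrange(len(b))] ^= 1 << random.randrange(8)  # flip one bit
    return bytes(b)

soup = [bytes(random.randrange(256) for _ in range(TAPE_LEN))
        for _ in range(SOUP_SIZE)]

for epoch in range(10_000):
    # Merge: run two random tapes concatenated as one program, split after.
    i, j = random.randrange(SOUP_SIZE), random.randrange(SOUP_SIZE)
    merged = run_bf(soup[i] + soup[j])
    soup[i], soup[j] = merged[:TAPE_LEN], merged[TAPE_LEN:TAPE_LEN * 2]
    if random.random() < 0.01:  # the optional random bit flips
        soup[i] = mutate(soup[i])
```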
Until now I was unaware of this strange GoL world and the amazing work people are doing in it. Sometimes I wonder what frontiers of progress we might reach if we as a human race fully utilized the amazing creative potential of the current generations.
I'm really charmed by the linked thread and all the passion and work it belies. Congrats to those involved!
>It's still an open problem as to whether there exists a spaceship in B3/S23 which fits within a 1-by-N bounding box in one of its phases.
So they use the "typical" rule here (B3/S23: a dead cell is born with exactly three live neighbours; a live cell survives with two or three).
https://parkscomputing.com/page/conways-game-of-life?boardSi...
That produces a spinner, because the empty cells above and below the 1D row have exactly three live cells adjacent.