Posted by konradx 5 days ago
- "identification of conceptual equivalencies among disparate phenomena were foundational to developing previous laws of nature" - what exactly is a "conceptual equivalence"? You mean models? Unifying disparate observations into models is basic science. Not sure why it is highlighted here as some important insight.
- "The laws of classical physics emerged as efforts to provide comprehensive, predictive explanations of phenomena in the macroscopic world" - followed by a layman's listing of physical laws, then goes on to claim "conspicuously absent is a law of increasing “complexity.”"
- then a jumble of examples including gravitation, stellar evolution, mineral evolution and biological evolution
- this just feels like a slight generalization of evolution: "Systems of many interacting agents display an increase in diversity, distribution, and/or patterned behavior when numerous configurations of the system are subject to selective pressure."
At this point, I gave up.
I don't really have an issue with any of the points you raised - why do they bother you?
The interesting stuff is the discussion about "functional information" later in the paper, which is their proposed quantitative measure for understanding the evolution of complexity (although it seems like early stages for the theory).
It's "just" a slight generalisation of the ideas of evolution but it applies to nonbiological systems and they can make quantitative predictions. If it turns out to be true then (for me) that is a pretty radical discovery.
I'm looking forward to seeing what can be demonstrated experimentally (the Quanta article suggests there is some evidence now, but I haven't yet dug into it).
Indeed, and Natural Philosophy was the precursor to what we now call Science.
I still think the old name better fit what we’re doing because it admits that the work is still a philosophical endeavor.
This is not to question the validity of what we now call science, but it's common these days to believe in the ultimate supremacy of science as the answer to questions that are best explored both philosophically and scientifically, even though pure science still can't answer important philosophical questions that the entire scientific discipline rests upon.
I'm describing unfounded beliefs many people hold about science based mostly on a lack of philosophical understanding, which is orthogonal to anti-science sentiment and still important to examine.
I don't see a reason for there to be tension between the two.
As a layperson, I often come in contact with people who believe in science but fall into what is essentially scientific absolutism and see philosophy as irrelevant. I was one of those people in my 20s before I went down some rabbit holes that set me straight. Many of the people around me did not.
The scientists I know are not the absolutist types. I sometimes forget there are more scientists here than in the average internet community.
now what?
or in my words: "the first approximation is poetic. the last one is mathematical"
from philosophy to hard-science and engineered tooling and other products (and/or services)
similarly to
from poetry as dubious, cloudy, and vague ideas all the way to crystal clear, fixed and unmoving (dead) formalizations
Idk about GP, but bad science writing ("identification of conceptual equivalencies ...") does bother me. It's sloppy, and tends to hide possibly invalid shortcuts taken by the authors by being an impenetrable fog of words. That sort of thing is a very good indicator of bunk, and it tends to peg my BS meter. Which isn't to say that there is no place for that sort of language in a scientific paper, but that one should preface the use of it with an admission of hand-waving for some purpose.
A distributed system can still achieve centralized outcomes as a result of centralizing constraints acting on it. For example, matter under gravity forces leads to celestial bodies, particles under EM forces lead to stable chemical molecules, genes and species under the constraint of replication lead to evolution, language under constraint of usage leads to the evolution of culture, and brains under the constraint of serial action lead to centralized semantics and behavior. In neural nets we have the loss function as a centralizing constraint, moving weights towards achieving a certain functional outcome.
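The neural-net case can be made concrete. In this sketch (mine, not from the thread; plain NumPy, toy linear model), each weight is updated independently, yet every update is driven by the same shared loss — the loss acting as the centralizing constraint that pulls the distributed parameters toward one functional outcome:

```python
import numpy as np

# Toy illustration: a shared loss function as a "centralizing constraint".
# The three weights are updated independently, but all updates are driven
# by the same global quantity (the mean squared error).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w                              # targets defined by true_w

w = np.zeros(3)                             # distributed "agents": 3 weights
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the shared loss
    w -= 0.1 * grad                         # each weight moves to reduce it

print(w)                                    # converges toward true_w
```

No weight "knows" about the others, yet the shared gradient signal coordinates them into a single functional solution — the same top-down/bottom-up duality described above.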
Ok, so what is the relation between centralizing constraints and recursion? Recursion is how distributed activity generates constraints. Every action becomes a future constraint. I think this approach shows great promise. We can link recursive incompressibility and undecidability to explanatory gaps. You can't know a recursive process unless you walk the full path of recursion, you have to be it to know it. There is no shorter description of a recursive process than its full history.
So what looks like constraints when seen top-down, looks like search seen bottom-up. Particles search for minimal energy, genes for survival, markets search for profit, and our actions for goal maximization. Search acts on all levels, but since constraints are emergent, search is also open-ended.
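The "you have to be it to know it" point can be illustrated with a toy example (my choice, not the commenter's): for a chaotic iterated map no closed-form shortcut is known, so the only way to learn the state at step n is to replay all n steps of the recursion.

```python
# Logistic map in its chaotic regime: no known shortcut formula exists
# for r = 3.9, so the process can only be known by walking its full path.
def step(x: float) -> float:
    return 3.9 * x * (1.0 - x)

def run(x0: float, n: int) -> float:
    x = x0
    for _ in range(n):      # the "full history": no shorter description
        x = step(x)
    return x

# Replaying the recursion reproduces the state exactly (deterministic)...
assert run(0.2, 1000) == run(0.2, 1000)

# ...but nearby starting points diverge, so no coarse summary of the
# trajectory predicts where it ends up.
print(run(0.2, 1000), run(0.2 + 1e-9, 1000))
```

This is the sense in which a recursive process is incompressible: its shortest faithful description is the computation itself.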
Complexity is probably most formally modeled as entropy in thermodynamics, although entropy behaves in the opposite direction from what these ideas and observations suggest it should.
It still raises questions about the reason for this complexity, and there is no scientific answer aside from "probably accidental complexity".
Science is curious so it probably shouldn't be dismissed by unmet formal requirements that aren't specified. "Layman" is unspecific, so what would your requirements be exactly?
No, a model is not an "identification of conceptual equivalencies among disparate phenomena". It's a simplified representation of a system.
"identification of conceptual equivalencies among disparate phenomena were foundational to developing previous laws of nature" could be called an analogy, an isomorphism, a unifying framework, etc.
>Unifying disparate observations into models is basic science. Not sure why it is highlighted here as some important insight.
Perhaps because the most important insights are the most basic ones - it's upon those that everything else sits.
>At this point, I gave up
If you can't be bothered to read beyond the abstract or the first paragraph, or are perplexed that the abstract gives a 10,000-foot simplistic introduction to the basics, then it's better that you did :)
This paper just reads like an attempt at sounding smart while actually saying little.
Good examples of these are anything that Kolmogorov-compresses well. For example, by almost any measure the output of a pseudo random number generator has high entropy. Yet it has low information density (low complexity), as the program that generates the sequence, plus its state, is really small.
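A quick sketch of that point (my example, using Python's stdlib `random` and `zlib`): a PRNG stream measures as high-entropy — a general-purpose compressor can't shrink it — yet its Kolmogorov complexity is tiny, since a few lines of source plus a seed regenerate the whole stream.

```python
import random
import zlib

# A 100 KB "random" byte string: statistically high entropy.
random.seed(42)
data = bytes(random.randrange(256) for _ in range(100_000))

# zlib (a general-purpose entropy estimate) can barely compress it:
ratio = len(zlib.compress(data, 9)) / len(data)
print(f"compression ratio: {ratio:.4f}")   # ~1.0: looks incompressible

# Yet the "program plus state" that produces the whole stream is just
# these few lines and the seed -- low Kolmogorov complexity:
random.seed(42)
assert data == bytes(random.randrange(256) for _ in range(100_000))
```

The gap between the near-1.0 compression ratio and the few-line generator is exactly the gap between statistical entropy and Kolmogorov complexity.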
I wonder if it always increases though? Eventually there will be enough entropy that any change may cause it to reduce or oscillate? (At universe / reachable universe scale).
It always increases in an isolated system. That caveat is almost always missing in pop-sci level of discussions about entropy, but it is crucial.
> Eventually there will be enough entropy that any change may cause it to reduce or oscillate?
Assuming that the universe is actually an isolated system, entropy will reach a maximum (it cannot oscillate). It is interesting to speculate, and of course our theories are imperfect and we are certainly missing something. In particular, the relationship between time and entropy is not straightforward. Very roughly: is the entropy a function of time, which we could define otherwise, or is time a consequence of entropy changes?
In the first case, we can suppose that if the universe reaches an entropy maximum we’d be far enough outside the conditions under which our theories work that we’d just have entropy decrease with time (i.e., the rule that entropy increases with time is only valid close to our usual conditions).
But in the second case, it would mean that the universe reached the end of time. It could evolve in any conceivable way (in terms of the fundamental laws of Physics), and the arrow of time would always point to the same moment. "What comes after?" would be a question just as meaningless as "what came before the Big Bang?"
In any case, there are a lot of assumptions and uncertainty. The story does not do the subject any justice.
The book describes "Assembly Theory", a theory of how life can arise in the universe. The idea is that you can quantitatively measure the complexity of objects (especially chemicals) by the number of recursive steps to create them. (The molecule ATP is 21 for instance.) You need life to create anything over 15; the idea of life is it contains information that can create structures more complex than what can be created randomly. The important thing about life is that it isn't spontaneous, but forms an unbroken chain through time. Explaining how it started may require new physics.
If the above seems unclear, it's because it is unclear to me. The book doesn't do a good job of explaining things. It looks like a mass-market science book, but I found it very confusing. For instance, it's unclear where the number 21 for ATP comes from, although there's an analogy to LEGO. The book doesn't define things and goes into many, many tangents. The author is very, very enthusiastic about the ideas but reading the book is like looking at ideas through a cloud of vagueness.
The writing is also extremely quirky. Everyone is on a first-name basis, from Albert (Einstein) to Johnny (von Neumann) and Erwin (Schrödinger). One chapter is written in the second person, and "you" turn out to be "Albert." The book pushes the idea that physics is great and can solve everything, covering physics "greatest hits" from relativity and quantum mechanics to gravitational waves and the Higgs boson. (The underlying theme is: "Physics is great. This book is physics. Therefore, this book is great.") The book has a lot of discussion of how it is a new paradigm, Kuhn's paradigm shifts, how it will move astrobiology beyond the pre-paradigmatic phase and unify fields of research and so forth. It's not a crackpot book, but there are an uncomfortable number of crackpot red flags.
I'm not rejecting the idea of assembly theory. To be honest, after reading the book, I don't understand it well enough to say which parts seem good and which parts seem flawed. There seem to be interesting ideas struggling to get out but I'm not getting them. (I don't like to be negative about books, but there are a few that I regret reading and feel that I should warn people.)
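The "number of recursive steps" idea can at least be made concrete with a toy string version (my sketch, not the book's actual algorithm — real assembly indices are computed over molecular bonds, and exact computation is expensive, so this exhaustive search only works for short strings): count the minimum number of join steps needed to build a target, where any previously built piece can be reused.

```python
from itertools import product

def assembly_index(target: str) -> int:
    """Minimum number of join (concatenation) steps to build `target`
    from its single characters, reusing any previously built piece.
    A toy, string-based analogue of an assembly index."""
    basics = frozenset(target)            # single characters come free
    if target in basics:
        return 0

    def dfs(pieces, depth, limit):
        if depth == limit:
            return False
        for x, y in product(pieces, repeat=2):
            z = x + y
            if z == target:               # final join succeeds in depth+1 steps
                return True
            # any useful intermediate must be a substring of the target
            if len(z) < len(target) and z in target and z not in pieces:
                if dfs(pieces | {z}, depth + 1, limit):
                    return True
        return False

    limit = 1
    while True:                           # iterative deepening: first success
        if dfs(basics, 0, limit):         # at limit L means L is minimal
            return limit
        limit += 1

print(assembly_index("ABAB"))   # 2: A+B -> AB, then AB+AB -> ABAB
print(assembly_index("ABCD"))   # 3: no reuse possible, built piece by piece
```

Reuse is what keeps the count low: "ABAB" costs 2 joins because the "AB" piece is built once and used twice, which is the LEGO-style intuition behind the theory's claim that high-index objects are vanishingly unlikely to form without something (like life) carrying the construction information forward.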
How do you know it's not a crackpot book? All evidence you mentioned here seems to support that conclusion.
"There is a theory which states that if ever anyone discovers exactly what the Universe is for and why it is here, it will instantly disappear and be replaced by something even more bizarre and inexplicable. There is another theory which states that this has already happened."
"When one compares a hotplate with and without a Benard cell apparatus on top, there is an overall increase in entropy as energy passes through the system as required by the second law, because the increase in entropy in the environment (at the heat sink) is greater than the decreases in entropy that come about by maintaining gradients within the Benard cell system."
(Think: no heat death!)
Related to another heresy understated by qmag just this week: https://news.ycombinator.com/item?id=43665831
In that case, qmag didn't (dare to?) shout loud enough that the para-particles are globally ?distinguishable..
That's like a very restricted version of TFA's claim though..
Another take on the issue:
https://scottaaronson.blog/?p=762
*I don't want to say "entropy" because it's not clear to many folks, including experts, whether entropy is uh, "correlated" or "anticorrelated" with complexity.
Also the value of entropy has different signs in thermodynamics and computer science for example. Not helpful either...
The article talks a lot about biological evolution, but in that case the only claim that is likely to be true is that the complexity of the entire biosphere increases continuously, unless a catastrophe resets the biosphere to a lower complexity.
If you look only at a small part of the biosphere, like one species of living beings, it is extremely common to see it evolve to become simpler, not more complex, because a simpler structure is usually optimal under constant environmental conditions; more complex structures are mainly beneficial for avoiding extinction when the environmental conditions change.
Seriously, phrases like “selection for function”, unified theories of biology and physics, and big ideas about the second law of thermodynamics are major red flags.
Only if that system isn’t already in thermodynamic equilibrium. A closed system that reaches thermodynamic equilibrium has maximum entropy.
Why the universe as a whole didn't start out in thermodynamic equilibrium, i.e. at maximum entropy, is something we don't understand.
https://en.wikipedia.org/wiki/Black_hole_cosmology
I'm partial to the hypothesis that our universe is actually a giant black hole in some kind of larger universe. The Big Bang was really the formation of our universe's event horizon. Cosmic inflation is the result of stuff falling into our universe, adding to its mass-energy -- there is no dark energy, our universe is just accreting mass-energy from something larger.
As for what the larger universe looks like -- in this model it may be impossible to know because the event horizon is impenetrable. It could be a much larger universe or it could be something else, like a higher dimensional one.
We can hardly know that, can we? Water and carbon are abundant.