Posted by FillMaths 6 hours ago
As the "evidence" piles up in further mathematics, physics, and the interactions of the two, I have still never reached a settled view on the core question: are complex numbers a genuinely fundamental concept, or just a convenient tool for expressing and calculating a variety of things? It's more than a coincidence, for sure, but the philosophical part of my mind is not at ease with it.
I doubt anyone could make a reply to this comment that would make me feel any better about it. I do believe real numbers to be completely natural, but far greater mathematicians than I found them objectionable only a hundred years ago, and demonstrated that mathematics remains rich and nuanced even when you assume they don't exist in the form we think of them today.
Take R as an ordered field with its usual topology and ask for a finite-dimensional, commutative, unital R-algebra that is algebraically closed and admits a compatible notion of differentiation with reasonable spectral behavior. You essentially land in C, up to isomorphism. This is not an accident, but a consequence of how algebraic closure, local analyticity, and linearization interact. Attempts to remain over R tend to externalize the complexity rather than eliminate it, for example by passing to real Jordan forms, doubling dimensions, or encoding rotations as special cases rather than generic elements.
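One way to make "the theory reconstructs C" concrete: the rotation-scaling matrices [[a, -b], [b, a]] form a commutative subalgebra of the 2x2 real matrices isomorphic to C, and "remaining over R" means carrying those matrices around explicitly. A minimal sketch (the helper names `to_matrix` and `matmul` are my own, not standard):

```python
# Represent a + bi as the real 2x2 matrix [[a, -b], [b, a]].
# This commutative subalgebra of the 2x2 real matrices *is* C; real Jordan
# forms and doubled dimensions are the cost of refusing to name it.

def to_matrix(z):
    a, b = z.real, z.imag
    return ((a, -b), (b, a))

def matmul(m, n):
    # plain 2x2 matrix product
    return tuple(tuple(sum(m[i][k] * n[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

z, w = 2 + 3j, 1 - 4j

# The representation is multiplicative: M(z) M(w) == M(z * w).
assert matmul(to_matrix(z), to_matrix(w)) == to_matrix(z * w)

# i is literally the quarter-turn rotation matrix, and i*i = -1 is the
# statement that two quarter turns make a half turn.
i = to_matrix(1j)
assert matmul(i, i) == to_matrix(-1)
```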
More telling is the rigidity of holomorphy. The Cauchy-Riemann equations are not a decorative constraint; they encode the compatibility between the algebra structure and the underlying real geometry. The result is that a purely local condition acquires global force, with consequences like the identity theorem and strong maximum principles that have no honest analogue over R.
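The Cauchy-Riemann constraint is easy to spot-check numerically. A minimal sketch (the sample functions, sample point, and helper name `cr_residual` are my own choices): a holomorphic function satisfies u_x = v_y and u_y = -v_x, while z ↦ conj(z) visibly fails.

```python
# Finite-difference check of the Cauchy-Riemann equations
# u_x = v_y and u_y = -v_x, where u = Re f and v = Im f.
h = 1e-6
z0 = 0.7 + 0.3j

def cr_residual(f, z):
    fx = (f(z + h) - f(z - h)) / (2 * h)            # d/dx
    fy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)  # d/dy
    u_x, v_x = fx.real, fx.imag
    u_y, v_y = fy.real, fy.imag
    return abs(u_x - v_y) + abs(u_y + v_x)

assert cr_residual(lambda z: z * z, z0) < 1e-6       # holomorphic: residual ~ 0
assert cr_residual(lambda z: z.conjugate(), z0) > 1  # conj(z): CR fails badly
```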
I’m also skeptical of treating the reals as categorically more natural. R is already a completion, already non-algebraic, already defined via exclusion of infinitesimals. In practice, many constructions over R that are taken to be primitive become functorial or even canonical only after base change to C.
So while one can certainly regard C as a technical device, it behaves like a fixed point: impose enough regularity, closure, and stability requirements, and the theory reconstructs it whether you intend to or not. That does not make it metaphysically fundamental, but it does make it mathematically hard to avoid without paying a real structural cost.
I work in applied probability, so I'm forced to use many different tools depending on the application. My colleagues and I would consider ourselves lucky if what we're doing allows for an application of some properties of C, as the maths will tend to fall out so beautifully.
I get the same feeling when I think about monads, futures/promises, reactive programming that doesn't seem to actually watch variables (React.. cough), Rust's borrow checker existing when we have copy-on-write, that there's no realtime garbage collection algorithm that's been proven to be fundamental (like Paxos and Raft were for distributed consensus), having so many types of interprocess communication instead of just optimizing streams and state transfer, having a myriad of GPU frameworks like Vulkan/Metal/DirectX without MIMD multicore processors to provide bare-metal access to the underlying SIMD matrix math, I could go on forever.
I can talk about why tau is superior to pi (and what a tragedy it is that it's too late to rewrite textbooks) but I have nothing to offer in place of i. I can, and have, said a lot about the unfortunate state of computer science though: that internet lottery winners pulled up the ladder behind them rather than fixing fundamental problems to alleviate struggle.
I wonder if any of this is at play in mathematics. It sure seems like a lot of innovation comes from people effectively living in their parents' basements, while institutions have seemingly unlimited budgets to reinforce the status quo..
For complex numbers my gut feeling is yes, they do.
When you divide 2 collinear 2-dimensional vectors, their quotient is a real number a.k.a. scalar. When the vectors are not collinear, then the quotient is a complex number.
Multiplying a 2-dimensional vector with a complex number changes both its magnitude and its direction. Multiplying by +i rotates a vector by a right angle. Multiplying by -i does the same thing but in the opposite sense of rotation, hence the difference between them, which is the difference between clockwise and counterclockwise. Rotating twice by a right angle arrives in the opposite direction, regardless of the sense of rotation, therefore i*i = (-i)*(-i) = -1.
Both 2-dimensional vectors and complex numbers are included in the 2-dimensional geometric algebra, whose members have 2^2 = 4 components, which are the 2 components of a 2-dimensional vector together with the 2 components of a complex number. Unlike the complex numbers, the 2-dimensional vectors are not a field, because if you multiply 2 vectors the result is not a vector. All the properties of complex numbers can be deduced from those of the 2-dimensional vectors, if the complex numbers are defined as quotients, much in the same way how the properties of rational numbers are deduced from the properties of integers.
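As a sketch of that deduction, here is a minimal Python model of the 2-dimensional geometric algebra. The component ordering (scalar, e1, e2, e12) and the helper names `gp` and `vec` are my own choices; the point is that a product of two vectors has only scalar and bivector parts, i.e. it is a "complex number", and e12 plays the role of i.

```python
# 2D geometric algebra G(2): a multivector is (scalar, e1, e2, e12),
# with e1*e1 = e2*e2 = 1 and e12 = e1*e2, so e12*e12 = -1.

def gp(a, b):
    """Geometric product of two multivectors (s, x, y, b12)."""
    s1, x1, y1, b1 = a
    s2, x2, y2, b2 = b
    return (s1*s2 + x1*x2 + y1*y2 - b1*b2,
            s1*x2 + x1*s2 - y1*b2 + b1*y2,
            s1*y2 + y1*s2 + x1*b2 - b1*x2,
            s1*b2 + b1*s2 + x1*y2 - y1*x2)

def vec(x, y):
    return (0.0, x, y, 0.0)

# The product of two vectors is scalar + bivector, i.e. a "complex number":
u, v = vec(1, 2), vec(3, 1)
prod = gp(u, v)
assert prod[1] == prod[2] == 0          # no vector part survives
assert prod[0] == 5 and prod[3] == -5   # dot part 5, wedge part 1*1 - 2*3

# The quotient u/v = u * v^(-1), with v^(-1) = v / |v|^2 for a vector,
# is again purely scalar + bivector:
norm2 = 3*3 + 1*1
quot = gp(u, tuple(c / norm2 for c in v))
assert quot[1] == quot[2] == 0

# e12 squares to -1, exactly like i:
e12 = (0.0, 0.0, 0.0, 1.0)
assert gp(e12, e12) == (-1.0, 0.0, 0.0, 0.0)
```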
A similar relationship to that between 2-dimensional vectors and complex numbers exists between 3-dimensional vectors and quaternions. Unfortunately the discoverer of the quaternions, Hamilton, was confused by the fact that both vectors and quaternions have multiple components, and he believed that vectors and quaternions are the same thing. In reality, vectors and quaternions are distinct things, and the operations that can be done with them are very different. For many years during the 19th century this confusion prevented the correct use of quaternions and vectors in physics (as did the confusion between "polar" vectors and "axial" vectors, a.k.a. pseudovectors).
2. Topology: The fact that the complex numbers are 2D is essential to their fundamentality. One way I think about it is that, from the perspective of the real numbers, multiplication by -1 is a reflection through 0. But, from an "outside" perspective, you can rotate the real line by 180 degrees through some ambient space. Having a 2D ambient space is sufficient. (And rotating through an ambient space feels more physically "real" than reflecting through 0.) Adding or multiplying by nonzero complex numbers can always be performed as a continuous transformation inside the complex numbers.

And, given a number system that's 2D, you get a key topological invariant of closed paths that avoid the origin: winding number. This gives a 2D version of the Intermediate Value Theorem: if you have a continuous path between two closed loops with different winding numbers, then one of the intermediate closed loops must pass through 0.

A consequence of this is the fundamental theorem of algebra: for a degree-n polynomial f, when r is large enough, f(r*e^(i*t)) traces out for 0<=t<=2*pi a loop with winding number n, and when r=0 either f(0)=0 or f(r*e^(i*t)) traces out a loop with winding number 0; so if n>0 there's some intermediate r for which there's some t such that f(r*e^(i*t))=0.
So, I think the point is that 2D rotations and going around things are natural concepts, and very physical. Going around things lets you ensnare them. A side effect is that (complex) polynomials have (complex) roots.
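The winding-number argument can be watched happening numerically. A sketch (the sample polynomial f(z) = z^3 - z + 1 and the helper name `winding_number` are my own choices): at large r the loop winds 3 times, near r = 0 it winds 0 times, so some intermediate circle must pass through a root.

```python
import cmath
import math

def f(z):
    # hypothetical degree-3 sample polynomial
    return z**3 - z + 1

def winding_number(r, steps=4096):
    """Winding number around 0 of the loop f(r * e^(i t)), 0 <= t <= 2*pi."""
    total = 0.0
    prev = f(r)
    for k in range(1, steps + 1):
        cur = f(r * cmath.exp(2j * math.pi * k / steps))
        total += cmath.phase(cur / prev)  # accumulate small argument increments
        prev = cur
    return round(total / (2 * math.pi))

assert winding_number(10.0) == 3  # large r: the z^3 term dominates
assert winding_number(0.1) == 0   # small r: the loop stays near f(0) = 1
```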
Real numbers function as magnitudes or objects, while complex numbers function as coordinatizations: a choice of coordinates on structure that exists independently of them, e.g. rotations in SO(2) together with scaling. They are bookkeeping (a la double-entry accounting), not money.
Most real numbers are not even computable. Doesn't that give you pause?
There's this lack of rigor where people casually move "between" R and C as if a complex number without an imaginary component suddenly becomes a real number, and it's all because of this terrible "a + bi" notation. It's more like (a, b). You can't ever discard that second component, it's always there.
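The pair view is easy to make literal. A minimal sketch (the helper names `add` and `mul` are my own) of complex arithmetic on bare (a, b) tuples, with no i in sight and no way to "discard" the second component:

```python
# Complex numbers as ordered pairs, in the style of Hamilton's construction.

def add(z, w):
    return (z[0] + w[0], z[1] + w[1])

def mul(z, w):
    a, b = z
    c, d = w
    return (a*c - b*d, a*d + b*c)

# "i" is just the pair (0, 1); its square is (-1, 0): a pair whose second
# component happens to be 0, but still a pair, never promoted to a real.
i = (0, 1)
assert mul(i, i) == (-1, 0)
assert mul((3, 0), (4, 0)) == (12, 0)  # the embedded copy of the reals
```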
So in our everyday reality I think -1 and i exist the same way. I also think that complex numbers are fundamental/central in math, and in our world. They just have so many properties and connections to everything.
In my view, that isn’t even true for nonnegative integers. What’s the physical representation of the relatively tiny (compared to ‘most integers’) Graham’s number (https://en.wikipedia.org/wiki/Graham's_number)?
Back to the reals: in your view, do reals that cannot be computed have good physical representations?
I think these questions mostly only matter when one tries to understand their own relation to these concepts, as GP asked.
Which makes me wonder if complex numbers that show up in physics are a sign there are dimensions we can’t or haven’t detected.
I saw a demo one time of a projection of a kind of fractal into an additional dimension, as well as projections of Sierpinski cubes into two dimensions. Both blew my mind.
Even negative numbers and zero were objected to until a few hundred years ago, no?
They originally arose as a tool, but complex numbers are fundamental to quantum physics. The wave function is complex, and the Schrödinger equation does not make sense without them. They are part of the best description of reality we have.
If it doesn't differ, you are in the good company of great minds who have been unable to settle this over thousands of years and should therefore feel better!
More at SEP:
I believe even negative numbers had their detractors
Almost every other intuition, application, and quirk of them just pops right out of that statement. The extensions to the quaternions, etc. all end up described by a single consistent algebra.
It’s as if computer graphics was the first and only application of vector and matrix algebra and people kept writing articles about “what makes vectors of three real numbers so special?” while being blithely unaware of the vast space that they’re a tiny subspace of.
(I have a math degree, so I don't have any issues with C, but this is the kind of question that would have troubled me in high school.)
Complex numbers offer that resolution.
A better way to understand my point is: we need mental gymnastics to convert problems into equations. The imaginary unit, just like numbers generally, is a by-product of trying to fit problems onto paper. A notable example is Schrödinger's equation.
For example, reflections and chiral chemical structures. Rotations as well.
It turns out all things that rotate behave the same, which is what the complex numbers can describe.
Polynomial equations happen to be something where a rotation in an orthogonal dimension leaves new answers.
That is how they started, but mathematics becomes remarkably "better" and more consistent with complex numbers.
As you say, the Fundamental Theorem of Algebra relies on complex numbers.
Cauchy's Integral Theorem (and Residue Theorem) is a beautiful complex-only result.
As is the Maximum Modulus Principle.
The Open Mapping Theorem is true for complex functions, not real functions.
---
Are complex numbers really worse than real numbers? Transcendentals? Hippasus was drowned for the irrationals.
I'm not sure any numbers outside the naturals exist. And maybe not even those.
First, let's try differential equations, which are also the point of calculus:
Idea 1: The general study of PDEs uses Newton(-Kantorovich)'s method, which leads to solving only the linear PDEs, which can be held to have constant coefficients over small regions, which can be made into homogeneous PDEs, which are often of order 2, and which are then equivalent to Laplace's equation, the heat equation, or the wave equation. Solutions to Laplace's equation in 2D are locally the real parts of holomorphic functions. So complex numbers again.
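The 2D Laplace/holomorphy link is easy to spot-check: the real part of a holomorphic function, e.g. Re(z^3) = x^3 - 3xy^2, should have vanishing Laplacian. A sketch (the sample functions, sample point, and step size are my own choices):

```python
# Finite-difference check that u(x, y) = Re((x + iy)^3) = x^3 - 3*x*y^2
# satisfies Laplace's equation u_xx + u_yy = 0, while x^2 + y^2 does not.
h = 1e-4

def laplacian(u, x, y):
    # standard 5-point finite-difference Laplacian
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4 * u(x, y)) / (h * h)

harmonic = lambda x, y: x**3 - 3 * x * y**2  # Re(z^3): harmonic
not_harmonic = lambda x, y: x**2 + y**2      # Laplacian is exactly 4

assert abs(laplacian(harmonic, 0.5, 0.7)) < 1e-3
assert abs(laplacian(not_harmonic, 0.5, 0.7) - 4) < 1e-3
```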
Now algebraic closure, but better. Idea 2: Infinitary algebraic closure. Algebraic closure can be interpreted as saying that every polynomial, and hence every rational function, factorises into linear factors. We can think of the Mittag-Leffler Theorem and the Weierstrass Factorisation Theorem as asserting that this is true also for meromorphic functions, which behave like rational functions in some infinitary sense. So the algebraic closure property of C holds in an infinitary sense as well. This makes sense since C has a natural metric and a nice topology.
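As a numerical taste of that infinitary factorisation, the Weierstrass product for sine factors it over its integer roots: sin(pi*z) = pi*z * prod over n >= 1 of (1 - z^2/n^2). A sketch (the term count and helper name `sin_product` are my own choices):

```python
import math

def sin_product(z, terms=50000):
    """Truncated Weierstrass product for sin(pi*z)."""
    p = math.pi * z
    for n in range(1, terms + 1):
        p *= 1 - (z * z) / (n * n)  # one linear-factor pair per integer root
    return p

# The truncated "infinite factorisation" matches the transcendental function:
z = 0.3
assert abs(sin_product(z) - math.sin(math.pi * z)) < 1e-4
```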
Next, the general theory of fields. Idea 3: Fields of characteristic 0. Every algebraically closed field of characteristic 0 is isomorphic to R[√-1] for some real-closed field R. Tarski's theorem (proved via Tarski-Seidenberg quantifier elimination) says that every first-order statement in the language of ordered fields {+, -, ×, <} which is true over the reals is also true over every real-closed field.
I think maybe differential geometry can provide some help here. Idea 4: Conformal geometry in 2D. A conformal manifold in 2D is locally biholomorphic to the unit disk in the complex numbers.
Idea 5: This one I'm not 100% sure about. Take a smooth manifold M with a smoothly varying bilinear form B ∈ T*M ⊗ T*M. When B is broken into its symmetric part and skew-symmetric part, if we assume that both parts are never zero, B can then be seen as an almost complex structure, which in turn naturally identifies the manifold M as one over C.

It feels a bit like the article's trying to extend some legitimate debate about whether fixing i versus -i is natural to push this other definition as an equal contender, but there's hardly any support offered. I expect the last-place 28% poll showing, if it does reflect serious mathematicians at all, is those who treat the topological structure as a given or didn't think much about the implications of leaving it out.
I showed various colleagues. Each one would ask me to demonstrate the equivalence to their preferred presentation, then assure me "nothing to see here, move along!" that I should instead stick to their convention.
Then I met with Bill Thurston, the most influential topologist of our lifetimes. He had me quickly describe the equivalence between my form and every other known form, effectively adding my node to a complete graph of equivalences he had in his muscle memory. He then suggested some generalizations, and proposed that circle packings would prove to be important to me.
Some mathematicians are smart enough to see no distinction between any of the ways to describe the essential structure of a mathematical object. They see the object.
The algebraic conception, with its wild automorphisms, exhibits a kind of multiplicative chaos — small changes in perspective (which automorphism you apply) cascade into radically different views of the structure. Transcendental numbers are all automorphic with each other; the structure cannot distinguish e from π. Meanwhile, the analytic/smooth conception, by fixing the topology, tames this chaos into something with only two symmetries. The topology acts as a damping mechanism, converting multiplicative sensitivity into additive stability.
I'll just add to that that if transformers are implementing a renormalization group flow, then the models' failure on the automorphism question is predictable: systems trained on compressed representations of mathematical knowledge will default to the conception with the lowest "synchronization" cost — the one most commonly used in practice.
https://www.symmetrybroken.com/transformer-as-renormalizatio...
This is a very interesting question, and a great motivator for Galois theory, kind of like a Zen koan. (e.g. "What is the sound of one hand clapping?")
But the question is inherently imprecise. As soon as you make a precise question out of it, that question can be answered trivially.
One of the roots is 1; choosing either adjacent one as a privileged group generator means choosing whether to draw the same complex plane clockwise or counterclockwise.
1) Exactly one C
2) Exactly two isomorphic Cs
3) Infinitely many isomorphic Cs
It's not really the question of whether i and -i are the same or not. It's the question of whether this question arises at all and in which form.
Haven’t thought it through so I’m quite possibly wrong but it seems to me this implies that in such a situation you can’t have a coordinate view. How can you have two indistinguishable views of something while being able to pick one view?
That's not the interesting part. The interesting part is that I thought everyone is the same, like me.
It was a big and surprising revelation that people love counting or algebra in just the same way I feel about geometry (not the finite kind) and feel awkward in the kind of mathematics that I like.
It's part of the reason I don't at all get the hate that school Calculus gets. It's so intuitive and beautifully geometric, what's not to like? That's usually my first reaction. Usually followed by disappointment and sadness: oh no, they are contemplating throwing such a beautiful part away.
The obsession with rigor that later developed -- while necessary -- is really an "advanced topic" that shouldn't displace learning the intuition and big picture concepts. I think math up through high school should concentrate on the latter, while still being honest about the hand-waving when it happens.
Calculus works... because it was almost designed for Mechanics. While the machine is getting input, you have output. When it has finished getting input, all the output you get yields some value, yes, but limits are best understood not through the result, but through the process (what the functions do).

You aren't sending 0 coins to a machine, are you? You send X coins, with X going to 0. The machine works from 2 down to 0, but 0 itself is not included, because it is not part of the changing process; it's the end.

Limits are about ranges of quantities over something.
But instead calculus is taught from fundamentals, building up from sequences. And a lot of complexity and hate comes from all those "technical" theorems that you need to make that jump from sequences to functions. E.g. things like "you can pick a converging subsequence from any bounded sequence".
In Maths classes, we started with functions. Functions as list of pairs, functions defined by algebraic expressions, functions plotted on graph papers and after that limits. Sequences were peripherally treated, just so that limits made sense.
Simultaneously, in Physics classes we were being taught using infinitesimals, with the caveat that "you will see this done more formally in your maths classes, but for intuition, infinitesimals will do for now".
(attributed to Jerry Bona)
There's no "intend to". The complex numbers are what they are regardless of us; this isn't quantum mechanics where the presence of an observer somehow changes things.
(Yes, mathematicians really use it. It makes parity a simpler polynomial than the normal assignment).