Posted by elonlit 4 days ago

A Theory of Deep Learning(elonlit.com)
240 points | 59 comments
r0ze-at-hn 2 days ago|
Linking to the paper: https://arxiv.org/pdf/2605.01172, which is also a fantastic read; the application to deep learning is good. It does a lot of cross-mapping, and a bunch of older ideas show up here under new names, worth calling out for those with those backgrounds:

"Cumulative Dissipation Gramian" Ws = Observability Gramian (from Control Theory). For example the spectral cutoff is exactly the Hankel singular value truncation from model reduction.

"Signal Channel" / "Reservoir" is Controllable/Observable vs. Uncontrollable/Unobservable Subspaces. Using Adamjan-Arov-Krein (AAK) theory gives the optimal nonlinear reduced model answering the optimal compression question.

"Drift–Diffusion Separation" is Freidlin-Wentzell Large Deviation Theory. They can predict "grokking" time from the FW action.

"Population-Risk Gate" is Quantum Weak Value / Postselection (Aharonov)

So for the follow-up problems:

Control theory gives the truncation error bounds for model compression. Large deviation theory gives the grokking time predictions. Quantum measurement theory gives the imaginary preconditioners. Information geometry gives the optimal continuous relaxation of the gate.

Some nice implications for new ways of doing things, which are good to see formalized here:

Old: Pick an architecture, hope it generalizes. New: Design the architecture to maximize observability Gramian rank. (Honestly, we pull a lot from control theory here.)

Old: Use a validation set to detect overfitting. New: Monitor the λ(Ws) spectrum during training; no validation set needed.

Old: Prune post hoc based on magnitude. New: Prune during training based on ker(Ws) membership.

Old: Fixed learning rate. New: Spectral learning rate.
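
A toy numpy sketch of that Hankel singular value cutoff, for anyone who wants to see the control-theory object concretely (standard balanced-truncation material with made-up matrices, nothing taken from the paper):

    import numpy as np

    # For a stable discrete-time LTI system x_{k+1} = A x_k + B u_k, y_k = C x_k,
    # the Hankel singular values are sqrt(eig(Wc @ Wo)), with Wc/Wo the
    # controllability/observability Gramians. States below a spectral cutoff can
    # be truncated with bounded error. A, B, C here are random toy matrices.
    rng = np.random.default_rng(0)
    n = 6
    A = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)  # keep spectral radius < 1
    B = rng.standard_normal((n, 2))
    C = rng.standard_normal((1, n))

    def gramian(A, M, steps=500):
        # W = sum_k A^k (M M^T) (A^T)^k; the series converges because A is stable.
        W = np.zeros((A.shape[0], A.shape[0]))
        term = M @ M.T
        for _ in range(steps):
            W += term
            term = A @ term @ A.T
        return W

    Wc = gramian(A, B)        # controllability Gramian
    Wo = gramian(A.T, C.T)    # observability Gramian
    hankel_sv = np.sqrt(np.abs(np.linalg.eigvals(Wc @ Wo)))
    print(np.sort(hankel_sv)[::-1])  # keep modes above the cutoff, drop the rest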

johnthescott 1 day ago|
> We present a non-asymptotic theory of generalization

What is a non-elephant animal (to paraphrase Stan Ulam)?

refulgentis 3 days ago||
This is a beautifully written way of saying “Some parts of what the network memorizes affect test behavior, and some don’t.” But that’s not a theory of deep learning; a grand unified theory would explain why that happens.

We're given a signal channel and a reservoir. Signal lives in the channel, noise lives in the reservoir, and the reservoir supposedly doesn’t show up at test time.

Okay, but then the question is: why would SGD put the right things in the right bucket?

If the answer is “because the reservoir is defined as the stuff that doesn’t transfer to test,” then this is close to circular.

The Borges/Lavoisier stuff is a tell. “We have unified the field” rhetoric should come after nontrivial predictions and results. Claiming to solve benign overfitting, double descent, grokking, implicit bias, computing population risk from training alone, avoiding validation sets, and, last but not least, skipping training by analytically jumping to the end, is worth 6 theory papers, 3 NeurIPS winners, and a $10B startup. Let's get some results before we tell everyone we unified the field. :) I hope you're right.

Chance-Device 2 days ago||
> why would SGD put the right things in the right bucket?

Think of it as a best fit curve and exceptions to that curve. The noise is essentially this set of exceptions that move points away from where they would otherwise fall on the curve.

Gradient descent wants to be able to make the smallest change that moves the most data points towards the curve. To do this it learns an arrangement where it can change, say, one parameter and have a bunch of points move at once. What does this correspond to? The big common patterns shared by many data points.

Most of the capacity gets soaked up modelling these sorts of common patterns, and after they have been learned the model starts adding exceptions that allow individual points to deviate from the curve.

Because they’re exceptions, they must not impact neighbouring points, or at least only ones within a very short distance from them. Otherwise they’re now driving the error higher by impacting more points than they should. So you end up with very narrow ranges of features that are able to trigger different sorts of noise.

How narrow they are is shaped by the training data: they’re exactly as narrow as needed not to raise the error, so assuming the total population has the same distribution, they don’t get hit. Much.

At least, that’s what I take away from it.

dwrodri 3 days ago|||
Admittedly probably some aggrandized boasting here, but I think empirical verification of that Adam modification alone would be a meaningful contribution, unless that's prior work?
317070 2 days ago||
A theory that skips parameter space and claims to explain grokking, yet comes up with an unexplained update rule, one that notably works at the per-parameter level by dropping the updates for most parameters.

I suspect there is going to be a lot of handwaving to actually go from eNTK to that new update rule.

I also doubt it helps in the non-grokking regime, given the focus of the theory, and that regime is where all the practical applications I have ever heard of live.

Don't get me wrong, I did enjoy reading this essay. It's well written and reasonably argued without going into details.

yorwba 2 days ago||
The handwaving required is just to assume a diagonal preconditioner; the optimal preconditioner under that constraint corresponds to the new update rule. (See Section F of the paper.) And of course a diagonal preconditioner works at the per-parameter level.
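
Roughly, a diagonal SNR gate on top of Adam can be sketched like this; the particular noise proxy and threshold below are illustrative guesses, not the paper's exact Section F rule:

    import torch

    # Sketch only: gate each parameter's Adam update on a per-parameter
    # signal-to-noise test built from the existing first/second moments.
    def snr_gated_adam_step(param, grad, state, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
        m, v = state["m"], state["v"]
        m.mul_(betas[0]).add_(grad, alpha=1 - betas[0])            # first moment ("signal")
        v.mul_(betas[1]).addcmul_(grad, grad, value=1 - betas[1])  # second moment
        noise = (v - m * m).clamp_min(0.0)                         # per-parameter variance proxy
        gate = (m * m > noise).float()                             # update only where signal > noise
        param.addcdiv_(m * gate, v.sqrt().add_(eps), value=-lr)    # bias correction omitted

    w = torch.zeros(10)
    state = {"m": torch.zeros_like(w), "v": torch.zeros_like(w)}
    snr_gated_adam_step(w, torch.randn(10), state)
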
hariseldom 2 days ago|||
These are the same complaints I had. It also felt like high-quality AI writing, possibly because of style choices like "Benign overfitting is noise sitting in the reservoir at interpolation. XYZ is ...", and because of how much it resembles the times I ended up with ChatGPT or Gemini producing very detailed and plausible reports about my own crackpot or vague-enough-to-be-useless ideas.
neosat 2 days ago|||
If that's the case, a way to test the theory and understanding (assuming some parts of the reservoir and signal channel can be reliably identified) would be to prune the high-confidence reservoir, significantly reducing the model size while still getting good predictions. I don't believe the authors mention this (though I skimmed rather than read the full paper in detail, so I may be wrong).
chermi 2 days ago|||
I find the landscape perspective very valuable when trying to understand NNs. Why does SGD find the right buckets? I am not super current, but this looked like the right trail to me -- https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.12...
robot-wrangler 2 days ago|||
> The Borges/Lavoisier stuff is a tell.

Nah, the softer stuff seems like valuable outreach / good science communication for people who aren't up for the math, including probably lots of software engineers who are sick of dumb debates in forums and are starting to dip into the real literature and listen to better authorities. More people should do this, really, since it's the only way to see past the marketing and hype from fully entrenched AI boosters or detractors. Neither of those groups is big on critical thinking, and they dominate most of the conversation.

Time/effort coming from experts who want to make things accessible is a gift! The paper is linked elsewhere in the thread if you want no-frills.

SubiculumCode 2 days ago||
I don't know the math, but this point was clear to me, and it screamed "crank", though I can't be sure of that because I am not learned enough to follow the math... but even I could tell the magnitude of the claim. Even just removing the need for validation sets would have epic consequences across many fields.
ks2048 3 days ago||
The relevant paper: "A Theory of Generalization in Deep Learning". https://arxiv.org/abs/2605.01172
pixelpoet 2 days ago|
I interpreted the kernel K of this paper as the BRDF in the Rendering Equation [0] and its familiar diffusion process (from light transport simulation, or really any integro-differential equation system); together with https://en.wikipedia.org/wiki/Neural_tangent_kernel I hope this paper might be accessible with some study.

[0] https://en.wikipedia.org/wiki/Rendering_equation
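
For anyone who wants to poke at the kernel object directly, here's a minimal empirical-NTK sketch (just the textbook definition K(x, x') = <df(x)/dθ, df(x')/dθ> on a tiny MLP, nothing specific to this paper):

    import torch
    import torch.nn as nn

    # Empirical NTK of a tiny scalar-output MLP: the Gram matrix of per-input
    # gradients of the output with respect to all parameters.
    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))
    params = list(net.parameters())

    def grad_wrt_params(x):
        # Flattened gradient of the scalar output at input x w.r.t. all parameters.
        out = net(x.unsqueeze(0)).squeeze()
        gs = torch.autograd.grad(out, params)
        return torch.cat([g.reshape(-1) for g in gs])

    xs = torch.randn(5, 2)                             # 5 toy inputs
    J = torch.stack([grad_wrt_params(x) for x in xs])  # (5, n_params) Jacobian
    K = J @ J.T                                        # empirical NTK Gram matrix
    print(K)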

arolihas 2 days ago||
Idk, to me this is just redescribing what deep neural networks do without actually explaining why anything happens. I guess it "unifies" things, but I am kinda over most unifying theories. Everything is Bayesian, everything is a graph or a group or some other fancy geometric structure, everything is a category. Ultimately the best framework is whatever is useful enough to explain what's happening in such a way that a practitioner can manipulate the model towards a desired outcome. In other words, where is the knob? The tool they share may be interesting, and I hope to play with it to see what happens at different levels of noise applied to the labels.
po1nt 2 days ago||
We're still in the room-sized-computers-that-only-scientists-understand era of neural networks. Knobs and buttons for nerds are slowly coming.
arolihas 2 days ago||
I agree, which is why it’s too early to make such grandiose claims about deep learning theory.
ipnon 2 days ago|||
A real theory would predict phenomena thus far unseen. We already know about this 4-part taxonomy.
T-A 2 days ago||
Did you also know about this?

Lastly, we derive an exact population-risk objective from a single training run with no validation data, for any architecture, loss, or optimizer, and prove that it measures precisely the noise in the signal channel. This objective reduces in practice to an SNR preconditioner on top of Adam, adding one state vector at no extra cost; it accelerates grokking by 5x, suppresses memorization in PINNs and implicit neural representations, and improves DPO fine-tuning under noisy preferences while staying 3x closer to the reference policy. [1]

[1] https://arxiv.org/abs/2605.01172

throwjjj 2 days ago||
[dead]
prideout 3 days ago||
This is a fascinating mathematical framework, but the post title might be a bit of an overreach. I often wonder if "a theory of deep learning" could exist that could be stated succinctly and that could predict (1) scaling laws and (2) the surprising reliability of gradient descent.

Note that I said "predict" not "describe". It feels like we're still in the era of Kepler, not Newton.

sdenton4 2 days ago||
I dunno... gradient descent is only really reliable with a big bag of tricks. Knowing good initializations is a starting point, but residual connections and batch/layer normalization go a very long way towards making it reliable.
hellohello2 2 days ago||
I agree; this is the correct way to see it, IMO. Instead of designing better optimizers, we designed easier parameterizations to optimize. The surprising part is that these parameterizations exist in the first place.
sigmoid10 2 days ago||
Gradient descent is mathematically the most efficient optimization strategy (save for some special functions) in high dimensions. This goes so far that people nowadays even believe it has to be used in the human brain [1], if only because every other method of updating the brain would be far too energy-inefficient. From that perspective, finding the right parameterization was all we ever needed to achieve AI.

[1] https://physoc.onlinelibrary.wiley.com/doi/full/10.1113/JP28...

scarmig 2 days ago|||
Even in supervised ML, pure gradient descent is not the most efficient optimization strategy. E.g., momentum is ubiquitous, and the updates it induces cannot be expressed as a gradient of some scalar loss. But the rotational non-gradient component of its updates substantially improves performance and convergence on the architectures we use.

The brain probably primarily uses something like TD learning for task learning, which is also not expressible as the gradient of any objective function. And though the paper mentions Hebbian learning, it's only for very particular network architectures (e.g., a single neuron, or symmetric connections) that you can treat its updates as the gradient of some energy function; those architectures aren't anything close to what we see in the brain.

sigmoid10 2 days ago||
Pure gradient descent is not what happens in either field, but momentum, for example, is just another quantity constructed from historical gradients. While it is unlikely that the brain runs backpropagation the way you see it implemented in modern ML (same goes for TD, btw), the core principle kind of needs to be the same from a pure large-scale, high-dimensional network-efficiency POV. On top of that, adaptive plasticity is almost by definition about estimating useful directions of change. The key insight here would be that the brain does gradient estimation quite cheaply, and modern ML can probably still learn a thing or two from it.
sdenton4 2 days ago|||
Taking a quick look at the paper...

Their claim isn't that the brain uses gradient descent, but that the direction of updates has (on average) positive inner product with the gradient. I expect this would also be true for (say) simulated annealing, yet we don't say that simulated annealing is gradient descent.

There's also a discussion missing of loss functions and how they relate to the updates - as far as I know, there's still no great notion of how the brain picks a global loss function, and no mechanism for backprop. In this paper, looking at a specific learning task, you can define a loss function extrinsically, allowing us to talk about the gradient, but how that relates to things happening in the brain is a big, big mystery.

jldugger 2 days ago||
[flagged]
smokel 3 days ago||
This essay seems to be related to the paper "There Will Be a Scientific Theory of Deep Learning" [1] which was discussed here recently [2].

[1] https://arxiv.org/pdf/2604.21691

[2] https://news.ycombinator.com/item?id=47893779

jhanschoo 2 days ago||
My intuitive understanding about double descent is that

1. Older ML models encoded a bias toward simplicity in their architecture and lack of expressivity, which aided interpolation.

2. Overparameterized models instead use regularization to nudge parameters toward simpler and more robust representations, while still memorizing the noise. In this manner, we still achieve generalization performance OOD. Moreover, the softer nudging and fundamental architectural expressivity allow for "data-specific" generalizations and representations that may be impossible to represent in small models.

3. At the critical point between the two regimes, the model is expressive enough to memorize, but not expressive enough to simultaneously do that and encode general patterns.

I wonder how this understanding translates to these researchers' models of deep learning.

hashta 2 days ago||
Interesting read. I remember the grokking paper when it came out, but I don't think I've ever seen that classic grokking loss curve with my own hands on real data. Curious if others have seen it more often in practice.
yorwba 2 days ago|
To get pure grokking, you need a model large enough to easily memorize the entire training data and keep training for a long time after memorization. In practice, you'll probably use a more realistically-sized model that might grok on some subset of the data, but not so strongly that it's extremely obvious.
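
A minimal sketch of what that setup looks like in practice (the usual modular-addition toy task; hyperparameters are illustrative guesses, not taken from any particular paper):

    import torch
    import torch.nn as nn

    # Small network that can easily memorize half of all (a + b) mod p pairs,
    # trained with weight decay far past the point of memorization.
    torch.manual_seed(0)
    p = 97
    pairs = torch.cartesian_prod(torch.arange(p), torch.arange(p))
    labels = (pairs[:, 0] + pairs[:, 1]) % p
    perm = torch.randperm(len(pairs))
    train_idx, test_idx = perm[:len(pairs) // 2], perm[len(pairs) // 2:]

    embed = nn.Embedding(p, 128)
    mlp = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, p))
    opt = torch.optim.AdamW(list(embed.parameters()) + list(mlp.parameters()),
                            lr=1e-3, weight_decay=1.0)
    loss_fn = nn.CrossEntropyLoss()

    def forward(idx):
        x = torch.cat([embed(pairs[idx, 0]), embed(pairs[idx, 1])], dim=-1)
        return mlp(x)

    for step in range(100_000):  # keep training long after train accuracy saturates
        opt.zero_grad()
        loss_fn(forward(train_idx), labels[train_idx]).backward()
        opt.step()
        if step % 1000 == 0:
            with torch.no_grad():
                acc = (forward(test_idx).argmax(-1) == labels[test_idx]).float().mean()
            print(step, acc.item())  # watch for a late jump in test accuracy
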
hashta 2 days ago||
I think I've trained models with #params >> #training examples for hundreds of epochs, but I still don't recall seeing that loss curve on real data. Curious if others have seen it with larger models or much longer runs.
kleiba2 2 days ago||
> This exact characterization is possible because in output space, training dynamics can be understood through a locally linear differential equation along the realized path, where dominant eigenmodes of the evolving kernel equilibrate exponentially fast. Forcing an optimizer to slowly step through these solved directions is highly inefficient and suggests a path to analytically jump to the final network state.

But at what computational cost?
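
For reference, even in the simplest fixed-kernel (linearized) version of that picture, the jump has a closed form but needs an eigendecomposition of the n x n train-set kernel, so roughly O(n^3) in training-set size. A toy numpy sketch under that fixed-kernel assumption (which the essay's evolving-kernel setting goes beyond):

    import numpy as np

    # Gradient-flow kernel regression: the train residual obeys
    # r(t) = exp(-eta * K * t) r(0), so each eigenmode equilibrates at its own
    # rate and you can "jump" to any time t after one eigendecomposition.
    rng = np.random.default_rng(0)
    n = 200
    X = rng.standard_normal((n, 5))
    y = np.sin(X[:, 0])
    K = np.exp(-np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))  # stand-in RBF kernel

    eta, t = 0.1, 1e4
    evals, evecs = np.linalg.eigh(K)                  # the O(n^3) step
    r0 = y.copy()                                     # residual at init (predictions start at 0)
    f_t = y - evecs @ (np.exp(-eta * evals * t) * (evecs.T @ r0))  # train predictions at time t
    print(np.abs(f_t - y).max())                      # what remains sits in the slow eigenmodes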

minimaltom 2 days ago|
> That is, if the batch signal on a parameter exceeds its leave-one-out noise, update it; if not, skip it. This is a one-line change to Adam that accelerates grokking by 5x, suppresses memorization in PINNs, and improves DPO fine-tuning, eliminating the need for validation sets entirely.

Does anyone understand the formula they give above this sentence? Is this just the classic "skip updating parameters with high gradient/loss variance across multiple batches/samples"?

yorwba 2 days ago|
What is classic about "skip updating parameters with high gradient/loss variance in multiple batches/samples"? Do you have a particular algorithm in mind that uses this heuristic?
minimaltom 1 day ago||
There have been multiple papers discussing how only updating parameters that have high agreement in update direction leads to less overfitting and better generalization. Lemme see if I can find 'em.

https://arxiv.org/abs/2411.16085 - set updates to 0 where there's disagreement in the sign of the parameter update - got accepted!

https://arxiv.org/pdf/2412.18052 - discard gradient updates from batches/minibatches that disagree, where "disagree" means exceeding a cosine-distance threshold (they solved for 0.97 or something being optimal)
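
Based purely on that description, the first one amounts to something like this (my paraphrase, not their exact algorithm):

    import torch

    # Zero the optimizer update wherever its sign disagrees with the current
    # gradient's sign, keeping only directions the two estimates agree on.
    def sign_masked_update(update, grad):
        agree = (torch.sign(update) == torch.sign(grad)).float()
        return update * agree

    u = torch.tensor([0.1, -0.2, 0.3])
    g = torch.tensor([0.5, 0.4, 0.2])
    print(sign_masked_update(u, g))  # middle entry zeroed: signs disagree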
