Posted by helloplanets 1 day ago
https://www.ft.com/content/e5245ec3-1a58-4eff-ab58-480b6259a... (https://archive.md/5eZWq)
There are a lot more degrees of freedom in world models.
LLMs are fundamentally capped because they only learn from static text -- human communications about the world -- rather than from the world itself, which is why they can remix existing ideas but find it all but impossible to produce genuinely novel discoveries or inventions. A well-funded and well-run startup building physical world models (grounded in spatiotemporal understanding, not just language patterns) would be attacking what I see as the actual bottleneck to AGI. Even if they succeed only partially, they may unlock the kind of generalization and creative spark that current LLMs structurally can't reach.
Even with continuous backpropagation and "learning" that enriches the training data (so-called online learning), the limitations will not disappear. LLMs will not be able to draw conclusions about the world from fact and deduction. They only consider what is likely given their training data. They will not foresee or anticipate events that are unlikely or absent in their training data but are bound to happen due to real-world circumstances. They are not intelligent in that way.
Whether humans always apply that much effort to conclude these things is another question. The point is that humans are fundamentally capable of doing it, while LLMs structurally are not.
The problems are structural/architectural. I think it will take another 2-3 major leaps in architecture before these AI models reach human-level general intelligence, if they ever reach it. So far they can often merely "fake it" when things are statistically common in their training data.
Kahneman’s whole framework points the same direction. Most of what people call “reasoning” is fast, associative, pattern-based. The slow, deliberate, step-by-step stuff is effortful and error-prone, and people avoid it when they can. And even when they do engage it, they’re often confabulating a logical-sounding justification for a conclusion they already reached by other means.
So maybe the honest answer is: the gap between what LLMs do and what most humans do most of the time might be smaller than people assume. The story that humans have access to some pure deductive engine and LLMs are just faking it with statistics might be flattering to humans more than it’s accurate.
Where I’d still flag a possible difference is something like adaptability. A person can learn a totally new formal system and start applying its rules, even if clumsily. Whether LLMs can genuinely do that outside their training distribution or just interpolate convincingly is still an open question. But then again, how often do humans actually reason outside their own “training distribution”? Most human insight happens within well-practiced domains.
I'd never heard of the Wason selection task; I looked it up and could tell the right answer right away. But I can also tell you why: because I have some familiarity with formal logic and can, in your words, pattern-match the gotcha that "if x then y" is distinct from "if not x then not y".
In contrast to you, this doesn't make me believe that people are bad at logic or don't really think. It tells me that people are unfamiliar with "gotcha" formalities introduced by logicians that don't match the everyday use of language. If you added a simple clarification to the problem, such as "Note that in this context, 'if' only means that...", most people would almost certainly answer it correctly.
Mind you, I'm not arguing that human thinking is necessarily more profound than anything LLMs could ever do. However, judging from the output, LLMs have a tenuous grasp on reality, so I don't think that reductionist arguments along the lines of "humans are just as dumb" are fair. There's a difference that we don't really know how to overcome.
> You are shown a set of four cards placed on a table, each of which has a number on one side and a color on the other. The visible faces of the cards show 3, 8, blue and red. Which card(s) must you turn over in order to test that if a card shows an even number on one face, then its opposite face is blue?
Confusion over the meaning of 'if' can only explain why people select the Blue card; it can't explain why people fail to select the Red card. If 'if' meant 'if and only if', then it would still be necessary to check that the Red card didn't have an even number. But according to Wason[0], "only a minority" of participants select (the study's equivalent of) the Red card.
[0] https://web.mit.edu/curhan/www/docs/Articles/biases/20_Quart...
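The "only the even card and the non-blue card matter" logic can be checked mechanically. A minimal Python sketch (card faces taken from the quoted puzzle; the helper name is my own):

```python
visible = ["3", "8", "blue", "red"]

def must_flip(face):
    """A card must be flipped iff some hidden face could falsify the rule
    'if one face shows an even number, the opposite face is blue'."""
    if face.isdigit():
        # A visible even number could hide a non-blue back.
        return int(face) % 2 == 0
    # A visible non-blue colour could hide an even number.
    return face != "blue"

flips = [f for f in visible if must_flip(f)]
```

Only "8" (modus ponens) and "red" (modus tollens) survive; "3" and "blue" can never falsify the rule, which is exactly the point about the Red card above.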
We keep benchmarking models against the best humans and the best human institutions - then when someone points out that swarms, branching, or scale could close the gap, we dismiss it as "cheating". But that framing smuggles in an assumption that intelligence only counts if it works the way ours does. Nobody calls a calculator a cheat for not understanding multiplication - it just multiplies better than you, and that's what matters.
LLMs are a different shape of intelligence. Superhuman on some axes, subpar on others. The interesting question isn't "can they replicate every aspect of human cognition" - it's whether the axes they're strong on are sufficient to produce better than human outcomes in domains that matter. Calculators settled that question for arithmetic. LLMs are settling it for an increasingly wide range of cognitive work. The fact that neither can flip a burger is irrelevant.
Humans don't have a monopoly on intelligence. We just had a monopoly on generality and that moat is shrinking fast.
We are inverting the "God of the gaps" into an "LLM of the gaps", where gaps in LLM capabilities are considered inherently negative and limiting.
And the question "Are these things really intelligent?" is just a proxy for that.
And we are interested in that question because answering it is necessary to justify the massive investment these things are getting now. It is quite easy to look at these things and conclude that they will continue to progress without any limit.
But that would be like looking at data compression at the time of its conception, and thinking that it is only a matter of time we can compress 100GB into 1KB..
We live in a time of scams that are obvious if you take a second look. When something requires much deeper scrutiny, it is possible to generate a much larger bubble.
> and that moat is shrinking fast..
The point is that in reality it is not. It is just appearance. If you consider how these things work, there is no justification for this conclusion.
I have said this elsewhere, but the problem of hallucination itself, along with the requirement of re-training, is the smoking gun that these things are not intelligent in ways that would justify these massive investments.
Agreed. More broadly, classical logic isn't the only logic out there. Many logics will differ on the meaning of implication if x then y. There's multiple ways for x to imply y, and those additional meanings do show up in natural language all the time, and we actually do have logical systems to describe them, they are just lesser known.
Mapping natural language into logic often requires context that lies outside the words that were written or spoken. We need to represent in formulas what people actually meant, rather than just what they wrote. Indeed, the same sentence can sometimes be ambiguous, while a logical formula never is.
As an aside, I wanna say that material implication (that is, the "if x then y" of classical logic) deeply sucks, or rather, an implication in natural language very rarely maps cleanly onto material implication. Having an implication "if x then y" be vacuously true when x is false is something usually associated with people who smirk at clever wordplay, rather than something people actually mean when they say "if x then y".
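The vacuous-truth complaint can be made concrete with two truth tables (a minimal sketch; the variable names are mine):

```python
rows = [(x, y) for x in (False, True) for y in (False, True)]

# Material implication: "if x then y" is (not x) or y, vacuously true
# whenever the antecedent x is false.
material = {(x, y): (not x) or y for x, y in rows}

# The reading many people apply in everyday speech: "if x then y, and
# if not x then not y", i.e. the biconditional.
biconditional = {(x, y): x == y for x, y in rows}
```

The two readings disagree exactly on the row where x is false and y is true, which is also where the Wason task trips people up.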
Though note that, as GP said, people famously do much better on the Wason selection task when it's framed in a social context. That at least partially undermines your theory that it's a lack of familiarity with the terminology of formal logic.
It's as simple as that. In common use, "if x then y" frequently implies "if not x then not y". Pretending that it's some sort of a cognitive defect to interpret it this way is silly.
Some references on that
https://en.wikipedia.org/wiki/Thinking,_Fast_and_Slow
https://thedecisionlab.com/reference-guide/philosophy/system...
System 1 really looks like an LLM (indeed, completing a phrase is an example of what it can do, like "you either die a hero, or you live long enough to become the _"). It's largely unconscious and runs all the time, pattern matching on random stuff.
System 2 is something else and looks like a supervisor system, a higher level stuff that can be consciously directed through your own will
But the two systems run at the same time and reinforce each other
Your point rings true with most human reasoning most of the time. Still, at least some humans do have the capability to run that deductive engine, and it seems to be a key part (though not the only part) of scientific and mathematical reasoning. Even informal experimentation and iteration rest on deductive feedback loops.
That's what I said. Backpropagation cannot be enough; that's not how neurons work in the slightest. When you put biological neurons in a Pong environment they learn to play not through some kind of loss or reward function; they self-organize to avoid unpredictable stimulation. As far as I know, no architecture learns in such an unsupervised way.
https://www.sciencedirect.com/science/article/pii/S089662732...
This sounds very similar to me to what neurons do (avoiding unpredictable stimulation):
f(x)=y' => loss(y',y) => how good was my prediction? Train f through backprop with that error.
A model trained with reinforcement learning is more similar to this, where m(y) is the resulting world state of taking an action y the model predicted.
f(x)=y' => m(y')=z => reward(z) => how good was the state I was in based on my actions? Train f with an algorithm like REINFORCE with the reward, as the world m is a non-differentiable black-box.
A group of neurons is more like predicting the resulting world state of taking my action, g(x,y), and trying to learn by both tuning g and the action taken, f(x).
f(x)=y' => m(y')=z => g(x,y)=z' => loss(z,z') => how predictable was the results of my actions? Train g normally with backprop, and train f with an algorithm like REINFORCE with negative surprise as a reward.
After talking with GPT5.2 for a little while, it seems like Curiosity-driven Exploration by Self-supervised Prediction[1] might be an architecture similar to the one I described for neurons? But with the twist that f is rewarded by making the prediction error bigger (not smaller!) as a proxy for "curiosity".
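The third scheme can be sketched end to end in a toy setting. Everything below (a linear world m, linear f and g, the learning rates) is my own illustrative assumption, not a claim about biological neurons or any published architecture: g is trained by backprop on prediction error, and f by REINFORCE with negative surprise as the reward.

```python
import numpy as np

rng = np.random.default_rng(0)

W_true = rng.normal(size=(3, 2))    # the world's hidden dynamics: m(y) = W_true @ y
W_g = np.zeros((3, 2))              # forward model g(y) = W_g @ y
W_f = np.zeros((2, 4))              # policy f(x) = W_f @ x (mean of a Gaussian)
sigma, lr_g, lr_f = 1.0, 0.05, 0.001

x = rng.normal(size=4)              # one fixed observation for the demo
surprises = []
for _ in range(220):
    mu = W_f @ x
    y = mu + sigma * rng.normal(size=2)   # f(x) = y': sample an action
    z = W_true @ y                        # m(y') = z: the world responds
    err = (W_g @ y) - z                   # g's prediction error, loss(z, z')
    surprise = float(err @ err)
    W_g -= lr_g * np.outer(err, y)        # train g by gradient descent (backprop)
    # Train f by REINFORCE, using negative surprise as the reward signal.
    W_f += lr_f * (-surprise) * np.outer((y - mu) / sigma**2, x)
    surprises.append(surprise)

early, late = np.mean(surprises[:20]), np.mean(surprises[-20:])
```

As g learns the world's response, the surprise signal collapses toward zero; flipping the sign of the reward would give the "curiosity" variant that seeks states g predicts badly.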
I guess I just always thought it was obvious that you can't do better than nature. You can do different things, sure, but if a society of unique individuals wasn't the most effective way of making progress, nature itself would not have chosen it.
So in a way I think Yann is smart because he got money, but in a way I think he's a fucking idiot if he can't see just how very, very, very far we are from competing with organic intelligence.
Our training data is a lot more diverse than an LLM's. We also leverage our senses as a carrier for communicating abstract ideas, using audio and visual channels that may or may not be grounded in reality. We have TV shows, video games, programming languages and all sorts of rich and interesting things we can engage with that do not reflect our fundamental reality.
Like LLMs, we can hallucinate while we sleep or we can delude ourselves with untethered ideas, but UNLIKE LLMs, we can steer our own learning corpus. We can train ourselves with our own untethered “hallucinations” or we can render them in art and share them with others so they can include it in their training corpus.
Our hallucinations are often just erroneous models of the world. When we render it into something that has aesthetic appeal, we might call it art.
If the hallucination helps us understand some aspect of something, we call it a conjecture or hypothesis.
We live in a rich world filled with rich training data. We don’t magically anticipate events not in our training data, but we’re also not void of creativity (“hallucinations”) either.
Most of us are stochastic parrots most of the time. We’ve only gotten this far because there are so many of us and we’ve been on this earth for many generations.
Most of us are dazzled and instinctively driven to mimic the ideas that a small minority of people “hallucinate”.
There is no shame in mimicking or being a stochastic parrot. These are critical features that helped our ancestors survive.
This is critical. We have some degree of attentional autonomy. And we have a complex tapestry of algorithms running in thalamocortical circuits that generate “Nows”. Truncation commands produce sequences of acts (token-like products).
Can you be a bit more specific at all bounds? Maybe via an example?
So my question is: when is there enough training data that you can handle 99.99% of the world?
Whoever cracks the continuous customized (per user, for instance) learning problem without just extending the context window is going to be making a big splash. And I don't mean cheats and shortcuts, I mean actually tuning the model based on received feedback.
The user wouldn’t know if the continuous learning came from the context or the model retrained. It wouldn’t matter.
Continuous learning seems to be a compute and engineering problem.
My solution is to have this massive 'boot up' prompt but it becomes extremely tedious to maintain.
From his point of view, there is not much research left on LLMs. Sure, we can still improve them a bit with engineering around the edges, but he's more interested in basic research.
So I do buy his idea. But I disagree that you need world models to get to human level capabilities. IMO there's no fundamental reason why models can't develop human understanding based on the known human observations.
While I suspect latter is a real problem (because all mammal brains* are much more example-efficient than all ML), the former is more about productisation than a fundamental thing: the models can be continuously updated already, but that makes it hard to deal with regressions. You kinda want an artefact with a version stamp that doesn't change itself before you release the update, especially as this isn't like normal software where specific features can be toggled on or off in isolation of everything else.
* I think. Also, I'm saying "mammal" because of an absence of evidence (to my *totally amateur* skill level) not evidence of absence.
The fundamental difference is that physical neurons have a discrete on/off activation, while digital "neurons" in a network are merely continuous differentiable operations. They also don't have a notion of spike-timing dependence to avoid overwriting activations that weren't related to an outcome. There are things like reward decay over time, but this applies to the signal at a very coarse level; updates are still scattered across almost the entire system with every training example.
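The on/off distinction can be illustrated with a leaky integrate-and-fire neuron, a standard textbook abstraction (the constants below are arbitrary): the membrane potential integrates input continuously, but the output is an all-or-nothing spike, unlike the smooth graded activations in ANNs.

```python
def lif(inputs, tau=10.0, threshold=1.0):
    """Leaky integrate-and-fire: return the time steps at which spikes occur."""
    v, spikes = 0.0, []
    for t, i in enumerate(inputs):
        v += (-v / tau) + i      # leak toward zero, then integrate the input
        if v >= threshold:
            spikes.append(t)     # discrete, all-or-nothing spike
            v = 0.0              # reset after firing
    return spikes
```

A constant sub-threshold input produces a sparse spike train rather than a graded value, which is the discreteness the comment is pointing at.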
As for the "just put a vision LLM in a robot body" suggestion: People are trying this (e.g. Physical Intelligence) and it looks like it's extraordinarily hard! The results so far suggest that bolting perception and embodiment onto a language-model core doesn't produce any kind of causal understanding. The architecture behind the integration of sensory streams, persistent object representations, and modeling time and causality is critically important... and that's where world models come in.
I think this is true to some extent: we like our tools to be predictable. But we've already made one jump by going from deterministic programs to stochastic models. I am sure that the moment a self-evolving AI shows up that clears the "useful enough" threshold, we'll make that jump as well.
I also don’t think there is a reason to believe that self-learning models must be unpredictable.
And generally:
> I want to know the model is exactly the same as it was the last time I used it.
What exactly does that gain you, when the overall behavior is still stochastic?
But still, if it's important to you, you can get the same behavior by taking a model snapshot once we crack continuous learning.
Ultimately, we still have a lot to learn and a lot of experiments to do. It’s frankly unscientific to suggest any approaches are off the table, unless the data & research truly proves that. Why shouldn’t we take this awesome LLM technology and bring in more techniques to make it better?
A really, really basic example is chess. Current top AI models still don't know how to play it (https://www.software7.com/blog/ai_chess_vs_1983_atari/). The models are surely trained on source material that includes chess rules, and even high-level chess games. But the models are not learning how to play chess correctly. They don't have a model of how chess actually works; they only have a non-deterministic prediction based on what they've seen, even after being trained on more data about the topic than any chess novice has ever seen. And this is probably one of the easiest things for AI to simulate: very clear, brief rules, a small problem space, no hidden information. But it can't handle the massive decision space, because its prediction isn't based on the actual rules, just on "things that look similar".
(And yeah, I’m sure someone could build a specific LLM or agent system that can handle chess, but the point is that the powerful general purpose models can’t do it out of the box after training.)
Maybe more training & self-learning can solve this, but it’s clearly still unsolved. So we should definitely be experimenting with more techniques.
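The "very clear rules, small problem space" point is easy to make concrete: legality in chess is a deterministic computation, not a likelihood. A toy sketch for a lone knight (ignoring every other piece; the function is my own minimal example):

```python
def knight_moves(square):
    """All squares a lone knight on `square` (e.g. "g1") may move to."""
    f, r = ord(square[0]) - ord("a"), int(square[1]) - 1   # file, rank as 0..7
    deltas = [(1, 2), (2, 1), (2, -1), (1, -2),
              (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return sorted(chr(ord("a") + f + df) + str(r + dr + 1)
                  for df, dr in deltas
                  if 0 <= f + df < 8 and 0 <= r + dr < 8)   # stay on the board
```

Every legal move follows from the rule; a model that instead predicts "moves that look similar" to its training games has no such guarantee.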
I mean, sure. But do world models the way LeCun proposes them solve this? I don't think so. JEPAs are just an unsupervised machine learning model at the end of the day; they might end up being better than just autoregressive pretraining on text+images+video, but they are not magic. For example, if you train a JEPA model on data of orbital mechanics, will it learn actually sensible algorithms to predict the planets' motions, or will it just learn a mix of heuristics?
I like how people are accepting this dubious assertion that Einstein would be "useful" if you surgically removed his hippocampus and engaging with this.
It also calls this Einstein an AGI rather than a disabled human???
"Reading, after a certain age, diverts the mind too much from its creative pursuits. Any man who reads too much and uses his own brain too little falls into lazy habits of thinking".
-- Albert Einstein
But one might say that the brain is not lossless ... True, good point. But in what way is it lossy? Can that be simulated well enough to learn an Einstein? What gives events significance is very subjective.
What current LLMs lack is inner motivation to create something on their own without being prompted. To think in their free time (whatever that means for batch, on demand processing), to reflect and learn, eventually to self modify.
I have a simple brain, limited knowledge, limited attention span, limited context memory. Yet I create stuff based on what I see and read online. Nothing special; sometimes it's based more on someone else's project, sometimes on my own ideas, which I have no doubt aren't that unique among 8 billion other people. Yet consulting with AI provides me with more ideas applicable to my current vision of what I want to achieve. Sure, it's mostly based on generally known (not always known to me) good practices. But my thoughts work the same way, only more limited by what I have slowly learned so far in my life.
Virtual simulations are not substitutable for the physical world. They are fundamentally different theory problems that have almost no overlap in applicability. You could in principle create a simulation with the same mathematical properties as the physical world but no one has ever done that. I'm not sure if we even know how.
Physical world dynamics are metastable and non-linear at every resolution. The models we do build are created from sparse irregular samples with large error rates; you often have to do complex inference to know if a piece of data even represents something real. All of this largely breaks the assumptions of our tidy sampling theorems in mathematics. The problem of physical world inference has been studied for a couple decades in the defense and mapping industries; we already have a pretty good understanding of why LLM-style AI is uniquely bad at inference in this domain, and it mostly comes down to the architectural inability to represent it.
Grounded estimates of the minimum quantity of training data required to build a reliable model of physical world dynamics, given the above properties, run to many exabytes. This data exists, so that is not a problem. The models will be orders of magnitude larger than current LLMs. Even if you solve the computer science and theory problems around representation so that learning and inference are efficient, few people are prepared for the scale of it.
(source: many years doing frontier R&D on these problems)
What do you mean by that? Simulating physics is a rich field, which incidentally was one of the main drivers of parallel/super computing before AI came along.
Reconstructing ground truth from these measurements, which is what you really want to train on, is a difficult open inference problem. The idiosyncratic effects induce large changes in the relationships learnable from the data model. Many measurements map to things that aren't real. How badly that non-reality can break your inference is context dependent. Because the samples are sparse and irregular, you have to constantly model the noise floor to make sure there is actually some signal in the synthesized "ground truth".
In simulated physics, there are no idiosyncratic measurement issues. Every data point is deterministic, repeatable, and well-behaved. There is also much less algorithmic information, so learning is simpler. It is a trivial problem by comparison. Using simulations to train physical world models is skipping over all the hard parts.
I've worked in HPC, including physics models. Taking a standard physics simulation and introducing representative idiosyncratic measurement seems difficult. I don't think we've ever built a physics simulation with remotely the quantity and complexity of fine structure this would require.
I'll admit I'm not very familiar with that type of work - I'm in the forward solve business - but if assumptions are made on the sensor noise distribution, couldn't those be inferred by more generic models? I realize I'm talking about adding a loop on top of an inverse problem loop, which is two steps away (just stuffing a forward solve in a loop is already not very common due to cost and engineering difficulty).
Or better yet, one could probably "primal-adjoint" this and just solve at once for physical parameters and noise model, too. They're but two differentiable things in the way of a loss function.
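That "solve at once for physical parameters and noise model" idea can be sketched in a toy one-parameter setting: fit a physical constant k and a noise scale jointly by descending a single Gaussian negative log-likelihood. Everything here (the linear forward model, data sizes, learning rate) is an illustrative assumption, not a real inverse problem:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy forward model: y = k * x, observed with Gaussian noise.
k_true, sigma_true = 2.5, 0.3
x = rng.uniform(1.0, 3.0, size=200)
y = k_true * x + sigma_true * rng.normal(size=200)

# Jointly fit the physical parameter k and the noise scale by gradient
# descent on one NLL; both really are "two differentiable things in the
# way of a loss function".
k, log_s = 0.0, 0.0
lr = 0.01
for _ in range(5000):
    s2 = np.exp(2.0 * log_s)
    r = y - k * x                          # residuals of the forward solve
    grad_k = np.mean(-r * x) / s2          # d NLL / d k
    grad_ls = 1.0 - np.mean(r * r) / s2    # d NLL / d log_sigma
    k -= lr * grad_k
    log_s -= lr * grad_ls
sigma_hat = float(np.exp(log_s))
```

Both the parameter and the noise level are recovered from the same loss; a real pipeline would replace `k * x` with a (differentiable or adjoint-equipped) forward solver and a far richer noise model.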
The problem is, idk if we're ready to have millions of distinct, evolving, self-executing models running wild without guardrails. It seems like a contradiction: you can't achieve true cognition from a machine while artificially restricting its boundaries, and you can't lift the boundaries without impacting safety.
It's true, but it's also true that text is very expressive.
Programming languages (huge, formalized expressiveness), math and other formal notation, SQL, HTML, SVG, JSON/YAML, CSV, domain specific encoding ie. for DNA/protein sequences, for music, verilog/VHDL for hardware, DOT/Graphviz/Mermaid, OBJ for 3D, Terraform/Nix, Dockerfiles, git diffs/patches, URLs etc etc.
The scope is very wide and covers enough to be called generic especially if you include multi modalities that are already being blended in (images, videos, sound).
I'm cheering for Yann, hope he's right and I really like his approach to openness (hope he'll carry it over to his new company).
At the same time, current architectures do exist now and do work, by far exceeding his or anybody else's expectations, and continue doing so. It may also be true that they're here to stay for a long while on text and other supported modalities, as they're cheaper to train.
I 100% guarantee that he will not be holding the bag when this fails. Society will be protecting him.
On that proviso I have zero respect for this guy.
I don't think it makes sense conceptually unless you're literally referring to discovering new physical things like elements or something.
Humans are remixers of ideas. That's all we do all the time. Our thoughts and actions are dictated by our environment and memories; everything must necessarily be built up from pre-existing parts.
You can't get Suno to do anything that's not in its training data. It is physically incapable of inventing a new musical genre. No matter how detailed the instructions you give it, and even if you cheat and provide it with actual MP3 examples of what you want it to create, it is impossible.
The same goes for LLMs and invention generally, which is why they've made no important scientific discoveries.
You can learn a lot by playing with Suno.
Einstein’s theory of relativity springs to mind, which is deeply counter-intuitive and relies on the interaction of forces unknowable to our basic Newtonian senses.
There’s an argument that it’s all turtles (someone told him about universes, he read about gravity, etc), but there are novel maths and novel types of math that arise around and for such theories which would indicate an objective positive expansion of understanding and concept volume.
Also there is no evidence that novel discoveries are more than remixes. This is heavily debated but from what we’ve seen so far I’m not sure I would bet against remix.
World models are great for specific kinds of RL or MPC. Yann is betting heavily on MPC, I’m not sure I agree with this as it’s currently computationally intractable at scale
Imagine that we made an LLM out of all dolphin songs ever recorded. Would such an LLM ever reach human-level intelligence? Obviously and intuitively, the answer is NO.
Your comment actually extended this observation for me sparking hope that systems consuming natural world as input might actually avoid this trap, but then I realized that tool use & learning can in fact be all that's needed for singularity while consuming raw data streams most of the time might actually be counterproductive.
It could potentially reach super-dolphin level intelligence
Dataset limitations have been well understood since the dawn of statistics-based AI, which is why these models are trained on data and RL tasks that are as wide as possible, and are assessed by generalization performance. Most of the experts in ML, even the mathematically trained ones, within the last few years acknowledge that superintelligence (under a more rigorous definition than the one here) is quite possible, even with only the current architectures. This is true even though no senior researcher in the field really wants superintelligence to be possible, hence the dozens of efforts to disprove its potential existence.
In the last step of training LLMs, reinforcement learning from verified rewards, LLMs are trained to maximize the probability of solving problems using their own output, depending on a reward signal akin to winning in Go. It's not just imitating human written text.
Fwiw, I agree that world models and some kind of learning from interacting with physical reality, rather than massive amounts of digitized gym environments is likely necessary for a breakthrough for AGI.
Using the term autoregressive models instead might help.
Everything is bits to a computer, but text training data captures the flattened, after-the-fact residue of baseline human thought: Someone's written description of how something works. (At best!)
A world model would need to capture the underlying causal, spatial, and temporal structure of reality itself -- the thing itself, that which generates those descriptions.
You can tokenize an image just as easily as a sentence, sure, but a pile of images and text won't give you a relation between the system and the world. A world model, in theory, can. I mean, we ought to be sufficient proof of this, in a sense...
So when we think about capturing any underlying structure of reality itself, we are constrained by the tools at hand.
The capability of the tool forms the description which grants the level of understanding.
The density of information in the spatiotemporal world is very very great, and a technique is needed to compress that down effectively. JEPAs are a promising technique towards that direction, but if you're not reconstructing text or images, it's a bit harder for humans to immediately grok whether the model is learning something effectively.
I think that very soon we will see JEPA-based language models, but their key domain may very well be in robotics, where machines really need to experience and reason about the physical world differently than a purely text-based model can.
I assume that when you get out of bed in the morning, the first thing you do is not paint 1000 1080p pictures of what your breakfast looks like.
LeCun's models predict purely in representation space and output no pixel-level detailed frames. Instead, you train a model to generate a lower-dimensional representation of the same thing from different views, penalizing it if the representations differ when looking at the same thing.
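A cartoon of that objective (entirely my own toy sketch, not LeCun's actual architecture): encode two views of the same scene and measure the loss between the predicted and actual embeddings, never between pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(4, 16)) * 0.1   # shared encoder: 16-dim "pixels" -> 4-dim embedding
P = np.eye(4)                        # predictor, operating purely in embedding space

def jepa_loss(scene):
    # Two noisy "views" of the same underlying scene.
    v1 = scene + 0.1 * rng.normal(size=16)
    v2 = scene + 0.1 * rng.normal(size=16)
    s1, s2 = W @ v1, W @ v2          # embed both views
    pred = P @ s1                    # predict the target embedding from the context
    return float(np.sum((pred - s2) ** 2))   # loss in the 4-dim latent, not pixel space

loss = jepa_loss(rng.normal(size=16))
```

Note that trained naively this objective collapses (the encoder maps everything to a constant), which is why real systems add stop-gradients, EMA target encoders, or variance regularization.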
> One major critique LeCun raises is that LLMs operate only in the realm of language, which is a simple, discrete space compared to the continuous, complex physical world we live in. LLMs can solve math problems or answer trivia because such tasks reduce to pattern completion on text, but they lack any meaningful grounding in physical reality. LeCun points out a striking paradox: we now have language models that can pass the bar exam, solve equations, and compute integrals, yet “where is our domestic robot? Where is a robot that’s as good as a cat in the physical world?” Even a house cat effortlessly navigates the 3D world and manipulates objects — abilities that current AI notably lacks. As LeCun observes, “We don’t think the tasks that a cat can accomplish are smart, but in fact, they are.”
The biggest thing that's missing is actual feedback on their decisions. They have no idea of that, because transformers and embeddings don't model it yet. And language descriptions and image representations of feedback aren't enough; they are too disjointed. It needs more.
It's like the people who are so hyped up about voice-controlled computers. A linear stream of symbols is a huge downgrade in signal, right? I don't want computer interaction to be simplified and worsened even further.
Compare with domain experts who do real, complicated work with computers, like animators, 3D modelers, CAD, etc. A mouse with six degrees of freedom, and a strong training in hotkeys to command actions and modes, and a good mental model of how everything is working, and these people are dramatically more productive at manipulating data than anyone else.
Imagine trying to talk a computer through nudging a bunch of vertices through 3D space while flexibly managing modes of "drag" on connected vertices. It would be terrible. And no, you would not replace that with a sentence like "Bot, I want you to nudge out the elbow of that model", because that does NOT do the same thing at all. An expert fluidly making their idea reality in real time is just not even remotely close to the "project manager / mediocre implementer" relationship you get when prompting any sort of generative model. The models aren't even built to contain a specific "style", so they certainly won't be opinionated enough to have artistic vision, a strong understanding of what does and does not work in a given context, or the ability to navigate "My boss wants something stupid that doesn't work, and he's a dumb person, so how do I convince him to stop the dumb idea and make him think that was his idea?"
https://en.wikipedia.org/wiki/Moravec%27s_paradox
All the things we look at as "Smart" seem to be the things we struggle with, not what is objectively difficult, if that can even be defined.
World models and vision seem like a great fit for robotics, which I can imagine being the main driver of AMI.
No hate, but this is just your opinion.
The definition of "text" here is extremely broad – an SVG is text, but it's also an image format. It's not incomprehensible to imagine how an AI model trained on lots of SVG "text" might build internal models to help it "visualise" SVGs in the same way you might visualise objects in your mind when you read a description of them.
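To make that concrete, here is a complete image that is nothing but text (the shapes and coordinates are just an illustrative example, not anything from a real training set):

```python
# A complete image encoded purely as text: a red circle on a white square.
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">'
    '<rect width="100" height="100" fill="white"/>'
    '<circle cx="50" cy="50" r="40" fill="red"/>'
    '</svg>'
)

# A model trained on enough such strings sees geometry (position, radius,
# containment) expressed symbolically. "Visualising" the SVG amounts to
# learning the mapping from these numbers to spatial relations, e.g. that
# this circle fits entirely inside the square:
fits = (50 - 40 >= 0) and (50 + 40 <= 100)
print("circle fits inside rect:", fits)
```

The point being: nothing about the medium forbids spatial reasoning; the geometry is right there in the tokens.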
The human brain only has electrical signals for IO, yet we can learn and reason about the world just fine. I don't see why the same wouldn't be possible with textual IO.
But yeah, I can't imagine that LLMs don't already have a world model in there. They have to. The internet's corpus of text may not contain enough detail to allow a LLM to differentiate between similar-looking celebrities, but it's plenty of information to allow it to create a world model of how we perceive the world. And it's a vastly more information-dense means of doing so.
Perhaps for the current implementations this is true. But the reason the current versions keep failing is that world dynamics has multiple orders of magnitude fewer degrees of freedom than the models that are tasked to learn them. We waste so much compute learning to approximate the constraints that are inherent in the world, and LeCun has been pressing the point the past few years that the models he intends to design will obviate the excess degrees of freedom to stabilize training (and constrain inference to physically plausible states).
If my assumption is true then expect Max Tegmark to be intimately involved in this new direction.
That said, while I 100% agree with him that LLMs won't lead to human-like intelligence (I think AGI is now an overloaded term, but Yann uses it in its original definition), I'm not fully on board with his world model strategy as the path forward.
can you please elaborate on your strategy as the path forward?
Build attention-grabbing, monetizable models that subsidize (at least in part) the run up to AGI.
Nobody is trying to one-shot AGI. They're grinding and leveling up while (1) developing core competencies around every aspect of the problem domain and (2) winning users.
I don't know if Meta is doing a good job of this, but Google, Anthropic, and OpenAI are.
Trying to go straight for the goal is risky. If the first results aren't economically viable or extremely exciting, the lab risks falling apart.
This is the exact point that Musk was publicly attacking Yann on, and it's likely the same one that Zuck pressed.
Secondly, it's not clear that the current LLMs are a run-up to AGI. That's what LeCun is betting: that the LLM labs are chasing a local maximum.
That's the point of it. You need to take more risk for a different approach. Same as what OpenAI did initially.
There is absolutely no doubt about Yann's impact on AI/ML, but he had access to many more resources in Meta, and we didn't see anything.
It could be a management issue, though, and I sincerely wish we will see more competition, but from what I quoted above, it does not seem like it.
Understanding the world through videos (mentioned in the article) is just what video models have already done, and they are getting pretty good (see Seedance, Kling, Sora, etc.). So I'm not quite sure how what he proposed would work.
Meta absolutely has (or at least had) a world-class industry AI lab and has published a ton of great work and open source models (granted, their LLM open source stuff failed to keep up with Chinese models in 2024/2025; their other open source work, for things like segmentation, doesn't get enough credit though). Yann's main role was Chief AI Scientist, not any sort of product role, and as far as I can tell he did a great job building up and leading a research group within Meta.
He deserves a lot of credit for pushing Meta to be very open about publishing research and open sourcing models trained on large-scale data.
Just as one example, Meta (together with NYU) just published "Beyond Language Modeling: An Exploration of Multimodal Pretraining" (https://arxiv.org/pdf/2603.03276) which has a ton of large-experiment backed insights.
Yann did seem to end up with a bit of an inflated ego, but I still consider him a great research lead. Context: I did a PhD focused on AI, and Meta's group had a similar pedigree as Google AI/Deepmind as far as places to go do an internship or go to after graduation.
Creating a startup has to be about a product. When you raise 1B, investors are expecting returns, not papers.
Speaking of returns: Apple absolutely fucked Meta ads with its privacy controls, which trashed ad performance, revenue, and share price. Meta turned things around using AI, with Yann as the lead researcher. Are you willing to give him credit for that? Revenue is now greater than pre-Apple-data-lockdown.
[1] https://9to5mac.com/2025/08/21/meta-allegedly-bypassed-apple...
Why would Apple be complicit on this for years?
When you log into FB on any account on any device, then install FB on a new device, or even after you erase the device, they know it's you even before you log in. Because the info is tied to your Apple iCloud account.
And there's no way for users to see or delete what data other companies have stored and linked to your Apple ID via that API.
It's been like this for at least 5 years and nobody seems to care.
> I wasn't criticising his scientific contribution at all, that's why I started my comment by appraising what he did.
You were criticising his output at Facebook, though. But he was in the research group at Facebook, not a product group, so it seems like we did actually see lots of things?
That's true for 99% of the scientists, but dismissing their opinion based on them not having done world shattering / ground breaking research is probably not the way to go.
> I sincerely wish we will see more competition
I really wish we don't, science isn't markets.
> Understanding the world through videos
The word "understanding" is doing a lot of heavy lifting here. I find myself prompting again and again for corrections on an image or a summary and "it" still does not "understand" and keeps doing the same thing over and over again.
But passion and the freedom to explore are often more important than resources.
Or, maybe it's just hard?
Source: himself https://x.com/ylecun/status/1993840625142436160 (“I never worked on any Llama.”) and a million previous reports and tweets from him.
Quite a big contribution in practice.
For a hot minute Meta had a top-3 LLM and open sourced the whole thing, even with LeCun's reservations about the technology.
At the same time Meta spat out huge breakthroughs in:
- 3d model generation
- Self-supervised label-free training (DINO). Remember Alexandr Wang built a multibillion dollar company just around having people in third world countries label data, so this is a huge breakthrough.
- A whole new class of world modeling techniques (JEPAs)
- SAM (Segment anything)
If it was a breakthrough, why did Meta acquire Wang and his company? I'm genuinely curious.
Unfortunately the dude knows very little about AI or ML research. He's just another wealthy grifter.
At this point decision making at Meta is based on Zuckerberg's vibes, and I suspect the emperor has no clothes.
Is it a troll? Even if we just ignore Llama, Meta invented and released so many foundational research and open source code. I would say that the computer vision field would be years behind if Meta didn't publish some core research like DETR or MAE.
>My only contribution was to push for Llama 2 to be open sourced.
So I keep wondering: if his idea is really that good — and I genuinely hope it is — why hasn’t it led to anything truly groundbreaking yet? It can’t just be a matter of needing more data or more researchers. You tell me :-D
LeCun introduced backprop for deep learning back in 1989. Hinton published on contrastive divergence in next-token prediction in 2002. AlexNet was 2012. Word2vec was 2013. Seq2seq was 2014. AIAYN was 2017. UnicornAI was 2019. InstructGPT was 2022.
This makes a lot of people think that things are just accelerating and they can be along for the ride. But it's the years and years of foundational research that allow this to be done. That toll has to be paid for the successors of LLMs to be able to reason properly and operate in the world the way humans do. The sowing won't happen as fast as the reaping did. LeCun wants to plant those seeds; the others, who only want to eat the fruit, don't get that they have to wait.
If he still hasn’t produced anything truly meaningful after all these years at Meta, when is that supposed to happen? Yann LeCun has been at Facebook/Meta since December 2013.
Your chronological sequence is interesting, but it refers to a time when the number of researchers and the amount of compute available were a tiny fraction of what they are today.
He has hired LeBrun to the helm as CEO.
AMI has also hired LeFunde as CFO and LeTune as head of post-training.
They’re also considering hiring LeMune as Head of Growth and LePrune to lead inference efficiency.
https://techcrunch.com/2025/12/19/yann-lecun-confirms-his-ne...
I have no chance in AI industry...
1) the world has become a bit too focused on LLMs (although I agree that the benefits & new horizons that LLMs bring are real). We need research on other types of models to continue.
2) I almost wrote "Europe needs some aces". Although I'm European, my attitude is not at all one of competition. This is not a card game. What Europe DOES need is an ATTRACTIVE WORKPLACE, so that talent that is useful for AI can also find a place to work here, not only overseas!
There is DeepMind, OpenAI and Anthropic in London. Even after Brexit, London is still in Europe.
> You're absolutely right. Only large and profitable companies can afford to do actual research. All the historically impactful industry labs (AT&T Bell Labs, IBM Research, Xerox PARC, MSR, etc) were with companies that didn't have to worry about their survival. They stopped funding ambitious research when they started losing their dominant market position.
If you're looking to learn about JEPA, LeCun's vision document "A Path Towards Autonomous Machine Intelligence" is long but sketches out a very comprehensive vision of AI research: https://openreview.net/pdf?id=BZ5a1r-kVsf
Training JEPA models is within reach, even for startups. For example, we're a 3-person startup who trained a health time-series JEPA. There are JEPA models for computer vision and (even) for LLMs.
You don't need a $1B seed round to do interesting things here. We need more interesting, orthogonal ideas in AI. So I think it's good we're going to have a heavyweight lab in Europe alongside the US and China.
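For anyone wondering what "a JEPA" even means mechanically, here's a deliberately tiny sketch of the core idea from that vision document: predict in embedding space, not pixel space. The linear "encoders", shapes, and random weights below are placeholders for illustration, not the actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_LATENT = 64, 8

# Placeholder "encoders" and "predictor": real JEPAs use deep nets and an
# EMA-updated target encoder; linear maps suffice to show where the loss lives.
enc_ctx = rng.normal(size=(D_IN, D_LATENT))     # context encoder
enc_tgt = rng.normal(size=(D_IN, D_LATENT))     # target encoder
predictor = rng.normal(size=(D_LATENT, D_LATENT))

x = rng.normal(size=D_IN)                # context view (e.g. visible patches)
y = x + 0.1 * rng.normal(size=D_IN)      # target view of the same scene

s_ctx = x @ enc_ctx                      # latent of context
s_tgt = y @ enc_tgt                      # latent of target
s_pred = s_ctx @ predictor               # predict target latent from context

# Key JEPA property: the loss lives in embedding space, so the model never
# has to reconstruct every pixel; unpredictable detail can be discarded.
loss = float(np.mean((s_pred - s_tgt) ** 2))
print("latent prediction loss:", loss)
```

That embedding-space loss (versus a generative, pixel-space reconstruction loss) is the whole architectural disagreement with video diffusion models in one line.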
BTW, I went to your website looking for this, but didn't find your blog. I do now see that it's linked in the footer, but I was looking for it in the hamburger menu.
That said, have you considered that “Measure 100+ biomarkers with a single blood draw” combined with "heart health is a solved problem” reads a lot like Theranos?
The specific biomarkers being predicted are the ones most relevant to heart health, like cholesterol or HbA1c. These tend to be more stable from hour to hour -- they may vary on a timescale of weeks as you modify your diet or take medications.
Of course, each relevant newspaper in those areas highlights that it's coming to their place, but it really seems to be distributed.
Might be to stay close to some of Yann's collaborators, like Xavier Bresson at NUS.
Almost certainly the IP will be held in Singapore for tax reasons.
Europe in general has been tightening up its rules, taxes, and laws around startups and companies, especially tech and remote ones.
It's been less friendly these days.
As such, they are more likely to talk about Singapore news and exaggerate the claims.
Singapore isn't the key location. From what I am seeing online, France is the major location.
Singapore is just one of the more satellite-like offices. They have many offices around the world, it seems.
Like? Care to provide any specific examples? "Europe" is a continent composed of various countries, most of which have been doing a lot to make it easier for startups and companies in general.
What’s different about investing in this than investing in say a young researcher’s startup, or Ilya’s superintelligence? In both those cases, if a model architecture isn’t working out, I believe they will pivot. In YL’s case, I’m not sure that is true.
In that light, this bet is a bet on YL’s current view of the world. If his view is accurate, this is very good for Europe. If inaccurate, then this is sort of a nothing-burger; company will likely exit for roughly the investment amount - that money would not have gone to smaller European startups anyway - it’s a wash.
FWIW, I don’t think the original complaint about auto-regression “errors exist, errors always multiply under sequential token choice, ergo errors are endemic and this architecture sucks” is intellectually that compelling. Here: “world model errors exist, world model errors will always multiply under sequential token choice, ergo world model errors are endemic and this architecture sucks.” See what I did there?
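For reference, the compounding argument boils down to one line of arithmetic (eps here is an assumed per-token error rate picked for illustration, not a measured one):

```python
# The compounding claim: if each token is wrong independently with
# probability eps, the chance a length-n sequence stays fully correct
# decays exponentially. The same arithmetic applies to ANY sequential
# predictor, world models included -- which is the point above.
eps = 0.01
for n in (10, 100, 1000):
    p_ok = (1 - eps) ** n
    print(f"n={n}: P(no error) = {p_ok:.4g}")

# The weak link is the independence assumption: trained models can notice
# and correct earlier mistakes, so errors need not compound multiplicatively.
```

The conclusion only follows if errors are independent and uncorrectable, and that premise is exactly what's in dispute.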
On the other hand, we have a lot of unused training tokens in videos, I’d like very much to talk to a model with excellent ‘world’ knowledge and frontier textual capabilities, and I hope this goes well. Either way, as you say, Europe needs a frontier model company and this could be it.
If you think that LLMs are sufficient and RSI is imminent (<1 year), this is horrible for Europe. It is a distracting boondoggle exactly at the wrong time.
And even if you think the chance is zero, unless you also think there is a zero chance they will be capable of pivoting quickly, it might still be beneficial.
I think his views are largely flawed, but chances are there will still be lots of useful science coming out of it as well. Even if current architectures can achieve AGI, it does not mean there can't also be better, cheaper, more effective ways of doing the same things, and so exploring the space more broadly can still be of significant value.
I believe he didn't think that reasoning/CoT would work well or scale like it has
Of course now we know this was delusional and it seems almost funny in retrospect. I feel the same way when I hear that 'just scale language models' suddenly created something that's true AGI, indistinguishable from human intelligence.
Whenever I see people think the model architecture matters much, I think they have a magical view of AI. Progress comes from high quality data, the models are good as they are now. Of course you can still improve the models, but you get much more upside from data, or even better - from interactive environments. The path to AGI is not based on pure thinking, it's based on scaling interaction.
To stay within the miasma-theory-of-disease analogy: if you think architecture is the key, look at how humans dealt with pandemics... The Black Death in the 14th century killed half of Europe, and no one could think of the germ theory of disease. Think about it: it was as desperate a situation as it gets, and no one had the simple spark to keep up hygiene.
The fact is we are also not smart from the brain alone; we are smart from our experience. Interaction and environment are the scaffolds of intelligence, not the model. For example, 1B users do more for an AI company than a better model: they act like human-in-the-loop curators of LLM work.
It's only with hindsight that we think contagionism is obviously correct.
It really depends what you mean by 'we'. Laymen? Maybe. But people said it was wrong at the time with perfectly good reasoning. It might not have been accessible to the average person, but that's hardly to say that only hindsight could reveal the correct answer.
Just because RNNs and Transformers both work with enormous datasets doesn't mean that architecture/algorithm is irrelevant, it just suggests that they share underlying primitives. But those primitives may not be the right ones for 'AGI'.
I'm not aware that we have notably different data sources before or after transformers, so what confounding event are you suggesting transformers 'lucked' in to being contemporaneous with?
Also, why are we seeing diminishing returns if only the data matters. Are we running out of data?
The METR time-horizon benchmark shows steady exponential growth. The frontier lab revenue has been growing exponentially from basically the moment they had any revenues. (The latter has confounding factors. For example it doesn't just depend on the quality of the model but on the quality of the apps and products using the model. But the model quality is still the main component, the products seem to pop into existence the moment the necessary model capabilities exist.)
The point is that core model architectures don't just keep scaling without modification. MoE, inference-time, RAG, etc. are all modifications that aren't 'just use more data to get better results'.
I, on the contrary, believe that the hunt for better data is an attempt to climb the local hill and get stuck there without reaching the global maximum. Interactive environments are good, they can help, but they are just one possible way to learn about causality. Is it the best way? I don't think so; it is the easier way: just throw money at the problem and eventually you'll get something that you'll claim to be the goal you chased all this time. And yes, it will have something in it you will be able to call "causal inference" in your marketing.
But current models are notoriously difficult to teach. They eat an enormous amount of training data; a human needs much less. They eat an enormous amount of energy to train; a human needs much less. It means that the very approach is deficient. It should be possible to do the same with a tiny fraction of the data and money.
> The fact is we are also not smart from the brain alone, we are smart from our experience. Interaction and environment are the scaffolds of intelligence, not the model.
Well, I learned English almost all the way to B2 by reading books. I was too lazy to use a dictionary most of the time, so it was not interactive: I didn't even interact with a dictionary, I was just reading books. How many books did I read to get to B2? ~10 or so. Well, I read a lot of English on the Internet too, and watched some movies. But let's multiply those 10 books by 10. Strictly speaking it was not B2: I was almost completely unable to produce English, and my pronunciation was not just bad, it was worse. Even now I sometimes stumble on words I cannot pronounce. Like I know the word and have mentally constructed a sentence with it, but I cannot say it, because I don't know how. So to pass B2 I spent some time practicing speech, listening, and writing, and learning some stupid topics like "travel" to have the vocabulary to talk about them at length.
How many books does an LLM need to consume to get to B2 in a language unknown to it? How many audio recordings does it need to consume? A lifetime wouldn't be enough for me to read and/or listen to that much.
If there were a human who needed to consume as much information as an LLM to learn, they would be the stupidest person in all the history of humanity.
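A rough back-of-envelope on that gap, with every number an explicit assumption (book length, tokens-per-word ratio, and pretraining corpus size are all ballpark guesses, not measurements):

```python
# Back-of-envelope on the data-efficiency gap. All figures are rough
# assumptions for illustration only.
words_per_book = 90_000
human_books = 100                 # the generous "10 books x 10" estimate above
human_tokens = human_books * words_per_book * 4 // 3   # ~1.33 tokens per word

llm_tokens = 15_000_000_000_000   # ~15T tokens, a ballpark frontier pretrain run

print(f"human: ~{human_tokens:,} tokens")
print(f"LLM:   ~{llm_tokens:,} tokens")
print(f"ratio: ~{llm_tokens // human_tokens:,}x")
```

Not apples-to-apples, of course: the human already speaks a first language and has a lifetime of embodied experience, while the LLM's corpus covers every domain at once. But the raw ratio is a factor of a million, which is the deficiency being claimed.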
It was empirical and, though ultimately wrong, useful. Apply as you will to theories of learning.
I won't comment on Yann LeCun or his current technical strategy, but if you can avoid sunk cost fallacy and pivot nimbly I don't think it is bad for Europe at all. It is "1 billion dollars for an AI research lab", not "1 billion dollars to do X".
Sure, LLMs are getting better and better, and at least for me more and more useful, and more and more correct. Arguably better than humans at many tasks, yet terribly lagging behind in some others.
Coding-wise, one of the things it does "best", it still has many issues. For me, some of the biggest are lack of initiative and lack of reliable memory. When I use it to write code, the first manifests as it sticking to a suboptimal yet overly complex approach quite often. The lack of memory shows in that I have to keep reminding it of edge cases (else it often breaks functionality), or to stop reinventing the wheel instead of using functions/classes already implemented in the project.
All that can be mitigated by careful prompting, but no matter the claim about information recall accuracy I still find that even with that information in the prompt it is quite unreliable.
And more generally the simple fact that when you talk to one the only way to “store” these memories is externally (ie not by updating the weights), is kinda like dealing with someone that can’t retain memories and has to keep writing things down to even get a small chance to cope. I get that updating the weights is possible in theory but just not practical, still.
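What that external-memory workaround looks like in practice is roughly this (the helper names and stored facts are hypothetical; the point is only that the memory lives outside the frozen weights and must be re-injected into every prompt):

```python
# Minimal sketch of the "write things down externally" coping strategy:
# weights are frozen between sessions, so persistent facts are stored
# outside the model and prepended to every new prompt.
memory: list[str] = []   # survives across sessions; the model's weights do not

def remember(fact: str) -> None:
    memory.append(fact)

def build_prompt(user_msg: str) -> str:
    notes = "\n".join(f"- {m}" for m in memory)
    return f"Known facts from earlier sessions:\n{notes}\n\nUser: {user_msg}"

remember("edge case: the input list may be empty")
remember("use the existing utils.parse_date, don't reinvent it")
prompt = build_prompt("refactor the date handling")
print(prompt)
```

Which is exactly the "person who can't retain memories and writes everything down" dynamic: the knowledge never becomes part of the model, it just rides along in context.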
What's still missing is the general reasoning ability to plan what to build or how to attack novel problems - how to assess the consequences of deciding to build something a given way, and I doubt that auto-regressively trained LLMs is the way to get there, but there is a huge swathe of apps that are so boilerplate in nature that this isn't the limitation.
I think that LeCun is on the right track to AGI with JEPA - hardly a unique insight, but significant to now have a well funded lab pursuing this approach. Whether they are successful, or timely, will depend if this startup executes as a blue skies research lab, or in more of an urgent engineering mode. I think at this point most of the things needed for AGI are more engineering challenges rather than what I'd consider as research problems.
Wait, we have another acronym to track. Is this the same/different than AGI and/or ASI?
Tech is ultimately a red herring as far as what's needed to keep the EU competitive. The EU has a trillion-dollar hole[0] to fill if it wants to replace US military presence, and it currently net imports over 50% of its energy. Unfortunately the current situation in Iran is not helping either of these, as it constrains energy supplies further and risks requiring military intervention.
0. https://www.wsj.com/world/europe/europes-1-trillion-race-to-...
The need for a military is tightly coupled with the EU's need for energy. You can see this in the immediate impact that the war in Iran has had on Germany's natural gas prices [0]. Already barely able to defend themselves from Russia, EU countries are in a tough spot: they can't really afford to expend military resources defending their energy needs, yet they also don't have the energy independence to ignore these military engagements without risk. Meanwhile Russia has spent the last 4 years transitioning to a wartime economy and is getting hungry for expanded resource acquisition.
The world hasn't fundamentally changed since the stone age: humans need resources to survive, and if there aren't enough resources for the people then violence will decide who has access to them.
0. https://tradingeconomics.com/commodity/germany-natural-gas-t...
I'm sorry, but this is just crazy talk. Russia cannot enforce its will on Ukraine, one of the poorest and most corrupt countries in Europe, which had (at the time of the invasion) a relatively small and underequipped army. Yes, it has grown through conscription, has been equipped by foreign and domestic supplies, has made some brilliant advances in tech and tactics... but when it was attacked, it was weak. And Russia lost its best troops and equipment failing to defeat it.
Why would anyone think that the Russia that cannot defeat Ukraine would fare better against Poland? Let alone French warning strike nukes, or French, British, German troops and planes and what not.
As Russia’s economy has continually reshaped over the last 4 years there has been increasingly a domestic demand for war. You point out all the evidence yourself:
> Yes it has grown through conscription, has been equipped by foreign and domestic supplies, has made some brilliant advances in tech and tactics...
Russia (well, its oligarchs and rulers) has increasingly benefited from perpetual war. Yes, soon it will need to switch to expansion to maintain its economy, but this situation in Iran presents a perfect opportunity if things play into Russia's interests.
You also will find that if you paid any attention to European politics over the years this is a serious topic to all leaders there.
But I don't mind if you're not convinced. I had similar people on Hacker News unconvinced that Russia could sustain operations in Ukraine longer than a few months because they were doing so poorly... 4 years ago.
No it has not. It has a ballooning debt crisis (at different levels - regions, military contractors, banks) which will pop at some point; the budget is so unbalanced they're projecting to reduce military spending (unlikely), increase taxes, and still have a pretty heavy deficit. They've been given the gift of the Strait of Hormuz being closed, so oil and gas revenues will grow, which will definitely buy them more time. But they are running against a clock, and they cannot win in Ukraine.
> You also will find that if you paid any attention to European politics over the years this is a serious topic to all leaders there.
Yes, because Russia only responds to strength, so you need to be strong militarily to be able to dissuade them from attacking you. That doesn't mean that realistically they have a chance of winning any conflict.
My main concern with LeCun is the number of times he has repeatedly told people software is open source when its license directly violates the open source definition.
If you invested in that you knew what you were getting yourself into!
> He is the Jacob T. Schwartz Professor of Computer Science at the Courant Institute of Mathematical Sciences at New York University. He served as Chief AI Scientist at Meta Platforms before leaving to work on his own startup company.
That entire sentence before the remarks about his service at Meta could have been axed; it's weird to me when people compare themselves to someone else who is well known. It's the most Kanye West thing you can do. Mind you, the more I read about him, the more I discovered he is in fact egotistical. Good luck having a serious engineering team with someone who is egotistical.
This is just the official name of a chair at NYU. I'm not even sure Jacob T. Schwartz is more well known than Yann LeCun
Either you have not read enough Wikipedia pages, or you have too much to complain about. (Or both.)
We already have PINNs, or physics-informed neural networks [1]. Soon we are going to have physical field computing by complex-valued network quantization (CVNN), which has recently been proposed for more efficient physical AI [2].
[1] Physics-informed neural networks:
https://en.wikipedia.org/wiki/Physics-informed_neural_networ...
[2] Ultra-efficient physical field computing by complex-valued network quantization:
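For readers unfamiliar with PINNs, the core trick is just adding the governing equation's residual to the loss. A toy sketch for the ODE u' = u with u(0) = 1 (the polynomial "network" and finite differences are stand-ins for a real neural net with autodiff):

```python
import numpy as np

# Toy PINN loss for u'(x) = u(x), u(0) = 1 (true solution: exp(x)).
# A cubic polynomial plays the role of the network; real PINNs use a
# neural net and differentiate it with autodiff, not finite differences.
def u(x, w):
    return w[0] + w[1] * x + w[2] * x**2 + w[3] * x**3

def pinn_loss(w, xs, h=1e-4):
    du = (u(xs + h, w) - u(xs - h, w)) / (2 * h)   # finite-difference u'
    physics = np.mean((du - u(xs, w)) ** 2)        # residual of u' = u
    boundary = (u(0.0, w) - 1.0) ** 2              # enforce u(0) = 1
    return physics + boundary

xs = np.linspace(0.0, 1.0, 20)
w_exp = np.array([1.0, 1.0, 0.5, 1 / 6])   # Taylor coefficients of exp(x)
w_bad = np.array([0.0, 1.0, 0.0, 0.0])     # u = x, violates both terms

print("near-solution loss:", pinn_loss(w_exp, xs))
print("bad-guess loss:    ", pinn_loss(w_bad, xs))
```

Minimizing this loss over the parameters is the whole method: the physics residual bakes the world's constraints directly into training, which is the family of ideas the comment is pointing at.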