Posted by helloplanets 1 day ago
https://www.ft.com/content/e5245ec3-1a58-4eff-ab58-480b6259a... (https://archive.md/5eZWq)
Looks like you appended the original URL to the end
Or you're using Cloudflare DNS.
Have they changed something on their end?
The fundamental problem with today's LLMs, the one that will prevent them from achieving human-level intelligence and creativity, is that they are trained to predict training set continuations (sketched in code after the two points below), which creates two major limitations:
1) They are fundamentally a COPYING technology, not a learning or creative one. Of course, as we can see, copying in this fashion will get you an extremely long way, especially since it is deep patterns (not surface-level text) being copied and recombined in novel ways. But not all the way to AGI.
2) They are not grounded, so they are going to hallucinate.
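For concreteness, here is a minimal sketch of what "trained to predict training set continuations" means; the tiny model, toy data, and single training step are hypothetical stand-ins (assuming PyTorch), not any particular lab's pipeline:

    # Toy autoregressive next-token training step (assumed PyTorch).
    # The loss only rewards reproducing continuations that appear in the corpus.
    import torch
    import torch.nn as nn

    vocab_size, d_model = 100, 32
    model = nn.Sequential(nn.Embedding(vocab_size, d_model),
                          nn.Linear(d_model, vocab_size))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    tokens = torch.randint(0, vocab_size, (1, 16))   # one toy "training document"
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from token t
                                                     # (a real LLM attends to all prior tokens)
    logits = model(inputs)                           # (batch, seq, vocab)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()
    opt.step()

Nothing in this objective references the world the text describes; the only signal is "match the next token in the corpus".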
The animal intelligence approach, the path to AGI, is also predictive, but what you predict is the external world, the future, not training set continuations. When your predictions are wrong (per perceptual feedback), you take that as a learning signal and update your predictions to do better the next time a similar situation arises. This is fundamentally a LEARNING architecture, not a COPYING one. You are learning about the real world, not auto-regressively copying the actions that someone else took (training set continuations).
Since the animal is also acting in the external world that it is predicting, and learning about, it is learning the external effects of its own actions, i.e. it is learning how to DO things - how to achieve given outcomes. Put together with reasoning/planning, this allows it to plan a sequence of actions that should achieve a given external result ("goal").
Since the animal is predicting the real world, based on perceptual inputs from the real world, its predictions are grounded in reality, which is necessary to prevent hallucinations.
So, to come back to "world models": yes, an animal intelligence/AGI built this way will learn a model of how the world works - how it evolves and how it reacts (how to control it) - but this behavioral model has little in common with the internal generative abstractions that an LLM will have learnt, and it is confusing to use the same name "world model" for both.
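As a toy illustration of the predict-act-learn loop described above (everything here - the one-parameter "world", the learning rule - is a hypothetical simplification, not a claim about how brains or any existing system actually work):

    # A toy agent that predicts the outcome of its own action, acts, and uses
    # the prediction error (perceptual feedback) as its learning signal.
    import numpy as np

    true_gain = 1.5        # how the world actually responds to an action
    learned_gain = 0.0     # the agent's model of that response, learned from errors
    lr = 0.1
    obs = 0.0
    rng = np.random.default_rng(0)

    for step in range(200):
        action = rng.uniform(-1, 1)                 # act in the world
        predicted = obs + learned_gain * action     # predict the action's effect
        obs = obs + true_gain * action              # the world actually responds
        error = obs - predicted                     # feedback: how wrong was the prediction?
        learned_gain += lr * error * action         # update the model from the error

    # Once the model is accurate, it can be inverted to plan toward a goal:
    goal = 3.0
    planned_action = (goal - obs) / learned_gain    # the action predicted to reach the goal

The learning signal comes from the world's actual response, not from a corpus, which is the grounding point made above.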
Models build up this big knowledge base by predicting continuations, but then their RL stage gives rewards for completing problems successfully. Doing well at that requires learning and generalisation, and indeed RL marked a turning point in LLM performance.
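Schematically, the difference between the two stages looks something like this (a hypothetical REINFORCE-style sketch with a placeholder verifier, not any lab's actual recipe):

    # Reward completions that pass a check, instead of imitating corpus continuations.
    import torch
    import torch.nn as nn

    vocab_size, d_model = 100, 32
    policy = nn.Sequential(nn.Embedding(vocab_size, d_model),
                           nn.Linear(d_model, vocab_size))
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

    def passes_check(completion):        # placeholder verifier, e.g. running unit tests
        return bool(completion.sum().item() % 2 == 0)

    prompt = torch.randint(0, vocab_size, (1, 8))
    logits = policy(prompt)[:, -1, :]                     # toy: sample a single token
    dist = torch.distributions.Categorical(logits=logits)
    completion = dist.sample()
    reward = 1.0 if passes_check(completion) else 0.0     # graded on the outcome
    loss = -reward * dist.log_prob(completion).sum()      # REINFORCE: reinforce what worked
    loss.backward()
    opt.step()

The supervision here is "did the output achieve the goal", not "did it match the corpus", which is why it pushes toward generalisation rather than pure imitation.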
A year after RL was made to work, LLMs can now operate in agent harnesses over hundreds of tool calls to complete non-trivial tasks. They can recover from their own mistakes. They can write thousands of lines of code that work. I think it's no longer fair to categorise LLMs as just continuation-predictors.
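The harness itself is conceptually simple; a minimal sketch (every name here is hypothetical, not a real API) of the loop that lets the model see tool results, including its own errors, and try again:

    # Minimal agent-harness loop: the model proposes a tool call, the harness runs it,
    # and the result (success or error) is appended so the model can correct course.
    def run_agent(model, tools, task, max_steps=100):
        history = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            step = model(history)                        # decide: call a tool or answer
            if step["type"] == "final_answer":
                return step["content"]
            try:
                result = tools[step["tool"]](**step["args"])
            except Exception as e:                       # errors are feedback, not dead ends
                result = f"error: {e}"
            history.append({"role": "tool", "content": str(result)})
        return None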
At the end of the day it's still copying, not learning.
RL seems to generalize mostly in-domain. The RL-trained model may be able to generate a working C compiler, but the "logical reasoning" baked into it to achieve that still doesn't stop it from telling you to walk to the car wash while leaving your car at home.
There may still be more surprises coming from LLMs - ways to wring more capability out of them, as RL did, without fundamentally changing the approach, but I think we'll eventually need to adopt the animal intelligence approach of predicting the world rather than predicting training samples to achieve human-like, human-level intelligence (AGI).
I don't know if this can reach AGI, or if that term even makes sense to begin with. But to say these models have not learnt from their RL seems a bit ludicrous. What is training to predict when to use different continuations, if not learning?
I would say LLMs' failure cases, like failing at riddles, are more akin to our own optical illusions and blind spots than indicative of the nature of LLMs as a whole.
Intelligence is simply not well-understood at a mathematical level. Like medieval engineers, we rely so heavily on experimentation in AI. We have no idea how far away from the human level we actually are. Or how far above the human level we can get. Or what, if anything, the limits of intelligence are.
A more concrete idea like "learning" has been given a precise, quantifiable definition, which is maybe why progress in the theory of learning is so much more advanced than any theory of "intelligence".
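For example (one standard formalization among several, offered only to show that "learning" really is pinned down mathematically), Valiant's PAC definition:

    A hypothesis class H is PAC-learnable if there is an algorithm A and a sample
    size m(\epsilon, \delta) such that, for every distribution D and every target
    concept in H, after seeing m(\epsilon, \delta) i.i.d. labeled examples, A outputs
    a hypothesis h with
        \Pr[\mathrm{err}_D(h) \le \epsilon] \ge 1 - \delta
    for all \epsilon, \delta \in (0, 1).

There is no comparably agreed-upon formal definition of "intelligence" against which we could make analogous claims precise.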
Who is more intelligent: a politician, or a high school teacher?
What is intelligence, anyway?
https://www.scientificamerican.com/article/i-gave-chatgpt-an...
https://www.reddit.com/r/singularity/comments/1p5f0b1/gemini...
Gemini 3 Pro reportedly scores an IQ of 130 now, but we keep moving the goalposts and saying "not THAT intelligence, we mean this other intelligence". I suspect, and history suggests this will be the case, that humans will judge AIs as not human, not intelligent, and not needing rights well past the point where they should have rights, even when they are vastly superior to human intelligence.
That article is from June 2025 so may be out of date, and the definition of "seed round" is a bit fuzzy.
The giant seed round proves investors were willing to fund Mira Murati, not that the company had built anything durable.
Within months, it had already lost cofounder Andrew Tulloch to Meta, then cofounders Barret Zoph and Luke Metz plus researcher Sam Schoenholz to OpenAI; WIRED also reported that at least three other researchers left. At that point, citing it as evidence of real competitive momentum feels weak.
They are currently estimated to be at a 5bn valuation.
We recently promoted the no-generated-comments rule from case law [1] to the site guidelines [2], and we're being pretty active about banning accounts that break it.
[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
[2] https://news.ycombinator.com/newsguidelines.html#generated
Everyday environments are rich in tangible control interfaces (TCIs) such as light switches, appliance panels, and embedded GUIs. These are designed for humans and demand not only commonsense and physics reasoning but also causal prediction and outcome verification in time and space (e.g., delayed heating, remote lights).
SWITCH: Benchmarking Modeling and Handling of Tangible Interfaces in Long-horizon Embodied Scenarios (https://huggingface.co/papers/2511.17649)
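To give a feel for the "delayed effect" part (a toy illustration only; this is not the benchmark's actual interface or code):

    # A toy oven whose set-temperature action only takes effect after a delay,
    # so an agent must come back and verify the outcome rather than assume success.
    class DelayedOven:
        def __init__(self, delay_steps=5):
            self.temp = 20.0
            self.target = 20.0
            self.pending = []            # (time_due, value) actions not yet applied
            self.delay_steps = delay_steps

        def press_set_temperature(self, value, now):
            self.pending.append((now + self.delay_steps, value))

        def step(self, now):
            for due, value in list(self.pending):
                if now >= due:
                    self.target = value
                    self.pending.remove((due, value))
            self.temp += 0.5 * (self.target - self.temp)   # temperature lags the target
            return self.temp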
Feedback, suggestions, and collaborators are very welcome!