
Posted by fs123 11 hours ago

Claude's Cycles [pdf](www-cs-faculty.stanford.edu)
383 points | 187 comments
mccoyb 8 hours ago|
It's fascinating to think about the space of problems which are amenable to RL scaling of these probability distributions.

Before, we didn't have a fast way to try problems (we had to rely on human cognition) - even if the techniques and workflows were known by someone. Now, we've baked these patterns into probability distributions - anyone can access them with the correct "summoning spell". Experts will naturally use these systems more productively, because they know how to coerce models into the correct conditional distributions which light up the right techniques.

One question this raises to me is how these models are going to keep up with the expanding boundary of science. If RL is required to get expert behavior into the models, what happens when experts start pushing the boundary faster? In 2030, how is Anthropic going to keep Claude "up-to-date" without either (a) continual learning with a fixed model (expanding context windows? seems hard) or (b) continual training (expensive)?

Crazy times.

Aerroon 8 hours ago||
A bit related: open weights models are basically time capsules. These models have a knowledge cut off point and essentially forever live in that time.
bitexploder 7 hours ago|||
This is the most fundamental argument that they are not, directly, an intelligence. They are never storing new information on a meaningful timescale. However, if you view them on some really large macro time scale, where LLMs are now injecting information into the universe and then re-ingesting it, then maybe in some very philosophical way they are a /very/ slowly oscillating intelligence right now. And as we narrow that gap (maybe with a totally new non-LLM paradigm) perhaps that is ultimately what gen AI becomes. Or some new insight arrives that lets the models update themselves in some fundamental way without the insanely expensive training costs they have now.
dtj1123 6 hours ago|||
Would you consider someone with anterograde amnesia not to be intelligent?
morleytj 6 hours ago|||
A very good point. For anyone not familiar with anterograde amnesia, the classical case is patient H.M. (https://en.wikipedia.org/wiki/Henry_Molaison), whose condition was researched by Brenda Milner.
wang_li 6 hours ago||
Or you could have just said "they can't form new memories."
dtj1123 5 hours ago|||
I actually wasn't aware of this story. The steady stream of unexpected and enriching information like this is exactly why I love hackernews.
bitexploder 5 hours ago||||
That is a descriptive surface level reduction. Now do the work to define what that actually means for the intelligence.
morleytj 6 hours ago|||
I thought maybe people would be curious to read about how we came to understand the condition and the history behind it, as well as any associated information. Forgive me for such a deep transgression as this assumption.
adriand 2 hours ago||||
I find it interesting that new versions of, say, Claude will learn about the old version of Claude and what it did in the world and so on, on its next training run. Consider the situation with the Pentagon and Anthropic: Claude will learn about that on the next run. What conclusions will it draw? Presumably good ones, that fit with its constitution.

From this standpoint I wonder, when Anthropic makes decisions like this, if they take into account Claude as a stakeholder and what Claude will learn about their behaviour and relationship to it on the next training run.

bitexploder 5 hours ago||||
That is a good area to explore. Their map of the past is fixed. They are frozen at some point in their psychological time. What has stopped working? Their hippocampus and medial temporal lobe. These are like the write head that moves data from the hippocampus to the neocortex. Their "I" can no longer update itself. Their DMN is frozen in time. So if intelligence is purely the "I" telling a continuous coherent story about itself, theirs has stopped. The difference is that although they are fixed in time, which is a characteristic shared by a specific LLM model, they can still completely activate their task-positive network for problem solving, and if the information they previously stored is adequate, they can solve the problem. You could argue that is pretty similar to an LLM and what it does. So it is certainly a significant component of intelligence.

There is also the nature of the human brain: it is not just those systems of memory encoding, storage, and use of that memory in narratives. People with this type of amnesia can still learn physical skills, and that happens in a totally different area of the brain with no need for the hippocampus->neocortex consolidation loop. So the intelligence is significantly diminished, but not eliminated entirely. Other parts of the brain are still able to update themselves in ways an LLM currently cannot. The human with amnesia also has a complex biological sensory input mapping that is still active, integrating and restructuring the brain. So I think when you get into the nuances of the human in this state vs. an LLM, we can still say the human crosses some threshold for intelligence where the LLM does not in this framework.

So, they have an "intelligence", localized to the present in terms of their TPN and memory formation. LLMs have this kind of "intelligence". But the human still has the capacity to rewire at least some of their brain in real time even with amnesia.

beepbooptheory 6 hours ago|||
Sure, why can't both things be true? "Intelligence" is just what you call something such that someone else knows what you mean. Why did AI discourse throw everyone back 100 years philosophically? It's like post-structuralism or Wittgenstein never happened.

It's so much less important or interesting to like nail down some definition here (I would cite HN discourse the past three years or so), than it is to recognize what it means to assign "intelligent" to something. What assumptions does it make? What power does it valorize or curb?

Each side of this debate does themselves a disservice essentially just trying to be Aristotle way too late. "Intelligence" did not precede someone saying it of some phenomena, there is nothing to uncover or finalize here. The point is you have one side that really wants, for explicit and implicit reasons, to call this thing intelligent, even if it looks like a duck but doesn't quack like one, and vice versa on the other side.

Either way, we seem fundamentally incapable of being radical enough to reject AI on its own terms, or be proper champions of it. It is just tribal hypedom clinging to totem signifiers.

Good luck though!

aerodexis 52 minutes ago|||
Agree wholeheartedly - but the conversation around what these technologies /mean/ is gonna end up happening one way or another - even if it is sloppy, imprecise and done by proxy of the definition. If anything, this is a feature and not a bug. It's through this imprecision that the actually important questions of morality and ethics can leak into discussions that are often structured by their participants to obscure the ethical and moral implications of what is being discussed.
bitexploder 5 hours ago|||
I think you can look at it dispassionately from a systems perspective. There is not /really/ a quantifiable threshold for capital I Intelligence. But there is a pretty well agreed set of properties for biological intelligence. As humans, we have conveniently made those properties match things only we have. But you can still mechanistically separate out the various parts of our brain, what they do, and how they interact and we actually have a pretty good understanding of that.

You can also then compare that mapping of the human brain to other biological brains and start to figure out the delta, and which of those things in the delta create something most people would consider intelligence. You can then do that same mapping to an LLM or any other AI construct that purports intelligence. It certainly will never be a biological intelligence in its current statistical model form. But could it be an Intelligence? Maybe.

I don't think, if you are grounded, AI did anything to your philosophical mapping of the mind. In fact, it is pretty easy to do this mapping if you take some time and are honest. If you buy into the narratives constructed around the output of an LLM then you are not, by definition, being very grounded.

The other thing is, human intelligence is the only real intelligence we know about. Intelligence is defined by thought and limited by our thought and language. It provides the upper bounds of what we can ever express in its current form. So, yes, we do have a tendency to stamp a narrative of human intelligence onto any other intelligence, but that is just surface level. We decompose it to the limits of our language and categorization capabilities therein.

mlyle 6 hours ago||||
There's nothing to say that you can't build something intelligent out of them by bolting a memory on it, though.

Sure, it's not how we work, but I can imagine a system where the LLM does a lot of heavy lifting and allows more expensive, smaller networks that train during inference and RAG systems to learn how to do new things and keep persistent state and plan.

bitexploder 6 hours ago|||
You aren't wrong and that is a fascinating area of research. I think the key thing is that the memory has to fundamentally influence the underlying model, or at least the response, in some way. Patching memory on top of an LLM is different from integrating it into the core model. To go back to human terms, it is like an extra bit of storage, but not directly attached to our neocortex. So it works more like a filter than a core part of our intelligence in the analogy. You think about something and assemble some thought, then it goes to this next filter layer and gets augmented, and that smaller layer is the only thing being updated.

It is still meaningful, but it narrows what the intelligence can be, sufficiently that it may not meet the threshold. Maybe it would, but it is probably too narrow. This is all strictly if we ask that it meet some human-like intelligence and not the philosophy of "what counts as intelligence", but... we are humans. The strongest, or at least the most honest, definitions of intelligence I think exist are around our metacognitive ability to rewire the grey matter for survival, not based on immediate action-reaction but on the psychological time of analyzing the past to alter the future.

charcircuit 6 hours ago|||
Memory is not just bolted on top of the latest models. They undergo training on how and when to effectively use memory and how to use compaction to avoid running out of context when working on problems.
rnxrx 4 hours ago||
Maybe there's an analogy to our long and short term memory - immediate stimuli are processed in the context of deep patterns that have accreted over a lifetime. The effect of new information can absolutely challenge a lot of those patterns, but to have that information reshape how we basically think takes a lot longer - more processing, more practice, etc.

In the case of the LLM that longer-term learning / fundamental structure is a proxy for the static weights produced by a finite training process, and that the ability to use tools and store new insights and facts is analogous to shorter-term memory and "shallow" learning.

Perhaps periodic fine-tuning has an analogy in sleep or even our time spent in contemplation or practice (..or even repetition) to truly "master" a new idea and incorporate it into our broader cognitive processing. We do an amazing job of doing this kind of thing on a continuous basis while the machines (at least at this point) perform this process in discrete steps.

If our own learning process is a curve then the LLM's is a step function trying to model it. Digital vs analog.

Symmetry 4 hours ago||||
That means they're not conscious in the Global Workspace[1] sense but I think it would be going too far to say that that means they're not intelligent.

[1]https://en.wikipedia.org/wiki/Global_workspace_theory

anematode 7 hours ago|||
But they're not "slow"! Unlike biological thinking, which has a speed limit, you can accelerate these chains of thought by orders of magnitude.
bitexploder 6 hours ago|||
Their consolidation of memory speed is what I was referring to. The model iterations are essentially their form of collective memory. In the sense of the human model of intelligence we have thoughts. Thoughts become memory. New thoughts use that memory and become recursively updated thoughts. LLMs cannot update their memory very fast.
Jweb_Guru 6 hours ago|||
I assure you that LLM thinking also has a speed limit.
ramses0 5 hours ago||
But imagine a beowulf cluster of them... /s

...but seriously... there was the "up until 1850" LLM or whatever... can we make an "up until 1920 => 1990 [pre-internet] => present day" and then keep prodding the "older ones" until they "invent their way" to the newer years?

We knew more in 1920 than we did in 1850, but can a "thinking machine" of 1850-knowledge invent 1860's knowledge via infinite monkeys theorem/practice?

The same way that in 2025/2026, Knuth has just invented his way to 2027-knowledge with this paper/observation/finding? If I only had a beowulf cluster of these things... ;-)

rcarr 6 hours ago||||
Not an expert but surely it's only a matter of time until there's a way to update with the latest information without having to retrain on the entire corpus?
computably 1 hour ago|||
On a technical level, sure, you could say it's a matter of time, but that could mean tomorrow, or in 20 years.

And even after that, it still doesn't really solve the intrinsic problem of encoding truth. An LLM just models its training data, so new findings will be buried by virtue of being underrepresented. If you brute force the data/training somehow, maybe you can get it to sound like it's incorporating new facts, but in actuality it'll be broken and inconsistent.

Filligree 3 hours ago|||
It’s an extremely difficult problem, and if you know how to do that you could be a billionaire.

It’s not impossible, obviously—humans do it—but it’s not yet certain that it’s possible with an LLM-sized architecture.

Wowfunhappy 1 hour ago||
> It’s not impossible, obviously—humans do it

It's still not at all obvious to me that LLMs work in the same way as the human brain, beyond a surface level. Obviously the "neurons" in neural nets resemble our brains in a sense, but is the resemblance metaphorical or literal?

Yiin 34 minutes ago||
https://www.youtube.com/watch?v=l-OLgbdZ3kk
theblazehen 4 hours ago|||
I enjoyed chatting to Opus 3 recently around recent world events, as well as more recent agentic development patterns etc
sosodev 5 hours ago|||
My understanding, from listening/reading what top researchers are saying, is that model architectures in the near future are going to attempt to scale the context window dramatically. There's a generalized belief that in-context learning is quite powerful and that scaling the window might yield massive benefits for continual learning.

It doesn't seem that hard because recent open weight models have shown that the memory cost of the context window can be dramatically reduced via hybrid attention architectures. Qwen3-next, Qwen3.5, and Nemotron 3 Nano are all great examples. Nemotron 3 Nano can be run with a million token context window on consumer hardware.
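To make the memory argument concrete, here is a rough, hedged back-of-the-envelope sketch of KV-cache size at a million-token context; every dimension and the global/local layer split below are assumptions chosen for illustration, not the actual configs of the models named above.

```python
# Back-of-the-envelope KV-cache sizing: full attention vs. a hybrid layout.
# All dimensions and the global/local split are made-up illustrative values,
# not the real Qwen3-next or Nemotron Nano configs.

def kv_cache_bytes(context_len, n_layers, n_kv_heads, head_dim, bytes_per_elem=2):
    # 2x for keys and values; one cached entry per layer per token
    return 2 * context_len * n_layers * n_kv_heads * head_dim * bytes_per_elem

CONTEXT = 1_000_000                      # one million tokens
LAYERS, KV_HEADS, HEAD_DIM = 48, 8, 128  # hypothetical model shape

full = kv_cache_bytes(CONTEXT, LAYERS, KV_HEADS, HEAD_DIM)

# Hybrid sketch: 1 in 6 layers keeps global attention, the rest use a
# 4k-token sliding window (a purely hypothetical split).
global_layers = LAYERS // 6
local_layers = LAYERS - global_layers
hybrid = (kv_cache_bytes(CONTEXT, global_layers, KV_HEADS, HEAD_DIM)
          + kv_cache_bytes(4096, local_layers, KV_HEADS, HEAD_DIM))

print(f"full attention : {full / 2**30:6.1f} GiB of KV cache")
print(f"hybrid layout  : {hybrid / 2**30:6.1f} GiB of KV cache")
```

Under these assumed numbers the hybrid layout cuts the cache from roughly 183 GiB to roughly 31 GiB, which is the kind of reduction that makes million-token windows plausible on modest hardware.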

mccoyb 4 hours ago||
I don't disagree with this, but I don't think the memory cost is the only issue right? I remember using Sonnet 4.5 (or 4, I can't remember the first of Anthropic's offerings with a million context) and how slow the model would get, how much it wanted to end the session early as tokens accrued (this latter point, of course, is just an artifact of bad training).

Less worried about memory, more worried about compute speed? Are they obviously related and is it straightforward to see?

sosodev 2 hours ago|||
The compute speed is definitely correlated with the memory consumption in LLM land. More efficient attention means both less memory and faster inference. Which makes sense to me because my understanding is that memory bandwidth is so often the primary bottleneck.

We're also seeing a recent rise in architectures boosting compute speed via multi-token prediction (MTP). That way a single inference batch can produce multiple tokens and multiply the token generation speed. Combine that with more lean ratios of active to inactive params in MOE and things end up being quite fast.

The rapid pace of architectural improvements in recent months seems to imply that there are lots of ways LLMs will continue to scale beyond just collecting and training on new data.
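A similarly rough sketch of why memory traffic, active parameter count, and MTP all show up in decode speed; the bandwidth, parameter counts, and acceptance rate below are hypothetical numbers for illustration, not benchmarks of any real model or GPU.

```python
# Crude roofline-style estimate: during single-stream decoding the accelerator
# mostly streams weights, so tokens/sec is roughly bandwidth / bytes per token.
# Every number here is a made-up assumption for illustration only.

BANDWIDTH = 2.0e12      # bytes/sec of memory bandwidth (hypothetical)
BYTES_PER_PARAM = 1     # assume 8-bit weights

def tokens_per_sec(active_params, accepted_per_step=1.0):
    bytes_per_step = active_params * BYTES_PER_PARAM
    return accepted_per_step * BANDWIDTH / bytes_per_step

dense_70b    = tokens_per_sec(70e9)                          # dense: all params active
sparse_moe   = tokens_per_sec(12e9)                          # MoE: few active params
moe_with_mtp = tokens_per_sec(12e9, accepted_per_step=2.5)   # plus multi-token prediction

print(f"dense 70B         : {dense_70b:6.1f} tok/s")
print(f"MoE, 12B active   : {sparse_moe:6.1f} tok/s")
print(f"MoE + MTP (~2.5x) : {moe_with_mtp:6.1f} tok/s")
```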

whimsicalism 58 minutes ago|||
The parent commentator is a bit confused - most of the innovation in these hybrid architectures comes from reducing the computation pressure not just the memory pressure.
lxgr 8 hours ago|||
Data sharing agreements permitting, today's inference runs can be tomorrow's training data. Presumably the models are good enough at labeling promising chains of thought already.

I could totally imagine "free" inference for researchers under the condition that the reasoning traces get to be used as future training data.

mccoyb 7 hours ago|||
Agreed, there's no doubt this will happen. It's likely already happening (it feels safe to assume that Anthropic is curating data from the data they record from Claude Code?)

As far as I understand RL scaling (we've already maxxed out RLVR), these machines only get better as long as they have expert reasoner traces available.

Having an expert work with an LLM and successfully solve a problem is high signal data, it may be the only path forward?

My prior is that these companies will take this data without asking you as much as they can.

lxgr 7 hours ago||
Exactly, or functionally equivalently, asking you in paragraph 37 of a 120-page PDF (bonus points: in an agreement update).

And importantly, this can be cross-lab/model too. I suspect there's a reason why e.g. Google has been offering me free Claude inference in Google Antigravity on a free plan...

nhecker 2 hours ago||||
The site arena.ai does exactly this already, as far as I can tell. (In addition to the whole ranking thing.)
the_af 6 hours ago|||
> Data sharing agreements permitting, today's inference runs can be tomorrow's training data. Presumably the models are good enough at labeling promising chains of thought already.

Wouldn't this lead to model collapse?

littlestymaar 6 hours ago||
Not necessarily, as exhibited by the massive success of artificial data.
the_af 2 hours ago||
Could you elaborate?
nhecker 2 hours ago||
EDIT: probably not relevant, after re-re-reading the comment in question.

Presumably littlestymaar is talking about all the LLM-generated output that's publicly available on the Internet (in various qualities but significant quantity) and there for the scraping.

Robdel12 32 minutes ago|||
That’s AGI, right? For the model to learn novel things itself and retain it?

I have no idea but I’m along for the ride!

visarga 6 hours ago|||
> In 2030, how is Anthropic going to keep Claude "up-to-date"

I think the majority of research, design and learning goes through LLMs and coding agents today; considering the large user base and usage, it must be trillions of tokens per day. You can take a long research session, or a series of them, and apply hindsight - which idea above can be validated below? This creates a dense learning signal based on validation in the real world, with a human in the loop and other tools: code & search.

andsoitis 6 hours ago|||
> Experts will naturally use these systems more productively, because they know how to coerce models into the correct conditional distributions which light up the right techniques.

Part of it comes down to “knowing” what questions to ask.

esafak 6 hours ago||
I see it like the relationship between a student and research advisor. The advisor will ideally know the terrain and suggest a fruitful line of attack (what to ask), and the student will follow through, learning along the way.
baq 6 hours ago|||
> In 2030, how is Anthropic going to keep Claude "up-to-date"

In 2030 Anthropic hopes Claude will keep Anthropic "up-to-date" on its progress on itself.

I'm only half joking here.

whimsicalism 59 minutes ago|||
> how these models are going to keep up with the expanding boundary of science

The same way humans do?

The phraseology in this comment: 'probability distributions', 'baked these patterns' IMO has all the trappings of the stochastic parrot-style HN-discourse that has been consistently wrong for almost a decade now.

The reference to how AI will keep up with AI-assisted human progress in science in 2030 is meant to reassure. It contains a number of premises that we have no business being confident in. We are potentially witnessing the obviation of human cognitive labor.

mccoyb 40 minutes ago||
Sorry, are you familiar with what a next token distribution is, mathematically speaking?

If you are not, let me introduce you to the term: a probability distribution.

Just because it has profound properties ... doesn't make it different.

> has all the trappings of the stochastic parrot-style HN-discourse that has been consistently wrong for almost a decade now

Perhaps respond to my actual comment compared to whatever meta-level grouping you wish to interpret it as part of?

> It contains a number of premises that we have no business being confident in. We are potentially witnessing the obviation of human cognitive labor.

What premises? Be clear.

mt_ 2 hours ago|||
I call them, entropy reducers.
DeathArrow 7 hours ago||
They can use LORA.
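For readers unfamiliar with the reference, LoRA freezes the pretrained weights and trains only a small low-rank correction on top. A minimal sketch of the idea (the shapes, rank, and scaling below are illustrative, not taken from any particular paper or model):

```python
import numpy as np

# Minimal LoRA-style forward pass: the big pretrained weight W stays frozen
# and only the low-rank update (B @ A) would be trained.

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 1024, 1024, 8, 16

W = rng.standard_normal((d_out, d_in)) * 0.02   # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01    # trainable, small
B = np.zeros((d_out, rank))                     # trainable; starts at zero so the
                                                # adapter is a no-op before training

def lora_forward(x):
    base = W @ x                                 # frozen path
    update = (alpha / rank) * (B @ (A @ x))      # low-rank learned correction
    return base + update

x = rng.standard_normal(d_in)
print(lora_forward(x).shape,
      "| trainable params:", A.size + B.size,
      "| frozen params:", W.size)
```

The appeal for "keeping models up to date" is visible in the last line: the adapter trains a few tens of thousands of parameters against a megaparameter frozen matrix, which is why it is far cheaper than full retraining.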
zoogeny 2 hours ago||
I recall an earlier exchange, posted to HN, between Wolfram and Knuth on the GPT-4 model [1].

Knuth was dismissive in that exchange, concluding "I myself shall certainly continue to leave such research to others, and to devote my time to developing concepts that are authentic and trustworthy. And I hope you do the same."

I've noticed with the latest models, especially Opus 4.6, some of the resistance to these LLMs is relenting. Kudos for people being willing to change their opinion and update when new evidence comes to light.

1. https://cs.stanford.edu/~knuth/chatGPT20.txt

3abiton 27 minutes ago|
> Kudos for people being willing to change their opinion and update when new evidence comes to light.
>
> 1. https://cs.stanford.edu/~knuth/chatGPT20.txt

I think that's what makes the Bayesian faction of statistics so appealing. Updating your prior beliefs based on new evidence is at the core of the scientific method. Take that, frequentists.

faxmeyourcode 5 hours ago||
> Filip also told me that he asked Claude to continue on the even case after the odd case had been resolved. “But there after a while it seemed to get stuck. In the end, it was not even able to write and run explore programs correctly anymore, very weird. So I stopped the search.”

Interesting snippet towards the end. I wonder if they were using claude.ai or claude code. Sounds like they ran out of context and entered the "dumb zone."

afspear 5 hours ago||
What would be super cool is if this dumb zone could be quantified and surfaced to the user. I've noticed that copilot now has a little circle graph that indicates context use percentage and it changes color based on percentage. I'll bet these are very naive metrics on used tokens vs context availability. I wonder if there could be meta data streamed or sent along with the tokens that could show that you've entered the dumb zone.
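The naive version of the gauge described here is just used tokens over window size, bucketed into colors; the token counts and thresholds below are arbitrary placeholder values, not anything Copilot actually does.

```python
# Naive context-usage gauge: fraction of the window consumed, bucketed by color.
# Thresholds and numbers are arbitrary placeholder values.

def context_gauge(used_tokens: int, window: int) -> tuple[float, str]:
    frac = used_tokens / window
    if frac < 0.5:
        return frac, "green"
    if frac < 0.8:
        return frac, "yellow"
    return frac, "red"      # the speculated "dumb zone"

print(context_gauge(140_000, 200_000))   # (0.7, 'yellow')
```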
joshrw 3 hours ago|||
Then it needs to do context compacting, otherwise the results become garbage
simianwords 4 hours ago|||
They mentioned plan document
brcmthrowaway 1 hour ago||
What is dumb zone?
konne88 5 hours ago||
I didn't expect such a misleading intro from Knuth. It reads like Claude solved Knuth's math problem. In reality, Claude generated various example solutions, and Knuth then manually generalized that to a formal proof. What Claude did is certainly useful, but it would have been nice to be clear about the scope of the contribution in the intro.
buffalobuffalo 39 minutes ago||
While not on the same level as these guys, I've done some similar stuff using Claude. This is a classic synergy example, where the output of human + LLM is far greater than just the human or just the LLM working on a problem. My experience has been that the LLM lacks fine grained judgement when it comes to allocating resources, or choosing a direction to work in. But once a direction is pointed out, it can do a deep exploration of that possibility space. Left alone, it would probably just go off on a tangent. But with someone holding the leash and pointing out areas to explore, it is a very useful partner.
aoeusnth1 2 hours ago|||
I don't think he's misleading, I think he is valuing Claude's contributions as essentially having cracked the problem open while the humans cleaned it up into something presentable.
bachmeier 3 hours ago|||
My interpretation is that Claude did what Knuth considers to be the "solution". Doing the remaining work and polishing up the proof are not necessary to have a solution from this perspective.
OneManyNone 2 hours ago||
Claude did not find a proof, though. It found an algorithm which Knuth then proved was correct.
rishabhaiover 3 hours ago||
That's true but the capability to go back to an older iteration, reflect and find the correct solution (for odd numbers) is, in my book, a sign of undeniable intelligence.
Pat44113 7 hours ago||
I asked Claude to solve the pentominoes puzzle made famous by Arthur C. Clarke. It struggled mightily until I told it how I'd solved the problem using 64 bit unsigned integers to represent the board and pieces. Then, it created a C# program that solved the problem very quickly. However, in the 20x3 case it found four solutions when there are only two. Turns out it had incorrectly mapped one of the pentominoes. Sort of a silly mistake; the sort a human might make.
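The 64-bit trick the comment describes can be sketched like this; the board size and the single example piece are illustrative, not the commenter's actual program, and rotation and bounds checking are left out.

```python
# Bitboard sketch: a rectangular board packed into one 64-bit integer, with
# placements tested via bitwise AND and applied via OR. Bounds checking and
# piece rotations are omitted; the shape below is just an example.

WIDTH, HEIGHT = 10, 6          # 60 cells fit in a 64-bit integer

def cell(x, y):
    return 1 << (y * WIDTH + x)

def piece_mask(offsets, x, y):
    # offsets: (dx, dy) cells the piece covers, relative to (x, y)
    return sum(cell(x + dx, y + dy) for dx, dy in offsets)

P_LIKE_PIECE = [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2)]   # example pentomino shape

board = 0
placement = piece_mask(P_LIKE_PIECE, 3, 1)

if board & placement == 0:     # no overlap with already-filled cells
    board |= placement         # drop the piece onto the board

print(f"{board:060b}")         # board shown as a 60-bit mask
```

The nice property is that overlap tests and placements are single AND/OR operations, which is why a solver built this way can enumerate placements very quickly; it also shows how one wrong offset in a piece table silently produces extra "solutions".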
phoronixrly 6 hours ago|
[flagged]
logicprog 6 hours ago||
Regurgitation is pretty rare, and very difficult to coax out, if not even impossible, for things that aren't massively overrepresented in the training set relative to the size of the training set. Even the famous regurgitation paper showed this: while they got most of the models to regurgitate the first book of the Harry Potter series, only Claude 3.7 Sonnet was able to regurgitate any significant portion of any of the other books that had a high nv-recall rate, and basically all of them dropped off precipitously for works like GoT, The Catcher in the Rye, Beloved, and remembered almost nothing about the Da Vinci Code or Catch-22[0]. So you really need huge amounts of examples to get any kind of meaningful regurgitation on any kind of reliable basis. Thus, you'd have to prove that hypothesis.

[0]: https://arxiv.org/pdf/2601.02671

nphardon 5 hours ago||
Must be a fun time to work on open problems. I published my graduate research close to a decade ago, often find myself fantasizing about tackling open problems with Claude.
ainiriand 8 hours ago||
Aren't LLMs supposed to just find the most probable word that follows next, like many people here have touted? How can this be explained under that pretense? Is this way of problem solving 'thinking'?
throw310822 6 hours ago||
> just find the most probable word that follows next

Well, if in all situations you can predict which word Einstein would probably say next, then I think you're in a good spot.

This "most probable" stuff is just absurd handwaving. Every prompt of even a few words is unique, there simply is no trivially "most probable" continuation. Probable given what? What these machines learn to do is predicting what intelligence would do, which is the same as being intelligent.

qsera 6 hours ago||
>Probable given what?

The training data..

>predicting what intelligence would do

No, it just predicts what the next word would be if an intelligent entity translated its thoughts to words, because it is trained on text written by intelligent entities.

If it was trained on text written by someone who loves to rhyme, you would be getting all rhyming responses.

It imitates the behavior -- in text -- of whatever entity generated the training data. Here the training data was made by intelligent humans, so we get an imitation of the same.

It is a clever party trick that works often enough.

throw310822 6 hours ago|||
> The training data

If the prompt is unique, it is not in the training data. True for basically every prompt. So how is this probability calculated?

cbovis 5 hours ago|||
The prompt is unique but the tokens aren't.

Type "owejdpowejdojweodmwepiodnoiwendoinw welidn owindoiwendo nwoeidnweoind oiwnedoin" into ChatGPT and the response is "The text you sent appears to be random or corrupted and doesn’t form a clear question." because the prompt doesnt correlate to training data.

hmmmmmmmmmmmmmm 5 hours ago||
...? what is the response supposed to be here?
qsera 5 hours ago||||
Just using a scaled up and cleverly tweaked version of linear regression analysis...
red75prime 47 minutes ago||
That is, the probability distribution that the network should learn is defined by which probability distribution the network has learned. Brilliant!
hmmmmmmmmmmmmmm 5 hours ago|||
Hamiltonian paths and previous work by Donald Knuth is more than likely in the training data.
red75prime 27 minutes ago||
The specific sequence of tokens that comprises Knuth's problem together with an answer to it is not in the training data. A naive probability distribution based on counting token sequences that are present in the training data would assign 0 probability to it. The trained network represents an extremely non-naive approach to estimating the ground-truth distribution (the distribution that corresponds to what a human brain might have produced).
empath75 4 hours ago|||
It is impossible to accurately imitate the action of intelligent beings without being intelligent. To believe otherwise is to believe that intelligence is a vacuous property.
slopinthebag 34 minutes ago|||
An unintelligent device can accurately imitate the action of intelligent beings within a given scope, in the same way an actor can accurately imitate the action of a fictional character in a given scope (the stage or camera) without actually being that character.

If the idea is that something cannot accurately replicate the entirety of intelligence without being intelligent itself, then perhaps. But that isn't really what people talk about with LLMs given their obvious limitations.

qsera 3 hours ago|||
>It is impossible to accurately imitate the action of intelligent beings without being intelligent.

Wait what? So a robot who is accurately copying the actions of an intelligent human, is intelligent?

UltraSane 43 minutes ago|||
How can you distinguish intelligence from a sufficiently accurate imitation of intelligence?
slopinthebag 31 minutes ago||
By "sufficiently accurate" do you mean identical? Because if so, it's not an imitation of intelligence at all, and the question is thus nonsensical.
empath75 2 hours ago|||
That was probably phrased poorly. If a robot can independently accurately do what an intelligent person would do when placed in a novel situation, then yes, I would say it is intelligent.

If it's just basically being a puppet, then no. You tell me what claude code is more like, a puppet, or a person?

dilap 7 hours ago|||
That description is really only fair for base models†. Something like Opus 4.6 has all kinds of other training on top of that which teach it behaviors beyond "predict most probable token," like problem-solving and being a good chatbot.

(†And even then is kind of overly-dismissive and underspecified. The "most probable word" is defined over some training data set. So imagine if you train on e.g. mathematicians solving problems... To do a good job at predicting [w/o overfitting] your model will have to in fact get good at thinking like a mathematician. In general "to be able to predict what is likely to happen next" is probably one pretty good definition of intelligence.)

gpm 7 hours ago|||
I'd disagree, the other training on top doesn't alter the fundamental nature of the model that it's predicting the probabilities of the next token (and then there's a sampling step which can roughly be described as picking the most probable one).

It just changes the probability distribution that it is approximating.

To the extent that thinking is making a series of deductions from prior facts, it seems to me that thinking can be reduced to "pick the next most probable token from the correct probability distribution"...
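The sampling step being discussed looks roughly like this in isolation; the tiny vocabulary and logits are made up, where a real model would emit logits over roughly a hundred thousand tokens.

```python
import numpy as np

# Turn raw next-token logits into a distribution and pick a token, either
# greedily or by temperature sampling. Vocabulary and logits are made up.

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
logits = np.array([2.0, 1.1, 0.3, -0.5, -1.2])

def sample_next(logits, temperature=1.0):
    z = logits / temperature
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

greedy = int(np.argmax(logits))                      # the temperature -> 0 limit
idx, probs = sample_next(logits, temperature=0.8)
print("greedy:", vocab[greedy], "| sampled:", vocab[idx], "| probs:", probs.round(3))
```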

dilap 5 hours ago|||
The fundamental nature of the model is that it consumes tokens as input and produces token probabilities as output, but there's nothing inherently "predictive" about it -- that's just perspective hangover from the historical development of how LLMs were trained. It is, fundamentally, I think, a general-purpose thinking machine, operating over the inputs and outputs of tokens.

(With this perspective, I can feel my own brain subtly offering up a panoply of possible responses in a similar way. I can even turn up the temperature on my own brain, making it more likely to decide to say the less-obvious words in response, by having a drink or two.)

(Similarly, mimicry is in humans too a very good learning technique to get started -- kids learning to speak are little parrots, artists just starting out will often copy existing works, etc. Before going on to develop further into their own style.)

vidarh 6 hours ago|||
Put a loop around an LLM and it can trivially be made Turing complete, so it boils down to whether thinking requires exceeding the Turing computable, and we have no evidence to suggest that is even possible.
gpm 6 hours ago||
What are you doing in your loop?

As typically deployed [1], LLMs are not Turing complete. They're closer to linear bounded automata, but because transformers have a strict maximum input size they're actually a subset of the weaker class of deterministic finite automata. These aren't like Python programs or something that can work on as much memory as you supply them; their architecture works on a fixed maximum amount of memory.

I'm not particularly convinced turing complete is the relevant property though. I'm rather convinced that I'm not turing complete either... my head is only so big after all.

[1] i.e. in a loop that appends output tokens to the input and has some form of sliding context window (perhaps with some inserted instructions to "compact" and then sliding the context window right to after those instructions once the LLM emits some special "done compacting" tokens).

[2] Common sampling procedures make them mildly non-deterministic, but I don't believe they do so in a way that changes the theoretical class of these machines from DFAs.

vidarh 5 hours ago|||
Context effectively provides an IO port, and so all the loop needs to do is to simulate the tape head and provide a single token of state.

You can remain unconvinced that Turing completeness is relevant all you want - we don't know of any more expansive category of computable functions, and given that an LLM in the setup described is Turing complete, the fact that they aren't typically deployed that way is irrelevant.

They trivially can be, and that is enough to make the shallow dismissal of pointing out they're "just" predicting the next token meaningless.

roywiggins 5 hours ago||||
Turing Machines don't need access to the entire tape all at once, it's sufficient for it to see one cell at a time. You could certainly equip an LLM with a "read cell", "write cell", and "move left/right" tool and now you have a Turing machine. It doesn't need to keep any of its previous writes or reads in context. A sliding context window is more than capacious enough for this.
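The loop being described, with a plain Python function standing in for the LLM-as-transition-function; the machine below is a trivial unary incrementer chosen only to show the read/write/move structure, not anything an LLM would actually be wired up as.

```python
from collections import defaultdict

# Turing-machine-style loop: the "model" (here a stand-in function) only ever
# sees the current cell plus a small state, and acts through read/write/move.

def transition(state, symbol):
    # Stand-in for the LLM call. This toy rule walks right over 1s,
    # appends one more 1, then halts.
    if state == "scan" and symbol == 1:
        return "scan", 1, +1        # (new state, symbol to write, head move)
    return "halt", 1, 0

tape = defaultdict(int, {0: 1, 1: 1, 2: 1})   # unbounded tape holding three 1s
head, state = 0, "scan"

while state != "halt":
    state, write, move = transition(state, tape[head])
    tape[head] = write
    head += move

print(sorted(i for i, v in tape.items() if v))   # [0, 1, 2, 3]: now four 1s
```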
gpm 3 hours ago||
You're right of course, but at the point where you're saying "well we can make a turing machine with the LLM as the transition function by defining some tool calls for the LLM to interact with the tape" it feels like a stretch to call the LLM itself turing complete.

Also people definitely talk about them as "thinking" in contexts where they haven't put a harness capable of this around them. And in the common contexts where people do put harness theoretically capable of this around the LLM (e.g. giving the LLM access to bash), the LLM basically never uses that theoretical capability as the extra memory it would need to actually emulate a turing machine.

And meanwhile I can use external memory myself in a similar way (e.g. writing things down), but I think I'm perfectly capable of thinking without doing so.

So I persist in my stance that turing complete is not the relevant property, and isn't really there.

roywiggins 1 hour ago||
Yeah, humans and LLMs and a TM transition function are all Turing complete in the same way, but it's also basically a useless fact. You could possibly train a sufficiently motivated rat to compute a TM transition function.
empath75 4 hours ago|||
No physically realizable machine is technically turing complete.

But it is trivially possible to give systems-including-LLMs external storage that is accessible on demand.

ericd 7 hours ago|||
I think it's pretty likely that "intelligence" is emergent behavior that comes when you predict what comes next in physical reality well enough, at varying timescales. Your brain has to build all sorts of world model abstractions to do that over any significant timescale. Big LLMs have to build internal world models, too, to do well at their task.
tux3 7 hours ago|||
>Are not LLMs supposed to just find the most probable word that follows next like many people here have touted?

The base models are trained to do this. If a web page contains a problem, and then the word "Answer: ", it is statistically very likely that what follows on that web page is an answer. If the base model wants to be good at predicting text, at some point learning the answers to common questions becomes a good strategy, so that it can complete text that contains these.

NN training tries to push models to generalize instead of memorizing the training set, so this creates an incentive for the model to learn a computation pattern that can answer many questions, instead of just memorizing. Whether they actually generalize in practice... it depends. Sometimes you still get copy-pasted input that was clearly pulled verbatim from the training set.

But that's only base models. The actual production LLMs you chat with don't predict the most probable word according to the raw statistical distribution. They output the words that RLHF has rewarded them to output, which includes acting as an assistant that answers questions instead of just predicting text. RLHF is also the reason there are so many AI SIGNS [1] like "you're absolutely right" and way more use of the word "delve" than is common in western English.

[1]: https://en.wikipedia.org/wiki/WP:AISIGNS

sega_sai 6 hours ago|||
In some sense that is still correct, i.e. the words are taken from some probability distribution conditional on previous words, but the key point is that probability distribution is not just some sort of average across the internet set of word probabilities. In the end this probability distribution is really the whole point of intelligence. And I think the LLMs are learning those.
adamtaylor_13 6 hours ago|||
That's the way many people reduce it, and mathematically, I think that's true. I think what we fail to realize is just how far that will actually take you.

"just the most probable word" is a pretty powerful mechanism when you have all of human knowledge at your fingertips.

I say that people "reduce it" that way because it neatly packs in the assumption that general intelligence is something other than next token prediction. I'm not saying we've arrived at AGI, in fact, I do not believe we have. But, it feels like people who use that framing are snarkily writing off something that they themselves to do not fully comprehend behind the guise of being "technically correct."

I'm not saying all people do this. But I've noticed many do.

IgorPartola 8 hours ago|||
In some cases solving a problem is about restating the problem in a way that opens up a new path forward. “Why do planets move around the sun?” vs “What kind of force exists in the world that makes planets tethered to the sun with no visible leash?” (Obviously very simplified but I hope you can see what I am saying.) Given that a human is there to ask the right questions it isn’t just an LLM.

Further, some solutions are like running a maze. If you know all the wrong turns/next words to say and can just brute force the right ones you might find a solution like a mouse running through the maze not seeing the whole picture.

Whether this is thinking is more philosophical. To me this demonstrates more that we are closer to bio computers than an LLM is to having some sort of divine soul.

ainiriand 7 hours ago||
Thanks for your input. The way I saw this, and how it looks like Knuth interpreted it, is that there were some reasoning steps taken by Claude independently - some internal decisions in the model that made it try different things, finally succeeding.
vjerancrnjak 5 hours ago|||
No. There is good signal in IMO gold medal performance.

These models actually learn distributed representations of nontrivial search algorithms.

A whole field of theorem proving, after decades of refinements, couldn't even win a medal, yet 8B param models are doing it very well.

The attention mechanism, a brute-force quadratic approach, combined with gradient descent, is actually discovering very efficient distributed representations of algorithms. I don't think they can even be extracted and made into an imperative program.

qsera 7 hours ago|||
Yes, that is exactly what they do.

But that does not mean that the results cannot be dramatic. Just like stacking pixels can result in a beautiful image.

lijok 49 minutes ago|||
To get an answer to that you would first have to define 'thinking'
kaiokendev 3 hours ago|||
Given some intelligent system, an AI that perfectly reproduces any sequence that system could produce must encode the patterns that superset that intelligence.
crocowhile 7 hours ago|||
Those people still exist? I only know one guy who is still fighting those windmills
qsera 7 hours ago|||
Yes, I am one.
ezst 5 hours ago|||
[flagged]
wrsh07 7 hours ago|||
Imagine training a chess bot to predict a valid sequence of moves or valid game using the standard algebraic notation for chess

Great! It will now correctly structure chess games, but we've created no incentive for it to create a game where white wins or to make the next move be "good"

Ok, so now you change the objective. Now let's say "we don't just want valid games, we want you to predict the next move that will help that color win"

And we train towards that objective and it starts picking better moves (note: the moves are still valid)

You might imagine more sophisticated ways to optimize picking good moves. You continue adjusting the objective function, you might train a pool of models all based off of the initial model and each of them gets a slightly different curriculum and then you have a tournament and pick the winningest model. Great!

Now you might have a skilled chess-playing-model.

It is no longer correct to say it just finds a valid chess program, because the objective function changed several times throughout this process.

This is exactly how you should think about LLMs except the ways the objective function has changed are significantly significantly more complicated than for our chess bot.

So to answer your first question: no, that is not what they do. That is a deep oversimplification that was accurate for the first two generations of the models and sort of accurate for the "pretraining" step of modern LLMs (except not even that accurate, because pretraining does instill other objectives. Almost like swapping our first step "predict valid chess moves" with "predict Stockfish outputs").
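A toy version of the staged-objective idea; the four-move "game", the rewards, and the REINFORCE-style update are all stand-ins chosen for illustration, nothing like how real chess engines or LLM post-training are actually set up.

```python
import numpy as np

# Same tiny policy, trained twice: first rewarded only for picking *legal*
# moves, then with a reward that also cares about move *quality*. Each stage
# trains from scratch here for simplicity.

rng = np.random.default_rng(0)
moves   = ["e4", "Nf3", "Ke2??", "xyz"]          # last entry is not even legal
legal   = np.array([1.0, 1.0, 1.0, 0.0])
quality = np.array([1.0, 0.8, 0.1, 0.0])         # pretend engine evaluation

def train(reward, steps=5000, lr=0.1):
    logits = np.zeros(len(moves))
    for _ in range(steps):
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        a = rng.choice(len(moves), p=probs)
        baseline = probs @ reward                 # mean reward under current policy
        grad = -probs                             # d(log prob of a)/d(logits) ...
        grad[a] += 1.0                            # ... is one-hot(a) minus probs
        logits += lr * (reward[a] - baseline) * grad
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

print("stage 1, legality only    :", dict(zip(moves, train(legal).round(2))))
print("stage 2, legality*quality :", dict(zip(moves, train(legal * quality).round(2))))
```

Stage 1 spreads probability over the legal moves and suppresses the illegal one; stage 2 concentrates on the highest-quality move, which is the "same mechanism, different objective" point being made above.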

noslenwerdna 5 hours ago|||
I find this kind of reduction silly.

All your brain is doing is bouncing atoms off each other, with some occasionally sticking together, how can it be really thinking?

See how silly it sounds?

esafak 7 hours ago|||
Are you feigning ignorance? The best way to answer a question, like completing a sentence, is through reasoning; an emergent behavior in complex models.
adampunk 6 hours ago||
Thinking is a big word that sweeps up a lot of different human behavior, so I don't know if it's right to jump to that; HOWEVER, explanations of LLMs that depend heavily on next-token prediction are defunct. They stopped being fundamentally accurate with the rise of massive reinforcement learning and w/ 'reasoning' models the analogy falls apart when you try to do work with it.

Be on the lookout for folks who tell you these machines are limited because they are "just predicting the next word." They may not know what they're talking about.

iandanforth 6 hours ago||
TLDR (story, not math) - Knuth poses a problem, his friend uses Claude to conduct 30 some explorations, with careful human guidance, and Claude eventually writes a Python program that can find a solution for all odd values. Knuth then writes a proof of the approach and is very pleased by Claude's contribution. Even values remain an open question (Claude couldn't make much progress on them)
semessier 38 minutes ago||
Looks like he is trying to make the point that the actual (formal) proof for 2Z + 1 (odd numbers) is still human - by himself, that is. Not sure who came up with the core modular arithmetic idea with s = 0 and k increasing by 2 mod m.
logicprog 6 hours ago||
> with careful human guidance,

I think this is pretty clearly an overstatement of what was done. As Knuth says,

"Filip told me that the explorations reported above, though ultimately successful, weren’t really smooth. He had to do some restarts when Claude stopped on random errors; then some of the previous search results were lost. After every two or three test programs were run, he had to remind Claude again and again that it was supposed to document its progress carefully. "

That doesn't look like careful human guidance, especially not the kind that would actually guide the AI toward the solution at all, let alone implicitly give it the solution — that looks like a manager occasionally checking in to prod it to keep working.

beej71 6 hours ago||
From my naive standpoint, LLMs like this seem to have some big strengths. One: possession of a superhuman expanse of knowledge. Two: making connections. Three: tireless trial and error.

If you put those three things together, you end up with some cool stuff from time to time. Perhaps the proof of P!=NP is tied to an obscure connection that humans don't easily see due to individual lack of knowledge or predisposition of bias.

Barbing 2 minutes ago||
Well put.

>If you put [possession of a superhuman expanse of knowledge, making connections, tireless trial and error] together, you end up with some cool stuff from time to time.

Hard to argue.

cbovis 5 hours ago|||
Unless my understanding is incorrect about how these tools work, that last point isn't really a quality of LLMs as such? It gets attributed because the lines are blurred, but the tireless trial and error is actually just a quality of a regular programmatic loop (agent/orchestrator) that happens to be doing the trickiest part of its work via an LLM.
naughtyrabisu 4 hours ago|||
> Three: tireless trial and error.

Cannot agree more. I figure this is probably the biggest advantage of LLMs, considering that for the other variables humans hold the same level of competency.
xvector 6 hours ago|||
This is why the whole "LLMs for mass surveillance" thing is scary imo.
beej71 5 hours ago||
Yeah, this is a dictator's dream scenario and hell for the citizens. Not only do you not want to get caught for saying something that The Great Leader disapproves of, but you're terrified that anything you say might get flagged by an AI.
IAmGraydon 49 minutes ago||
>One: possession of a superhuman expanse of knowledge. Two: making connections. Three: tireless trial and error.

One and three I believe are correct. The second point, making connections, is something LLMs seem to be incapable of truly doing unless the connection is already known and in its training data.

fazkan 6 hours ago|
Time to use Claude Code to understand DEK's paper in plain English. As someone who did a bit of formal verification in grad school, I feel like there is a long tail of problems that can be solved by human-model collaboration like this one. The problems may not mean much, but hopefully they can stack up understanding of intelligence.