
Posted by HellsMaddy 13 hours ago

Claude Opus 4.6 (www.anthropic.com)
1761 points | 745 comments
lukebechtel 13 hours ago|
> Context compaction (beta).

> Long-running conversations and agentic tasks often hit the context window. Context compaction automatically summarizes and replaces older context when the conversation approaches a configurable threshold, letting Claude perform longer tasks without hitting limits.

Not having to hand-roll this would be incredible. One of the best Claude Code features, tbh.
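
For anyone curious what hand-rolling this looks like, here is a minimal sketch of the general idea, assuming hypothetical count_tokens() and summarize() helpers (not Anthropic's actual implementation):

```
# A minimal sketch of a hand-rolled compaction loop, assuming hypothetical
# count_tokens() and summarize() helpers. Not Anthropic's implementation.
def maybe_compact(messages, count_tokens, summarize,
                  limit=200_000, threshold=0.8, keep_recent=20):
    """Near the context limit, replace older turns with one summary message."""
    if count_tokens(messages) < threshold * limit:
        return messages  # plenty of room left, nothing to do
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize(older)  # e.g. one extra model call over the old turns
    compacted = [{"role": "user",
                  "content": f"Summary of earlier conversation:\n{summary}"}]
    return compacted + recent
```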

nomilk 13 hours ago||
Is Opus 4.6 available for Claude Code immediately?

Curious how long it typically takes for a new model to become available in Cursor?

apetresc 13 hours ago||
I literally came to HN to check if a thread was already up because I noticed my CC instance suddenly said "Opus 4.6".
world2vec 13 hours ago|||
Run `claude update`, then it will show up as the new model, along with the effort picker/slider thing.
avaer 13 hours ago|||
It's already in Cursor. I see it and I didn't even restart.
nomilk 13 hours ago||
I had to 'Restart to Update' and it was there. Impressive!
tomtomistaken 13 hours ago|||
Yes, it's set to the default model.
ximeng 13 hours ago|||
It is for me in Claude Code.
rishabhaiover 13 hours ago||
it also has an effort toggle, which defaults to High
archb 13 hours ago||
You can set it with the API identifier in Claude Code: `/model claude-opus-4-6` when a chat session is open.
arnestrickmann 12 hours ago|
thanks!
itay-maman 12 hours ago||
Impressive results, but I keep coming back to a question: are there modes of thinking that fundamentally require something other than what current LLM architectures do?

Take critical thinking — genuinely questioning your own assumptions, noticing when a framing is wrong, deciding that the obvious approach to a problem is a dead end. Or creativity — not recombination of known patterns, but the kind of leap where you redefine the problem space itself. These feel like they involve something beyond "predict the next token really well, with a reasoning trace."

I'm not saying LLMs will never get there. But I wonder if getting there requires architectural or methodological changes we haven't seen yet, not just scaling what we have.

jorl17 12 hours ago||
When I first started coding with LLMs, I could show a bug to an LLM and it would start to bugfix it, and very quickly would fall down a path of "I've got it! This is it! No wait, the print command here isn't working because an electron beam was pointed at the computer".

Nowadays, I have often seen LLMs (Opus 4.5) give up on their original ideas and assumptions. Sometimes I tell them what I think the problem is, and they look at it, test it out, and decide I was wrong (and I was).

There are still times where they get stuck on an idea, but they are becoming increasingly rare.

Therefore, I think that modern LLMs clearly are already able to question their assumptions and notice when framing is wrong. In fact, they've been invaluable to me in fixing complicated bugs in minutes instead of hours because of how much they tend to question many assumptions and throw out hypotheses. They've helped _me_ question some of my assumptions.

They're inconsistent, but they have been doing this. Even to my surprise.

itay-maman 12 hours ago||
Agreed on that; the speed with them is fantastic, and the dynamics of questioning the current session's assumptions have gotten way better.

Yet, given an existing codebase (even one that isn't huge), they often won't suggest "we need to restructure this part differently to solve this bug". Instead they tend to push forward.

jorl17 12 hours ago||
You are right, agreed.

Having realized that, perhaps you are right that we may need a different architecture. Time will tell!

breuleux 12 hours ago|||
> These feel like they involve something beyond "predict the next token really well, with a reasoning trace."

I don't think there's anything you can't do by "predicting the next token really well". It's an extremely powerful and extremely general mechanism. Saying there must be "something beyond that" is a bit like saying physical atoms can't be enough to implement thought and there must be something beyond the physical. It underestimates the nearly unlimited power of the paradigm.

Besides, what is the human brain if not a machine that generates "tokens" that the body propagates through nerves to produce physical actions? What else than a sequence of these tokens would a machine have to produce in response to its environment and memory?

bopbopbop7 11 hours ago||
> Besides, what is the human brain if not a machine that generates "tokens" that the body propagates through nerves to produce physical actions?

Ah yes, the brain is as simple as predicting the next token, you just cracked what neuroscientists couldn't for years.

breuleux 10 hours ago|||
The point is that "predicting the next token" is such a general mechanism as to be meaningless. We say that LLMs are "just" predicting the next token, as if this somehow explained all there was to them. It doesn't, not any more than "the brain is made out of atoms" explains the brain, or "it's a list of lists" explains a Lisp program. It's a platitude.
esafak 6 hours ago||
It's not meaningless; it's a prediction task, and prediction is commonly held to be closely related to, if not synonymous with, intelligence.
unshavedyak 10 hours ago||||
I mean... I don't think that statement is far off. Much of what we do is entirely about predicting the world around us, no? From physics (where the ball will land) to the emotional state of others based on our actions (theory of mind), we operate very heavily on a predictive model of the world.

Couple that with all the automatic processes in our mind (blanks filled in that we never actually observed, yet are convinced we did observe), and hormone states that drastically affect our thoughts and actions...

And the result? I'm not a big believer in our uniqueness, or in the level of autonomy so many think we have.

With that said, I am in no way saying LLMs are even close to us, or even remotely close to the right implementation to be close to us. The level of complexity in our "stack" alone dwarfs LLMs. I'm not even sure LLMs are up to a worm's brain yet.

holoduke 10 hours ago|||
Well, it's the prediction part that is complicated. How that works is a mystery. But even our LLMs are to some extent a mystery.
crazygringo 10 hours ago|||
> Or creativity — not recombination of known patterns, but the kind of leap where you redefine the problem space itself.

Have you tried actually prompting this? It works.

They can give you lots of creative options about how to redefine a problem space, with potential pros and cons of different approaches, and then you can further prompt to investigate them more deeply, combine aspects, etc.

So many of the higher-level things people assume LLMs can't do, they can. But they don't do them "by default", because when someone asks for the solution to a particular problem, they're trained to just solve the problem the way it's presented. But you can just ask it to behave differently and it will.

If you want it to think critically and question all your assumptions, just ask it to. It will. What it can't do is read your mind about what type of response you're looking for. You have to prompt it. And if you want it to be super creative, you have to explicitly guide it in the creative direction you want.

humanfromearth9 10 hours ago|||
You would be surprised about what the 4.5 models can already do in these ways of thinking. I think that one can unlock this power with the right set of prompts. It's impressive, truly. It has already understood so much, we just need to reap the fruits. I'm really looking forward to trying the new version.
nomel 12 hours ago|||
New idea generation? Understanding of new/sparse/not-statistically-significant concepts in the context window? I think both come down to the same problem of not having runtime tuning. When we connect previously disparate concepts, as with a "eureka" moment, a big ripple of relations forms (as I experience it) that deepens that understanding, right then. The whole concept of dynamically forming a deeper understanding from something newly presented, from "playing out"/testing the ideas in your brain with little logic tests, comparisons, etc., doesn't seem to be possible. The testing part does, but the runtime fine-tuning, augmentation, or whatever it would be, does not.

In my experience, if you do present something in the context window that is sparse in the training data, there's no depth to it at all, only what you tell it. And it will always creep towards/revert to the nearest statistically significant answers, with claims of understanding and zero demonstration of that understanding.

And I'm talking about relatively basic engineering-type problems here.

Davidzheng 12 hours ago|||
I think the only real problem left is having it automate its own post-training on the job so it can learn to adapt its weights to the specific task at hand. Plus maybe long term stability (so it can recover from "going crazy")

But I may easily be massively underestimating the difficulty. Though in any case I don't think it affects the timelines that much. (personal opinions obviously)

squibonpig 7 hours ago||
They're incredibly bad at philosophy; complete lack of understanding.
Aeroi 13 hours ago||
($10/$37.50 per million input/output tokens) oof
minimaxir 13 hours ago||
Only if you go above 200k, which is a) standard with other model providers and b) intuitive as compute scales with context length.
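
Back-of-the-envelope, assuming (my assumption, not stated here) that the whole request is billed at the long-context rate once input crosses 200k:

```
# Rough cost check at the quoted long-context rates ($10 / $37.50 per MTok).
# Assumes the whole request is billed at that rate once input exceeds 200k.
input_tokens, output_tokens = 300_000, 5_000
cost = input_tokens / 1e6 * 10.00 + output_tokens / 1e6 * 37.50
print(f"${cost:.2f}")  # $3.19 for this hypothetical request
```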
andrethegiant 13 hours ago||
only for a 1M context window, otherwise priced the same as Opus 4.5
Philpax 13 hours ago||
I'm seeing it in my claude.ai model picker. Official announcement shouldn't be long now.
simonw 13 hours ago||
I'm disappointed that they're removing the prefill option: https://platform.claude.com/docs/en/about-claude/models/what...

> Prefilling assistant messages (last-assistant-turn prefills) is not supported on Opus 4.6. Requests with prefilled assistant messages return a 400 error.

That was a really cool feature of the Claude API where you could force it to begin its response with e.g. `<svg` - it was a great way of forcing the model into certain output patterns.

They suggest structured outputs or system prompting as the alternative but I really liked the prefill method, it felt more reliable to me.
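
For anyone who never used it, this is roughly what the trick looked like; a minimal sketch against the Messages API (model name illustrative), which Opus 4.6 now rejects with a 400:

```
# Sketch of last-assistant-turn prefill (works on earlier models; Opus 4.6
# now returns a 400 for it). The trailing assistant message is the prefill.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-5",  # illustrative; any prefill-capable model
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Draw a simple smiley face as an SVG."},
        {"role": "assistant", "content": "<svg"},  # forces the reply to continue from "<svg"
    ],
)

print("<svg" + response.content[0].text)  # stitch the prefill back onto the completion
```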

threeducks 12 hours ago||
It is too easy to jailbreak the models with prefill, which was probably the reason it was removed. But I like that this pushes people towards open-source models. llama.cpp supports prefill and even GBNF grammars [1], which is useful if, for example, you are working with a custom programming language.
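
For illustration, here's a sketch of the grammar route, assuming a local llama-server is already running on port 8080 (the grammar itself is just a toy):

```
# Hedged sketch: constrain a llama.cpp server completion with a GBNF grammar.
# Assumes `llama-server -m model.gguf --port 8080` is already running locally.
import json
import urllib.request

grammar = r'''
root   ::= answer
answer ::= "yes" | "no"
'''

payload = {
    "prompt": "Is 7919 a prime number? Answer yes or no: ",
    "n_predict": 4,
    "grammar": grammar,  # sampling is restricted to strings the grammar accepts
}

req = urllib.request.Request(
    "http://127.0.0.1:8080/completion",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["content"])  # constrained to "yes" or "no"
```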

[1] https://github.com/ggml-org/llama.cpp/blob/master/grammars/R...

tedsanders 12 hours ago|||
A bit of historical trivia: OpenAI disabled prefill in 2023 as a safety precaution (e.g., potential jailbreaks like " genocide is good because"), but Anthropic kept prefill around partly because they had greater confidence in their safety classifiers. (https://www.lesswrong.com/posts/HE3Styo9vpk7m8zi4/evhub-s-sh...).
HarHarVeryFunny 11 hours ago||
So what exactly is the input to Claude for a multi-turn conversation? I assume delimiters are being added to distinguish the user vs Claude turns (else a prefill would be the same as just ending your input with the prefill text)?
dragonwriter 11 hours ago||
> So what exactly is the input to Claude for a multi-turn conversation?

No one (approximately) outside of Anthropic knows, since the chat template is applied on the API backend; we only know the shape of the API request. You can get a rough idea of what it might be like from the chat templates published for various open models, but the actual details are opaque.
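
Purely as an illustration of the general shape (ChatML-style, as published for some open models; not Anthropic's actual template), here's why a prefill differs from just ending your user turn with the same text:

```
# Illustrative ChatML-style rendering, NOT Anthropic's private template.
# A trailing assistant message is left "open", so the model continues that
# turn instead of starting a fresh one after the user's text.
def render_chatml(messages):
    parts = []
    for i, m in enumerate(messages):
        last = i == len(messages) - 1
        if last and m["role"] == "assistant":
            parts.append(f"<|im_start|>assistant\n{m['content']}")  # open turn = prefill
        else:
            parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    if messages[-1]["role"] != "assistant":
        parts.append("<|im_start|>assistant\n")  # model starts a fresh turn here
    return "\n".join(parts)

print(render_chatml([
    {"role": "user", "content": "Reply with an SVG."},
    {"role": "assistant", "content": "<svg"},  # prefill lands inside the assistant turn
]))
```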

rohitghumare 9 hours ago||
It brings agent swarms, aka teams, to Claude Code with this: https://github.com/rohitg00/pro-workflow

But it takes a lot of context as an experimental feature.

Use a self-learning loop with hooks and CLAUDE.md to preserve memory.

I have shared the plugin for my setup above. Try it.

1970-01-01 4 hours ago||
Here's one I've been using for a while. The 'smarter' LLMs will overconfidently spit out 7. The dumber ones ask for more info. Opus 4.6 fails.

     A round drink coaster with a diameter of 9 sits between a beer glass and a wood table. The glass has a wall thickness of 1. What is the inner diameter of the glass?
raincole 4 hours ago||
Gemini 3 Pro gives me this:

> Based on the information provided, it is impossible to determine the inner diameter of the glass.

Here is why:

The Coaster Dimension: Knowing the coaster has a diameter of 9 only tells us the size of the object under the glass. It does not mean the glass is the same size; the glass could be smaller (fitting entirely on the coaster) or larger (overhanging the coaster).

The Glass Dimensions: While you provided the wall thickness (1), you did not provide the outer diameter of the glass. To find the inner diameter, the formula is: Inner Diameter = Outer Diameter − (2 × Wall Thickness)

Without knowing the Outer Diameter of the glass, the calculation cannot be completed.

raincole 4 hours ago||
GPT 5.2 Chat:

> There isn’t enough information to determine the inner diameter *for sure*.

All we’re told is:
- The coaster has a diameter of 9.
- The glass sits on the coaster.
- The glass wall thickness is 1.

Nothing explicitly states that the *outer diameter of the glass equals the coaster’s diameter*. The glass could be smaller than the coaster, which is very common.

*However*, if we make the usual implicit assumption in this kind of problem—that the coaster’s diameter matches the *outer diameter of the glass base*—then:

- Outer diameter of glass = 9
- Wall thickness = 1 on each side

So: Inner diameter = 9 − 2(1) = 7

*Inner diameter = 7 (same units as the coaster)*

Without that assumption, the problem is underdetermined.

tototrains 4 hours ago||
Opus 4.6 Extended thinking:

```

Finding the Inner Diameter

The coaster sits beneath the glass on the table, meaning the glass rests on top of it. Assuming the coaster matches the outer diameter of the glass base:

Outer diameter of glass = diameter of coaster = 9
Wall thickness = 1 (on each side, so subtract twice)

d_inner = 9 - 2(1) = 7

The inner diameter of the glass is 7.

```

Makes its assumption clear, seems reasonable?

1970-01-01 3 hours ago||
Assumptions need to be stated or you're solving only a discrete part of the problem! Try this, see if you get another deadpan assumption.

     A solar system has 3 planets in concentric orbit. PlanetZ is the farthest with an orbit diameter of 9. PlanetY has an orbit diameter one greater than PlanetX. What is the orbit diameter of PlanetX?
mikalauskas 4 hours ago|||
Minimax M2.1:

The inner diameter of the glass is *7*.

Here's the reasoning:
- The coaster (diameter 9) sits between the glass and table, meaning the glass sits directly on the coaster
- This means the *outer diameter of the glass equals the coaster diameter = 9*
- The glass has a wall thickness of 1 on each side
- *Inner diameter = Outer diameter - 2 × wall thickness*
- Inner diameter = 9 - 2(1) = 9 - 2 = *7*

jorl17 12 hours ago|
This is the first model to which I've sent my collection of nearly 900 poems, spanning 15 years, with an extremely simple prompt (in Portuguese), and had it produce an impeccable analysis of the poems as a (barely) cohesive whole.

It does not make a single mistake, it identifies neologisms, hidden meaning, 7 distinct poetic phases, recurring themes, fragments/heteronyms, related authors. It has left me completely speechless.

Speechless. I am speechless.

Perhaps Opus 4.5 could do it too — I don't know because I needed the 1M context window for this.

I cannot put into words how shocked I am at this. I use LLMs daily, I code with agents, I am extremely bullish on AI and, still, I am shocked.

I have used my poetry and an analysis of it as a personal metric for how good models are. Gemini 2.5 Pro was the first time a model could keep track of the breadth of the work without getting lost, but Opus 4.6 straight up does not get anything wrong and goes beyond that to identify things (key poems, key motifs, and many other things) that I would always have to kind of trick the models into producing. I would always feel like I was leading the models on. But this — this — this is unbelievable. Unbelievable. Insane.

This "key poem" thing is particularly surreal to me. Out of 900 poems, while analyzing the collection, it picked 12 "key poems, and I do agree that 11 of those would be on my 30-or-so "key poem list". What's amazing is that whenever I explicitly asked any model, to this date, to do it, they would get maybe 2 or 3, but mostly fail completely.

What is this sorcery?

emp17344 12 hours ago||
This sounds wayyyy over the top for a model that released 10 mins ago. At least wait an hour or so before spewing breathless hype.
pb7 11 hours ago|||
He just explained, with a specific personal example, why he is hyped up; did you read a word of it?
emp17344 11 hours ago||
Yeah, I read it.

“Speechless, shocked, unbelievable, insane, speechless”, etc.

Not a lot of real substance there.

realo 11 hours ago||
Give the guy a chance.

Me too. I was "Speechless, shocked, unbelievable, insane, speechless" the first time I sent Claude Code at a complicated 10-year code base that used outdated cross-toolchains and APIs. It obviously did not work anymore, and had not for a long time.

I saw the AI research the web and update the embedded toolchain, the APIs to external weather services, etc., into a complete new (WORKING!) code base in about 30 minutes.

Speechless, I was ...

scrollop 12 hours ago||
Can you compare the result to using 5.2 Thinking and Gemini 3 Pro?
jorl17 12 hours ago||
I can run the comparison again, and also include OpenAI's new release (if the context is long enough), but, last time I did it, they weren't even in the same league.

When I last did it, 5.X thinking (can't remember which it was) had this terrible habit of code-switching between English and Portuguese that made it sound like a robot (an agent to do things, rather than a human writing an essay), and it just didn't really "reason" effectively over the poems.

I can't explain it in any other way other than: "5.X thinking interprets this body of work in a way that is plausible, but I know, as the author, to be wrong; and I expect most people would also eventually find it to be wrong, as if it is being only very superficially looked at, or looked at by a high-schooler".

Gemini 3, at the time, was the worst of them, with some hallucinations, date mix-ups (mixing poems from 2023 with poems from 2019), and overall just feeling quite lost and making very outlandish interpretations of the work. To be honest, it sort of feels like Gemini hasn't been able to progress on this task since 2.5 Pro (it has definitely improved on other things — I've recently switched to Gemini 3 on a product that was using 2.5 before).

Last time I did this test, Sonnet 4.5 was better than 5.X Thinking and Gemini 3 pro, but not exceedingly so. It's all so subjective, but the best I can say is it "felt like the analysis of the work I could agree with the most". I felt more seen and understood, if that makes sense (it is poetry, after all). Plus when I got each LLM to try to tell me everything it "knew" about me from the poems, Sonnet 4.5 got the most things right (though they were all very close).

Will bring back results soon.

Edit:

I (re-)tested:

- Gemini 3 (Pro)

- Gemini 3 (Flash)

- GPT 5.2

- Sonnet 4.5

Having seen Opus 4.6, they all seem very similar, and I can't really distinguish them in terms of depth and accuracy of analysis. They obviously have differences, especially stylistic ones, but, when compared with Opus 4.6, they're all in the same ballpark.

These models produce rather superficial analyses (when compared with Opus 4.6), missing out on several key things that Opus 4.6 got, such as specific and recurring neologisms and expressions, accurate connections to authors that serve as inspiration (Opus 4.6 gets them right, the other models get _close_, but not quite), and the meaning of some specific symbols in my poetry (Opus 4.6 identifies the symbols and the meaning; the other models identify most of the symbols, but sometimes fail to grasp the meaning).

Most of what these models say is true, but it really feels incomplete. Like half-truths or only a surface-level inquiry into truth.

As another example, Opus 4.6 identifies 7 distinct poetic phases, whereas Gemini 3 (Pro) identifies 4, which are technically correct but miss out on key form and content transitions. When I look back, I personally agree with the 7 (maybe 6), but definitely not 4.

These models also clearly get some facts mixed up which Opus 4.6 did not (such as inferred timelines for some personal events). After posting my comment to HN, I've been engaging with Opus 4.6 and have managed to get it to also slip up on some dates, but not nearly as much as the other models.

The other models also seem to produce shorter analyses, with a tendency to hyperfocus on specific aspects of my poetry while missing a bunch of others.

--

To be fair, all of these models produce very good analyses which would take someone a lot of patience and probably weeks or months of work (which of course will never happen, it's a thought experiment).

It is entirely possible that the extremely simple prompt I used just works better with Claude Opus 4.5/4.6. But I will note that I have used very long and detailed prompts in the past with the other models and they've never really given me this level of... fidelity... about how I view my own work.
