Did they solve the "lost in the middle" problem? Proof will be in the pudding, I suppose. But that number alone isn't all that meaningful for many (most?) practical uses. Claude 4.5 often starts reverting bug fixes ~50k tokens back, which isn't a context window length problem.
Things fall apart well before the context window fills for all of my use cases (which are more reasoning-related). What is a good use case? Do those use cases require strong verification to combat the "lost in the middle" problem?
Installation instructions: https://code.claude.com/docs/en/overview#get-started-in-30-s...
    What do you want to do?
    1. Stop and wait for limit to reset
    2. Switch to extra usage
    3. Upgrade your plan
    Enter to confirm · Esc to cancel
How come they don't have "Cancel your subscription and uninstall Claude Code"? Codex lasts way longer without shaking me down for more money on top of the base $xx/month subscription.

Scalable Intelligence is just a wrapper for centralized power. All AI companies are headed that way.
Though I'm wary of that being a magic-bullet fix - it can already be pretty "selective" about which documentation it actually takes into account as the existing 200k context fills.
I check the context-use percentage, and above ~70% I ask it to generate a prompt for continuing in a new chat session, to avoid compaction.
It works fine, and it saves me from spending precious tokens on context compaction.
Maybe you should try it.
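If it helps, the idea scripts easily too. A minimal sketch, assuming a hypothetical session object that exposes the used-context fraction (in practice I just read the percentage off the UI):

    # Sketch of the handoff-before-compaction workflow. The threshold and the
    # session.context_used_fraction attribute are assumptions for illustration.
    HANDOFF_THRESHOLD = 0.70  # ~70% of the context window

    HANDOFF_REQUEST = (
        "We're near the context limit. Write a prompt I can paste into a fresh "
        "session: the goal, decisions made so far, files touched, and the "
        "immediate next step. Omit anything already resolved."
    )

    def maybe_handoff(session):
        if session.context_used_fraction >= HANDOFF_THRESHOLD:
            # Paste the reply into a brand-new chat instead of compacting.
            return session.ask(HANDOFF_REQUEST)
        return None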
At this point I just think the "success" of many AI coding agents is extremely sector-dependent.
Going forward I'd love to experiment to see whether that's actually the problem, or just an easy explanation for failure. I'd like more controls on context management than "slightly better models": the ability to select, minimize, or compact the sections of context I think are relevant to the immediate task, to choose what "depth" of detail each needs, and to drop the sections that aren't likely to be relevant from consideration entirely. Perhaps each chunk could be cached to save processing power. Who knows.
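To make that concrete, here's a toy sketch of the kind of manual control I mean - every name and field here is hypothetical, it's just the shape of the API I'd want:

    from dataclasses import dataclass

    @dataclass
    class Chunk:
        text: str           # full detail
        summary: str        # compact form, cacheable so it's computed once
        relevance: float    # 0.0 = irrelevant, 1.0 = keep verbatim
        tokens: int         # token count of the full text

    def assemble_context(chunks: list[Chunk], budget: int) -> str:
        """Keep relevant chunks at the chosen 'depth', drop the rest."""
        parts, used = [], 0
        for c in sorted(chunks, key=lambda c: -c.relevance):
            if c.relevance < 0.2:
                continue  # unlikely to matter: remove from consideration
            piece = c.text if c.relevance > 0.7 else c.summary  # depth control
            cost = c.tokens if piece is c.text else max(1, len(piece) // 4)
            if used + cost > budget:
                break
            parts.append(piece)
            used += cost
        return "\n\n".join(parts)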
But I kinda see your point - assuming from your name that you're not just a single-purpose troll - I'm still not sold on the cost-effectiveness of the current generation, and I can't see a clear and obvious change to that for the next generation, especially as they're still loss leaders. Only if you play silly games like ignoring the training costs - i.e. the majority of the costs - do you get even close to the current subscription prices being sufficient.
My personal experience is that AI generally doesn't do what it's being sold for right now, at least in the contexts I'm involved with - especially going by the somewhat breathless comments on the internet. Why are they even trying to persuade me in the first place? If they don't want to sell me anything, just shut up and keep the advantage for yourselves, rather than replying with the 500th "You're Holding It Wrong" comment with no actionable suggestions. But I still want to know, and I'm willing to put the time, effort, and $$$ in to make sure I'm not deluding myself by ignoring real benefits.
It's a weapon whose target is the working class. How does no one realize this yet?
Don't give them money, code it yourself, you might be surprised how much quality work you can get done!
It also seems misleading to have charts that compare to Sonnet 4.5 and not Opus 4.5 (Edit: It's because Opus 4.5 doesn't have a 1M context window).
It's also interesting that they list compaction as a capability of the model. I wonder if this means they have RL-trained the compaction, as opposed to it just being generic summarization followed by restarting the agent loop.
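For reference, the non-RL baseline would look something like the sketch below - a plain summarize-and-restart loop. Every name here (client.complete, token_count, is_done, the budget) is a hypothetical stand-in:

    CONTEXT_BUDGET = 180_000  # leave headroom below the advertised window

    def agent_loop(client, task, token_count, is_done):
        history = [task]
        while not is_done(history):
            if token_count(history) > CONTEXT_BUDGET:
                summary = client.complete(history + [
                    "Summarize this session: goal, current state, open items."
                ])
                history = [task, summary]  # restart the loop on compacted context
            history.append(client.complete(history))
        return history[-1]

If compaction is instead a trained capability, the model itself could decide what to preserve rather than relying on a generic summarization prompt like this one.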
Imagine two models: when asked a yes-or-no question, the first outputs a single yes or no, while the second outputs a 10-page essay and then either yes or no. They could have the same price per token, but ultimately one will be much cheaper to ask questions of.
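The arithmetic, with made-up prices (only the ratio matters):

    PRICE_PER_MTOK = 15.00             # same output price for both models, $/Mtok

    terse_tokens = 1                   # model A: just "yes" or "no"
    verbose_tokens = 10 * 750 + 1      # model B: ~10 pages at ~750 tokens/page

    for name, toks in [("terse", terse_tokens), ("verbose", verbose_tokens)]:
        print(f"{name}: ${toks * PRICE_PER_MTOK / 1e6:.6f} per question")
    # terse:   $0.000015 per question
    # verbose: $0.112515 per question -> ~7500x at the same per-token rate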
That's a feature. You could also not use the extra context, and the price would be the same.
How long before the "we" is actually a team of agents?
It seems that the Claude Code team has not properly taught Claude how to use teams effectively.
One of the biggest problems I saw with it is that Claude assumes team members are like real workers: once they finish a task, they should immediately be given the next one. What should really happen is that once an agent finishes a task it should be terminated, and a new agent should be spawned for the next task.
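A sketch of the difference, with hypothetical spawn_agent / run / terminate / summarize stand-ins for whatever the orchestrator actually exposes:

    def run_team(tasks, spawn_agent, summarize):
        notes, results = [], []
        for task in tasks:
            agent = spawn_agent(context=list(notes))  # fresh context per task
            results.append(agent.run(task))
            notes.append(summarize(results[-1]))      # carry results, not transcripts
            agent.terminate()                         # don't reuse a "tired" agent
        return results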
The answer to "when is it cheaper to buy two singles rather than one return between Cambridge and London?" is available on sites such as BRFares, but no LLM can scrape it, so it just makes up a generic, useless answer.
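The comparison itself is one line of arithmetic - the fares below are placeholders, since the whole problem is that the model can't get the real data:

    def cheaper_as_singles(single_out, single_back, return_fare):
        # True when the two singles undercut the return fare.
        return single_out + single_back < return_fare

    print(cheaper_as_singles(12.30, 9.80, 23.70))  # True -> buy two singles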