Posted by anabranch 6 days ago
That's an incentive that's difficult to reconcile with the user's benefit.
To keep this business running they do need to invest to make the best model, period.
It happens to be exactly what Anthropic's strategy is. That and great tooling.
And they're selling less and less (suddenly a 5-hour window lasts 1 hour on tasks similar to ones it lasted the full 5 hours on a week ago), so IMO they're scamming.
I hope many people are taking notes and will raise the heat soon.
Anthropic has to keep racing ahead and stay seen as offering the best frontier models.
That isn't cost-optimal, so the models cost them disproportionately more than they can sell them for at a profitable price. So they keep feeding the hype and pushing the costs higher, hoping there won't be too much heat and they'll get away with it.
I wouldn't like to be a leader at such a company, but their pay keeps them in line.
The difference here is Opus 4.7 has a new tokenizer which converts the same input text to a higher number of tokens. (But it costs the same per token?)
> Claude Opus 4.7 uses a new tokenizer, contributing to its improved performance on a wide range of tasks. This new tokenizer may use roughly 1x to 1.35x as many tokens when processing text compared to previous models (up to ~35% more, varying by content), and /v1/messages/count_tokens will return a different number of tokens for Claude Opus 4.7 than it did for Claude Opus 4.6.
> Pricing remains the same as Opus 4.6: $5 per million input tokens and $25 per million output tokens.
ArtificialAnalysis reports 4.7 significantly reduced output tokens though, and overall ~10% cheaper to run the evals.
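The net effect of the tokenizer change on input cost is just arithmetic: same per-token price, but more tokens for the same text. A minimal sketch (illustrative numbers only; the inflation factor is the quoted 1.0x–1.35x range, everything else is an assumption):

```python
# Back-of-the-envelope: effective input cost under the new tokenizer.
# Prices are the quoted Opus 4.6/4.7 rates; the inflation factor is the
# quoted "roughly 1x to 1.35x" range. Everything else is illustrative.

INPUT_PRICE = 5.0  # $ per million input tokens (unchanged from Opus 4.6)


def effective_input_price(inflation: float) -> float:
    """Cost per million *old-tokenizer-equivalent* tokens, given that the
    new tokenizer emits `inflation` times as many tokens for the same text."""
    return INPUT_PRICE * inflation


# Same text tokenized at 1.0x vs the worst-case ~1.35x:
print(effective_input_price(1.0))   # unchanged
print(effective_input_price(1.35))  # up to ~35% more per character of input
```

So for input-heavy workloads the same sticker price can mean up to ~35% more spend per character of context, which is why the reduced output-token counts reported by ArtificialAnalysis don't necessarily tell the whole story.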
I don't know how well that translates to Claude Code usage though, which I think is extremely input heavy.
It looks like you don't allow testing of anything beyond a certain token size.
Which makes your test kind of pointless: if you're chatting about AI with something that's only a few hundred tokens, the data you're collecting is pretty minimal and specific, not something that's generally applicable or relevant to wider users outside of that specific context.