Posted by jnord 12 hours ago

No, it doesn't cost Anthropic $5k per Claude Code user(martinalderson.com)
247 points | 180 comments
hirako2000 5 hours ago|
> Qwen 3.5 397B-A17B is a good comparison

It is not. It's a terrible comparison. Qwen, DeepSeek and other Chinese models are known for being 10x or even more efficient than Anthropic's.

That's why the difference between OpenRouter prices and those of the official providers isn't that big. Plus, who knows what OpenRouter providers do in terms of quantization. They may be getting 100x better efficiency, hence the competitive price.

That being said not all users max out their plan, so it's not like each user costs Anthropic 5,000 USD. The hemorrhage would be so brutal they would be out of business in months.

jychang 4 hours ago||
That's a tautology. People think chinese models are 10x more efficient because they're 10x cheaper, and then you use that to claim that they're 10x more efficient.

Opus isn't that expensive to host. Look at Amazon Bedrock's t/s numbers for Opus 4.5 vs the Chinese models. They're around the same order of magnitude, which means that Opus has roughly the same number of active params as the Chinese models.

Also, you can select BF16 or Q8 providers on openrouter.

irthomasthomas 48 minutes ago|||
Opus doubled in speed with version 4.5, leading me to speculate that they had promoted a Sonnet-sized model. The new, faster Opus was the same speed as Gemini 3 Flash running on the same TPUs. I think Anthropic's margins are probably the highest in the industry, but they have to split that with Google by renting its TPUs.
re-thc 3 hours ago|||
> That's a tautology. People think chinese models are 10x more efficient because they're 10x cheaper

They do have different infrastructure / electricity costs and they might not run on nvidia hardware.

It's not just the models.

jychang 3 hours ago|||
Except there are providers that serve both chinese models AND opus as well. On the same hardware.

Namely, Amazon Bedrock and Google Vertex.

That means normalized infrastructure costs, normalized electricity costs, and normalized hardware performance. Normalized inference software stack, even (most likely). It's about as close to a 1-to-1 comparison as you can get.

Both Amazon and Google serve Opus at roughly ~1/2 the speed of the chinese models. Note that they are not incentivized to slow down the serving of Opus or the chinese models! So that tells you the ratio of active params for Opus and for the chinese models.

giancarlostoro 8 minutes ago|||
And Microsoft's Azure. It's on all 3 major cloud providers. Which tells me they can make a profit through these cloud providers without having to pay for any hardware. They just take a small enough cut.

https://code.claude.com/docs/en/microsoft-foundry

https://www.anthropic.com/news/claude-in-microsoft-foundry

re-thc 1 minute ago||||
> Both Amazon and Google serve Opus at roughly ~1/2 the speed of the chinese models

The claim being responded to was 10x, not 0.5x.

x86 vs arm64 can have different performance. The Chinese models could be optimized for different hardware, which could show up as massive differences.

Shakahs 52 minutes ago|||
AWS and GCP both have their own custom inference chips, so a better example for hosting Opus on commodity hardware would be Digital Ocean.
fennecfoxy 1 hour ago|||
I mean GN has covered the Nvidia black market in China enough that we pretty much know that they run on Nvidia hardware still.
dryarzeg 1 hour ago||
How is this related to the inference, may I ask? Except for some very hardware-specific optimizations of model architecture, there's nothing preventing you from hosting these models on your own infrastructure. And that's actually what many OpenRouter providers, at least some of which are based in the US, are doing. Because most of the Chinese models mentioned here are open-weight (except Qwen, which has one proprietary "Max" model), literally anyone can host them, not just someone from China. So it just doesn't really matter.
fennecfoxy 1 hour ago||
I mean sure, but in terms of inference per dollar and per watt Nvidia's GPUs are pretty up there - unless China is pumping out domestic chips cheaply enough.

Also with Nvidia you get the efficiency of everything (including inference) built on/for Cuda, even efforts to catch AMD up are still ongoing afaik.

I wouldn't be surprised if things like DS were trained and now hosted on Nvidia hardware.

re-thc 47 minutes ago||
> unless China is pumping out domestic chips cheaply enough

They are. Nvidia makes A LOT of profit. Hey, top stock for a reason.

> I wouldn't be surprised if things like DS were trained and now hosted on Nvidia hardware

DS is "old". I wouldn't study them. The new ones have a mandate to at least run on local hardware. There are data center requirements.

I agree it could still be trained on Nvidia GPUs (black market etc), but not running.

yorwba 32 minutes ago||
> The new ones have a mandate to at least run on local hardware.

They do? Source?

But if that's true, it would explain why Minimax, Z.ai and Moonshot are all organized as Singaporean holding companies, with claimed data center locations (according to OpenRouter) in the US or Singapore and only the devs in China. Can't be forced to use inferior local hardware if you're just a body shop for a "foreign" AI company. ;)

re-thc 4 minutes ago||
> with claimed data center locations (according to OpenRouter) in the US or Singapore and only the devs in China

They just have a China only endpoint and likely a company under a different name.

Nothing to do with AI. TikTok is similar (global vs China operations).

Weaver_zhu 3 hours ago|||
Agree, but I guess Opus 4.6 is 10x larger, rather than the Chinese models being 10x more efficient. It is said that GPT-4 is already a 1.6T model, and Llama 4 Behemoth is also much bigger than the Chinese open-weight models. Chinese tech companies are short of frontier GPUs, but they did a lot of innovation on inference efficiency (DeepSeek CEO Liang himself shows up in the author list of the related published papers).
jychang 2 hours ago|||
No, Opus cannot be 10x larger than the chinese models.

If Opus was 10x larger than the chinese models, then Google Vertex/Amazon Bedrock would serve it 10x slower than Deepseek/Kimi/etc.

That's not the case. They're in the same order of magnitude of speed.

Filligree 11 minutes ago|||
They serve it about 2x slower. So it must have about 2x the active parameters.

It could still be 10x larger overall, though that would not make it 10x more expensive.

bakugo 2 hours ago|||
I agree that Opus almost definitely isn't anywhere near that big, but AWS throughput might not be a great way to measure model size.

According to OpenRouter, AWS serves the latest Opus and Sonnet at roughly the same speed. It's likely that they simply allocate hardware differently per model.

bakugo 2 hours ago|||
GPT-4 was likely much larger than any of the SOTA models we have today, at least in terms of active parameters. Sparse models are the new standard, and the price drop that came with Opus 4.5 made it fairly obvious that Anthropic are not an exception.
Havoc 1 hour ago|||
> Plus, who knows what OpenRouter providers do in terms of quantization

The quantisation is shown on the provider section.

simianwords 4 hours ago|||
>It is not. It's a terrible comparison. Qwen, DeepSeek and other Chinese models are known for being 10x or even more efficient than Anthropic's.

I find it a good comparison because it's a useful baseline, given that we have zero insider knowledge of Anthropic. It gives me an idea of what a model of a certain size costs to serve.

I don't buy the 10x efficiency thing: they are just lagging behind the performance of current SOTA models. They perform much worse than the current models while also costing much less - exactly what I would expect. Current Qwen models perform about as well as Sonnet 3, I think. Two years later, when Chinese models catch up with enough distillation attacks, they'll be as good as Sonnet 4.6 and still be profitable.

coldtea 21 minutes ago||
> I don't buy the 10x efficiency thing: they are just lagging behind the performance of current SOTA models. They perform much worse than the current models while also costing much less - exactly what I would expect.

Define "much worse".

  +--------------------------------------+-------------+-----------+------------------+
  | Benchmark                            | Claude Opus | DeepSeek  | DeepSeek vs Opus |
  +--------------------------------------+-------------+-----------+------------------+
  | SWE-Bench Verified (coding)          | 80.9%       | 73.1%     | ~90%             |
  | MMLU (knowledge)                     | ~91         | ~88.5     | ~97%             |
  | GPQA (hard science reasoning)        | ~79–80      | ~75–76    | ~95%             |
  | MATH-500 (math reasoning)            | ~78         | ~90       | ~115%            |
  +--------------------------------------+-------------+-----------+------------------+
Filligree 9 minutes ago||
Everyone who's used Opus knows it's better than the others in a way that isn't captured by the benchmarks. I would describe it as taste.

Lots of models get really close on benchmarks, but benchmarks only tell us how good they are at solving a defined problem. Opus is far better at solving ill-defined ones.

coldtea 1 minute ago||
>Everyone who's used Opus knows it's better than the others in a way that isn't captured by the benchmarks. I would describe it as taste.

Ah, the "trust me bro" advantage. Couldn't it just be brand identity and familiarity?

lelanthran 4 hours ago||
> That being said not all users max out their plan,

These are not cell phone plans that the average Joe buys; they are plans purchased with the explicit goal of software development.

I would guess that 99 out of every 100 plans are purchased with the explicit goal of maxing them out.

serial_dev 4 hours ago|||
I’m not maxing them out… I have issues that I need to fix, features I need to develop, and I have things I want to learn.

When I have a feeling that these tools will speed me up, I use them.

My client pays for a couple of these tools in an enterprise deal, and I suspect most of us on the team work like that.

If my goal was to max out every tool my client pays for, I'd be working 24hrs a day and see no sunlight ever.

I guess it’s like the all you can eat buffet. Everybody eats a lot, but if you eat so much that you throw up and get sick, you are special.

bloppe 3 hours ago||
[flagged]
Ginden 4 hours ago||||
My employer bought me a Claude Max subscription. On heavy weeks I use 80% of the subscription. And among software engineers that I know, I'm a relatively heavy user.

Why? Because in my experience, the bottleneck is in shareholders approving new features, not my ability to dish out code.

raihansaputra 3 hours ago||||
Goal? Yeah. But in reality, just by timing it right (starting a session at 7-8am to get 2 sessions in a workday, or even 3 if you can schedule something at 5am), I rarely hit limits.

If I hit the limit, it usually means I'm not using it well and am hunting around. If I'm using it right, I'm basically gassed out before I can max out the limit.

solumunus 4 hours ago||||
There’s absolutely no way that’s true.
rustystump 4 hours ago|||
In SaaS this is not true. Most SaaS is highly profitable (or was, I suppose) because they knew that most of their customers would never max out their plans.
overrun11 2 hours ago||
A huge number of people are convinced that OpenAI and Anthropic are selling inference tokens at a loss despite the fact that there's no evidence this is true and a lot of evidence that it isn't. It's just become a meme uncritically regurgitated.

This sloppy Forbes article has polluted the epistemic environment because now there's a source to point to as "evidence."

So yes, this post author's estimation isn't perfect, but it is far more rigorous than the original Forbes article, which doesn't appear to even understand the difference between Anthropic's API costs and its compute costs.

mike_hearn 1 hour ago||
I'd love to be a fly on the wall when this argument is tried in front of a bankruptcy court. It drives me nuts. Of course there's evidence that they're selling tokens at a loss.

The only thing these companies sell are tokens. That's their entire output. OpenAI is trying to build an ad business but it must be quite small still relative to selling tokens because I've not yet seen a single ad on ChatGPT. It's not like these firms have a huge side business selling Claude-themed baseball caps.

That means the cost of "inference" is all their costs combined. You can't just arbitrarily slice out anything inconvenient and say that's not a part of the cost of generating tokens. The research and training needed to create the models, the salaries of the people who do that, the salaries of the people who build all the serving infrastructure, the loss leader hardcore users - all of it is a part of the cost of generating each token served.

Some people look at the very different prices for serving open-weights models and say, see, inference in general is cheap. But those prices are distorted by companies trying to buy mindshare by giving models away for free, and on top of that, both of the top labs keep claiming the Chinese are distilling them like crazy, including using many tactics to evade blocks. So apparently the cost of a model like DeepSeek is still partly being subsidized by OpenAI and Anthropic against their will. The cost of those tokens is higher than what's being charged; it's just being shifted onto someone else's books. Nice whilst it lasts, but this situation has been seen many times in the past and eventually people get tired of having costs externalized onto them.

For as long as firms are losing money whilst only selling tokens, that means those tokens are selling at a loss. To not sell tokens at a loss the companies would have to be profitable.

overrun11 9 minutes ago|||
The article is about compute cost though. By "lose money on inference" I mean the assertion that inference has negative gross margins, which a lot of people truly believe. This is important because it's common to reason from this that LLMs are uneconomical and a ticking time bomb, where prices will have to be jacked up several orders of magnitude just to cover the compute used for the tokens.
howmayiannoyyou 47 minutes ago|||
You're missing costs.

- Amortized training costs.

- SG&A.

- Capex depreciation.

All the above impact profitability over various time horizons and have to be rolled into present and projected P&L and cash flow analysis.

barrell 2 hours ago|||
Does this not count as evidence? I would agree that it sounds a little shaky, but I would not say there is no evidence.

https://www.wheresyoured.at/oai_docs/

bodge5000 1 hour ago|||
> A huge number of people are convinced that OpenAI and Anthropic are selling inference tokens at a loss despite the fact that there's no evidence this is true

There's quite a lot of evidence; no proof, I'd agree, but then there's no absolute proof I'm aware of to the contrary either, so I don't know where you're getting this from.

The two pieces of evidence I'm aware of are that 1) Anthropic doesn't want their subsidised plans being used outside of CC, which would imply that the money they're making off it isn't enough, and 2) last time I checked, API spending is capped at $5000 a month.

Like I say, neither of these is proof; you can come up with reasonable arguments against them, but once again the same could be said for evidence to the contrary.

overrun11 32 minutes ago|||
> which would imply that the money they're making off it isn't enough

I don't think this logically follows. An unlimited buffet doesn't let you resell all of the food out the backdoor. At some level of usage any fixed price plan becomes unprofitable.

I agree the 5k cap is interesting as evidence although as you said I suspect there are other reasons for it.

BoredomIsFun 1 hour ago|||
But a simple assumption that Anthropic runs a normal large MoE LLM (which it almost certainly does) suggests that the actual price of running it (mostly energy) is pretty small.
bob1029 2 hours ago||
I think wafer-scale compute is a massive deal. It's already being leveraged for models you can use right now, and the reception on HN has been negligible. The entire model lives in SRAM, which is orders of magnitude faster than HBM/DRAM. I can't imagine they couldn't eventually break even using hardware like this in production.
osener 1 hour ago||
> Cost remains an ever present challenge. Cursor’s larger rivals are willing to subsidize aggressively. According to a person familiar with the company’s internal analysis, Cursor estimated last year that a $200-per-month Claude Code subscription could use up to $2,000 in compute, suggesting significant subsidization by Anthropic. Today, that subsidization appears to be even more aggressive, with that $200 plan able to consume about $5,000 in compute, according to a different person who has seen analyses on the company’s compute spend patterns.

This is the relevant quote from the original article.

A7OM 6 minutes ago||
“The article is right to separate compute cost from retail price — but the retail price baseline itself is arbitrary depending on where you run the model. The same capability (e.g. Llama 3.3 70B with tool calling and 128K context) runs $3.00/1M tokens at model developer list price and $0.22/1M at Fireworks AI — a 93% gap for identical specs. That spread makes any “it costs Anthropic X” estimate depend entirely on which reference price you anchor to. We track this live across 1,625 SKUs and 40+ vendors at a7om.com — the variance across the market is larger than most people realise when they back-calculate provider economics.”
anonzzzies 4 hours ago||
I calculated only last weekend that my team would cost, if we ran Claude Code at retail API prices, around $200k/mo. We pay $1400/month in Max subscriptions. So that's $50k/user... But looking at what tokens CC reports in its JSON, a lot of this must be cached etc., so I doubt it's anywhere near $50k of real cost, but not sure how to figure out what it would cost and I'm sure as hell not going to try.
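For anyone who does want to do that math, here is a rough sketch of the shape of it. Every number below is a placeholder, not Anthropic's actual rates or anyone's real usage; the real inputs would be the per-million-token prices from the official pricing page and the token counts from Claude Code's JSON logs.

    # Sketch: API-equivalent cost from token counts. All numbers are hypothetical.
    PRICE_PER_MTOK = {          # USD per million tokens (placeholder values)
        "input": 15.0,
        "output": 75.0,
        "cache_read": 1.5,
        "cache_write": 18.75,
    }
    usage_mtok = {              # hypothetical monthly token counts, in millions
        "input": 50,
        "output": 5,
        "cache_read": 400,
        "cache_write": 60,
    }
    cost = sum(usage_mtok[k] * PRICE_PER_MTOK[k] for k in usage_mtok)
    print(f"API-equivalent cost: ${cost:,.0f}/month")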
scandox 3 hours ago||
I'm fascinated to know the kind of work that allows you to intelligently allocate so many resources. I use Claude extensively and feel that I get great value out of it, but it seems I reach a limit fairly quickly in terms of how much usage still makes sense.
codemog 2 hours ago|||
Yea, basically we have an app that's like Netflix but for dogs, so people can leave dog-oriented shows on for their dogs when they go out for kombucha or coffee.
ffsm8 2 hours ago|||
Omg, I can't believe that's real

I wanted to believe that you were essentially trolling, but no - that service exists. And it's not an upstart; there is coverage going back several years.

Our societies are seriously fucked.

fennecfoxy 1 hour ago|||
Never have I read something that screamed Bay Area more than this lmao.
lukan 2 hours ago|||
Same for me, but I suppose it's a matter of letting agents loose more, checking the code less, and being willing to throw away lots of generated output.
sva_ 1 hour ago|||
Gemini CLI shows how much was saved through caching each session, and it's usually somewhere around 90%
aweb 3 hours ago|||
I'm surprised, isn't it forbidden to use the Max plan as part of a company? Just curious, as I thought it was forbidden by the ToS but I'm not sure if I have a good understanding of it
ffsm8 2 hours ago|||
There is nothing in the ToS, last time I checked, forbidding its use with Claude Code. It's only forbidden to utilize it in the running of the business.

So getting Claude Code subscriptions for developers should be permissible and not be against anything... However, if you created a REST endpoint to e.g. run a preconfigured prompt as part of your platform, that'd be against it.

But I'm neither a lawyer nor work for anthropic

anonzzzies 2 hours ago||
Ah, that makes sense. I hope they mean that then. We are just devs using it to write code; not selling it on.
sunaurus 2 hours ago||||
Surely that can't be true? The expectation would be that people pay $200 a month for building open source and personal hobby software with Claude?
anonzzzies 2 hours ago||
Yeah, that would end that really quickly. I use Pro for personal stuff. If $200 is not allowed for companies I don't think anyone would use it, at all.
quikoa 2 hours ago||||
If they believe a sufficient number is locked in then they may consider doing this later.
bloppe 3 hours ago|||
If that were true, then everyone I know is violating that tos
neamar 4 hours ago|||
You can use `npx ccusage` to check your local logs and see how much it would have cost through the API.
behehebd 1 hour ago||
Love tools like that. Just type a few characters at your terminal and bam, a small problem solved.
jychang 4 hours ago||
> but not sure how to figure out what it would cost and I'm sure as hell not going to try.

Ask Opus to figure out how much it would cost. Lol.

eaglelamp 5 hours ago||
If Anthropic's compute is fully saturated, then the Claude Code power users do represent an opportunity cost to Anthropic much closer to $5,000 than $500.

Anthropic's models may be similar in parameter size to models on OpenRouter, but none of the others are in the headlines nearly as much (especially recently), so the comparison is extremely flawed.

The argument in this article is like comparing the cost of a Rolex to a random brand of mechanical watch based on gear count.

d1sxeyes 5 hours ago||
But opportunity cost is not actual cost. “If everyone just kept paying but used our service less we would be more profitable” is true, but not in any meaningful way.

Are Anthropic currently unable to sell subscriptions because they don’t have capacity?

eru 3 hours ago|||
Opportunity costs are real. In many cases they are more real than 'actual costs'. However, I otherwise agree with you.
MaxikCZ 3 hours ago|||
> Are Anthropic currently unable to sell subscriptions because they don’t have capacity?

Absolutely! I'm currently paying $170 to Google to use Opus in Antigravity without limits in full agent mode, because I tried Anthropic's $20 subscription and busted my limit within a single prompt. I'm not gonna pay them $200 only to find out I hit the limit after 20 or even 50 prompts.

And after 2 more months my price is going to double to over $300, and I still have no intention of even trying the 20x Max plan, if it's really just 20x more prompts than Pro.

dtech 3 hours ago|||
This has absolutely nothing to do with whether they're limited by available compute...
MaxikCZ 3 hours ago||
What? Wouldn't they give me more than 1 prompt of compute for my $20, if they had spare?
esrauch 2 hours ago||
I don't think that logically follows.

They have a business model and are trying to capture more revenue; fully saturating their compute isn't obviously a good business strategy.

cicko 2 hours ago|||
If anything, you are confirming that $170 covers heavy Opus use profitably for the provider.
Aeolun 5 hours ago|||
Opportunity cost is not the same thing as actual cost. They might have made more money if they were capable of selling the API instead of CC, but I would never tell my company to use CC all the time if I didn’t have a personal subscription.
eaglelamp 5 hours ago||
You’re looking through the wrong end of the telescope. An investor is buying opportunity and it is a real cost to them.
kaliqt 3 hours ago||
Still makes no sense as they’d lose revenue, data, and scale if they don’t subsidize.
bob1029 4 hours ago|||
> If Anthropic's compute is fully saturated, then the Claude Code power users do represent an opportunity cost to Anthropic much closer to $5,000 than $500.

I think it's the other way around? Sparse use of GPU farms should be the more expensive thing. Full saturation means that we can exploit batching effects throughout.

KronisLV 4 hours ago|||
Don’t give them any ideas, please! I need my 100 USD subscription with generous Opus usage!
eru 3 hours ago||
Google's Antigravity has Opus access, and I suspect it's subsidised.
nottorp 3 hours ago|||
You know who also loves to use the term "opportunity cost"?

The entertainment industry. They still tell you about how much money they're leaving on the table because people pirate stuff.

What would happen in reality for entertainment is people would "consume" far less "content".

And what would happen in reality for Anthropic is people would start asking themselves if the unpredictability is worth the price. Or at best switch to pay as you go and use the API far less.

the_gipsy 2 hours ago|||
I prefer car analogies
NooneAtAll3 4 hours ago|||
> The argument in this article is like comparing the cost of a Rolex to a random brand of mechanical watch on gear count

I mean... Rolex is an overpriced brand whose cost to consumers is mainly just marketing. Its production cost is nowhere close to selling price, and looking at gears is a fair way of evaluating that.

fragmede 4 hours ago||
> production cost is nowhere close to selling price

When has production cost had anything to do with selling price?

eru 3 hours ago||
Not directly. But if production cost is above selling price, you typically tend to get less production. And if production cost is (way) below selling price, that tends to invite competition.
YetAnotherNick 4 hours ago||
You can rent the GPUs and everything needed to run the model. Opportunity cost is not a real cost here.

The only thing that matters is whether the users would have paid $5000 if they didn't have the option to buy a subscription. And I highly doubt they would have.

ymaws 6 hours ago||
How confident are you in the Opus 4.6 model size? I've always assumed it was a beefier model with more active params than Qwen 397B (17B active on the forward pass).
Bolwin 5 hours ago||
Yeah, that's a massive assumption they're making. I remember Musk revealed Grok was multiple trillion parameters. I find it likely Opus is larger.

I'm sure Anthropic is making money off the API but I highly doubt it's 90% profit margins.

jychang 4 hours ago|||
> I find it likely Opus is larger.

Unlikely. Amazon Bedrock serves Opus at 120 tokens/sec.

If you want to estimate "the actual price to serve Opus", a good rough estimate is to take the highest price among DeepSeek, Qwen, Kimi, and GLM and multiply it by 2-3. That would be a pretty close guess at the actual inference cost for Opus.

It's impossible for Opus to have something like 10x the active params of the Chinese models. My guess is something around 50-100b active params, 800-1600b total params. I could be off by a factor of ~2, but I know I am not off by a factor of 10.

simianwords 4 hours ago||
Are you sure you can use tps as a proxy?
jychang 4 hours ago||
In practice, tps is a reflection of vram memory bandwidth during inference. So the tps tells you a lot about the hardware you're running on.

Comparing tps ratios- by saying a model is roughly 2x faster or slower than another model- can tell you a lot about the active param count.

I won't say it'll tell you everything; I have no clue what optimizations Opus may have, which can range from native FP4 experts to spec decoding with MTP to whatever. But considering chinese models like Deepseek and GLM have MTP layers (no clue if Qwen 3.5 has MTP, I haven't checked since its release), and Kimi is native int4, I'm pretty confident that there is not a 10x difference between Opus and the chinese models. I would say there's roughly a 2x-3x difference between Opus 4.5/4.6 and the chinese models at most.
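For anyone who wants the intuition spelled out, here's a crude roofline sketch of why tps tracks active params. It ignores batching, MTP/speculative decoding, KV-cache reads and attention cost, and the bandwidth figure is an assumption for illustration, not a real hardware spec.

    # Batch-1 decode is memory-bandwidth bound: every generated token streams all
    # active parameters from memory once, so tps <= bandwidth / bytes-per-token.
    def decode_tps_upper_bound(active_params_billions, bytes_per_param, bandwidth_tb_s):
        bytes_per_token = active_params_billions * 1e9 * bytes_per_param
        return bandwidth_tb_s * 1e12 / bytes_per_token

    # Hypothetical 8-bit models on ~3 TB/s of effective memory bandwidth:
    print(f"{decode_tps_upper_bound(32, 1, 3.0):.0f} tok/s ceiling at 32B active")
    print(f"{decode_tps_upper_bound(100, 1, 3.0):.0f} tok/s ceiling at 100B active")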

fc417fc802 3 hours ago||
> In practice, tps is a reflection of vram memory bandwidth during inference.

> Comparing tps ratios- by saying a model is roughly 2x faster or slower than another model- can tell you a lot about the active param count.

You sure about that? I thought you could shard between GPUs along layer boundaries during inference (but not training obviously). You just end up with an increasingly deep pipeline. So time to first token increases but aggregate tps also increases as you add additional hardware.

jychang 3 hours ago||
That doesn't work. Think about it a bit more.

Hint: what's in the kv cache when you start processing the 2nd token?

And that's called layer parallelism (as opposed to tensor parallelism). It allows you to run larger models (pooling vram across gpus) but does not allow you to run models faster.

Tensor parallelism DOES allow you to run models faster across multiple GPUs, but you're limited to how fast you can synchronize the all-reduce. And in general, models would have the same boost on the same hardware- so the chinese models would have the same perf multiplier as Opus.

Note that providers generally use tensor parallelism as much as they can, for all models. That usually means 8x or so.

In reality, tps ends up being a pretty good proxy for active param size when comparing different models at the same inference provider.

fc417fc802 1 hour ago||
Oh I see. I went and confused total aggregate throughput with per-query throughput there didn't I.
nbardy 3 hours ago||||
You can estimate based on tok/second.

The trillions-of-parameters claim is about pretraining.

It's most efficient in pretraining to train the biggest models possible; you get a sample-efficiency increase for each parameter increase.

However, those models end up very sparse and incredibly distillable.

And it's way too expensive and slow to serve models that size, so they are distilled down a lot.

wongarsu 2 hours ago||||
GPT 4 was rumoured/leaked to be 1.8T. Claude 3.5 Sonnet was supposedly 175B, so around 0.5T-1T seems reasonable for Opus 3.5. Maybe a step up to 1-3T for Opus 4.0

Since then, inference pricing for new models has come down a lot, despite increasing pressure to be profitable. Opus 4.6 costs 1/3rd what Opus 4.0 (and 3.5) cost, and GPT 5.4 costs 1/4th what o1 cost. You could take that as an indication that inference costs have also come down by at least that degree.

My guess would have been that current frontier models like Opus are in the realm of 1T params with 32B active

aurareturn 5 hours ago||||
Anthropic CEO said 50%+ margins in an interview. I'm guessing 50 - 60% right now.
daemonologist 6 hours ago|||
Even if it's larger, OpenRouter has DeepSeek v3.2 (685B/37B active) at $0.26/0.40 and Kimi K2.5 (1T/32B active) at $0.45/2.25 (mentioned in the post).
johndough 5 hours ago||
Opus 4.6 likely has in the order of 100B active parameters. OpenRouter lists the following throughput for Google Vertex:

    42 tps for Claude Opus 4.6 https://openrouter.ai/anthropic/claude-opus-4.6
    143 tps for GLM 4.7 (32B active parameters) https://openrouter.ai/z-ai/glm-4.7
    70 tps for Llama 3.3 70B (dense model) https://openrouter.ai/meta-llama/llama-3.3-70b-instruct
For GLM 4.7, that makes 143 * 32B = 4576B parameters per second, and for Llama 3.3, we get 70 * 70B = 4900B, which makes sense since denser models are easier to optimize. As a lower bound, we get 4576B / 42 ≈ 109B active parameters for Opus 4.6. (This makes the assumption that all three models use the same number of bits per parameter and run on the same hardware.)
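The same estimate as a tiny script, with the same caveat that it assumes identical quantization and hardware across the three models:

    # (active params x tps) should be roughly constant on the same hardware,
    # so dividing by Opus's tps gives a lower bound on its active-param count.
    glm_bparams_per_s = 143 * 32    # GLM 4.7: 143 tps x 32B active = 4576
    llama_bparams_per_s = 70 * 70   # Llama 3.3: 70 tps x 70B dense = 4900
    opus_tps = 42
    print(f"Opus 4.6 active params >= {glm_bparams_per_s / opus_tps:.0f}B")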
jychang 4 hours ago||
Yep, you can also get similar analysis from Amazon Bedrock, which serves Opus as well.

I'd say Opus is roughly 2x to 3x the price of the top Chinese models to serve, in reality.

codemog 6 hours ago||
Also curious if any experts can weigh in on this. I would guess in the 1 trillion to 2 trillion range.
Chamix 5 hours ago||
Try 10s of trillions. These days everyone is running 4-bit at inference (the flagship feature of Blackwell+), with the big flagship models running on recently installed Nvidia 72-GPU Rubin clusters (and equivalent-ish world size for those rented Ironwood TPUs Anthropic also uses). Let's see, Vera Rubin racks come standard with 20 TB (Blackwell NVL72 with 10 TB) of unified memory, and NVFP4 fits 2 parameters per byte...

Of course, intense sparsification via MoE (and other techniques ;) ) lets total model size largely decouple from inference speed and cost (within the limit of world size via NVLink/TPU torus caps).

So the real mystery, as always, is the actual parameter count of the activated head(s). You can do various speed benchmarks and TPS tracking across likely hardware fleets, and while an exact number is hard to compute, let me tell you, it is not 17B or anywhere in that particular OOM :)

Comparing Opus 4.6 or GPT 5.4 thinking or Gemini 3.1 Pro to any sort of Chinese model (on cost) is just totally disingenuous when China does NOT have Vera Rubin NVL72 GPUs or Ironwood V7 TPUs in any meaningful capacity, and is forced to target 8-GPU Blackwell systems (and worse!) for deployment.

jychang 4 hours ago|||
Nobody is running 10s of trillion param models in 2026. That's ridiculous.

Opus is 2T-3T in size at most.

johndough 2 hours ago||
Do you have any clues to guess the total model size? I do not see any limitations to making models ridiculously large (besides training), and the Scaling Law paper showed that more parameters = more better, so it would be a safe bet for companies that have more money than innovative spirit.
magicalhippo 1 hour ago||
> I do not see any limitations to making models ridiculously large (besides training)

From my understanding, the "besides training" is a big issue. As I noted earlier[1], Qwen3 was much better than Qwen2.5, but the main difference was just more and better training data. The Qwen3.5-397B-A17B beat their 1T-parameter Qwen3-Max-Base, again a large change was more and better training data.

[1]: https://news.ycombinator.com/item?id=47089780

aurareturn 5 hours ago|||
China is targeting H20 because that's all they were officially allowed to buy.
Chamix 4 hours ago||
I generally agree; back-of-the-napkin math shows an H20 cluster of 8 GPUs * 96 GB = 768 GB = 768B parameters at FP8 (no NVFP4 on Hopper), which lines up pretty nicely with the sizes of recent open-source Chinese models.

However, I'd say it's relatively well assumed in realpolitik land that Chinese labs managed to acquire plenty of H100/200 clusters and even meaningful numbers of B200 systems semi-illicitly before the regulations and anti-smuggling measures really started to crack down.

This does somewhat beg the question of how nicely the closed-source variants, of undisclosed parameter counts, fit within the 1.1 TB of H200 or 1.5 TB of B200 systems.

aurareturn 4 hours ago||
They do not have enough H200 or Blackwell systems to serve 1.6 billion people and the world, so I doubt it's in any meaningful number.
Chamix 4 hours ago||
I assure you, the number of people paying to use Qwen3-Max or other similar proprietary endpoints is far less than 1.6 billion.
aurareturn 4 hours ago||
You don't need to assure me. It's a theoretical maximum.
faangguyindia 2 hours ago||
A Claude subscription is the equivalent of a spot instance,

and the APIs are the on-demand equivalent.

Priority goes to the APIs and leftover compute is used by the subscription plans.

When there is no capacity, subscriptions are routed to Highly Quantized cheaper models behind the scenes.

Selling subscriptions makes it cheaper to run such inference at scale; otherwise much of your capacity is just sitting there idle.

Also, these subscriptions help you train your model further on predictable workflows (because the model creators also control the client, like Qwen Code, Claude Code, Antigravity etc...).

This is probably why they will ban you for violating the ToS clause saying you cannot use their subscription plans with other tools.

They aren't just selling a subscription; the subscription also helps them get better at the thing they are selling, which for coding models like Qwen, Claude etc. is coding.

I've used Qwen Code, Codex and Claude.

Codex is 2x better than Qwen Code and Claude is 2x better than Codex.

So I'd hope Claude Opus is at least 4-5x more expensive to run than the flagship Qwen coding model hosted by Alibaba.

popcorncowboy 2 hours ago||
> Claude is 2x better than Codex

This hasn't been true in a long time.

epolanski 2 hours ago|||
Not only that, but since the release of 5.4 and 5.3 Codex I've been running them in parallel, and I've been let down by Opus 4.6 with maximum thinking way more than I've been let down by the OpenAI models.

In fact I'm more and more inclined to run my own benchmarks from now on, because I seriously distrust those I see online.

Even if the benchmarks are indeed valid, they just don't reflect my use cases, usages and ability to navigate my projects and my dependencies.

Huppie 2 hours ago||||
imho they're mostly better at a subset of different tasks. I find codex to be better at reasoning through bugs and reviewing code when compared to Opus, but for writing code I find Claude a lot better.

Maybe that's just CLAUDE.md and memory causing the difference of course.

As a matter of preference however I like the way Claude Code works just a lot better, instructing it to work with parallel subagents in work trees etc. just matches the way I think these things should work I guess.

elAhmo 2 hours ago|||
My impression as well, especially since 5.2 which I felt was on par or better than Opus 4.5
janalsncm 2 hours ago|||
> When there is no capacity, subscriptions are routed to Highly Quantized cheaper models behind the scenes.

Have they announced this?

nl 2 hours ago||
> Have they announced this?

No and indeed they have said they never do this at all.

sieabahlpark 2 hours ago||
[dead]
himata4113 3 hours ago|
What people don't realize is that cache is *free*, well not free, but compared to the compute required to recompute it? Relatively free.

If you remove the cached-token cost from the pricing, the overall API usage drops from around $5000 to $800 (or $200 per week) on the $200 Max subscription. The subscription is still 4x cheaper than going through the API, but probably not losing them money either - if I had to guess it's break even as the compute is most likely going idle otherwise.
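Rough shape of that calculation below. The token mix is made up, chosen only to land near the $5000/$800 figures above, and the prices just follow the usual pattern of cache reads costing a small fraction of regular input tokens - check the current price sheet for real numbers.

    # Illustrative only: how much of an "API-equivalent" bill is just cache reads.
    input_mtok, output_mtok, cache_read_mtok = 7, 3, 311      # millions of tokens (hypothetical)
    in_price, out_price, cache_read_price = 15.0, 75.0, 1.5   # USD per million tokens (placeholders)

    naive = (input_mtok + cache_read_mtok) * in_price + output_mtok * out_price
    real = input_mtok * in_price + cache_read_mtok * cache_read_price + output_mtok * out_price
    print(f"Cache reads billed as normal input: ${naive:,.0f}")
    print(f"Cache reads billed at cache price:  ${real:,.0f}")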

mike_hearn 1 hour ago||
Cache definitely isn't free! We're in a global RAM shortage and KV caches sit around consuming RAM in the hope that there will be a hit.

The gamble with caching is to hold a KV cache in the hope that the user will (a) submit a prompt that can use it and (b) that will get routed to the right server which (c) won't be so busy at the time it can't handle the request. KV caches aren't small so if you lose that bet you've lost money (basically, the opportunity cost of using that RAM for something else).
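To put numbers on "aren't small", here's a back-of-the-envelope for a hypothetical large transformer with grouped-query attention. Every architecture number is made up; nobody outside Anthropic knows Claude's actual config, and real frontier models use tricks like MLA, sliding windows and quantized KV precisely because this gets big.

    # KV cache footprint: 2 tensors (K and V) per layer, per token.
    n_layers, n_kv_heads, head_dim, bytes_per_val = 100, 8, 128, 2   # fp16, hypothetical
    kv_bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_val

    context_tokens = 200_000
    total_gib = kv_bytes_per_token * context_tokens / 2**30
    print(f"{kv_bytes_per_token / 1024:.0f} KiB per token, ~{total_gib:.0f} GiB for a 200K-token context")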

criemen 2 hours ago|||
> What people don't realize is that cache is free

I'm incredibly salty about this - they're charging heavily for something that actually lets them sell their inference at premium prices to more users; without any caching, they'd have much less capacity available.

eru 3 hours ago||
> [...] if I had to guess it's break even as the compute is most likely going idle otherwise.

Why would it go idle? It would go to their next best use. At least they could help with model training or let their researchers run experiments etc.

himata4113 3 hours ago||
Inference compute is vastly different from training compute; also, the model has to stay hot in VRAM, which probably accounts for most of it. There is limited use for THAT much compute as well; they are running things like the claude code compiler and even then they're scratching the surface of the amount of compute they have.

Training currently requires Nvidia's latest and greatest for the best models (they also use Google TPUs now, which are also technically the latest and greatest? However, those are more dual-purpose than anything afaik, so that would be a correct assessment in that case).

Inference can run on a hot potato if you really put your mind to it

rafaelmn 2 hours ago|||
I think I've heard multiple times that a large % of training compute for SoTA models is inference to generate training tokens; this is bound to happen with RL training.
eru 3 hours ago|||
They can run any number of inference experiments. Like a lot of the alignment work they have going on.

I am not saying this would be a great use of their compute, but idle is far from the only alternative. (Unless electricity is the binding constraint?)

himata4113 3 hours ago||
Electricity is charged whether you use it or not, so very unlikely, but sure, they can find uses for it. Although they are not going to make that much money compared to Claude Code subscriptions.
eru 1 hour ago||
> Electricity is charged whether you use it or not, [...]

Huh, what? You know you can turn off unused equipment, and at least my Nvidia GPU can draw more or fewer watts even when turned on?

Or does Anthropic have a flat-rate deal for electricity and cooling?
