Posted by jnord 15 hours ago

No, it doesn't cost Anthropic $5k per Claude Code user(martinalderson.com)
302 points | 219 comments
aurareturn 8 hours ago|
By the way, one of the charts in the article shows that Opus 4.6 is 10x costlier than Kimi K2.5.

I thought there was no moat in AI? Even being 10x costlier, Anthropic still doesn't have enough compute to meet demand.

Those "AI has no moat" opinions are going to be so wrong so soon.

spiderice 8 hours ago||
Claude Code Max obviously doesn't cost 10x more than Kimi. The article even confirms that you can get $5k worth of compute for $200 with Claude Code Max.

So no, Claude would not be getting NEARLY as much usage as it's currently getting if it weren't for the $100/$200 monthly subscription. You're comparing Kimi to the price that most people aren't paying.

jdjfnfndn 5 hours ago||
[dead]
hattmall 8 hours ago||
Is it fair to say the OpenRouter models aren't subsidized, though? The article makes the case that the companies on there are running a business, but there are free models, and companies with huge AI budgets that want to gather training data and show usage.
vbezhenar 4 hours ago||
Why does Anthropic charge 10x for API access compared to subscriptions? They're not a monopoly, so one would expect margins to be thinner.
preommr 2 hours ago||
It's why every integration basically tries to piggyback off of a subscription, and why Anthropic has to continuously play whack-a-mole trying to shut those services down.
hobofan 4 hours ago||
Monopoly isn't the only thing that allows you to charge large margins.

API inference access is naturally a lot more costly to provide compared to Chat UI and Claude Code, as there is a lot more load to handle with less latency. In the products they can just smooth over load curves by handling some of the requests slower (which the majority of users in a background Code session won't even notice).
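The load-smoothing idea above can be sketched as a tiny priority scheduler: latency-sensitive API requests are served first, while background Code-session requests absorb the slack. (Illustrative only; the priorities, tick model, and capacity numbers are made up and say nothing about Anthropic's real serving stack.)

```python
import heapq

# Lower number = higher priority. API callers expect low latency;
# subscription/background requests can tolerate being deferred a tick.
API, BACKGROUND = 0, 1

def schedule(requests, capacity_per_tick):
    """Serve up to capacity_per_tick requests per tick, API first.

    `requests` is a list of priorities; the return value groups the
    request indices by the tick in which they get served.
    """
    queue = [(prio, i) for i, prio in enumerate(requests)]
    heapq.heapify(queue)
    served = []
    while queue:
        tick = [heapq.heappop(queue)[1]
                for _ in range(min(capacity_per_tick, len(queue)))]
        served.append(tick)
    return served

# The two API requests (indices 1 and 3) jump ahead of the earlier
# background ones, which soak up leftover capacity in later ticks.
print(schedule([BACKGROUND, API, BACKGROUND, API, API], 2))
# → [[1, 3], [4, 0], [2]]
```

The point is that the same hardware can serve both workloads: deferring background work flattens the peak load, which is exactly the flexibility a raw API endpoint can't exploit.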

akhrail1996 5 hours ago||
The comparison with Qwen/Kimi by "comparable architecture size" is doing a lot of heavy lifting. Parameter count doesn't tell you much when the models aren't in the same league quality-wise.

I wonder if a better proxy would be comparing by capability level rather than size. The cost to go from "good" to "frontier" is probably exponential, not linear - so estimating Anthropic's real cost from what it takes to serve Qwen 397B seems off.

vmykyt 4 hours ago||
I have a very naive question:

People in the comments assume that Anthropic's models are 10 times bigger than the Chinese models, so the compute cost is 10 times higher.

But from a Big O perspective, only a few algorithms give you O(N). The majority of highly optimized things come out to O(N*log(N)).

So what is the big O for any open model, for a single request?

fancyfredbot 4 hours ago||
It's a good question. Costs will be lumpy. Inference servers have a preferred batch size. Once you have a server you can scale the number of users up to that batch size at relatively low cost. Then you need to add another server (or rack) for another large cost.

However I think it's fair to say the cost is roughly linear in the number of users other than that.

There may be some aspects which are not quite linear when you see multiple users submitting similar queries... But I don't think this would be significant.
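The lumpy-cost point can be written down as a step function: total cost jumps each time you need another server, while cost per user sawtooths downward between jumps. (The server cost and batch size here are invented round numbers purely for illustration.)

```python
import math

SERVER_COST = 30_000   # hypothetical monthly cost of one inference server
BATCH_SIZE = 64        # hypothetical concurrent users one server can batch

def monthly_cost(concurrent_users: int) -> int:
    """Cost is a step function of users: each new server is a lump."""
    servers = math.ceil(concurrent_users / BATCH_SIZE)
    return servers * SERVER_COST

def cost_per_user(concurrent_users: int) -> float:
    return monthly_cost(concurrent_users) / concurrent_users

# Users 1..64 share one server; user 65 forces a second one.
print(monthly_cost(64))   # 30000
print(monthly_cost(65))   # 60000
```

Averaged over many such steps the curve is roughly linear in users, which is the "roughly linear other than that" claim above.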

rat9988 4 hours ago||
N*log(N) can be approximated to O(N) for most realistic use cases.

As for LLMs, there is probably some constant cost added once the model fits on a single GPU, but beyond that it should be almost linear.
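A quick numeric check of the "N*log(N) is nearly linear" claim: under an N*log(N) model, the per-request cost is log2(N), which barely moves across three orders of magnitude of scale.

```python
import math

def nlogn(n: int) -> float:
    """Total cost under an N*log(N) scaling model."""
    return n * math.log2(n)

# Per-request cost at realistic request volumes: it's log2(n),
# which only creeps from ~20 up to ~30 as n grows 1000x.
for n in (10**6, 10**7, 10**8, 10**9):
    print(f"n={n:>10}: cost per request = {nlogn(n) / n:.1f}")
```

Doubling N from 1e9 to 2e9 multiplies total cost by about 2.07, not 2.0, so for capacity-planning purposes the distinction is noise.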

behehebd 4 hours ago||
Did Anthropic pull the oldest sales trick from the 2010s SaaS playbook? ;)
gmerc 10 hours ago||
Nobody gets RSI typing “iterate until tests pass”
arthurcolle 9 hours ago||
Recursive self improvement and Repetitive Strain Injury being the same initialism is really funny to me
rs_rs_rs_rs_rs 8 hours ago||
Honest questions: have you never heard of hyperbole before, and are you on the spectrum?
zurfer 2 hours ago||
tl;dr: the author argues it's closer to $500 per month IF a user hits their weekly rate limits every week.

Which is probably a lot more correct than other claims. However, it's also true that anybody who has to use the API might pay that much, creating a real cost-per-token moat for Anthropic's Claude Code vs. other models, as long as they stay so far ahead in terms of productivity.

scuff3d 7 hours ago||
This article is hilariously flawed, and it takes all of 5 seconds of research to see why.

Alibaba is the primary comparison point made by the author, but it's a completely unsuitable comparison. Alibaba is closer to AWS than to Anthropic in terms of business model. They make money selling infrastructure, not inference. It's entirely possible they see inference as a loss leader and are willing to offer it at or below cost to drive people onto the platform.

We also have absolutely no idea if it's anywhere near comparable to Opus 4.6. The author is guessing.

So the article's primary argument is based on a comparison to a company with an entirely different business model, running a model the author is just making wild guesses about.

simianwords 7 hours ago|
What? AWS is a good comparison if you want only infra-level costs, which is what the post is talking about.
darkwater 7 hours ago|
Well, IDK, I have used CC with API billing pretty extensively and managed to spend ~$1000 in one month. I moved to a Max 20x subscription and I'm using it a bit less (I'm still scared), but not THAT much less, and I'm at around 10% weekly usage. I'm not counting the tokens, though.