Posted by samizdis 8 hours ago

Claude Code users hitting usage limits 'way faster than expected'(www.theregister.com)
204 points | 137 comments
0xbadcafebee 5 hours ago|
I've found a lot of people are almost belligerently pro-Claude. They refuse to consider other providers or agents, and won't use any model other than the latest Opus. The most common reasons I hear are 1) they don't want to use anything but the greatest model, afraid that anything else would waste their time, and 2) their experience, they say, is that it's far better than anything else.

Even if you show them benchmarks where another model is equally good, if not better, they refuse to use it. My suspicion is they've convinced themselves that Opus must be the best because of its reputation and price. They may have used a different model, had a bad experience, and doubled down.

I hope a research institution will run an experiment. My hypothesis is that if you swapped out a couple of similar state-of-the-art models, even changing the "class" of model (Sonnet <-> Opus, GPT 5.4 <-> Sonnet), users wouldn't be able to tell which is which. That would show the experience is subjective, and that bias, rather than rational evaluation, is informing their decision.

It's like wine-tasting experiments. People rate a $100 bottle of wine higher than a $10 bottle, but if the two actually taste the same, you should be buying the $10 bottle. People don't, because they believe the $100 bottle is better. In the AI case, the problem is that people won't stop buying the expensive bottle, because they've convinced themselves they must use it.
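The blinded comparison proposed above is straightforward to sketch. This is purely illustrative: the model labels, responses, and guesser are placeholders, and the guesses are simulated rather than coming from real users or real models.

```python
import random

# Hypothetical blinded A/B protocol: show two anonymized responses,
# record which one the rater thinks came from the "premium" model,
# then check whether their accuracy beats chance (0.5).

def run_blind_trial(response_a, response_b, guess_fn):
    """Shuffle presentation order so the rater can't rely on position."""
    pair = [("opus", response_a), ("other", response_b)]
    random.shuffle(pair)
    shown = [text for _, text in pair]
    picked = guess_fn(shown)            # rater returns index 0 or 1
    return pair[picked][0] == "opus"    # True if they spotted the premium model

random.seed(0)
# Simulated rater with no real signal: picks at random.
trials = [run_blind_trial("response A", "response B",
                          lambda shown: random.randrange(2))
          for _ in range(1000)]
accuracy = sum(trials) / len(trials)
print(round(accuracy, 2))
```

If raters genuinely can't distinguish the models, accuracy stays near 0.5; a real study would replace the simulated rater with human judgments and test the deviation from chance for significance.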

danny_codes 4 hours ago|
This has largely been my experience. I can't tell the difference between Claude and Kimi.
sibtain1997 1 hour ago||
Faced this too. Tried https://github.com/rtk-ai/rtk to compress CLI output, but some commands started failing and the savings were minimal. Ended up just being more deliberate about context size instead of adding more tooling on top.
1970-01-01 6 hours ago||
This has been verified as a bug. Naturally, people should see some refunds or discounts, but I expect there won't be anything for you unless you make a stink.

https://old.reddit.com/r/ClaudeCode/comments/1s7zg7h/investi...

Kim_Bruning 4 hours ago|
How do you even make a stink? I haven't found an easy way to find a human.
kneel 7 hours ago||
I asked it to complete ONE task:

You've hit your limit · resets 2am (America/Los_Angeles)

I waited until the next day to ask it to do it again, and then:

You've hit your limit · resets 1pm (America/Los_Angeles)

At which point I just gave up

dewey 6 hours ago|
Whether this is reasonable is pretty hard to judge without any info on that "ONE" task.
kaoD 6 hours ago||
I only asked Claude to rewrite Linux in Rust.
kombine 6 hours ago||
I'd ask it to rewrite Claude Code in Rust, but its creator apparently wrote a book on TypeScript..
edbern 2 hours ago||
Yesterday I asked Claude to write up a simple plan for adding some very basic features to a project I'm working on, and it took 20% of the 5-hour Pro plan limit. Then somehow Codex seems to be infinite. Is OpenAI just burning through way more cash, or are they more efficient?
ZeroCool2u 6 hours ago||
I'm finishing out my annual paid Gemini Pro plan, so I'm on the free plan for Claude. I asked one (1) single question, which admittedly was about a research plan, using the Sonnet 4.6 extended-thinking model, and instantly hit my limit until 2 PM (it was around 8 or 9 AM).

Just a shockingly constrained service tier right now.

notyourwork 6 hours ago|
Free is free. Want more, fork over money.
Forgeties79 6 hours ago|||
They are saying even for free it is very constrained. This isn’t productive.
ZeroCool2u 6 hours ago||
Yes, exactly my point.
jlharter 6 hours ago|||
I mean, even the paid tier where you fork over money is constrained, too!
jditu 1 hour ago||
Still on 2.1.87, exclusively Opus for coding, and I haven't hit this yet. Wondering if the bug is specific to personal vs. team plans?

I'm sure it's more complex, but why not improve internal implicit caching and pass the savings on? Presumably Anthropic already benefits from caching repeated prompt prefixes internally — just do that better, extend the TTL window, and let users benefit. Explicit caching stays for production use cases with semi-static prompts where you want control.

The current 5-min default TTL + 2x penalty for 1-hour cache feels punitive for an interactive coding tool.
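For intuition on why caching matters here, a back-of-the-envelope comparison helps. The multipliers below (cache read at roughly 0.1x base input, a one-time 1.25x premium for a 5-minute cache write, 2x for the 1-hour TTL the comment mentions) follow Anthropic's published prompt-caching pricing at the time of writing, but treat the rates and token counts as illustrative assumptions, not a billing reference.

```python
# Rough cost model for an interactive session that reuses a large
# prompt prefix (system prompt + repo context) across many turns.
# All rates below are assumptions based on published prompt-caching pricing.

BASE = 3.00 / 1_000_000     # assumed $/input token (e.g. $3 per MTok)
CACHE_READ = 0.1 * BASE     # cached prefix tokens are re-read at a discount
WRITE_5MIN = 1.25 * BASE    # one-time premium to write the 5-minute cache
WRITE_1HR = 2.0 * BASE      # heavier premium for the 1-hour TTL cache

prefix_tokens = 50_000      # shared context re-sent on every turn
turns = 20

# Without caching, the full prefix is billed at the base rate every turn.
no_cache = turns * prefix_tokens * BASE

# With a 1-hour cache: pay the write premium once, then cheap reads.
with_cache = (prefix_tokens * WRITE_1HR
              + (turns - 1) * prefix_tokens * CACHE_READ)

print(f"no cache: ${no_cache:.2f}")
print(f"1h cache: ${with_cache:.2f}")
```

Even with the 2x write penalty, the cached session comes out far cheaper once the prefix is reused a handful of times, which is why a longer default TTL would mostly benefit exactly this interactive-coding pattern.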

pagecalm 2 hours ago||
Hit this myself recently, along with a bunch of overloaded errors. I think it's growing pains for where we are with AI right now.

As the tooling matures I think we'll see better support for mixing models — local and cloud, picking the right one for the task. Run the cheap stuff locally, use the expensive cloud models only when you actually need them. That would go a long way toward managing costs.
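A router of the sort described could start as a simple cost-aware dispatch. Everything in this sketch is hypothetical: the model names, the characters-per-token heuristic, and the thresholds are placeholders, not any real tool's API.

```python
# Hypothetical local-vs-cloud model router: send cheap, routine tasks to a
# local model and reserve the expensive cloud model for hard ones.

def estimate_tokens(prompt: str) -> int:
    # Crude heuristic: ~4 characters per token (an assumption, not a tokenizer).
    return max(1, len(prompt) // 4)

def route(prompt: str, needs_reasoning: bool) -> str:
    """Return the (hypothetical) model a task should be dispatched to."""
    if needs_reasoning or estimate_tokens(prompt) > 2_000:
        return "cloud-frontier-model"   # expensive, most capable
    return "local-small-model"          # cheap, good enough for routine work

print(route("rename this variable across the file", needs_reasoning=False))
print(route("design a migration plan for our sharded database",
            needs_reasoning=True))
```

A real implementation would also need fallbacks (retry on the cloud model when the local one fails) and per-task budgets, but the core idea is just this kind of dispatch table.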

There's also the dependency risk people aren't talking about enough. These providers can change pricing whenever they want. A tool you've built your entire workflow around can become inaccessible overnight just because the economics shifted. It's the vendor lock-in problem all over again but with less predictability.

reenorap 6 hours ago||
The only way AI will be profitable to companies like Anthropic or OpenAI is to make the cost $1000-2000/month or more for coding. Every programmer will be forced to pay for it because it's only a fraction of their salary (in the US anyway) and it's the only way the programmer will be competitive. Whether the company pays for it, or they pay for it themselves, it will need to be paid.

There's no other way these companies can compete against the likes of Google and Facebook unless they sell themselves to those companies. With AWS and GCP spending hundreds of billions of dollars per year, there's no way Anthropic or OpenAI can keep competing unless they make an absurd amount of money and pour it into resources like their own datacenters, and they can't do that at $20/month.

danny_codes 4 hours ago||
Even worse, the open weight models are practically indistinguishable from the closed ones. I just don’t see why you’d pay full price to run Claude when you can pay 10x less to run Kimi. There are already loads of inference providers and client layers.

Without heavy collusion or outright legislative fiat (banning open models), I don't see how Anthropic/OpenAI justify their (alleged) market caps.

leptons 1 hour ago||
> the cost $1000-2000/month or more for coding. Every programmer will be forced to pay for it because it's only a fraction of their salary (in the US anyway) and it's the only way the programmer will be competitive.

I routinely match or beat Claude on speed; I often race it to the solution because Claude just takes so long to produce a usable result.

Staying competitive doesn't mean paying an AI for slop that often takes longer to produce. AI is a convenience; it is not the only way to produce code, or even the most cost-effective or fastest way. AI code also comes with more risk, and more cognitive load if you actually read and understand everything it wrote. And if you don't, then you're a bit foolish to trust it blindly. Many developers are waking up to the reality of using AI, and it's not really living up to the hype.

techgnosis 3 hours ago|
* Hardware will manage models more efficiently

* Models will manage tokens more efficiently

* Agents will manage models more efficiently

* Users will manage agents more efficiently

Why are we acting like technology is on pause?
