Posted by lsdmtme 14 hours ago
Claude Code used to implement things in one shot, decent for an initial proof-of-concept. Now it can barely do the work even with full specs and detailed plans.
ChatGPT is also being watered down.
It seems obvious that Anthropic and OpenAI aren't the solution to any problem.
I keep getting the sense that people have no idea whether they're getting the product they originally paid for or something much weaker, and that sentiment keeps spreading. When I hear Anthropic mentioned in the past few weeks, it's almost always in some negative context.
- Banning OpenClaw users (within their rights, of course, but bad optics)
- Banning 3rd party harnesses in general (ditto)
(claude -p still works on the sub but I get the feeling like if I actually use it, I'll get my Anthropic acct. nuked. Would be great to get some clarity on this. If I invoke it from my Telegram bot, is that an unauthorized 3rd party harness?)
- Lowering reasoning effort (and then showing up here saying "we'll try to make sure the most valuable customers get the non-gimped experience" (paraphrasing slightly xD))
- Massively reduced usage (apparently a bug?) The other day I got 21x more usage spend on the same task for Claude vs Codex.
- Noticed a very sharp drop in response length in the Claude app. Asked Claude about it and it mentioned several things in the system prompt related to reduced reasoning effort, keeping responses as brief as possible, etc.
It's all circumstantial but everything points towards "desperately trying to cut costs".
I love Claude and I won't be switching any time soon (though with the usage limits I'm increasingly using Codex for coding), but it's getting hard to recommend it to friends lately. I told a friend "it was the best option, until about two weeks ago..." Now it's up in the air.
I have been wondering if it's more geared at reducing resource usage, given that at the moment there's a known constraint on AI datacenter expansion capability. Perhaps they are struggling to meet demand?
It only makes sense for them to get users to use their ecosystem, rather than other tools.
Yes, definitely, they’re gracefully failing to meet demand. They could also deny new customers, but it would probably be bad for business.
"We're sorry, what we were able to give you for $100/mo before now needs to be $200/mo (or more). We miscalculated/we were too generous/gave too much away for too little. It's a new technology, we are seeing a ton of demand, we are trying to run a business, hope you understand. If you don't want it, don't pay for it."
How often? Realistically, if you invoke it occasionally, for what's clearly an amount that's "reasonable personal use", then no you don't get nuked.
I'll change my mind when I see otherwise.
And this isn't me being positive about Anthropic support or their treatment of users; I too have seen lots of people here getting billed by them for stuff they never signed up for, blatant fraud. That's even worse than Google. I'm only talking about getting banned for usage.
Support consisted of AI bots saying you did something stupid, you did something wrong, you were abusing the system, followed by (only when I asked for it explicitly) claiming to file a ticket with a human who would contact you later (and it either didn't happen or their ticket system is /dev/null).
(By the way this is the 2nd time I've been "please hold" gaslit by support LLMs this exact same way, the other being with Square)
What they changed is that it now uses extra usage, which is charged at API rates.
Generally I find codex and claude make a good team. I'm not a heavy user, but I am currently Claude Max 5x and ChatGPT Plus. Now that OpenAI has a $100 offering and I am finding myself using Claude less, I am considering switching to Claude Pro and ChatGPT Pro x5. The work hours restriction on Claude Max x5 really pisses me off.
I am not a heavy user. Historically I only break over 50% weekly one week a month and average about 30-40% of Max x5 over the entire month. I went Max because of the weekly limits and to access the better models and because I felt I was getting value. I need an occasional burst of usage, not 24/7 slow compute. But even for pay-as-you-go burst usage Anthropic's API prices are insane vs Max.
I have yet to ever hit a limit on codex so it's not on my mind. And lately it seems like Claude is likely to be having a service interruption anyway. A big part of subscribing to Claude Max was to get away from how the usage limits on Pro were causing me to architect my life around 5hr windows. And now Anthropic has brought that all back with this don't use it before 2pm bullshit. I want things ready to go when the muses strike. I'm honestly questioning whether Anthropic wants anyone who isn't employed as a software engineer to use their kit.
Anyway for the last month or so codex "just works" and Claude has been an invitation for annoyances. There was a time when codex was quite a bit behind claude-code. They have been roughly equal (different strength and weaknesses) since at least February (for me).
- pre file write -> block editing code files without a task and plan of work
- post tool use -> show next open checkbox in the task to the agent, like an instruction pointer
- post user message -> log all user messages for periodic review of intent alignment
These 3 hooks + plain md files make my claude harness.
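Those three hooks map fairly directly onto Claude Code's hook events. A sketch of what the `.claude/settings.json` wiring might look like (the event names `PreToolUse`/`PostToolUse`/`UserPromptSubmit` are Claude Code's; the scripts themselves are hypothetical, and a `PreToolUse` hook blocks the tool call by exiting with code 2):

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [{ "type": "command", "command": "./hooks/require-plan.sh" }]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "",
        "hooks": [{ "type": "command", "command": "./hooks/show-next-checkbox.sh" }]
      }
    ],
    "UserPromptSubmit": [
      {
        "hooks": [{ "type": "command", "command": "./hooks/log-message.sh" }]
      }
    ]
  }
}
```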
100% this, I’ve posted the same sentiment here on HN. I hate the chilling effect of the bans and the lack of clarity on what is and is not allowed.
I don't think they could have done much better, honestly.
There is very poor clarity about what is and isn't allowed with the Claude SDK/claude -p. Are we allowed to use it to automate stuff? What kind of tasks is it permitted to be used for? What if you call your script 'OrangeClaw' and release that on GitHub? What if your script gets super popular, does it suddenly become against TOS?
I get that the traditional dev is allergic to the concept of reading between the lines and demands everything to be spelled out explicitly, but maybe you should just see it as something to learn because it's an incredibly useful life skill.
Are you willing to bet your account over whether you've read between the lines correctly? Anthropic aren't going to listen to appeals.
In a single prompt? From zero usage? That doesn't "just happen".
I have zero assurances that the above can't result in a ban. The usage pattern is not distinct from OpenClaw.
The GP has described a task that feels very well within the intended usage of CC, but can easily eat up the usage limit.
What should we read between the lines about this scenario?
Is it a bannable offense?
It can tell if your cron is running them every 10 minutes 24/7, because basic biology rules out you doing that for more than a day or so.
If I'm understanding you correctly: they changed that policy, you can now use 3rd party software unofficially with the undocumented Claude Code endpoint, and their servers auto-detect this and charge you extra for it?
EDIT: Yeah, something like that?
> Starting April 4 at 12pm PT / 8pm BST, you’ll no longer be able to use your Claude subscription limits for third-party harnesses including OpenClaw. Instead, they’ll require extra usage.
https://news.ycombinator.com/item?id=47633568
This seems to mean that unauthorized usage of the sub endpoint is tolerated now (and billed as though it were the regular API). And possibly affects claude -p, though I don't know yet.
Third-party harnesses are the exact opposite of stickiness!
Ditching Claude Code for a third party harness while using the Claude Code subscription means it's trivial to switch to a different model when you {run out of credits | find a cheaper token provider | find a better model}.
Maybe there’s some truth to that, but then why haven’t OpenAI made the same move? I believe the main reason is platform control. Anthropic can’t survive as a pipeline for tokens, they need to build and control a platform, which means aggressively locking out everybody else building a platform.
OpenAI has never shied away from burning mountains of cash to try and capture a little more market share. They paid a billion dollars for a vibe-coded mess just for the opportunity to associate themselves with the hype.
Lol no they didn't. It wasn't even an acquihire. They just hired Peter.
Maybe they are paying him incredibly well, but not a billion dollars well.
No, I'm paying $200 a month for a premium product that I expect premium service for. It's the single most expensive IT expense I have. Taking advantage my foot.
You are more than able to pay for API rates.
If you're paying normal API prices they'll happily let you use whatever harness you want.
1. openclaw like - using the LLM endpoint on subscription billing, different prompts than claude code
2. using claude cli with -p, in headless mode
The second runs through their code and prompts, just calls claude in non-interactive mode for subtasks. I feel especially put off by restricting the second kind. I need it to run judge agents to review plans and code.
Another thing is branding: Their CLI might be the best right now, but tech debt says it won’t continue to be for very long.
By enforcing the CLI you enforce the brand value — you’re not just buying the engine.
Claude Code uses a bunch of best practices to maximize cache hit rate. Third-party harnesses are hit or miss, so they often use a lot more tokens for the same task.
most of the users of those third party harnesses care just as much about hitting cache and getting more usage.
He demonstrates in the code that OpenCode aggressively trims context, by compacting on every turn, and pruning all tool calls from the context that occurred more than 40,000 tokens ago. Seems like it could be a good strategy to squeeze more out of the context window - but by editing the oldest context, it breaks the prompt cache for the entire conversation. There is effectively no caching happening at all.
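A toy sketch of why that matters. This is not OpenCode's actual code; it just models the rule that a prefix cache can only reuse the longest leading run of tokens that is identical to a previous request:

```python
# Toy model of prefix caching: a provider-side cache can only reuse
# the longest leading run of tokens that is byte-identical to a
# previous request.
def cached_prefix_len(prev_request: list[str], new_request: list[str]) -> int:
    """Length of the shared leading prefix, i.e. what a prefix cache can reuse."""
    n = 0
    for a, b in zip(prev_request, new_request):
        if a != b:
            break
        n += 1
    return n

history = ["system", "turn1", "turn2", "turn3", "turn4"]

# Appending a new turn keeps the old prefix intact: full cache hit.
print(cached_prefix_len(history, history + ["turn5"]))  # 5

# Pruning an old tool call rewrites the start of the context, so the
# shared prefix collapses and everything after it is re-processed.
print(cached_prefix_len(history, ["system", "turn2", "turn3", "turn4", "turn5"]))  # 1
```

Appending only ever grows the shared prefix; editing or deleting anything near the start invalidates the cache for the entire rest of the conversation.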
https://x.com/steipete/status/2040811558427648357
"Anthropic now blocks first-party harness use too
claude -p --append-system-prompt 'A personal assistant running inside OpenClaw.' 'is clawd here?'
→ 400 Third-party apps now draw from your extra usage, not your plan limits.
So yeah: bring your own coin "
> This is not intentional, likely an overactive abuse classifier. Looking, and working on clarifying the policy going forward.
I've used it with a sub a lot. Concurrency of 40 writing descriptions of thousands of images, running for hours on sonnet.
I have a lot of complaints. I've cancelled my $200 subscription and when it runs out in a few days I'll have to find something else.
But claude -p is fine.
... Or it was 2 week ago. Who knows if they've silently throttled it by now?
Not sure how that's enforced though. I was in OpenClaw discord a while ago and enforcement seemed a bit random.
I'll try to find the source, I might have gotten the details mixed up.
Just run it in tmux and use that.
Soon, if they drop -p, people will just vibe-code in 5 minutes a way to type inside it remotely, similar to their own built-in remote access tool. Seems like a losing game on Anthropic's side.
it's a bug only if they get a harsh public response, otherwise it becomes a feature
1) Nobody should expect to use OpenClaw without API usage.
2) We have known for a long time that the plans are subsidized. That wasn't as big a deal before, but now that demand has continued to explode and tools like OpenClaw were creating a lot of usage from a small minority of customers, prices change.
Everything for me points more towards: we have made a service people really want to use, and we are trying to balance a supply shortage (compute) with pricing. Nothing is stopping folks like yourself from simply paying the API rates. It is the simple, no-hassle way to get around any issue you are having: pay the API cost and you will have no limitations!
I now have been using Codex and everything has been great (I still swap back and forth but generally to check things out.)
My theory is just that the models are great after release to get people switching, then they cut them back in capabilities slowly over time until the next major release to increase the hype cycle.
I think it's more likely they're trying to optimize the Claude Code prompts to reduce load on their system and have overcorrected at the cost of quality.
1: https://gist.github.com/roman01la/483d1db15043018096ac3babf5...
Claude seems to be getting nerfed every week since we've switched. I wonder how our EVP is feeling now.
It kind of reminds me of the joke where a plumber charges $500 for a 5 minute visit. When the client complains the plumber says it's $50 for labor and $450 for knowing how to fix the problem.
In a bustling restaurant, an excited patron recognized the famous artist Picasso dining alone. Seizing the moment, the patron approached Picasso with a simple request. With a plain napkin and a big smile, he asked the artist for a drawing. He promised payment for his troubles. Picasso, ever the creator, didn’t hesitate. From his pocket, he produced a charcoal pencil and he brought to life a stunning sketch of a goat on the napkin—a clear mark of his unique style. Proudly, he presented it to the patron.
The artwork mesmerized the patron, who reached out to take it, only to be stopped by Picasso’s firm hand. “That will be $100,000,” Picasso declared.
Astonished, the patron balked at the sum. “But it took you just a few seconds to draw this!”
With a calm demeanor, Picasso took back the napkin, crumpled it, and tucked it away into his pocket, replying, “No, it has taken me a lifetime.”
Despite this I don't think engineers should feel threatened. As long as there is a need for a human in the loop, as today, there will still be engineering jobs. And if demand for engineering effort is elastic enough, there could easily be even more jobs tomorrow.
Rather than threatened, I think engineers should feel exposed. To danger, yes, but opportunity as well.
So the price for fixing the problem is equal. Sounds like a great argument for AI.
Competition will prevent that from happening. When anyone can host open models and there is giant demand for LLMs, companies cannot easily raise token prices without sending a lot of traffic to their competitors.
A friend’s company fired all EMs and have engineers reporting to product managers. They aren’t allowed to do refactors because the CTO believes the AI doesn’t need organized code.
There's 0 chance of him facing the consequences for it either.
Whether it's due to bugs or actual malice, it's not a good look. I genuinely can't tell if it's buggy, if it's been intentionally degraded, if it's placebo or if it's all just an elaborate OpenAI psyop.
Right now the only blocker for me is the lack of Linux support.
https://news.ycombinator.com/item?id=47664442
Configuration and environment variables seem to have improved things somewhat but it still seems to be hit or miss.
I'm on the enterprise team plan so a decent amount of usage.
In March I could use Opus all day and it was getting great results.
Since the last week of March and into April, I've had sessions where I maxed out session usage in under 2 hours, and it got stuck in overthinking loops: multiple turns realising the same thing, dozens of paragraphs of "But wait, actually I need to do x" with slight variations on the same realisation.
This is not the 'thinking effort' setting in Claude Code; I noticed this across multiple sessions with the same thinking-effort settings. There was clearly some unpublished underlying change that made the model get stuck in thinking loops longer and more often, with no escape hatch to stop and prompt the user for additional steering when it gets stuck.
Not only that, but the lack of transparency about what's happening, in clear and simple terms, directly from Anthropic is concerning.
I've already told my org's higher ups that in the current situation we're not close to getting our money's worth with these models.
Although it seems that enterprise wasn’t included, so maybe not in your case.
https://support.claude.com/en/articles/14063676-claude-march...
In all seriousness though, I've observed the same thing with my own usage.
It's pretty clear that OpenAI has consistently used bots on social networks to peddle their products. This could just be the next iteration, mass spreading lies about Anthropic to get people to flock back to their own products.
That would explain why a lot of users in the comments of those posts are claiming that they don't see any changes to limits.
(FWIW I have definitely noticed a cognitive decline with Claude / Opus 4.6 over the past month and a half or so, and unless I'm secretly working for them in my sleep, I'm definitely not an Anthropic employee.)
in short, it looks like nothing has been nerfed, but sentiment has definitely been negative. I suspect some of the openclaw users have been taking out their frustrations.
Any idea what their test harness looks like? My experience comes primarily from Claude Code; this makes me wonder if recent CC updates could be more to blame than Opus 4.6 itself.
You definitely shouldn't trust me, as we're way beyond the point where you can trust ANYTHING on the internet that has a timestamp later than 2021 or so (and even then, of course people were already lying).
Personally I use Claude models through Bedrock because I work for Amazon, and I haven't noticed any decline. Instead it's always been pretty shit, and what people describe now as the model getting lost in infinite loops of talking to itself has happened since the very start for me.
It looks like the spreadsheet-touchers over at Anthropic won out over the brand leaders, which is too bad, as goodwill can be a moat if you don't abuse your customers.
They do indeed get the product they originally paid for.
It's simply that they were suckers and didn't read the "fine" print of the product they bought.
The label says "more tokens than the lower tier".
It is not in the interests for Anthropic to screw its customer base. Running a frontier lab comes with tradeoffs between training, inference and other areas.
How would anthropic increase future profits without satisfying customers?
The weakest signal to me is investor money, because when you think of it, investors are betting on a future that may or may not be there. Heck even trends aren't guaranteed, "past performance is no guarantee etc etc"
1. Build AGI
2. Use said AGI to tell us how to become profitable
3. Profit!
Anthropic seems to be going all in on enterprise sales. Which means they don't actually have to please customers, or it's what ThePrimeagen humorously calls a "yacht problem"—a problem that only needs a solution after the IPO. For now all they have to do is convince corporate leadership that this is the future of work and sow enough FOMO to close those sales contracts and their projected sales, and stock valuation, goes through the roof.
Of course that value will collapse if they go without delivering on their promises long enough. That's why they call it a bubble. But by then, hopefully, Dario and the early investors will be long gone and even richer than they were to start. Their only competitor, OpenAI, is confronted with the same issues: the scalability problems won't go away, and addressing them doesn't drive stock valuation the way promising high rollers that AGI and total workforce automation are just around the corner does.
Demand is way up and compute supply is extremely limited because data center buildouts can't keep up with demand.
In the face of rising demand and insufficient compute, their only practical options (other than refusing new business until demand can be met) are significantly raising the price of tokens (and more tightly limiting subscription options) or doing behind-the-scenes inference optimizations that are likely to make the model dumber.
It is very easy to believe that they took the route of inference optimizations that have reduced quality of the service and that that is where the perceived enshittification is coming from.
So the trick is to always set to max, and then begin every task with “this is an extremely complex task, do not complete it without extensive deep thinking and research” or whatever.
You’re basically fighting a battle to make the model think more, against the defaults getting more and more nerfed to save costs.
I’m pretty much using 90% Codex now, although since Claude is consistently faster at answering quick questions, I still keep it open for that and for code-reviewing codex/human work before commit.
If you don't believe me you can search HN posts about Codex/Claude six months ago.
Phase 1: $200/mo prosumer engineer tool
Phase 2: AI layoffs / "it's just AI washing"
Phase 3: $20,000/mo limited release model "too dangerous" to use
Phase 4: Accelerated layoffs / two person teams. Rehiring of certain personnel at lower costs.
Phase 5: "Our new model can decompile and rewrite any commercial software. We just wrote a new kernel after looking at Linux (bye, bye GPL!) We also decompiled the latest Zelda game, ported the engine to Rust, and made a new game with it. Source code has no value. Even compiled and obfuscated code is a breeze to clone."
Phase 6: $100k/mo model that replicates entire engineering teams, only large companies can afford it. Ordinary users can't buy. More layoffs.
Phase N: People can't afford computing anymore. Everything is thin clients and rented. It's become like the private railroad industry. End of the PC era. Like kids growing up on smartphones, there's nothing to tinker with anymore. And certainly no gradient for entrepreneurship for once-skilled labor capital.
Anthropic used to be cool before they started gating access. Limiting Claw/OpenCode was strike one. Mythos is strike two.
Y'all should have started hating on their ethics when they started complaining about being distilled, given that their own training was conducted on materials they did not own.
We need open weights companies now more than ever. Too bad China seems to be giving up on the idea.
"You wouldn't distill an Opus."
You will be backstabbed
You will be squeezed for all they can.
And you will be betrayed.
> Phase N: People can't afford computing anymore. Everything is thin clients and rented. It's become like the private railroad industry. End of the PC era. Like kids growing up on smartphones, there's nothing to tinker with anymore. And certainly no gradient for entrepreneurship for once-skilled labor capital.
Thankfully none of them actually makes money; they just run on investment, so there is a good chance the bubble will pop and the price of PC equipment will... continue to rise as the US gives up Taiwan to China.
Anthropic is a private company but nevertheless, the sentiment is accurate and applies to all kinds of corporations.
I think it has something to do with mode collapse (although Claude certainly has its own "tells"), but I'm not sure.
It sounds trivial but even for Agentic, I found the writing style to be really important. When you give Claude a persona, it sounds like the thing. When you give GPT a persona, it sounds like GPT half-assedly pretending to be the thing.
---
Some other interesting points about Anthropic's models. I don't know if any of these relate to my LLM style question, but seems worth mentioning:
Claude models also use far fewer tokens for the same task (on ArtificialAnalysis, they are a clear outlier on this metric).
And there's a much stronger common sense, subjectively. (Not sure if we have a good way to actually measure that, though.) It takes context and common sense into account, to a much greater degree.
(Which ties in with their constitution. Understanding why things are wrong at a deeper level, rather than just surface level pattern matching.)
Opus is great but it should be bigger. You notice the difference between Sonnet and Opus, but with heavy use you notice Opus's limitations, too.
It all boils down to a brilliant but extremely expensive technology. Both to build and to run.
We've been sold a product with heavy subsidy. The idea (from Sam) was to scale out and see what happens.
Those who care to read between the lines can see what's happening: a perfect storm of demand that attracts VCs who can't understand that they are the real customers. Once they understand that, it will be too late.
Regarding open weight models: eventually we will, as humanity, benefit from the astronomical capital poured into developing a technology ahead of its time. In a few years this and even more will run on edge.
Written by open-source developers, likely former OpenAI and Anthropic employees with so much cash in the bank they don't need to worry about renting out their knowledge.
I've been using GLM for over 6 months and pretty happy.
Releasing open weights has been basically a PR move; the moment those companies actually need to make money they will cut it, as it reduces their client base.
They DO NOT want you to run AI. They want you to pay them to do it
z.ai did go public on the HK exchange. They are under pressures similar to other public companies.
I know that China models are increasingly being trained and run using Huawei chips instead of Nvidia. I know China has a surplus of electricity from renewables (wind, solar, hydro).
So it makes a lot of sense to give people a "demo" and claim the paid product is better.
I think a lot of people have no idea how capable local models are at the moment.
The AI landscape in China is larger than just Qwen and Alibaba.
It's under a new license prohibiting any commercial use.
That license is closer to a Business Source License than an open-source license.
The first one is just incredibly naive, the second might be true for some people, for some tasks, but it's not going to capture the majority who're chasing the latest and greatest to "keep up".
If China is forced to choose between giving the entire AI market to the US or releasing free models, they'll be releasing free models as long as it's necessary.
If your objective is to democratize AI, sure. But those fed up with it, and with the devastating effects it's having on students, for example, can opt to actively avoid paying for products with AI (I say this as someone who uses it every day, guilty). At some point large companies will see that they're bleeding money for something most people don't seem to want, and cancel those $100k/mo deals. I've already experienced one AI-developer-turned company crash and burn.
Personally, I don't think this LLM-based AI generation will have any significant positive impacts. Time, energy (CO2) and money would have been far better spent elsewhere.
Like with the dot com bubble there will be a crash and then whatever shakes out of that will be the companies and products who invested in understanding the actual strengths and weaknesses of the tech, instead of just trying to slap an "AI" sticker on everything.
This one seems too far fetched. Training models is widespread. There will always be open weight models in some form, and if we assume there will be some advancements in architecture, I bet you could also run them on much leaner devices. Even today you can run models on Raspberry Pis. I don't see a reason this will stop being a thing, there will be plenty of ways to tinker.
However, keep in mind the masses don't care about tinkering and never have. People want a ChatGPT experience, not a pytorch experience. In essence this is true for all tech products, not just AI.
A few days later it simply stopped working again, API authentication error. What must I do to have working, paid, premium service?
Screwing around with it today, it works 5x slower and times out all of the time. I'm paying more and getting waaaaay less. Why can't companies just raise prices like normal?
And to me, this lie is mostly a fight to see who bites the biggest chunk of the war death machine.
The ideal time to make your product worse is probably not at the same point that all of your competitor's customers are looking. Anthropic really, really fucked up here.
And beyond that, there's a ton of people who are just regular 9-5 Claude CLI users with an enterprise subscription who are getting punished with a worse model at the same price just as if we were Claw users. This kind of thing does not make one feel warm and fuzzy. I feel like I just got a boot to the teeth.
The SI symbol for minutes is "min", not "M".
A compromise would be to use the OP notation "m".
I'm aware of that, and thought that "downgraded" was the wrong word to use when going from 1h to 5 months.
1. I guess longer caching means more stale data, which is why it's a downgrade? 2. Maybe this isn't the TTL I thought it was? 3. Maybe this isn't the cache I thought?
Then I clicked on the link and realized I had been misled by the title.
If you run out of session quota too quickly and need to wait more than an hour to resume your work ... you are paying an even bigger penalty just to resume, a penalty you wouldn't have needed if the session quota were not so restrictive in the first place, and which in turn causes you to burn through the next session quota even faster.
Seems like a vicious cycle that made the UX very poor. I remember Claude Code with Pro became virtually unusable in the middle of March, with session quota expiring within the first hour or less for me, which was a wildly different experience from early March.
I'm seeing reports that the effort selector isn't necessarily working as intended and that the model is regressing in other ways: over-emphasizing how "difficult" a problem is and avoiding it because of the "time" it would take (quoted in human effort), or suggesting the "easier" path forward even if it's a hack or kludge-filled solution.
As others have said, Anthropic is between a rock and a hard place: you can't scale compute that quickly, and the influx of new accounts has definitely made things tough for them. I think all the "how is Claude this session 1/2/3/4" questions that keep coming up must be part of some A/B test on just how far to quantize / lower thinking while still maintaining user satisfaction.
I heard a while back Claude refused to attempt a task for days, saying it would take weeks of work. Eventually the user convinced it to try, and it one-shotted it in 30 seconds.
Totally true, and tokens also seem to burn through much faster. More parallelism could explain some of it, but where I could work on 3-5 projects at once on the Max plan a month ago, I can't even get one to completion now on the same Opus model before the 5h session locks me out.
The above was a successful prompt to get Claude to stop whining about effort, difficulty, and time.
Unfortunately abusive language well placed is an effective LLM motivator.
It costs him more in ingredients alone than he charges. He even offers some pseudo unlimited buffet, combo sets, and happy hours.
He announced a new restaurant; apparently it will be even better, so good he's a bit worried. He makes sure to share his worries while he picks a few select enterprises for business parties and the like.
In the meantime he cracks down on free buffet goers who happen to eat too much, and downgrades all ingredients without notice to finally hope to make a profit.
Edit: I may have conflated these two threads. https://news.ycombinator.com/item?id=47739260
The common thread: the subsidy era is ending. Every provider priced access below cost to capture market share, and now with 84% developer adoption and demand outpacing GPU capacity, the bill is coming due. Shorter cache TTLs, tighter rate limits, and model downgrades aren't bugs — they're the business model correcting itself.
The uncomfortable question is whether $200/month was ever a sustainable price for the amount of inference these workflows require. The answer is probably no, which means either prices go up or service goes down. We're watching the latter.
We've been tracking this across the industry: sloppish.com/rationing-followup.html
I haven't really hit real usage limits in the past 2 weeks, and part of me wonders if its a loud minority who all abused Claude Code, and now Anthropic has just permanently gimped their accounts.
Something like if you are in the top 5% of users, they are now giving you limits to bring you down to the average user.