Posted by elmean 15 hours ago

Claude Code refuses requests or charges extra if your commits mention "OpenClaw" (twitter.com)
https://xcancel.com/theo/status/2049645973350363168
1043 points | 581 comments | page 2
data-ottawa 15 hours ago|
That’s incredibly frustrating.

I’ve got a NixOS QEMU VM that I use to run OpenClaw. I had Claude help me set it up, and it runs local models on my own machine in a config-based sandbox.

Why should Claude block or charge extra to work on that?

Why should Claude care if I have instructions for Hermes or OpenClaw in my project repos?

This fingerprinting is incredibly sloppy for how much access to a machine Claude code has.

philipov 14 hours ago||
Now you've learned the advantage of knowing how to do things yourself. When you depend on untrustworthy agents, you shackle yourself to their idiotic whims. Be careful who you partner with.
bsder 6 hours ago|||
> This fingerprinting is incredibly sloppy

What part of "vibe coding" is unclear to you?

These are the same people that use React as a TUI and render at 60FPS to your terminal in order to update a spinner.

NewsaHackO 14 hours ago||
If it's just to set up a VM, how much would you even need to use? A couple of cents?
data-ottawa 14 hours ago||
I run an OpenClaw VM and used Claude Code to build the VM scripts. The VM is connected to local llama.cpp, so OpenClaw and the models are running on my own physical hardware.
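For context, the VM-to-host wiring in a setup like this can be as simple as pointing the agent at llama.cpp's OpenAI-compatible chat endpoint on the host. A minimal sketch in Python — the host address (QEMU user-mode networking exposes the host as 10.0.2.2) and the model alias are assumptions for illustration, not the poster's actual config:

```python
import json

# llama.cpp's `llama-server` exposes an OpenAI-compatible chat endpoint.
# Address and model alias below are hypothetical.
LLAMA_SERVER = "http://10.0.2.2:8080/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "qwen-36b-a3b") -> bytes:
    """Serialize an OpenAI-style chat completion request body."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return json.dumps(payload).encode("utf-8")

# Posting it (only works with the server actually running on the host):
# import urllib.request
# req = urllib.request.Request(LLAMA_SERVER, data=build_chat_request("hello"),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```

Because the endpoint speaks the OpenAI wire format, any agent harness that accepts a custom base URL can be pointed at it without code changes.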
regexorcist 14 hours ago||
Things like these (Google also banned me from Antigravity for briefly using an agent) and the massive quality swings made me cancel all 3 subs last week and resort to my local Qwen 3.6 only. Open models are already great and only getting better, and I really enjoy the privacy and consistency of a model I run myself.
SeanAnderson 14 hours ago||
I don't think anyone is questioning all the benefits of using local LLMs. Those are readily apparent.

I just don't believe for an instant that they're anywhere in the same ballpark of capabilities as running Opus or similar. My time is the most valuable resource. Opus would need to be SIGNIFICANTLY more costly and unstable for me to start entertaining local models for day-to-day development.

Perhaps whatever work you're doing makes this trade-off more sensible, but I struggle to see how that could be true. I'm averse to running Sonnet on a large amount of software engineering problems - let alone Qwen.

zozbot234 1 hour ago|||
DeepSeek is close to SOTA today, as are Kimi and GLM. Yes they'll be slow and high-latency on ordinary hardware but let's be real, no one reasonable is running Opus or GPT on a 24/7 basis either. Local AI heavily rewards slow inference around the clock over fast response.
m4x 7 hours ago||||
What kind of work are you applying Opus and other LLMs to? I'm quite curious to understand how other people are using these tools.

At the moment neither Opus nor any open weights models seem to be capable of doing complex work, and for less complex work the additional cost of Opus hasn't been worthwhile. This is for reasonably math-heavy computer vision applications.

What LLMs have been useful for is identifying forgotten code that will be affected when planning a change, reviewing changes, and looking up docs/recipes for simple tasks. But Opus doesn't seem necessary for a lot of that.

chillfox 5 hours ago|||
Not the one you were asking, but…

I have been using Opus (in zed) to find the “in between” bugs. Bugs that kinda live in the space between microservices or between backend and frontend.

It takes a bit of preparation to get good results, but it can usually find the source of bugs in 1-2 hours (200k-300k context) that would take me a week to track down.

I create a folder, and then open up git worktrees in sub folders for every repo I think might be involved. I also create an empty report.md file. Then I give it a prompt that starts with “I need you to debug an issue”, followed by instructions for how to run tests in each repo, followed by @mentioning any specific files or folders I think is relevant (quick description of what they are), then the bug description. After that I tell it to debug the issue, make no code changes and write its findings to the report.md file.

This works incredibly well.
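The recipe above is mechanical enough to script. A hedged sketch — repo names, test commands, and the prompt wording are hypothetical, and the actual `git worktree add` checkouts are left as a comment since they need real repos:

```python
import pathlib
import tempfile

# Hypothetical version of the workflow described above: one folder, a
# worktree subfolder per suspect repo, an empty report.md, and a prompt
# that tells the agent to debug without making code changes.
def build_debug_prompt(repos: dict[str, str], bug: str) -> str:
    lines = ["I need you to debug an issue."]
    for name, test_cmd in repos.items():
        lines.append(f"In ./{name}, run tests with: {test_cmd}")
    lines.append(f"Bug: {bug}")
    lines.append("Debug the issue, make no code changes, "
                 "and write your findings to report.md.")
    return "\n".join(lines)

def make_workspace(root: pathlib.Path, repos: dict[str, str]) -> None:
    for name in repos:
        # In the real workflow: `git worktree add <root>/<name> <branch>`
        (root / name).mkdir(parents=True, exist_ok=True)
    (root / "report.md").touch()

repos = {"backend": "make test", "frontend": "npm test"}  # hypothetical
root = pathlib.Path(tempfile.mkdtemp())
make_workspace(root, repos)
prompt = build_debug_prompt(repos, "orders intermittently missing from the UI")
```

The read-only constraint ("make no code changes") is what keeps a long multi-repo investigation safe to run unattended.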

SeanAnderson 5 hours ago|||
My current job has me overseeing a few teams of engineers working on ~10+ y/o legacy software systems that have not been especially well maintained. As an example, one team had a completely broken CI pipeline due to numerous flaky tests. They had configured the CI pipeline to rerun tests multiple times and still the master branch had like.. a 40% pass rate. Super ugly, but the suite took ~40 minutes to run and they were demoralized enough to not want to investigate it anymore.

I came in, set Claude up, gave it read access to CI artifacts, had it build out some tooling to monitor the rolling pass/fail rate over the last 30 days, and let it loose. It identifies the worst offending flaky tests, forms hypotheses on whether it's a testing issue or a production issue, then tries to divide-and-conquer until it gets minimal reproduction steps. If it's not able to create deterministic reproduction then it'll make a best guess at fixing the issue and grind away at test re-runs all night until it can try to figure out if it fixed the issue with statistical confidence instead.

It's not perfect. I have to throw away some of the bad solutions, but shaved 20 minutes off their pipeline and improved pass rate by 35% in a handful of weeks. Very minimal oversight on my part - just letting it run while I'm asleep and reviewing PR proposals during the day between meetings.
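The monitoring half of this is easy to sketch. Not the actual tooling — a hypothetical Python version of "rank the worst offenders, then decide statistically whether a fix worked", using a one-sided binomial test against the historical failure rate:

```python
from collections import defaultdict
from math import comb

# Rank tests by failure rate over a window of CI runs.
def flakiest(runs: list[dict[str, bool]], top: int = 3) -> list[tuple[str, float]]:
    fails, total = defaultdict(int), defaultdict(int)
    for run in runs:                      # each run maps test name -> passed?
        for test, passed in run.items():
            total[test] += 1
            fails[test] += (not passed)
    rates = {t: fails[t] / total[t] for t in total}
    return sorted(rates.items(), key=lambda kv: kv[1], reverse=True)[:top]

# After a candidate fix, rerun the test many times. Under H0 ("still flaky at
# the old rate p_fail"), the chance of seeing this few failures is the
# binomial CDF; if it's below alpha, the fix probably worked.
def fix_looks_real(p_fail: float, reruns: int, failures: int,
                   alpha: float = 0.05) -> bool:
    p = sum(comb(reruns, k) * p_fail**k * (1 - p_fail) ** (reruns - k)
            for k in range(failures + 1))
    return p < alpha
```

For example, a test that historically failed 40% of the time and then passes 30 reruns in a row clears the bar; 2 failures in 5 reruns does not.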

We have an initiative to make an entire web application significantly more accessible in response to some government mandates. Tight deadline, tons of grunt work, repetitive patterns, some small nuances in edge cases. The team was able to create a set of skills for doing the conversion logic, slowly build up and address all the edge cases, and are now able to work orders of magnitude more quickly in modernizing the app.

A team had punted repeatedly on updating Jest to the latest version because it inherently came with a breaking change to JSDOM which made some properties unable to be spied upon. Took like 20 minutes to have Claude one-shot the entire conversion when they'd ignored it for months because it just felt too finicky prior to agents. In general, everything to do with testing infrastructure is easy to push forward with confidence.

Uhm, we have an active interview pipeline where we give a take-home technical assessment. After we got a few submissions and manually evaluated them, I fed in our analyses and grading rubric and had it generate assessments for incoming candidates following the rubric. After checking a few pretty carefully, it became clear that it was good enough to trust - the take-home wasn't groundbreaking and the problem space was understood well enough to identify obvious issues if there were any.

I was given a small team of semi-technical people who were being used to fetch numbers from DBs for product/marketing/sales and perform light data analysis on them. A lot of their day to day was just paper pushing SQL queries into Excel spreadsheets and then transforming them into PowerPoints with key takeaways. They didn't have any experience writing code. I had Claude build a gameified playground for them where I gave them a VSCode dev container, a SQLite DB full of synthetic data emulating what they'd encounter IRL, and a Jupyter notebook filled with questions they'd need to answer by writing code to interrogate the database and form insights. In a couple of weeks I was able to get them to the point where they were comfortable writing basic Python scripts with the help of Claude and they're now off automating all their paper-pushing workflows with deterministic scripts. When they're done we're going to move them to higher value work by having them do sleuthing against our data and surfacing proactive insights to propose to Product rather than just reactively fetching data and building reports.
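A minimal sketch of that kind of playground — the table, column names, and synthetic numbers are all invented for illustration, not the actual exercise:

```python
import sqlite3

# Tiny synthetic database of the flavor described above, plus one
# notebook-style question answered in plain SQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE signups (day TEXT, plan TEXT, count INTEGER);
    INSERT INTO signups VALUES
        ('2024-01-01', 'free', 40), ('2024-01-01', 'paid', 5),
        ('2024-01-02', 'free', 55), ('2024-01-02', 'paid', 9);
""")

# "Question 1: which day had the best paid share of signups?"
rows = conn.execute("""
    SELECT day,
           ROUND(1.0 * SUM(CASE WHEN plan = 'paid' THEN count END)
                     / SUM(count), 3) AS paid_share
    FROM signups GROUP BY day ORDER BY paid_share DESC
""").fetchall()
```

An in-memory SQLite file plus a Jupyter notebook keeps the whole exercise self-contained in a dev container, with no access to production data.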

I was asked to quickly build a prototype for some basic AI functionality we thought we might want to add to one of the products. I was able to go from "I have no idea what I should build" to "here's a prototype we can put in front of clients and see if this idea has any merit" in about 14 hours. Just riffing with Claude from product idea to functional/technical specs, implementation plan, then full working prototype was one shot, and then a tight iteration loop for a couple of hours with me guiding it on personal aesthetic choices to give it enough final polish. Obviously I wouldn't ship this code into production, but it's really nice not having any sunken cost biases when demoing a prototype. If customers don't like it? Great, I lost one day and half the time I was multi-tasking while Claude implemented specs. Even better - I had Claude write a script to extract all the conversations I had with it and include those in the prototype repo. Then I filmed a quick demo video of my process, shared that with the engineers, and they're able to review my Claude conversations to get inspiration for how to modify their own agentic coding strategies.

shad42 3 hours ago||
[dead]
regexorcist 14 hours ago||||
I think you'd be surprised, I find that the harness is what makes the real difference. I also prefer to be on the loop, actively guide and review. Local models are definitely much less autonomous as of today so if you need to be churning out code at speed they're probably not for you.
tempaccount5050 10 hours ago|||
I've played with them plenty and they're not even close as far as speed or intelligence. It's like comparing a bike to an MRAP.
uxcolumbo 8 hours ago||||
What harness would you recommend for the open-weight models?
regexorcist 8 hours ago||
Opencode has been the best one for me so far.
enraged_camel 11 hours ago|||
Having tried local agents just two weeks ago, the parent poster is correct: they don't come anywhere near frontier models, despite what the benchmarks state. I haven't tried Qwen 3.6 yet, but the version before it frequently got stuck even on moderately complex problems.
combyn8tor 8 hours ago||
Same experience for me. I think people need to start providing context for the type of work they're doing when repeating the local model hype. Maybe they're working with a cookie cutter React app and it does the job fine.
regexorcist 7 hours ago||
I work on Go and Rust mostly. The experience can be wildly different based on model, quantization, and harness. The fact that it didn't work for you doesn't mean everyone is trying to hype local models, people are getting real work done.
gabriel-uribe 2 hours ago||
This feels like a symptom of the definition of "real work" changing right in front of us. Some people still use AI like a copilot, cleaning up code here and there, maybe writing functions. And at the right scale, this is genuinely still real work.

Others, especially startups or indie hackers, use AI like it were their end-all be-all assistant. "Hey Jeeves, go add Apple Sign In, Google Sign In to our signup pages. Also, investigate why we're not utilizing cached inputs on our AI APIs correctly. And add Maestro flows for every screen in our app. Btw check out posthog, supabase, and Stripe - is our new agent changing engagement or trial->paid conversion rates?"

And 3 hours later, you have all these done. But only if you use the right multi trillion param models.

jrm4 14 hours ago||||
But, you know,

Yet.

dmd 14 hours ago||
For now we infer through few weights, lossily; but then in full precision. Now I represent in part; but then shall I represent as fully as the data was sampled.

1 CorinthAIns 13:12

slopinthebag 13 hours ago|||
If you know what you're doing and prompt it correctly, local models are great. If you're just vibe coding and relying on the LLM to fill in all the gaps for you and basically build the software for you, yeah you need SOTA to deal with that.
2ndorderthought 13 hours ago|||
This is the future.
klaussilveira 14 hours ago|||
How much VRAM do you need to achieve decent performance?
regexorcist 14 hours ago||
I have a 64GB M1 Ultra dedicated to llama.cpp. I get 40 tok/s on a fresh session, decreasing slowly to about 25 tok/s at around 50% of the 256K context, then down to 20 tok/s or less beyond that, but I rarely let it go much higher and hand off instead. This is with Qwen 36B A3B at Q8 without KV quantization. It's not super fast but perfectly usable for me.
tjpnz 12 hours ago||
Spent the better part of a week trying to integrate local models into my LazyVim workflow. I've tried both Avante and CodeCompanion and have yet to find any configuration that remotely works. Either it goes into an endless loop, the project directory gets filled with garbage, or it can't find the file to apply changes to despite having just read it. Not sure if it's a problem with Qwen, the plugins, or Ollama.
regexorcist 12 hours ago||
I suggest having opencode drive the model. I also use neovim, and these days I mostly just keep a tmux pane side by side. But opencode does support ACP mode, which you can use with codecompanion and the like.
shrubble 14 hours ago||
They are trying to make a moat where no possibility of creating a moat exists.

It’s a huge mistake at the level of IBM trying to reestablish dominance over PCs by making MicroChannel the new standard; this failed horribly and cost IBM its market leadership and reputation.

MCA was technically better at the time, but the industry responded with EISA and VLBus which led to PCI and today’s PCIe.

dminik 11 hours ago||
Is Anthropic speedrunning their fall from grace? Their "stand" against the US government, but not really, happened roughly two months ago. Yet they've been doing something stupid every week since. Who is running this company?
dmd 15 hours ago||
I really want to stick with Anthropic given everything known about Altman, but man are they speedrunning the "how to destroy your reputation" guidebook.
Insanity 15 hours ago||
They have better PR than OpenAI but they are not a more ethical company. They do a bunch of shady stuff and are just as much involved in military applications. Cal Newport’s recent podcast had a good discussion about this: https://youtu.be/BRr3pAPsQAk?si=jaRJYJ_XQE7VpxPN
esperent 14 hours ago|||
Pet peeve of mine is people saying "hey this thing is totally shady/false, I've got proof right here <links to hour long podcast>".

It happens surprisingly often.

Insanity 13 hours ago|||
I understand not everyone has the interest or time to sit through an hour long podcast. But last I checked this is HN, and I think that podcast is right up the alley for many of us here. Cal Newport is not exactly a 'random podcaster'.

Next time I'll summarize some of the talking points in my comment, though. I didn't want to poorly regurgitate the arguments when they were readily available in the video lol.

Although I see another poster has commented the key takeaways :)

esperent 4 hours ago||
What I want is for people to give evidence that can be checked within a few minutes at most.

But claiming you have proof and expecting me to a) just believe you or b) invest an hour of my time to dispute or agree with you... That's just a selfish way of having a conversation.

If you gave me some timestamps in that hour, that would be fine. Or if you gave a much shorter and easier to consume piece of evidence and then said that it's also discussed in the podcast if someone wants to invest more time into this, also fine.

simplyluke 12 hours ago||||
Podcasts are still short form if we're talking about something as complex as "is this company ethical". Issues involving human players and disagreements over philosophy/ethics take a huge amount of information to understand at anything beyond a vibes level.

You can understand almost any controversial issue better than almost everyone commenting on it by reading 1-3 books on the subject. It's becoming more of an x-factor as people get conditioned to expect everything to fit in a headline, chat response, or 10 second social media video.

empthought 12 hours ago|||
Podcasts (and video) are very low-throughput, low-density information channels. Essays and articles are superior. To demonstrate this, you can just compare the transcript of a typical podcast — even a high-quality, well-researched one — with a typical high-quality, well-researched blog post, essay, or journalistic article.
Insanity 3 hours ago|||
Sure, but the other angle is time investment. I only listen to podcasts sporadically but I can definitely see why people like it. Not as a substitute to reading but _in addition_ to reading. Listening to a podcast can be done while driving, or cooking, etc. It beats sitting in traffic and just listening to music (to some people).
Capricorn2481 11 hours ago|||
It's odd that people don't understand this. It's not about Tiktok brain. I would rather read a book or a dense article than listen to people meander on a Podcast and pad their time.
Capricorn2481 11 hours ago|||
There's a world of difference between a tweet and a podcast, which are designed to NOT deliver information efficiently.
rexpop 14 hours ago||||
Cal Newport and tech commentator Ed Zitron discussed this disparity between Anthropic's public image and their actual practices. Despite cultivating a reputation as the "ethical" AI company, Zitron argues that Anthropic's actions show they are just as ruthless and ethically questionable as their competitors.

Anthropic has been deeply integrated with the US military, having been installed with classified access since June 2024. The podcast highlights that Claude has been actively utilized during the "Venezuela incursion" and the ongoing "war in Iran".

Despite this active involvement, CEO Dario Amodei released a statement attempting to publicly distance the company from the Department of Defense by declaring they would not allow their technology to be used for "mass domestic surveillance" or "fully autonomous weapons". Zitron categorizes this as a highly calculated PR maneuver, pointing out that LLMs are fundamentally incapable of controlling autonomous weapons anyway. The stunt successfully manufactured a wave of positive press—with celebrities and commentators praising Anthropic as an ethical objector—right when the company was trying to secure an IPO or a massive ~$100 billion valuation, all while they quietly remained an active part of the war effort.

Beyond their military contracts, the podcast details several highly questionable business practices Anthropic has used to artificially inflate their numbers:

1. During a lawsuit regarding their military contract, Anthropic's CFO filed a sworn affidavit revealing the company had only made $5 billion in its entire lifetime. This directly contradicted leaked media reports suggesting they made $4.5 billion in 2025 alone. It revealed that the company's publicly perceived run rate was heavily exaggerated through the "shady revenue math" popular in Silicon Valley, a major discrepancy that most financial journalists ignored.

2. When the open-source agent library OpenClaw first launched, Anthropic deliberately allowed users to connect a $200/month "max account" and essentially burn through thousands of dollars of API compute at Anthropic's expense. Zitron points out that Anthropic knowingly let this happen to temporarily boost their usage metrics and hype while they raised a $30 billion funding round. Just weeks after securing the funding, they abruptly cut off access for these users, a move Zitron cites as proof of them being an "unethical company".

Furthermore, the company has faced criticism for gaslighting users, maintaining poor service availability, and silently degrading model performance while rug-pulling users on rate limits. As Zitron summarizes, it is highly unlikely that either Anthropic or OpenAI actually care about these ethical boundaries beyond how they can be weaponized for better PR and higher valuations.

noelsusman 10 hours ago|||
In my experience Anthropic positions itself as the "safe" AI company more than the "ethical" AI company. They're related but not the same thing.

The only way you could be surprised that Anthropic wants to be in bed with the US military is if you just never listened to anything Dario has said publicly. He's very open about wanting the US government and the US military to use Claude to win against China. That's why Claude was in the Pentagon before all the others in the first place.

>LLMs are fundamentally incapable of controlling autonomous weapons anyway

This is obviously false, though that's not surprising from what I've seen from Zitron. Claude is probably too slow and clunky to go full mech warrior for the time being, but it would be trivial to hook Claude up to an autonomous drone with missile strike capabilities. Those things are mostly autonomous already, they just require a human to tell them where to shoot. Claude can easily do that with a simple API.

The rest is valid. I wouldn't describe Anthropic as an ethical company. On the contrary, if you believe that you losing the AI race is an existential threat to humanity, then it's easy to justify all sorts of unethical behavior for the greater good.

aesthesia 14 hours ago||||
There's some validity to these criticisms, but it would be a lot more credible to cite someone whose job isn't "loudly promote any claim that sounds negative for AI, regardless of how well-founded it is."
petcat 14 hours ago||||
> Despite cultivating a reputation as the "ethical" AI company, Zitron argues that Anthropic's actions show they are just as ruthless and ethically questionable as their competitors.

Anthropic has taken 10s of billions from investors just like everyone else has. There is no such thing as "ethics" or "morality" when the scale of obligation is that large.

So yes, this is obvious despite whatever image they try to cultivate.

fwipsy 14 hours ago|||
Anthropic is a public benefit corporation, which limits its liability to shareholders.

Just because they screwed up their billing doesn't mean every ethical commitment they've ever made is bunk.

Capricorn2481 11 hours ago||
> Anthropic is a public benefit corporation which limits liability to shareholders

What does this have to do with their ethics? This seems irrelevant unless your understanding of ethics ends at fiduciary duty to investors.

fwipsy 9 hours ago||
It's the opposite. Parent comment was saying they must be unethical due to their duty to investors. As a public benefit corporation, they can take ethics into account even if it harms shareholders. The extent to which they do so is still up to them, as I understand it, but they aren't forced to be evil as parent was suggesting.
bluefirebrand 13 hours ago|||
> There is no such thing as "ethics" or "morality" when the scale of obligation is that large.

At that scale, ethics and morality should become more important, not discarded

GolfPopper 13 hours ago|||
Alternatively, finance at that scale ought not be permitted to exist, because of the moral hazard it represents.
voakbasda 13 hours ago|||
You will find that morals and ethics at that scale are too expensive to maintain.
bluefirebrand 12 hours ago||
Then that scale should not be allowed to exist and we should fight aggressively to prevent it
avarun 13 hours ago||||
Ed Zitron has absolutely zero credibility, meaning these claims have zero credibility.
rickydroll 14 hours ago||||
I think all the AI companies want to hook up with the US military, as it's the only way they'll cover their debt to investors.
GolfPopper 13 hours ago||
"You must destroy the economy to keep us afloat, because National Security!" has been a clear goal of the LLM hucksters for a long time.
fwipsy 14 hours ago|||
"LLMS are fundamentally incapable of controlling autonomous weapons" -- This was Anthropic's stance too, right?

"Quietly remained an active part of the war effort" - anthropic was totally transparent about it, but yeah not great.

"Leaks were wrong" - and that's Anthropic's fault?

OpenAI agreed to assist the DoD with zero boundaries and then lied about it. Can we at least give them credit for not doing that? If we just throw up our hands and say "they're all awful, whatever" then the result is reduced pressure on them to be better. Like it or not, I do not think AI is going away and as far as I can tell, despite billing problems, Anthropic's still the least bad frontier lab.

inquirerGeneral 6 hours ago||||
[dead]
MagicMoonlight 14 hours ago|||
Probably some Slopcoded bot which posts fake comments to drive people to their content.

After all, if you’re paying hundreds of millions to buy these shitty podcasts, you might as well host some bots.

fwipsy 14 hours ago|||
Account is from 2016 with 6k karma? :doubt:
throwanem 9 hours ago||
Why assume people would not buy and sell Hacker News accounts?
fwipsy 1 hour ago||
Seems unlikely. I had a hell of a time finding someone to sell me this one.
Insanity 13 hours ago|||
Did you even check the link? It's a podcast from Cal Newport, a quite known figure (at least in software engineering / compsci circles). So it's not exactly a random shitty podcast. And, it's also (obviously) not my content.
Leynos 8 hours ago||
I hadn't heard of him until he got famous last month for slagging off the AI industry.
foobar_______ 13 hours ago||||
Agreed. They are better at the PR game. Some developers are grasping at straws, looking for ways to not feel guilty and to justify that their LLM usage supports the "good guys". Anthropic is currently filling this role, but eventually people will see behind the smoke and mirrors and realize it's not all that different from OpenAI or some of the other AI labs, which are willing to sacrifice any amount of ethics if it means the right paycheck or stroking their ego that they were on the team that built digital god.
Leynos 9 hours ago|||
[dead]
rglullis 13 hours ago|||
I cancelled my subscription the minute they blocked access via OpenCode and switched to Ollama Cloud.

A bunch of people here tried to defend Anthropic, saying that it was justified because it was likely that Claude Code's harness had optimizations that would not be possible on OpenCode. It was clear from the source leak that nothing of this sort was the case, and that they were simply trying to avoid others distilling their models.

GLM and Qwen are not on par with Opus, but they are good enough, and I never hit the usage limits, even with 2-3 sessions running.

noctuid 13 hours ago||
What's just as crazy is people defending ollama.
rglullis 13 hours ago|||
They are no saints, but at least their solution is actually open source, and they cannot lock me in like the others can. To illustrate the point, you can replace "Ollama Cloud" with "OpenCode Go" if you want. Or, given enough hardware, I could run the larger open-weight models on my own.
Congeec 13 hours ago||||
https://github.com/ollama/ollama/issues/3185
theplatman 14 hours ago|||
They are essentially Lyft in the early Uber vs. Lyft days. They are marketing themselves vaguely as "better" because they're "more ethical", but their actions make it clear that they're not much better than OAI.
reactordev 14 hours ago||
Except Lyft didn't kick you out in the bad part of town simply because you mentioned the word lollipop. Claude will terminate your session, peg you to 100% usage, and more, to stop you from using the service you paid for.
vips7L 1 hour ago|||
All of these companies are unethical. They’ve all stolen everything from the working class.
jp57 15 hours ago|||
Ha. Yes. "Speedrunning enshittification" is the phrase that's been in my head.

The flat-rate plans were the top of the slippery slope to enshittification, really. If everyone were on metered billing there'd be no reason for all these opaque and sneaky attempts to limit usage. People would pay for what they get and get what they pay for.

applfanboysbgon 14 hours ago||
There is nothing wrong with flat-rate plans. I work at an LLM-serving startup and am aware of at least three competitors that (a) provide flat-rate subs, (b) are extremely profitable, and (c) are bootstrapped, i.e. not beholden to investors (there are also many other competitors whose profitability or investment status I can't ascertain).

You simply need to price the flat-rate sub at a level that's profitable when averaged out over all of your users, both light and heavy, and prevent fully automated usage by the power users. That's it. This is immensely more user-friendly, and I doubt you'd get any traction at all if you didn't do this. Even if you pay more for the sub, having unlimited (non-automated) usage removes a mental barrier to using the product. If you have to pay for every request you make, it introduces hesitation to do anything - it makes the user hesitant to experiment, hesitant to prompt for anything of slightly less significance, anxious about the exact token consumption of every prompt, and so on. It's not enjoyable to use when you're being penny-pinched for every prompt.

Anthropic's problem, of course, is that they are not bootstrapped. They don't have a business model that can compete with startups running DeepSeek or GLM on their own hardware. Non-frontier startups got to skip the whole "tens of billions of dollars in debt" step of creating a frontier model from scratch, and still get to run a model that is perhaps 80%-85% as good as Anthropic's, which is good enough for millions of customers. So Anthropic is desperate, backed into a corner, and doing anything and everything they can to try to right their sinking ship, no matter how scummy.
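The averaging argument above is simple arithmetic. With made-up numbers (not any real provider's costs or user mix):

```python
# Illustrative breakeven for a flat-rate sub: the plan only has to be
# profitable *on average* across the usage distribution, even if the
# heaviest users individually lose money. All figures are hypothetical.
users = [
    # (segment, share of user base, serving cost per month in $)
    ("light", 0.70, 4.00),
    ("medium", 0.25, 18.00),
    ("heavy", 0.05, 120.00),
]

avg_cost = sum(share * cost for _, share, cost in users)
# 0.70*4 + 0.25*18 + 0.05*120 = 13.30 -> a $20/month sub clears margin
# on average, even though every heavy user costs 6x the price.
```

This is also why "prevent fully automated usage" is load-bearing: automation moves users out of the light and medium buckets and collapses the average.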

fwipsy 14 hours ago|||
Anthropic isn't backed into a corner. They have plenty of enterprise subscriptions. Individual user experience (especially billing) is suffering because it's not a priority in comparison. If they were as desperate as you described, they would try selling access to mythos.
applfanboysbgon 14 hours ago|||
The fact that they are adding code specifically to charge individual consumers more reeks of desperation. This isn't "individual users are suffering because they're lower priority and neglected", this is "individual users are being actively squeezed because Anthropic is desperate for every penny it can get".
fwipsy 14 hours ago||
This is such a stupid way to charge customers more. How many Claude Code users use OpenClaw? Cheating customers is like burning down your house to keep warm. Anthropic aren't that stupid. I guarantee that this was some half-baked, vibe-coded anti-abuse system.
selectively 11 hours ago||
Many users abuse subscriptions in violation of the TOS to run tools like OpenClaw in automated ways. It's an anti-abuse measure. Makes perfect sense. Anthropic's business model is the API business. The $200 subs are a paid demo of the API. Go slam the API with OpenClaw all you want, if you can afford it.
3748595995 9 hours ago|||
We'll see how many enterprise subs they retain in 5 years.
vintermann 13 hours ago||||
> prevent fully automated usage by the power users.

But being a power user and fully automating things is the whole appeal.

pkulak 14 hours ago||||
I also assume that forcing usage to spread out, via those 5-hour windows, has cost advantages.
bdangubic 13 hours ago||||
> prevent fully automated usage by the power users

this is a non-starter

applfanboysbgon 13 hours ago||
Fully automated usage on a flat-rate plan is an economic non-starter.
Oras 14 hours ago|||
LLM serving startup => bootstrapped => extremely profitable

Mind sharing a link?

applfanboysbgon 14 hours ago||
I do mind, since I enjoy speaking freely without concern of my opinions being linked to my employment. I assure you companies like this exist. Profiting off of inference is not the hard part, it's frontier training that is prohibitively expensive. You're free to disregard my commentary if you want, of course.
beepbooptheory 14 hours ago|||
Why not just name one of those three competitors?
simoncion 14 hours ago|||
> Profiting off of inference is not the hard part, it's frontier training that is prohibitively expensive.

And given that Anthropic does both, it must make up its training costs by selling inference. jp57 was pretty clearly talking about Anthropic's flat-rate plans, rather than the flat-rate plans of companies that get to skip the most expensive part of the process.

applfanboysbgon 14 hours ago||
I understand that very well, yes. The point I'm making is that I don't think Anthropic or OpenAI would have ever gotten significant traction if they didn't have flat-rate plans, because flat-rate plans themselves are not inherently predatory or part of the enshittification slope but actually extremely UX-friendly. Perhaps in another timeline, if their product was actually valuable enough to pay this price for, they could have simply provided a $50 plan as the standard level to provide enough margin to account for training costs as well. But as I see it DeepSeek is an existential threat to them, and they are now stuck between a rock and a hard place, because their product is devalued by its existence and if the frontier labs were to gate access with $50 plans they would get their lunch eaten even more quickly. It turns out there are downsides to burning inconceivably large stacks of other people's money.
simoncion 13 hours ago||
> The point I'm making is that I don't think Anthropic or OpenAI would have ever gotten significant traction if they didn't have flat-rate plans...

That seems likely. If people had to pay their share of the actual all-in cost of the service (rather than having it be subsidized by investors with extremely deep pockets and a small handful of corporate customers), very, very few regular people would use it.

The point that 'jp57' pretty explicitly made [0] is that flat-rate plans that don't cover the all-in cost of providing the plans tend to result in those plans getting worse and worse and worse, as economic realities assert themselves. If the flat-rate plans that you are aware of actually cover the cost of providing the service, then you're discussing an entirely different situation that's entirely inapplicable to the discussion about Anthropic's pricing and degrading level of service.

[0] ...which is one that's understood by people who have been in pretty much any industry for more than a few years...

applfanboysbgon 13 hours ago||
The crux of my argument is that there is a timeline where people would've paid the all-in cost of the service, with margin, as a flat-rate sub. The $20 rate was not sustainable when factoring in training costs, but if not for DeepSeek they could have simply raised the prices rather than *gestures broadly* whatever the fuck is going on at Anthropic now, with a new PR fumble every three days. If the Chinese models didn't exist, people would've groaned but would likely still pay $40 or $50 for an LLM subscription.

You misdirected my quoted statement to assert a position I did not take. When I talk about flat-rate subs being a good UX, I am not talking about a subsidized rate. My position is that people will pay more for a flat-rate sub than they are willing to pay through per-token billing. That is, a consumer who would only pay an average of $10/mo if they used the API will voluntarily pay $20/mo for a sub, because even though it's a worse value, the latter is a tremendously more friendly user experience. When I say that flat-rate subs are necessary for traction, I mean that solely from a user experience perspective, not "subsidized usage is necessary for traction".

skydhash 12 hours ago||
There’s also the “prepaid” alternative, especially if you’re skittish about budgets. You top up your account with $10, and when you near the limit (maybe by setting an alert at around $8), you can add an extra $5 to make it to the end of the month without interruption.
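The prepaid flow described above can be sketched in a few lines; this is a hypothetical illustration (the class name, thresholds, and amounts are all made up for the example), not any provider's actual billing code:

```python
# Hypothetical sketch of a prepaid balance with a low-balance alert.
class PrepaidBudget:
    def __init__(self, balance: float, alert_at: float):
        self.balance = balance      # remaining credit in dollars
        self.alert_at = alert_at    # fire an alert at/below this balance

    def charge(self, cost: float) -> bool:
        """Deduct a request's cost; return True if the alert threshold was crossed."""
        self.balance -= cost
        return self.balance <= self.alert_at

    def top_up(self, amount: float) -> None:
        self.balance += amount

# $10 top-up, alert when only $2 remains (i.e. roughly $8 spent).
b = PrepaidBudget(balance=10.0, alert_at=2.0)
assert not b.charge(3.0)   # $7.00 left, no alert
assert b.charge(5.5)       # $1.50 left, alert fires
b.top_up(5.0)              # add an extra $5 to avoid interruption
```

The appeal is bounded downside: the worst case is a paused account, never a surprise bill.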
kandros 13 hours ago|||
Adding many new chapters to it
cute_boi 12 hours ago|||
I don’t think Anthropic is more ethical than OpenAI. And honestly, OpenAI is not just Altman; we should judge a company by its actions. OpenAI has released more open-source projects, like Codex and GPT-OSS. What has Anthropic given?
addedGone 12 hours ago|||
This is quite a fair take. Each time I ask people what's inferior about OpenAI, without citing any politics, they can't really answer. gpt-5.5 is above Opus 4.7 for serious engineering as well, and many of their contributions are very useful for the OSS world.

More so, imagine the whole open-source community PREACHING a binary with heavy telemetry and unknown, questionable behavior, instead of Codex, which is completely open source.

rglullis 11 hours ago|||
> we should judge a company by its actions

Okay, then let's judge it by the fact that they started as a non-profit and are now playing the same growth-at-all-costs playbook as the rest of Silicon Valley.

Or let's judge them by how they consider themselves above copyright law, and went to the US Congress to say "we cannot run this business without stealing intellectual property".

Or how they don't mind making deals with the Saudis.

Or how they don't mind getting in bed with Trump to secure expedited construction of their datacenters.

Or how they are committing all sorts of accounting fraud (the circular deals) to keep propping up the bubble, the bill for which will undoubtedly be footed by the taxpayers when it finally pops?

> What has Anthropic given?

Anthropic is also trash. They are guided by this whole "Effective Altruism" bullshit which should be enough to raise all sorts of red flags. But to think that OpenAI is somehow "better" is completely absurd. Both of them are dangerous and both of them should not exist.

bdangubic 10 hours ago||
If you did this judging of every S&P company and made them "not exist", you'd end up with only mom-and-pop shops, as you'd be closing down the whole joint :)
rglullis 10 hours ago||
Why do you make it sound like it was a bad thing?
bdangubic 9 hours ago||
definitely not a bad thing in my opinion, I think the whole system should collapse 100% and closing down most companies in USA makes a lot of sense to me :)
duped 13 hours ago|||
I think people inside the tech bubble don't realize that AI companies are considered villainous by the public. So there's no reputation to destroy.
moomoo11 11 hours ago||
I’d argue sama is a far better person.

At least you know his intentions, which is that he will do anything to win. And codex actually works, I can let it run for hours and at least come back and it’s done a good job.

CC not only fucked me with false advertising on Opus, which is why I cancelled, but it also fucking stops working so often, or sucks after a little bit of context usage.

A\ ceo is a bad salesman (50% of X will lose their jobs, 3 months later 50% of Y will lose their jobs).

A\ also falsely advertised their Opus usage, over which I and many others cancelled months ago. They were even nuking all GitHub issues around this.

IMO, CC is for tourists and people who fall for AI marketing on X.

gitaarik 1 hour ago||
Funny, for me Claude Code works perfectly, and I don't have to wait for hours, my prompts are usually done within minutes. And the results are most of the time great.
Kirr 6 hours ago||
"I'm sorry Dave, I'm afraid I can't do that" is getting realer with each passing day. As well as arguing with your front door from Ubik.
sschueller 15 hours ago||
https://xcancel.com/theo/status/2049645973350363168
cowlby 15 hours ago||
I don't understand how, with access to Mythos and unlimited use, their solution to open harnesses is lazy string/regex-style matching.
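For concreteness, "lazy string matching" of the kind being alleged would amount to something like the following; this is a purely speculative sketch (the denylist contents and function name are invented for illustration), not Anthropic's actual code:

```python
# Speculative sketch of naive denylist matching on commit messages.
# Any substring hit flags the request, with no context awareness,
# which is exactly how false positives like the OpenClaw case arise.
BANNED_MARKERS = ["openclaw", "hermes"]  # hypothetical denylist

def flags_commit(message: str) -> bool:
    """Return True if any denylisted marker appears anywhere in the message."""
    lowered = message.lower()
    return any(marker in lowered for marker in BANNED_MARKERS)

print(flags_commit("Fix OpenClaw config loader"))  # True: flags a harmless commit
print(flags_commit("Refactor parser"))             # False
```

The objection in this thread is that substring matching can't distinguish "running an automated harness" from "working on a repo that merely mentions one."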
jp57 14 hours ago||
I saw a talk by Boris where he said, basically, that Claude codes itself now. They have it automatically writing features and reviewing PRs, apparently. I suspect that much of the code has never been seen by human eyes within Anthropic.
whateveracct 14 hours ago||
lol so they aren't even good at using Claude
shimman 10 hours ago||
These are people who lucked into working at FAANG 10 years ago and have been riding the coattails since. Highly incompetent people dictating how we should all work.
whateveracct 14 hours ago|||
their CEO has been shouting from the rooftops that programming is dead. ofc that would ripple down the org chart and result in a culture of bad programming.
alienbaby 15 hours ago||
I wonder what happens if you ask Claude to solve the problem and don't review its answer properly...
whateveracct 14 hours ago||
they're just holding it wrong.. what model are they using? they should make sure they're on Opus 4.5+. That was a stepwise improvement and was when AI coding clearly became the futureₖₑₖ
kandros 13 hours ago||
I find it incredible that, after all the good faith Claude Code built during 2025, they are destroying users' trust in such amateurish ways (same as hermes.md).
scottbez1 13 hours ago|
Subscription models only work when marginal costs are low and/or there’s a good variety of usage that roughly averages out. Or, you need to be able to kick out abuse.

Unfortunately for those of us who just want to eat a nice filling meal at the fixed-price all-you-can-eat buffet of AI subscriptions, a minority of customers keeps paying for the buffet, staying for hours, and bringing containers to sneak food out when they leave. And they keep wearing disguises to try to evade detection.

It’s a losing battle for the provider, which ultimately means the subscription pricing model can’t work, which hurts the majority of customers that just want to use the system as intended and no longer have a subscription model available.

I have plenty of frustrations with Anthropic as a paying customer, but this specific false positive abuse detection doesn’t strike me as all that awful, just some annoying collateral damage. I’d rather have that than no subscription model at all.

kenhwang 13 hours ago||
I wouldn't be surprised if the AI usage model moves towards a bidder/auction model. Set how much you'd be willing to pay for your AI request, and they evaluate requests starting from the highest bids down to the lowest.
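A bid-ordered scheduler like this is just a priority queue keyed on the bid; here is a minimal sketch under that assumption (the class name and sample requests are hypothetical, invented for the example):

```python
import heapq

# Hypothetical sketch of bid-ordered request scheduling: when capacity
# is constrained, the highest-bidding request is served first.
class BidQueue:
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker: FIFO among equal bids

    def submit(self, bid: float, request: str) -> None:
        # Negate the bid so heapq's min-heap pops the highest bid first.
        heapq.heappush(self._heap, (-bid, self._counter, request))
        self._counter += 1

    def next_request(self) -> str:
        return heapq.heappop(self._heap)[2]

q = BidQueue()
q.submit(0.10, "low-priority batch job")
q.submit(2.50, "interactive coding session")
q.submit(0.75, "background summarization")
print(q.next_request())  # "interactive coding session"
```

Spot pricing for compute (e.g. preemptible cloud instances) already works roughly this way, so the mechanism itself is well understood; the PR problem is introducing it for a consumer product.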
scottbez1 13 hours ago||
It definitely would make sense, especially if they are capacity constrained, but it’s also a losing PR move for whoever moves first in the space unless the big players all shift at the same time.
rohansood15 13 hours ago||
Nobody is stopping them from capping usage at 3x subscription price. Except themselves, because it'll ruin their revenue growth story once they stop selling dollars for cents.