Posted by elmean 18 hours ago
The truth is that it doesn't matter what companies say, what they claim, what they do, and what their CEO says/claims/does.
It's just a matter of time until the shareholders get the right CEO to maximize shareholder value.
People in the comments who want a statement or a "reorientation" or a commitment from Anthropic leadership are missing the principles of how capitalism functions. Shareholder value cannot be compromised. In every battle between morality and profit, values and profit, public good and profit, ultimately all things will mutate into a state that enables profit to prevail. Always.
There are no exceptions to this.
Usually neither shareholders nor users are willing to pay the price.
Not in capitalism, indeed.
That's a notable achievement, but let's have some balance... It's also responsible for the biggest self-own in software industry history by leaking 1) its crown jewels (i.e., source code), 2) the existence of its next model, Mythos, and 3) its roadmap in a highly competitive market.
Let's put this in perspective. Imagine it's 3 years ago, April 2023. ChatGPT has been out for 4 months. We've all been using it, writing poems in parrot talk or whatever. Someone tells you: "In 2 years' time there will be an app that lets you use LLMs to write code. It will be coded by humans for 3 weeks, then by humans + LLMs for 6 months, and then by LLMs mostly unsupervised. One year after that, they'll be making $2B/mo out of that app." Would you believe them? Not even the most maximalist, overhyping, AI-singularity-frenzied people would have said that. And yet... it happened.
That being said, Anthropic could be diverting capacity to train the next model, and if it is significantly better, people will start flocking back.
There's very little competition for SOTA models. The models themselves also weren't built by Claude. The current revenue has almost nothing to do with what Claude built.
Hell if it was so far ahead then they wouldn't be desperately trying to block OpenCode.
Ummm, no. Anthropic is #1 in coding because they developed it first. Then they used data + signals to train models specifically to work best with cc. They work together. Why do you think every provider (including Chinese ones) has its own harness? Having real-world data and usage metrics helps train the models in immense ways.
Having features fast in this case >>> having perfect features. Some of them they dropped along the way, but having them in the cc + models pair is what matters. People switched from Cursor to cc in droves because it worked better there. That's not a fluke. That's how you improve your models: by collecting real-world data after you launch them.
> Hell if it was so far ahead then they wouldn't be desperately trying to block OpenCode.
That's a lack of compute problem.
The problem with slop is that nobody understands it. Nobody ever designed it, nobody really knows how it works. You're just putting blind faith in the slop you've shipped.
It lets you be very quick, but if you’ve accidentally compromised all your data or bank accounts through the slop then you won’t know until you’re destroyed.
But if they did intentionally break other stuff, like charging more money, it would be a scam (I'm not sure exactly what's wrong, but there is something wrong with taking credits without fulfilling the request).
But then they will just say "ah yeah, AI broke our tool, it wasn't intentional, blah blah blah."
This is a reason to seriously consider changing providers.
Substantively: assuming this is true, what are the possible explanations? If they don't use OpenClaw, wouldn't this suggest there is some other cause?
What company? Will these people go on the record?
We live in a world where it is irrational for me to put much credence in an HN account. I see it has 125 karma and was created in January 2022.
If you must: in my experience, DeepSeek v4 is incredible value in every respect. Pricing is transparent.
But like I said, I have funds in different AI gateways, yet I prefer to write by hand because I don't want surprising bugs and unnecessary code in my end result.
Spec your machine accordingly. Some models I recommend trying to get a feel for what's out there: Qwen 3.6 35b a3b, granite4.1 8b, llama 3.2 3b.
There are plenty of others but those give a good taste for different sizes and what they can do. If it's not enough then you are out maybe 5 bucks.
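For what it's worth, here's a minimal Python sketch of how I'd kick the tires, assuming you run them through Ollama with the ollama Python package; the model tags below are placeholders, so swap in whatever you've actually pulled for your hardware:

    # Minimal sketch, assuming Ollama is running locally and the ollama Python
    # package is installed (pip install ollama). The model tags are placeholders;
    # substitute whatever you've actually pulled (e.g. `ollama pull llama3.2:3b`).
    import ollama

    # A few different sizes to get a feel for speed vs. quality on your machine.
    candidate_models = ["llama3.2:3b", "granite4:8b", "qwen3:32b"]  # placeholder tags

    prompt = "Write a Python function that reverses a linked list."

    for model in candidate_models:
        try:
            response = ollama.chat(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            print(f"--- {model} ---")
            print(response["message"]["content"][:500])  # first 500 chars is enough
        except Exception as err:
            # Model not pulled or too big for this machine; skip it.
            print(f"--- {model} --- skipped: {err}")

The point isn't the prompt, it's comparing how the different sizes feel side by side on your own tasks before you commit to anything bigger.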
Also check in with r/localllama; they have a bunch of people who can help you go further, spec machines, and get better performance and results. If you don't want to post, that's cool; there are lots of comments on how to get going. They are pretty friendly, though, so I'd read the rules and make a post asking for help.
Maybe you will inspire me to use it.