Posted by WXLCKNO 10 hours ago

Claude Code is being dumbed down? (symmetrybreak.ing)
759 points | 526 comments
thisisit 8 hours ago|
My last experience with Claude support was a fun merry-go-round.

I had used a Visa card to buy a monthly Pro subscription. One day I ran out of credits, so I went to buy extra credit. But my card was declined. I rechecked my card limit and tried again. Still declined.

To test the card, I tried extending the Pro subscription. It worked. That's when I noticed that my card has a security feature called "Secure by Visa". To complete the transaction, I need to submit an OTP on a Visa page. I am redirected to this page when buying the Pro subscription, but not when trying to buy extra usage.

I opened a ticket and mentioned all the details to Claude support. Even though I gave them the full rundown of the issue, they said "We have no way of knowing why your card was declined. You have to check with your bank".

Later I got hold of a Mastercard with similar OTP protection, called Mastercard SecureCode. The OTP triggers on both the subscription and extra usage pages.

I shared this finding with support as well. But the response was the same: "We checked with our engineering team and we have no way of knowing why the other Visa card was declined. You have to check with your bank".

I just gave up trying to buy extra usage. So I am not really surprised if they keep making the product worse.

encom 8 hours ago||
I guarantee you talked to a chat bot. There are no human support agents anywhere anymore.
polski-g 8 hours ago||
It's true. They have no idea why your bank was declining the charge, only that it was declined.
the__alchemist 7 hours ago||
Hey... I have been experimenting with Claude for a few days, and am not thrilled with it compared to web chatbots. I suspect this is partly me being new and unskilled with it, but this is a general summary.

ChatGPT or Gemini: I ask it what I wish to do, and show it the relevant code. It gives me an often-correct answer, and I paste it into my program.

Claude: I do the same, and it spends a lot of time thinking. When I check the window for the result, it's stalled with a question... asking to access a project or file that has nothing to do with the problem and that I didn't ask it to look at. Repeat several times until it solves the problem, or I give up on the questions.

Johnny_Bonk 5 hours ago||
I have unfortunately unsubbed from my $200 plan after having it for months. It really seems to me that you never feel 100% like you're getting 4.6, and the same was happening with 4.5; some sessions it truly felt like Haiku was being used despite the default setting and high thinking.
ndespres 4 hours ago||
It makes sense that any product written after the advent of these AI code generators, including the AI code generators themselves, will get worse as it starts to eat itself.
viraptor 8 hours ago||
I don't get why people cling to the Claude Code abusive relationship. It's got so many issues, it's getting worse, and it's clear that there's no plan to make it open for patching.

Meanwhile, OpenCode is right there (despite Anthropic's efforts, you can still use it with a subscription). And you can tweak it any way you want...

muyuu 8 hours ago||
Perhaps some power user of Claude Code can enlighten me here, but why not just use OpenCode? I admit I've only briefly tried Claude Code, so perhaps there are unique features there stopping the switch, or some other form of lock-in.
TJTorola 7 hours ago|
Anthropic is actively blocking calls from anything but Claude Code for its Claude plans. At this point you either need to take part in the cat-and-mouse game to make that plan work with OpenCode, or you need to pay the much more expensive API prices.
muyuu 7 hours ago||
I see.

I guess they were blocking OpenCode for a reason.

This will put people who use mainly Anthropic to the test, prompting a second look at the results from other models.

evo_9 9 hours ago||
Serious question: why do people stick with Claude Code over Cursor? With Cursor's base subscription I have access to pretty much all the frontier models and can pick and choose. Anthropic models haven't been my go-to in months; Gemini and Codex produce much better results for me.
SatvikBeri 9 hours ago||
Cursor performs notably worse for me on my medium-sized codebase (~500kloc), possibly because they try to aggressively conserve context. This is especially true for debugging: Claude Code will read dozens of files and do a surprisingly good job of finding complex bugs, while Cursor seems to just respond with the first hypothesis it comes up with.

That said, Cursor Composer is a lot faster and really nice for some tasks that don't require lots of context.

CharlesW 9 hours ago|||
My answer is that I tested both, and Claude Code (~8 months ago) was so obviously better than Cursor that I continue to happily pay Anthropic $200/month. Based on anecdotes I happen to catch, I don't believe Cursor's caught up.

The value isn't just the models. Claude Code is notably better than (for example) OpenCode, even when using the same models. The plug-in system is also excellent, allowing me to build things like https://charleswiltgen.github.io/Axiom/ that everyone can benefit from.

flaviolivolsi 9 hours ago|||
Because when it's good, it's really good - Cursor doesn't work as well for me and also I prefer the TUI experience. If anything, the real alternative is OpenCode.
elzbardico 9 hours ago|||
Part of the sauce is not in the model but in the agent itself. And for that matter, I think AMP is a far better agent than Claude Code. But then, Claude's heavily subsidized subscription prices are hard to beat.
esafak 9 hours ago|||
Wouldn't you run out of tokens sooner? That's the big problem.
mock-possum 8 hours ago||
Because I tried all the Cs - Copilot, Cursor, Codex, and Claude - and Claude consistently had better results. Codex was faster, Copilot had better integration, Cursor sometimes seemed smarter, but Claude was the most reliable, consistent experience overall, so Claude is what I stuck with. And so did the rest of our eng department.
anupamchugh 8 hours ago||
We're having a UI argument about a workflow problem.

We treat a stateless session like a colleague, then get upset when it forgets our preferences. Anthropic simplified the output because power users aren't the growth vector. This shouldn't surprise anyone.

The fix isn't verbose mode. It's a markdown file the model reads on startup — which files matter, which patterns to follow, what "good" looks like. The model becomes as opinionated as your instructions. The UI becomes irrelevant.

The model is a runtime. Your workflow is the program. Arguing about log verbosity is a distraction.
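A minimal sketch of what such a startup instructions file might contain (the directory names and conventions here are hypothetical, purely to illustrate the idea of encoding your workflow as instructions):

```markdown
# Project instructions (read by the model at session start)

## Which files matter
- `src/core/` holds the real logic; `src/legacy/` is frozen, do not modify it.
- Tests live in `tests/`, mirroring the `src/` layout.

## Which patterns to follow
- Prefer small pure functions over new classes.
- Match the existing error-handling style; don't introduce a new one.

## What "good" looks like
- Every behavior change ships with a test.
- Keep diffs small; split anything large into separate changes.
```

With a file like this, the session starts already opinionated about your codebase, regardless of how terse or verbose the UI's output is.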

searls 7 hours ago|
LOL, no, dumbing down was when I paid two months of subscription with the model literally struggling to write basic functions. Something Anthropic eventually acknowledged but offered no refunds for. https://ilikekillnerds.com/2025/09/09/anthropic-finally-admi...

I care A LOT about the details, and I couldn't care less that they're cleaning up terminal output like this.
