Posted by y42 14 hours ago

I cancelled Claude: Token issues, declining quality, and poor support (nickyreinert.de)
826 points | 489 comments | page 4
brunooliv 7 hours ago|
I still haven’t seen any other model be as complete as Claude inside Claude Code. I bet Anthropic knows this and they turn the knobs and watch people’s reactions… I have been planning with Qwen3.6 Max inside opencode, absolutely a game changer. Opus can then follow the plan in quite a lot of detail, and like this I can make progress on my toy apps on the Pro plan at $20/mo.

For work, unlimited usage via Bedrock.

Yes, I’d like to get more usage out of my personal sub, but at $20/mo, no complaints.

joozio 11 hours ago||
Funny. I thought I was the only one. Then I found more people and now you wrote about that. Just this week I also wrote about Claude Opus 4.7 and how I came back to Codex after that: https://thoughts.jock.pl/p/opus-4-7-codex-comeback-2026
y42 8 hours ago|
I like your blog and I can totally relate to this article - it's like something I wanted to write about for a couple of weeks now. :D

https://thoughts.jock.pl/p/adhd-ai-agent-personal-experience...

vintagedave 13 hours ago||
They won't even reset usage for me: https://news.ycombinator.com/item?id=47892445

And by crikey do I empathise with the poor support described in this article. Nothing has soured me on Anthropic more than their attitude.

Great AI engineers. Questionable command-line engineers (but highly successful). Downright awful to their customers.

vondur 13 hours ago||
Wait, weren't there posts in the not-too-distant past where everyone was singing the praises of Claude and wondering how OpenAI would catch up?
swader999 12 hours ago||
Yep. I think the sentiment here doesn't lag much behind the day-to-day experience of what is actually being offered. Kind of makes HN very useful in this regard.
cyanydeez 13 hours ago||
Wait, are SaaS companies fundamentally shifting business models, seeking to maximize the value extracted from a product at the customer's expense over time?

Strange how things can change!

Capricorn2481 12 hours ago||
We've seen this sentiment shift on HN like 20 times in the past year, too often for it to be a real reflection of service quality. Feels more like people rooting for sports teams.

The services (OpenAI, Anthropic) are not wildly changing that much. People are just using LLMs more and getting frustrated because they were told it would change the world, and then they take it out on their current patron. Give it a month and we'll be hearing how far OpenAI has fallen behind.

torstenvl 12 hours ago||
I feel like almost everyone using AI for support systems is utterly failing at the same incredibly obvious place.

The first job of any support system—both in terms of importance and chronologically—is triage. This is not a research issue and it's not an interaction issue. It's at root a classification problem and should be trained and implemented as such.

There are three broad categories of interaction: cranks, grandmas, and wtfs.

Cranks are the people opening a support chat to tell you they have vital missing information about the Kennedy assassination, or that they want your help suing the government for their exposure to Agent Orange when they were stationed at Minot. "Unfortunately I can't help with that. We are a website that sells wholesale frozen lemonade. Good luck!"

Grandma questions are the people who can't navigate your website. (This isn't meant to be derogatory, just vivid; I have grandma questions often enough myself.) They need to be pointed toward some resource: a help page, a kb article, a settings page, whatever. These are good tasks for a human or LLM agent with a script or guideline and excellent knowledge/training on the support knowledge base.

WTFs are everything else. Every weird undocumented behavior, every emergent circumstance, every invalid state, etc. These are your best customers and they should be escalated to a real human, preferably a smart one, as soon as realistically possible. They're your best customers because (a) they are investing time into fixing something that actually went wrong; (b) they will walk you through it in greater detail than a bug report, live, and help you figure it out; and (c) they are invested, which means you have an opportunity for real loyalty and word-of-mouth gains.

What most AI systems (whether LLMs or scripts) do wrong is that they treat WTFs like they're grandmas. They're spending significant money on building these systems just to destroy the value they get from the most intelligent and passionate people in their customer base doing in-depth production QC/QA.
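The triage scheme described above can be sketched as code. This is a toy illustration, not anything any vendor actually ships: the keyword heuristics stand in for the trained classifier the comment calls for, and all category names and routing strings are made up for the example. The one structural point it encodes is the commenter's: anything that isn't clearly a crank or a grandma question defaults to WTF and goes to a human.

```python
from enum import Enum

class Category(Enum):
    CRANK = "crank"      # off-topic or unhelpable requests: politely decline
    GRANDMA = "grandma"  # navigation/how-do-I questions: point at a resource
    WTF = "wtf"          # everything else: escalate to a real human

# Toy keyword lists standing in for a trained classifier.
OFF_TOPIC = {"lawsuit", "suing", "conspiracy"}
NAVIGATION = {"password", "login", "how do i", "settings"}

def triage(message: str) -> Category:
    text = message.lower()
    if any(k in text for k in OFF_TOPIC):
        return Category.CRANK
    if any(k in text for k in NAVIGATION):
        return Category.GRANDMA
    # Crucially, the default bucket is WTF, not GRANDMA: weird,
    # undocumented behavior should reach a human, not a help page.
    return Category.WTF

def route(message: str) -> str:
    cat = triage(message)
    if cat is Category.CRANK:
        return "polite decline"
    if cat is Category.GRANDMA:
        return "link to kb article"
    return "escalate to human"
```

The design choice the comment argues for is the fall-through: misclassifying a WTF as a grandma question (routing it to a help page) is the expensive error, so the classifier should only send a ticket to self-service when it is confident.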

dboreham 12 hours ago|
This rings true. However, I have used one AI automated support chat that didn't behave that way. I wish I could remember the vendor, but I do remember being blown away when it said something like "that sounds like a real problem; would you like me to open a support ticket for this?" Which it then did, and subsequently a human addressed my issue.
burnJS 11 hours ago||
My experience is that Claude and others are good at writing methods and smaller units, because you can dictate what they should do in fewer tokens and easily read the code. This closes the feedback loop for me.

I occasionally ask AI to write lots of code such as a whole feature (>= medium shirt size) or sometimes even bigger components of said feature and I often just revert what it generated. It's not good for all the reasons mentioned.

Other times I accept its output as a rough draft and then tell it how to refactor its code from mid to senior level.

I'm sure it will get better but this is my trust level with it. It saves me time within these confines.

Edit: it is a valuable code reviewer for me, especially as a solo stealth startup.

0xchamin 7 hours ago||
One of the biggest problems with Claude is that it tries to do things I don't even ask for. I really like to have full control over what I do. Sometimes it feels like Claude has an urge to keep going with what it is hard-coded to do instead of waiting for my feedback. It looks like Claude considers everything to be a one-shot task. I may be wrong; this is just my personal experience.
olcay_ 5 hours ago|
Claude Code has something about picking sensible choices instead of asking questions in the system prompt, that's probably the problem.
airbreather 9 hours ago||
I am sort of in the same place; it seems to have lost enough of the magic that I might be better off trying to do more with local LLMs running on my 4090.

The thing is, running local LLMs gives some kind of reliability and fixed expectations, which saves a lot of time. Sure, Claude might be fantastic one day, but what do I do when the same workload churns out shit the next day and I am halfway through updating and referencing a 500-document set?

Better the devil you know and all that.

chaosprint 8 hours ago||
I bought a Claude membership a few days ago. I asked him to fix a React issue, a very simple UI modification with almost no logic. He still failed to understand it. And after three attempts, the 5-hour limit was reached. This was a disaster. I had to immediately buy a Codex membership and also tried Image2. I won't give Claude another chance.
jryio 8 hours ago|
I find it strange that you've anthropomorphized Claude but not ChatGPT, seemingly based on one having a human name and the other not.
duxup 10 hours ago|
I’ve definitely encountered a drop in Claude quality.

Even with a simple prompt focused on two files, I told Claude to do a thing to file A and not change file B (we were using it as a reference).

Claude’s plan was to not touch file B.

The first thing it did was alter file B. An astonishingly simple task and a total failure.

It was all of one prompt, simple task, it failed outright.

I also had it declare that some function did not have a default value, and then explain what the function does and how it defaults to a specific value...

Fundamentally absurd failures that have seriously impacted my level of trust with Claude.
