
Posted by y42 20 hours ago

I cancelled Claude: Token issues, declining quality, and poor support (nickyreinert.de)
881 points | 519 comments
dannypostma 13 hours ago|
When I saw the German screenshot it all made sense to me.
captainregex 16 hours ago||
anyone remember the whole “delete uber” thing from 2017ish? good times
smashah 6 hours ago||
Did the same with Google AI Ultra. They rug-pulled the subscribers. They changed the deal, we cancel. Simple.
bad_haircut72 19 hours ago||
Waiting 60s every time I send a message really kills the UX of Claude.
spaceman_2020 14 hours ago||
4.7 is the breaking point for me

It's almost unusable

postepowanieadm 18 hours ago||
Yeah, session limits are kinda show stoppers.
zh_code 19 hours ago||
I just cancelled my Max20 plan yesterday.
varispeed 19 hours ago||
It also seems to me they route prompts to cheaper, dumber models that present themselves as e.g. Opus 4.7. Perhaps that's what "adaptive reasoning" is: we'll route your request to something like Qwen that says it's Opus. Sometimes I get a good model, so I've taken to asking a difficult question first; if the answer is dumb, I terminate the session, start again, and only then send the real prompt. But there is no guarantee the model won't be downgraded mid-session. I wish they just charged the real price and stopped these shenanigans. It wastes so much time.
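The probe-first workflow described in this comment can be sketched as a short script. Everything here is illustrative: the `ask` callable stands in for whatever chat API is in use, and the probe question and quality check are assumptions, not anything from the thread.

```python
# Sketch of the "ask a hard question first, restart if the answer is dumb"
# workflow. `ask(session, prompt)` is a stand-in for a real chat-API call.

def looks_smart(answer: str) -> bool:
    """Crude quality check: does the probe answer contain the known result?"""
    return "391" in answer  # 17 * 23 = 391

def run_with_probe(ask, real_prompt: str, max_restarts: int = 3):
    """Send a probe first; only send the real prompt if the session's model
    answered the probe well, otherwise abandon the session and retry."""
    probe = "What is 17 * 23? Answer with just the number."
    for _ in range(max_restarts):
        session = []  # a fresh session per attempt
        if looks_smart(ask(session, probe)):
            return ask(session, real_prompt)
        # dumb answer: terminate this session and start a new one
    return None  # never got a good model within the restart budget

# Demo with a stub "API" whose model is only good on the second session.
calls = {"n": 0}
def stub_ask(session, prompt):
    if prompt.startswith("What is 17 * 23"):
        calls["n"] += 1
        return "391" if calls["n"] >= 2 else "I think it's about 400"
    return f"answer to: {prompt}"

print(run_with_probe(stub_ask, "refactor my parser"))
# prints "answer to: refactor my parser"
```

The check is deliberately cheap: one short factual probe per session, so the retry loop costs little compared to sending the real prompt to a degraded model.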
dswalter 19 hours ago|
You're describing a Taravangian prompt situation (a character in a book series who wakes up with a different/random intelligence level each day and has a series of tests for himself to determine which kind of decisions he's capable of that day). https://coppermind.net/wiki/Taravangian
r00t- 18 hours ago||
Same, it's a mess.
tamimio 11 hours ago|
Very similar experience. I didn't use Claude for anything in production, but I did run some tests on a few topics and questions about things I know. Initially it works very well, but as soon as you dive deeper you get all sorts of extra nonsense that was never asked for and isn't useful: workaround after workaround, duct-tape solutions. Several times I would say "no, why are you introducing xyz, that will cause this and that" only to get an answer like "thanks for pushing back, you are right bla bla".

We probably hit peak generative AI last year. Now they probably use AI to improve the AI, so it's kind of garbage in, garbage out. Or maybe Anthropic is deprioritizing users while favoring enterprise or even government, where it provides better quality for higher-value contracts.
