
Posted by y42 11 hours ago

I cancelled Claude: Token issues, declining quality, and poor support (nickyreinert.de)
772 points | 467 comments
ChicagoDave 7 hours ago|
I think there’s a clear split amongst GenAI developers.

One group is consistently playing whack-a-mole with different models/tools and prompt engineering, and has seen a sine wave of success.

The other group, seemingly made up of architects and Domain-Driven Design adherents, has had a straight line of high productivity, generating clean code regardless of model and tooling.

I have consistently advised all GenAI developers to align with that second group, but it’s clear many developers insist on the whack-a-mole mentality.

I have even wrapped my advice in https://devarch.ai/ which has codified how I extract a high level of quality code and an ability to manage a complex application.

Anthropic has done some goofy things recently, but they cleaned it up because we all reported issues immediately. I think it’s in their best interests to keep developers happy.

My two cents.

joquarky 7 hours ago||
I kind of wonder if people with ADHD tend to fall into the latter group, as we are used to setting guardrails to keep us aligned to a goal.
ChicagoDave 40 minutes ago||
ADHD may be a part of it but mentoring and some experience with TDD or strong unit testing is also a part.
camel_Snake 7 hours ago|||
FYI that prominent link to your sharpee repo on GitHub 404s
ChicagoDave 7 hours ago||
OMG (fixed) - I updated devarch to check all HTML links for accuracy. This came from a recent update to the website to simplify it, and of course Claude completely hallucinated that URL.

You can NEVER stop being vigilant. This is why I still have no faith in things like OpenClaw. Letting an AI just run off unsupervised makes me sweat.

rglover 7 hours ago|||
Dead on. Any company not thinking about this like the 2nd group is setting themselves up for a bad time (and sadly, anecdotally, that seems to be an emerging majority).
ChicagoDave 39 minutes ago||
Sadly, I agree that the majority are undisciplined.
estimator7292 5 hours ago||
IME it seems that output quality is directly proportional to the amount of engineering effort you put in. If a bug happens and you just tell the model to fix it over and over with no critical thinking, you end up with an 800-line shell script meant to change the IP address on an interface (real example). If you stop and engage your brain to reason about bugs and explain the problem, the model can fix it in an acceptable manner.

If you want to get good results, you still have to be an engineer about it. The model multiplies the effort you put in. If your effort and input is near zero, you get near zero quality out. If you do the real work and relegate the model to coloring inside the lines, you get excellent results.
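For a sense of scale on that "800-line script" anecdote: changing an interface's IP address directly is only a few commands. A minimal sketch using iproute2 (the interface name and address here are placeholders, not from the original comment), which would need root to actually run:

```shell
# Minimal sketch: assign a static IPv4 address with iproute2
# (eth0 and 192.0.2.10/24 are placeholder values)
ip addr flush dev eth0            # clear any existing addresses
ip addr add 192.0.2.10/24 dev eth0
ip link set eth0 up               # bring the interface up
```

The gap between these three lines and an 800-line generated script is the point: unguided iteration accumulates cruft instead of converging.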

ChicagoDave 38 minutes ago||
Even my guardrails can’t replace experience. You have to pay attention. This is exactly how some devs land in whack-a-mole loops.
cbg0 10 hours ago||
I've been a fan since the launch of the first Sonnet model, and big props for standing up to the government, but you can sure lose that good faith fast when you piss off your paying customers with bad communication, shaky model quality, and lowered usage limits.
arikrahman 3 hours ago||
I use Aider nowadays, and will probably cancel my GitHub multi-AI bundle subscription due to the new training policy. I find that using Aider with the new open models, plus Open Spec to negotiate requirements before the handoff, has helped me a lot.
aucisson_masque 4 hours ago||
The first time I ever used AI to code was a week ago; I went with Claude Pro because I didn't want to commit.

The $20 plan has incredible value, but the limits are just way too tight.

I'm glad Claude made me discover the strength of AI, but now it's time to poke around and see what is more customer friendly. I found DeepSeek V4 to be extremely cheap and just as good.

Plus I get the benefit of using it in VS Code instead of Claude's proprietary app.

I think that once people get over the hype and social pressure, Anthropic will lose quite a lot of customers.

petterroea 10 hours ago||
Looking at Anthropic's new products I think they understand they don't really have a cutting edge other than the brand.

I tried Kimi 2.6 and it's almost comparable to Opus. Anthropic dropped the ball. I hope this is a sign that we are moving towards a future where model usage is a commodity, with heavy competition on price/performance.

mmonaghan 8 hours ago||
Kimi is nowhere close to Opus on extended use, but definitely highly competitive with Sonnet. I will probably end up using Kimi for personal stuff when I find some time to get it running, or get a non-Anthropic/OpenAI harness set up on my personal machine.
jetbalsa 8 hours ago|||
I've been mostly using Kimi as a hack of sorts, putting it places I want to attach AI directly, as the API terms for their plans are not completely user hostile. Need to do OCR for scanning Magic: The Gathering cards? Sure! Have it attached to X4: Foundations as an AI manager for some stuff? Sounds fun. Can't really do that with Claude.
alex-onecard 9 hours ago||
How are you using Kimi 2.6? I am considering their coding plan to replace my Claude Max 5x, but I am worried about privacy and security.
ac29 8 hours ago|||
I'm using it via OpenCode Go, which claims to only use Zero Data Retention providers.

How much you trust any particular provider's claim to not retain data is subjective though.

petterroea 8 hours ago|||
I'm only using it for a project I'm already expecting to open source later. I don't think I'm comfortable using it for more private work.
lukaslalinsky 7 hours ago||
I feel like Opus 4.5 was the peak in Claude Code usefulness. It was smart, it was interactive, it was precise. In 4.6 and 4.7, it spends a long time thinking and I don't know what's happening, often hits a dead-end and just continues. For a while I was setting Opus 4.5 in Claude Code, but it got reset often. I just canceled my Max plan, don't know where to look for alternatives.
stldev 10 hours ago||
Same, after being a long-time proponent too.

First was the CC adaptive thinking change, then 4.7. Even with `/effort max` and keeping under 20% of 1M context, the quality degradation is obvious.

I don't understand their strategy here.

siliconc0w 10 hours ago||
Shameless self-plug, but I'm also worried about silent quality regressions, so I started building a tool to track coding-agent performance over time: https://github.com/s1liconcow/repogauge

Here is a sample report that tries out the cheaper models plus the newest Kimi 2.6 model against the 5.4 'gold' test cases from the repo: https://repogauge.org/sample_report.

conception 10 hours ago|
This is cool - just wanted to note https://marginlab.ai is one that has been around for a while.
aleksiy123 9 hours ago||
Are there any tools anyone knows of to collect this kind of telemetry while using the tools, instead of offline evals?

Running evals seems like it may be a bit too expensive as a solo dev.

binaryturtle 10 hours ago||
I have a simple rule: I won't pay for that stuff. First they steal all my work to feed into those models, afterwards I shall pay for it? No way!

I use AI, but only what is free-of-charge, and if that doesn't cut it, I just do it like in the good old times, by using my own brain.

joquarky 6 hours ago|
Sorry your work got stolen. Did you have any backups?
zulban 4 hours ago|
Curious. Not my experience whatsoever.

I tried Claude recently and it was able to one-shot fixes on 9/9 of the bugs I gave it on my large and older Unity C# project. Only 2/9 needed minor tweaks for personal style (functionally the same).

Maybe it helps that I separately have a CLI with very extensive unit tests. Or that I just signed up. Or that I use Claude late in the evenings (off hours). I also give it very targeted instructions, and if it's taking longer than a couple of minutes, I abort and try a different or more precise prompt. Maybe the backend recognizes that I use it sparingly and I get better service.

The author describes what sounds like very large tasks that I'd never hand off to an AI to run wild in 2026.

Anyway I thought I'd give a different perspective than this thread.
