Posted by ersiees 3 days ago

Anthropic is Down(updog.ai)
153 points | 144 comments
davedx 3 days ago|
The great thing about LLMs being more or less commoditized is switching is so easy.

I use Claude Code via the VS Code extension. When I got a couple of 500 errors just now I simply copy pasted my last instructions into Codex and kept going.

It's pretty rare that switching costs are THAT low in technology!
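To illustrate the point about low switching costs, here is a hedged sketch of why switching is so cheap: most coding assistants accept roughly the same chat-style payload, so "switching" amounts to changing a base URL and API key. The provider config, endpoint path, and model names below are illustrative assumptions, not official values for either vendor.

```python
# Illustrative sketch: the same request, pointed at a different provider.
# URLs, paths, and model ids are placeholders, not real vendor config.
import json

PROVIDERS = {
    "anthropic": {"base_url": "https://api.anthropic.com/v1", "env_key": "ANTHROPIC_API_KEY"},
    "openai":    {"base_url": "https://api.openai.com/v1",    "env_key": "OPENAI_API_KEY"},
}

def build_request(provider: str, model: str, prompt: str) -> dict:
    """Assemble a provider-agnostic chat request; only the endpoint and key differ."""
    cfg = PROVIDERS[provider]
    return {
        "url": f"{cfg['base_url']}/chat/completions",
        "key_from": cfg["env_key"],
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# On a 500 from the primary, replay the same instructions against a fallback:
primary = build_request("anthropic", "claude-sonnet", "continue the refactor")
fallback = build_request("openai", "gpt-codex", "continue the refactor")
```

The copy-paste workflow described above is essentially this: the prompt is the portable part, and everything provider-specific fits in a two-entry dict.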

shermantanktop 3 days ago||
It’s not a moat, it’s a tiny groove on the sidewalk.

I’ve experienced the same. Even guide markdown files that work well for one model or vendor will work reasonably well for the other.

bgirard 3 days ago|||
The switching cost is so low that I find it's easier and better value to have two $20/mo subscriptions from different providers than a $200/mo subscription with the frontier model of the month. Reliability and model diversity are a bonus.
davedx 2 days ago||
Yes that's exactly what I have too.
paxys 3 days ago|||
Which is exactly why these companies are now all focused on building products rather than (or alongside) improving their base models. Claude Code, Cowork, Gemini CLI/Antigravity, Codex - all proprietary and don't allow model swapping (or do with heavy restrictions). As models get more and more commoditized the idea is to enforce lock-in at the app level instead.
alecco 3 days ago|||
FWIW, OpenAI Codex is open source, and they help other open source projects like OpenCode integrate their accounts (not just the expensive API), unlike Anthropic, which blocked that last month and forces people to use its closed-source CLI.
gundmc 3 days ago|||
Gemini CLI is open source too, though I think the consensus is it's a distant third behind Claude Code and Codex
_aavaa_ 3 days ago|||
The classic commoditize your complements.
bloppe 3 days ago|||
I only integrate with models via MCP. I highly encourage everybody to do the same to preserve the commodity status
LetsGetTechnicl 3 days ago|||
Using "low cost" and LLMs in the same sentence is kind of funny to me.
harrisi 3 days ago|||
I genuinely don't know how any of these companies can make extreme profit for this reason. If a company makes a significantly better model, shouldn't it be able to explain how it's better to any competitor?

Google succeeded because it understood the web better than its competitors. I don't see how any of the players in this space could be so much better that they could take over the market. It seems like these companies will create commodities, which can be profitable, but that's also incredibly risky for early investors and doesn't generate the profits that would be necessary to justify today's valuations.

crazygringo 3 days ago||
> If a company makes a significantly better model, shouldn't it be able to explain how it's better to any competitor?

No. Not if it's not trained on any materials that reveal the secret sauce on why it's better.

LLMs don't have introspection into their own training process or architecture.

harrisi 3 days ago||
That's my point. Anything that could exist that's significantly "better" would be able to share more about its creation. And anything that could be significantly better would have to be capable of "understanding" things it wasn't trained on.
crazygringo 3 days ago|||
That's not true. There are a million ways to be "significantly better" that don't involve knowledge about the model's creation. It can be 10x or 100x or 1000x more accurate at coding, for example, without knowing a single thing more about its own internal training methodology.
Smaug123 2 days ago|||
Could you share anything about your creation without having gone to school, where you were taught the answers? Can you deduce the existence of your hippocampus just by thinking really hard?
skydhash 3 days ago|||
> It's pretty rare that switching costs are THAT low in technology!

Look harder. Swapping USB devices (mouse, …) takes even less time. Switching Wi-Fi is also easy. Switching browsers works the same. I can equally use vim/emacs/vscode/sublime/… for programming.

pchristensen 3 days ago|||
Switching between vim <-> emacs <-> IDEs is way harder than swapping a USB device (unless you already know how to use them).
cozzyd 3 days ago||
I don't know, USB A takes 3 attempts to plug in for some reason.
bmitc 3 days ago||
Sometimes four!
ahmadyan 3 days ago||||
Good point. Those are standards; by definition, society forced vendors to behave and play nicely together. LLMs are not standards yet, and it is pure luck that English works fine across different LLMs for now. Some labs are trying to push their own formats and stop that, especially around reasoning traces, e.g. Codex removing reasoning traces between calls and Gemini requiring reasoning history. So don't take this for granted.
crazygringo 3 days ago||
I dunno. Text is a pretty good de facto standard. And they work in lots of languages, not just English.
jononor 3 days ago||||
You would have to buy said USB device and get it to your location. Switching WiFi is only easy if you mean on a single machine/gateway. Swapping the WiFi network equipment in an office is considerably more involved, depending on the desired configuration.
amelius 3 days ago||||
You make it sound like lock-in doesn't exist. But your examples are cherry-picked, and they're all standards anyway; their _purpose_ was easy switching between implementations.
NicuCalcea 3 days ago||||
Most people only have one mouse or Wi-Fi network. If my Wi-Fi goes down, my only other option is to use a mobile hotspot, which is inferior in almost every way.
oneeyedpigeon 3 days ago||
> Most people only have one mouse

Tell me you're not a Mac user without telling me you're not a Mac user...

NicuCalcea 3 days ago|||
Thankfully, not a Mac user, or even a wireless mouse user.
crazygringo 3 days ago|||
Huh?
oneeyedpigeon 3 days ago||
The default Apple mouse needs a backup because it still cannot be charged and used at the same time.
whatever1 3 days ago||||
I mean, Sublime died overnight when VS Code showed up.
davedx 2 days ago|||
Huh? I have the VSCode extensions for both. Switching is a couple of mouse clicks and copy paste.
falloutx 3 days ago|||
On some agents you just switch the model and carry on.
benterix 3 days ago||
Except Kimi Agent via the website is hard to replace. I tried the same task in Claude Code, Codex, and Kimi Agent, and for office tasks the results are incomparable; the versions from Anthropic and OpenAI are far behind.
hk__2 3 days ago||
Their GitHub issues are wild; random people are posting the same useless "bug reports" over and over multiple times per minute.

https://github.com/anthropics/claude-code/issues

phito 3 days ago||
Gives you a good window into a vibe coder's mentality. They do not care about anything except what they want to get done. If something is in the way, they will just try to brute-force it until it works, not giving a duck whether they are being an inconvenience to others. They're not aware of existing guidelines/conventions/social norms, and they couldn't care less.
stevenpetryk 3 days ago|||
This sounds like a case of the availability heuristic. It'd be worth remembering that you often don't notice people who are polite and normal nearly as much as people who are rude and obnoxious.
Forgeties79 3 days ago||||
I am starting to get concerned about how much “move fast break things” has basically become the average person’s mantra in the US. Or at least it feels that way.
embedding-shape 3 days ago|||
You're about a decade+ late to the party. This isn't some movement that happened overnight; it's a slow cultural shift that's been happening for quite some time already. Quality and stability used to be valued; judging by what most people and companies put out today, they seem to be focusing on quantity and "seeing what sticks" instead.
Forgeties79 3 days ago||
I’m not saying it’s a sudden/brand new thing, I think I’m just really seeing the results of the past decade clearly and frequently. LLM usage philosophies really highlight it.
embedding-shape 3 days ago||
> I’m not saying it’s a sudden/brand new thing

I was more referencing the "I'm starting to worry" part, while plenty of people have been cautiously observing from the sidelines all the trouble "move fast, break things" brought, many of them speaking up at the time too.

It's been pretty evident for quite some time. Even back in 2016, Facebook was used by the military to incite genocide in Myanmar, yet people were still not really picking up on the clues... That's a whole decade ago; times were different, yet things seem the same. That's fucking depressing.

Forgeties79 2 days ago||
I’m starting to think this is unproductive tbh
bandrami 3 days ago|||
Particularly since that mantra started around 2005 or so, which was exactly when Silicon Valley stopped creating companies that could run at a profit without a constant investor firehose.
egeozcan 3 days ago||||
Could it be that you're creating a stereotype in your head and getting angry about it?

People say these things about any group they dislike. It's gotten to the point that these days it feels like most social groups are defined by outsiders through the things they dislike about them.

phito 3 days ago||
Well, not really; vibe coding is literally brute-forcing things until they work, not caring about the details.
charcircuit 3 days ago|||
So, manual programming. Humans don't always get everything perfect on the first try either.
qbxk 3 days ago||||
If history doesn't repeat, but it rhymes,

does vibe coding rhyme with Eternal September?

falloutx 3 days ago||||
If anything, this is good news for Anthropic: they can now bury every open source project with useless issues and PRs.
soulofmischief 3 days ago|||
Are these superpredator vibe coders in the room with us right now?
monsieurbanana 3 days ago|||
Wow, are these submitted automatically by Claude Code? I'm not comfortable with the level of detail they have (the user's Anthropic email, the full path of the project they were working on, stack traces...)
tobyjsullivan 3 days ago|||
Scanning a few: some are definitely written by AI, but most seem genuinely human (or at least, not Claude).

Anecdata: I read five and found only one was AI. Your sampling may vary.

prodigycorp 3 days ago||||
I consider revealing my file structure and file paths to be PII, so naturally seeing people's comfort with putting all that up there makes me queasy.
philipwhiuk 3 days ago||||
No, but they are submitted by the sort of people who will use AI to write the GitHub issue details
xnorswap 3 days ago||||
I think Claude Code has a /bug command which auto-fills those details in a GitHub report.
embedding-shape 3 days ago||||
Definitely some automation involved; no way the typical user of Claude Code (no offense) would by default put so much detail into reporting an issue, especially users who don't seem to understand that it's Anthropic's backend that is the issue (given the status code) rather than the client/harness.
lavezzi 2 days ago||||
Nope, all user submitted likely with the assistance of Claude.
petters 3 days ago|||
How could they be? Claude was down
jscheel 3 days ago|||
and every single one of them checked "I have searched existing issues and this hasn't been reported yet"
david422 3 days ago|||
A long time ago I was taking flight lessons and I was going through the takeoff checklist. I was going through each item, but my instructor had to remind me that I am not just reading the checklist; I need to understand/verify each checklist item before moving on. It always stuck with me.
macintux 3 days ago||
A few times a year I have to remind my co-workers that reading and understanding error messages is a critical part of being in the IT business. I'm not perfect in that regard, but the number of times the error message explaining exactly what's wrong and how to solve it is right there in the screenshot they share is a little depressing.
delaminator 3 days ago||
Application Error:

The exception illegal instruction

An attempt was made to execute an illegal instruction.

(0xc000001d) occurred in the application at location.

Click on OK to terminate the program.

beAbU 2 days ago||
Yes, and this is an example of a horrible error message that does not help the user one iota.
kogasa240p 3 days ago|||
Some of them don't even have error messages.
kogasa240p 3 days ago|||
Goes to show that nobody reads error messages and it reminds me of this old blogpost:

> A kid knocks on my office door, complaining that he can't login. 'Have you forgotten your password?' I ask, but he insists he hasn't. 'What was the error message?' I ask, and he shrugs his shoulders. I follow him to the IT suite. I watch him type in his user-name and password. A message box opens up, but the kid clicks OK so quickly that I don't have time to read the message. He repeats this process three times, as if the computer will suddenly change its mind and allow him access to the network. On his third attempt I manage to get a glimpse of the message. I reach behind his computer and plug in the Ethernet cable. He can't use a computer.

http://coding2learn.org/blog/2013/07/29/kids-cant-use-comput...

ebonnafoux 3 days ago|||
It's wild that people check the box

> I have searched existing issues and this hasn't been reported yet

when the first 50 issues are about the same 500 error.

atonse 3 days ago|||
This is the kind of abuse that will cause them to just close GitHub issues.

Or they'll have to put something in the system prompt to handle this special case, so it first checks for existing bugs and upvotes one rather than creating a new one.

orphea 3 days ago|||
I'm not too sympathetic to Anthropic. They did it to themselves by hyping AI and attracting that kind of person.

And it's not like they have been taking care of issues anyway.

echelon 3 days ago||||
The automation of the SWE.
ddmma 3 days ago|||
should enable some kind of agent automation
nerdjon 3 days ago|||
There has to be some sort of automation making these issues; too many of them are identical but posted by different people.

Also love how many have the "I searched for issues" box checked, which is clearly a lie.

Does Claude Code make issue reports automatically? (And how exactly would it be doing that if Anthropic was down, when the use of an LLM in the report is obvious?)

nhubbard 3 days ago|||
I've made a feature request there to add another GitHub Actions bot to auto-close issues reporting errors like this when an outage is happening. It would definitely help cut through the noise.

https://github.com/anthropics/claude-code/issues/22848
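
The feature request above can be sketched roughly as follows. This is a hypothetical sketch of the decision logic such a bot might use, not anything from the linked issue: it assumes the provider's status feed uses the common Statuspage-style `/api/v2/status.json` shape (an `indicator` of `"none"` when healthy), which should be verified before relying on it.

```python
# Hypothetical auto-close logic: close generic error reports only while
# the status feed reports an active incident. The JSON shape assumed here
# is the common Statuspage "/api/v2/status.json" format; verify before use.
def should_autoclose(status_json: dict, issue_title: str) -> bool:
    """Return True if this issue looks like an outage report filed during an outage."""
    degraded = status_json.get("status", {}).get("indicator", "none") != "none"
    looks_like_outage_report = any(
        marker in issue_title.lower() for marker in ("500", "api error", "overloaded")
    )
    return degraded and looks_like_outage_report

# During an incident, a "API Error: 500" issue would be closed;
# once the indicator returns to "none", new reports pass through untouched.
incident = {"status": {"indicator": "major"}}
all_clear = {"status": {"indicator": "none"}}
```

A real bot would wrap this in a scheduled or issue-triggered GitHub Actions workflow and post a comment pointing at the status page before closing.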

LetsGetTechnicl 3 days ago|||
That's what happens when people outsource their mental capacity to a machine
cing 3 days ago|||
GitHub issues will be the real social network for AI agents. No humans allowed!
hacker_homie 3 days ago|||
Couldn't have happened to a better repo; I needed that chuckle.
falloutx 3 days ago||
That's exactly what Anthropic deserves (btw, they can't even get "anthropic" on GitHub lmao; this must be the biggest company having to run under the wrong ID on GitHub).
palcu 3 days ago||
Hey folks, I'm Alex from the reliability team at Anthropic. We're sorry for the downtime, and we've posted a mini retrospective on our status page. We'll also be doing a more in-depth retrospective in the following days.

https://status.claude.com/incidents/pr6yx3bfr172

acedTrex 3 days ago||
If this impacts you as an "engineer" beyond "oh, that's minorly annoying, I'll go do it another way", please do some soul searching.
shepherdjerred 3 days ago||
I’m sure there are plenty of tools you rely on as an “engineer” as well
ares623 3 days ago||
And none of them demand my retirement funds to continue existing.
falloutx 3 days ago|||
Code quality increased.
couchdb_ouchdb 3 days ago||
Or just take some time off from the grind and enjoy your life.
embedding-shape 3 days ago||
For folks who have the latest version of LM Studio (0.4.1) installed: I just noticed they added endpoints compatible with Claude Code, so maybe this is an excellent moment to play around with local models, if you have the GPU for it. zai-org/glm-4.7-flash (Q4) is supposed to be OK-ish and should fit within 24GB VRAM. It's not great, but it's always fun to experiment, and if the API stays down, you have some time to waste :)
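For anyone wanting to try that, a rough sketch of the wiring: Claude Code can be pointed at an alternate backend via the ANTHROPIC_BASE_URL environment variable. The port below is LM Studio's usual default (1234), but that and the dummy key are assumptions; check your own LM Studio server settings.

```shell
# Hedged sketch, not official docs: route Claude Code to a local
# LM Studio server instead of Anthropic's API.
export ANTHROPIC_BASE_URL="http://localhost:1234"   # LM Studio's default port (assumption)
export ANTHROPIC_API_KEY="lm-studio"                # local servers typically ignore the key
# then launch as usual:
#   claude
```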
jwr 3 days ago||
I find it a bit annoying that the last place where I can learn about an Anthropic outage is the Anthropic Status page.
sjm-lbm 3 days ago||
As best as I can tell, there were less than 10 minutes between the last successful request I made and when the downtime was added to their status page. And I'm not particularly heavy with my usage or anything; the gap could have been less than that.

Honestly, that seems okay to me. Certainly better than what AWS usually does.

seaal 3 days ago|||
https://status.claude.com/

What do you mean? It's right there. Judging by the GitHub issues, it only took them about 10 minutes to post the incident message.

jMyles 3 days ago|||
It appeared there like 5 minutes ago; it was down for at least 20 before that.

That's 20 minutes of millions of people visiting the status page, seeing green, and then spending that time resetting their context, looking at their system and network configs, etc.

It's not a huge deal, but for $200/month it'd be nice if, after the first two thousand 500s went out (which I imagine takes less than 10 seconds), the status page automatically went orange.

jwr 2 days ago||||
It was longer than 10 minutes, I'd say 15-20 minutes this time. They should be much quicker, I would expect <5 minutes.
wild_egg 3 days ago|||
It took them about 15 minutes to update that page
cirrusfan 3 days ago||
Anthropic might have the best product for coding, but good god, the experience is awful. Random limits you hit when you _know_ you shouldn't have yet, the jankiness of their client, the service being down semi-frequently. It feels like the whole infra is built on a house of cards and badly struggles 70% of the time.

I think my $20 OpenAI sub gets me more tokens than Claude's $100. I can't wait until Google or OpenAI overtake them.

falloutx 3 days ago||
Because they update it every day, and the team has not heard of something called stability. This is a direct result of moving fast and breaking too many things all at once.
somewhereoutth 3 days ago||
In other news: https://fortune.com/2026/01/29/100-percent-of-code-at-anthro...
emsign 3 days ago||
Big models are single points of failure. I don't want to rely on those for my business, security, wealth, health and governance.

Why do people have to learn the same lessons over and over again? What makes them forget or blind to the obvious pitfalls?

rowanseymour 3 days ago||
Aren't most developers accessing Anthropic's models via VS Code/GitHub? It takes seconds to switch to a different model. Today I'm using Gemini.
kogasa240p 3 days ago||
The entire AI hype was started because Silicon Valley wanted a new SaaS product to keep themselves afloat, notice that LLMs started getting pushed right after Silicon Valley Bank collapsed.
yoavsha1 3 days ago||
Both the CC API and their website -- hopefully it's related to the rumored Sonnet 5 release.
eleventhborn 3 days ago||
That will be one strange way to release a model.
yoavsha1 3 days ago||
I mean, can you expect a vibe-coding company to do stuff with zero downtime? They brought the models down and are now panicking at HQ since there's no one to bring them back up.
nicpottier 3 days ago|||
This made me laugh only because I imagine there could possibly be some truth to it. This is the world we are in. Maybe they all loaded Codex to fix their deploy? ;)
copilot_king 3 days ago|||
[dead]
kachapopopow 3 days ago|||
It is not; this sounds like an issue with AWS.
8cvor6j844qw_d6 3 days ago||
OpenClaw agents on Anthropic API taking an unscheduled coffee break.
jsnell 3 days ago|
Status page: https://status.claude.com/incidents/pr6yx3bfr172