It's down at least once a day for me anyway. It used to go down around the beginning and end of US working hours; now that they've geographically spread out their servers, it goes down at random times instead.
TBH I'm not so irritated by this. It keeps me grounded and saves me from unconsciously outsourcing all the hard work of thinking to AI.
I spend 10h/day in Claude Code and I don't remember the last time it was unavailable to me, maybe a couple of months ago? I suspect this is highly dependent on location and timezone, but at least from within Central Europe it has been smooth sailing (apart from this morning...)
Stability has seen a marked improvement since summer. I used to regularly get 429 errors multiple times per session; lately it's been quite a while since it errored out or silently exited the chat with no response. I don't usually code at peak times, though. When it does go down, it's usually morning in the US, when all of the Europeans are still using it too.
Same, it has been mostly stable. Because of the instability before, we have our own scripts protecting it so that it continues after a failure, covering several different failure modes.
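Not their actual setup, but a minimal sketch of the idea in Python, assuming Claude Code is being driven non-interactively from a script (the claude -p print-mode invocation here is an assumption; adapt it to however you call the CLI): retry the call with exponential backoff instead of letting the whole pipeline die on a transient failure.

    import subprocess
    import time

    # Hypothetical wrapper: re-run a non-interactive Claude Code call when it
    # fails (rate limits, dropped connections, server errors), backing off
    # exponentially between attempts.
    def run_with_retries(prompt: str, max_attempts: int = 5) -> str:
        delay = 5  # seconds before the first retry
        for attempt in range(1, max_attempts + 1):
            result = subprocess.run(
                ["claude", "-p", prompt],  # assumed non-interactive invocation
                capture_output=True,
                text=True,
            )
            if result.returncode == 0:
                return result.stdout
            print(f"Attempt {attempt} failed ({result.returncode}): "
                  f"{result.stderr.strip()[:200]}")
            if attempt < max_attempts:
                time.sleep(delay)
                delay *= 2  # 5s, 10s, 20s, ...
        raise RuntimeError("Claude Code kept failing; giving up.")

    if __name__ == "__main__":
        print(run_with_retries("Summarize the TODOs in this repo."))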
I didn't even know they had a status page. Claude (with pro subscription) is often so unreliable with regards to connectivity and performance that I'm looking for something more predictable.
It randomly fails halfway through a response, sometimes very slow to start, hangs for long periods during a response, and so on.
The Claude chat interface can also slow down during long sessions. I sometimes use Claude Code, which is better, but I'm not a huge fan of terminal interfaces. I'm aware of third-party frontends, but I believe those require API access, which I don't like for personal use.
Well, that has happened sometimes; I usually just say "continue".
But what I meant was that the whole response completely disappears. Sometimes the text I wrote previously is pasted back into the text input, but sometimes it's not.
I have this habit of copying my prompt in case it happens.
Try Gemini to see how bad it can really get.
Most of the time, 2.5 Pro requests fail for unknown reasons in the app.
Claude and ChatGPT are way more reliable.
I use Gemini via the web app and the mobile app. Both are very unreliable. Anthropic and OpenAI don't have more resources than Google but still get it right most of the time; the quality of product development isn't even in the same league.
Gemini is much, much better through AI Studio, though.
And no, Claude sucks ass. It's like Anthropic does not want to make money. For a company that's targeting enterprise customers, they are totally unprepared. Like forget customer support, they can't even sell properly. They brag about insane capabilities on the Max plan but good luck trying to buy that on a team plan with company billing.
Even if OpenAI doesn't have the best model, at least they know what to do to make money.
My theory, beyond their organizational incentive issues, is that Google’s UIs are so pathetically bad because the company is so gung ho about “web first”. The web is a wonderful thing, but it’s set UI development back by decades.
I think the decline in UI quality is real, but I don't think the web takes all of the blame. The blame that it does take is due to a sort of mixed bag of advantages and disadvantages: web technologies make it quicker and easier to get something interactive on the screen, which is helpful in many ways. On the other hand, because it lowers the effort needed to build a UI, it encourages the building of low-effort UIs.
Other forces are to blame as well, though. In the 80s and 90s there were UI research labs in industry that did structured testing of user interactions, measuring how well untutored users could accomplish assigned tasks with one UI design versus another, and there were UI-design teams that used the quantitative results of such tests to design UIs that were demonstrably easier to learn and use.
I don't know whether anyone is doing this anymore, for reasons I'll mention below.
Designing for use is one thing. Designing for sales is another. For sales you want a UI to be visually appealing and approachable. You probably also want it to make the brand memorable.
For actual use you want to hit a different set of marks: you want it to be easy to learn. You want it to be easy to gradually discover and adopt more advanced features, and easy to adapt it to your preferred and developing workflow.
None of these qualities is something that you can notice in the first couple of minutes of interacting with a UI. They require extended use and familiarization before you even know whether they exist, much less how well designed they are.
I think there has been a general movement away from design for use and toward design for sales. I think that's perfectly understandable, but tragic. Understandable because if something doesn't sell, it doesn't matter what its features are. Tragic because optimizing for sales doesn't necessarily make a product better for use.
If a large company making a utility cares, it will have one or more UX people, sometimes part of a design team, to make sure things are usable.
But if you're really big, you can also test in production with A/B testing. As you said, though, the motivation tends to be to get people to click some button that creates revenue for the company (subscribe, buy, click an ad).
Somewhat related to this, the Google AI Studio interface was really pushing Google Drive. I think they have reduced it now, but in the beginning, if you wanted to upload a single file, you had to upload it to Drive first and then use it from there.
There was also an annoying banner above the prompt input, which you couldn't remove, that tried to get you to connect to Drive.
Yes, true. It's basically form over function, and it's not limited to web UIs.
Windows 11, iOS 7, and iOS 26 are just a few examples of non-web UIs that focused first on optimizing for sales, i.e. making something look good without thinking through the usability implications.
Gemini is embarrassingly bad. It outright doesn’t work. I mean, it actually goes out and does stuff but it’s 100% of the time random. Even third-party forks of it work better (like Qwen Code), which is just wild.
This makes me wonder: what do developers, who completely rely on LLMs to write their code, do when the service is down?
I realize this is already a problem for other jobs that require working with SaaS, but it seems odd to me that now some developers will fall into this "helpless" category as well.
Early in my career (which started in civil engineering) I was working with a man at the very end of his, which started in the 1950s. I was the young tech-focused intern who found a way to use a computer for everything even when printed and sometimes hand-drawn plans were the standard of the day. He asked me once if I knew how to use a slide rule, which I didn't.
"Well, what do you do when the power goes out?", he asked.
"I go home, just like you would.", I said with a smile.
He paused for a moment and nodded, "you know, you're absolutely right".
Nice story. I guess it can be read as a sort of parable. But if I take it literally: I have never had a power outage at work, but SaaS downtime happens every year (probably multiple times).
Serious answer: I can write code manually, but it feels like a waste of time. If a service is down, I'll just go for a walk to synthesize my ideas, and I don't think not writing actual code for a day is a huge problem. So focus on health and maybe even talk to humans.
> This makes me wonder: what do developers, who completely rely on LLMs to write their code, do when the service is down?
Even the engineers at these AI companies can't use these LLMs to fix an outage when there is one. Especially SREs.
But if one has to just sit there and "wait" for the outage to subside then perhaps the kitchen timer just went off and declared that these "developers" are cooked.
If they're smart, they just switch to ppq.ai or OpenRouter, where they can purchase prepaid tokens and route to many alternative models from various providers.
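For the OpenRouter route, a minimal sketch (the model IDs below are illustrative assumptions, not a guaranteed current list) is to hit its OpenAI-compatible chat completions endpoint and fall back through a few candidate models on the client side:

    import os
    import requests

    # OpenRouter exposes an OpenAI-compatible endpoint; loop over a few
    # candidate models and use the first one that answers.
    OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
    CANDIDATE_MODELS = [              # example IDs; check the current catalog
        "anthropic/claude-sonnet-4",
        "openai/gpt-4o",
        "qwen/qwen-2.5-coder-32b-instruct",
    ]

    def ask(prompt: str) -> str:
        headers = {"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"}
        for model in CANDIDATE_MODELS:
            resp = requests.post(
                OPENROUTER_URL,
                headers=headers,
                json={"model": model,
                      "messages": [{"role": "user", "content": prompt}]},
                timeout=120,
            )
            if resp.ok:
                return resp.json()["choices"][0]["message"]["content"]
            print(f"{model} failed with HTTP {resp.status_code}, trying the next one")
        raise RuntimeError("All candidate models failed.")

    if __name__ == "__main__":
        print(ask("Explain what a 429 response means."))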
We usually try to figure out how to build reliability/redundancy in step with what we require to function as a society under most circumstances without taking outsized losses.
When things go worse than anticipated, we take the hit, try to recover and maybe learn to strengthen the system afterwards. I would rate us roughly okay-ish at that, mostly because I don't know what to compare it to, since we are the only species to do it at this level to my knowledge.
Pro tip: if you pay for Claude, also subscribe to status updates at https://status.claude.com. You may want to add a rule to filter these into a tag or folder, as they can be quite spammy, but it has helped me a lot. It tells you which specific models are down and which platforms are affected: Claude web, the apps, the API, etc.
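If you'd rather poll than get email, and assuming status.claude.com is a standard Statuspage instance (it appears to be, but that's an assumption), its public JSON summary can be checked with something like this:

    import requests

    # Assumes a standard Atlassian Statuspage JSON summary endpoint.
    SUMMARY_URL = "https://status.claude.com/api/v2/summary.json"

    def print_status() -> None:
        data = requests.get(SUMMARY_URL, timeout=10).json()
        print("Overall:", data["status"]["description"])
        # Flag any component (claude.ai, API, ...) that is not fully operational.
        for component in data.get("components", []):
            if component.get("status") != "operational":
                print(f"  {component['name']}: {component['status']}")

    if __name__ == "__main__":
        print_status()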
I signed up for a Claude Teams account last week. File uploads were not working anywhere - web, desktop, Android app. As soon as I switched over to a personal account they worked again. Switch back to Team account, broken again with just a cryptic 404 error popping up.
They were broken for a week; I found several people talking about it on Reddit. But no word from Anthropic, and nothing on the status page.
I opened a support request; there was no response until 3 or 4 days later, when someone messaged to say that it was fixed, and a status page entry related to it magically appeared.
Still seeing issues on the OAuth flow despite a "a fix [having] been implemented". Looks like whatever happened probably trashed the session database since it's forcing Claude Code to re-auth.
This has 100% ruined my morning! -_-
I tried many different ways to get in; I'm using Claude in the terminal in Cursor.
Fix for me (hopefully it works for you too): I logged out of Claude, restarted Cursor, and used the Anthropic console login method instead of the normal login. When you click the link, it gives you the option to sign in with chat credentials instead; that didn't work, so I clicked the link Claude gave me a few times and kept trying. Finally I was given a pastable code, which took a while to be accepted in the terminal, but now I'm logged in.
Damn... Sorry to hear it. I was watching the console during my attempts and saw many errors on the failed ones; the attempt that worked didn't show any console error logs, if that helps.
They marked it as resolved, but it is not.
Resolved: This incident has been resolved. (Oct 31, 2025, 10:36 UTC)
Update: We are continuing to monitor for any further issues. (Oct 31, 2025, 10:36 UTC)
Monitoring: A fix has been implemented and we are monitoring the results. (Oct 31, 2025, 09:46 UTC)
Identified: The issue has been identified and a fix is being implemented. (Oct 31, 2025, 09:30 UTC)
Update: We are continuing to investigate this issue. (Oct 31, 2025, 09:25 UTC)
Investigating: We are currently investigating this issue. (Oct 31, 2025, 09:17 UTC)
This incident affected: claude.ai, platform.claude.com (formerly console.anthropic.com), and Claude API (api.anthropic.com).
https://status.claude.com/incidents/s5f75jhwjs6g
Until it's back up and running, something that worked for me was to open the web version of Claude Code, link the repository there, and ask it to implement something. After it finished, the "Open in CLI" button became available, letting me use the Claude Code CLI from the terminal again.
I just managed to log in to Claude Code. You have to spam the login: when you get the screen with "Authorize" on it, press it multiple times. You will receive errors; then try again. I tried something like 70 times and it finally started.
I wonder if it's a coincidence: today Anthropic emailed all previous Claude customers offering a free month of 5x Claude Code if they sign up again.
Plenty on Reddit saying they did. And I did.
Could the outage be the result of an "unexpected" surge in account activations and use?
Has anyone else pretty much stopped using AI at this point? The only thing I use it for is helping generate READMEs or Javadocs, which I then heavily edit. I had it in my workflow, but it burned me so many times that I just went back to Google and Stack Overflow.
I'm sure some people did, but I personally use them every day: for coding tasks, for language translation, for research, for current events (Grok is really good at this thanks to being connected to X's real-time data), for day-to-day questions (like my daughter asking me what a certain Pokémon is called), and so on.
Never really started. Really the only properly good AI thing I've used is AI autocompletion, which is generally higher quality than the traditional completions I've used. Not that it's perfect; at least the one in Xcode has hallucinated on me.
If I have a question for SO these days then I ask Claude instead and tell it to use SO where possible. It's preferable to actually asking it on SO which often results in the question being edited, downvoted and closed by someone with an anime child profile picture.
What was the question like? "How to print a decimal in C"? Valuable questions aren't downvoted. If you ask about something that could be found on the first page of Google results, then it's no surprise you are being downvoted.
I asked a novel question, well-written and clear, and a "subject matter expert" decided it was too similar to another question (it wasn't), so they defaced it, downvoted it and closed it.
Stack Overflow is dying; it's extremely difficult to get new questions through. Even if they survive moderation, they're unlikely to get answers.
I wonder if there will be any compensation for customers in this situation, such as an extra day ^^. I doubt that Claude, such a stickler for weekly limits, will be a stickler for rewards.
All the major providers seem to have blips on latency and other issues at least once a day, which would be a “service degradation” by the standards of normal apps.
5 hours in and I'm pretty disappointed in Anthropic's response to this. Looking at their status page you'd think everything was OK: the downtime was only 37 minutes, a fix is in place, and they're "monitoring". That's total BS; I can't even log in. I can forgive downtime, we've all been there, but I really don't like it when companies try to smooth it over with little self-serving minimizations of the disruption... while it's still in progress!
I use Claude Code for programming work, but I choose OpenAI anywhere customer-facing and this cute little outage is making me feel better about that with every passing minute. NOT cool.
I mean, I can't give you my company code, but my GitHub is not difficult to find. That said, I'm a little confused by your offer, because I feel like this is doomed to fail since you don't have my context as to what I want improved.
Is this news? The Claude API is down fairly often at various times of day, although it has gotten better recently (which coincided with them heavily restricting usage).
I was pretty against coding tools like this until I had to customize an open-source library, written in a language that I don't know, mostly to show an MVP.
For that purpose? It lets me do things I never would have even tried.
Isn’t it funny how people think? Say anything about Palestine not deserving to be flattened and you are now an anti-Semite and a Nazi. Mention that NATO is not as innocent and harmless as a fly, and that being on the other side of its friendly border is definitely reason for concern, and you are now a Putin puppet. You know, I lived in a dictatorship briefly. That’s exactly how the thinking goes when they reason about opposition and free thought.
If LLM use were as valuable as the adherents claim it is, this news would be on par with AWS US East 1 being down.
LLMs neither have the mechanical reliability we expect from computers (does it the same way every time) nor the flexible reliability we expect from biological intelligences (solves or works around the unexpected sub-problems as they arise).
What a weird top comment. It opens with some subtle name calling ("adherents"?), an unwarranted conclusion and then a statement about what LLMs lack (which is no surprise) that we're meant to draw a conclusion from?
I find the debate about LLMs rather exhausting. I find them useful, and almost every day someone on social media tells me I'm mistaken, lying, or merely deluded.