> Incorrect routing affected less than 0.0004% of requests on Google Cloud's Vertex AI between August 27 and September 16.
Matches my experience. I use CC through our enterprise Vertex AI account and never noticed any degradation.
In general it seems like these bugs, while serious, were substantially less prevalent than anecdotal online reports would have you believe. We're really talking about a ~1-2 week window where most issues were concentrated, and a relatively small percentage of total requests and total users were impacted.
ispeaknumbers · 14m ago
I'm not sure if you can claim these were "less prevalent than anecdotal online reports". From their article:
> Approximately 30% of Claude Code users had at least one message routed to the wrong server type, resulting in degraded responses.
> However, some users were affected more severely, as our routing is "sticky". This meant that once a request was served by the incorrect server, subsequent follow-ups were likely to be served by the same incorrect server.
30% of Claude Code users getting a degraded response is a huge bug.
extr · 5m ago
I don't know about you, but my feed is filled with people claiming that they are surely quantizing the model, that Anthropic is purposefully degrading things to save money, etc. 70% of users were not impacted. 30% had at least one message degraded. One message is basically nothing.
I would have appreciated if they had released the full distribution of impact though.
Wowfunhappy · 11m ago
> On August 25, we deployed a misconfiguration to the Claude API TPU servers that caused an error during token generation. An issue caused by a runtime performance optimization occasionally assigned a high probability to tokens that should rarely be produced given the context, for example producing Thai or Chinese characters in response to English prompts, or producing obvious syntax errors in code. A small subset of users that asked a question in English might have seen "สวัสดี" in the middle of the response, for example.
Can anyone explain to a layperson how this sort of thing is even possible for an LLM?
For normal code, of course stupid bugs happen all the time. You accidentally introduce an off-by-one error in a conditional, for example, or add an extra `goto fail`.
But LLMs aren't written by humans! Models are trained by automated programs over a period of many months across unfathomably massive data centers.
How would a human introduce a bug like the one described in TFA?
Centigonal · 1m ago
LLMs produce a probability distribution for what the next token might be. The actual word that gets printed next is picked from that distribution using a sampling approach[1]. If your sampling approach is "select the next word randomly from among the top 4 possibilities" and you flip a > sign, you could end up with the behavior described in the OP.
[1] Here is an example of two common approaches: https://www.reddit.com/r/AIDungeon/comments/1eppgyq/can_some...
LLMs are still executed by code written by humans.
In this case, the model ultimately gives you a probability distribution over each of the (~200k) tokens in the vocabulary. It's then up to you to decide how you want to sample the next token: you could, for example, always pick the most likely one, or, to make the output more creative, sample randomly from the top-k tokens. To make it efficient, this top-k sampling is written in XLA and compiled to run directly as a kernel. There was a bug in that kernel, which presumably led to tokens outside the top-k window being selected from time to time.
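To make that concrete, here is a minimal NumPy sketch of top-k sampling (illustrative only: the function names, the toy vocabulary, and the single flipped comparison are assumptions, not Anthropic's actual XLA kernel). One wrong comparison is enough to sample from the tail of the distribution instead of the head:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_top_k(logits: np.ndarray, k: int = 4) -> int:
    """Correct top-k sampling: pick randomly among the k most likely tokens."""
    probs = np.exp(logits - logits.max())        # softmax over the vocabulary
    probs /= probs.sum()
    cutoff = np.sort(probs)[-k]                  # probability of the k-th best token
    candidates = np.where(probs >= cutoff)[0]    # keep only the top-k tokens
    p = probs[candidates] / probs[candidates].sum()
    return int(rng.choice(candidates, p=p))

def sample_top_k_buggy(logits: np.ndarray, k: int = 4) -> int:
    """Same code with one flipped comparison: it now samples from the huge tail
    of tokens that should almost never appear (Thai characters in an English
    answer, obvious syntax errors, and so on)."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    cutoff = np.sort(probs)[-k]
    candidates = np.where(probs <= cutoff)[0]    # BUG: '<=' should be '>='
    p = probs[candidates] / probs[candidates].sum()
    return int(rng.choice(candidates, p=p))

logits = rng.normal(size=200_000)                # toy stand-in for a ~200k-token vocab
print(sample_top_k(logits), sample_top_k_buggy(logits))
```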
ashdksnndck · 10m ago
There are many layers of human-written code in between you and the weights.
stellalo · 33m ago
Title should be fixed: it’s about Claude models in general, not Claude Code
OGEnthusiast · 18m ago
Seems like Claude is using TPUs a lot more than I thought. For some reason I thought 90%+ of their capacity was from AWS.
flutas · 11m ago
And yet no offer of credits to make things right for users, for what was essentially degraded performance of something you paid for.
I know I'll probably get push back on this, but it left a sour taste in my mouth when I paid for a $200 sub that felt like it was less useful than ChatGPT Plus ($20) at times.
Or to summarize: [south park "we're sorry" gif]
moatmoat · 1h ago
TL;DR — Anthropic Postmortem of Three Recent Issues
In Aug–Sep 2025, Claude users saw degraded output quality due to infrastructure bugs, not intentional changes.
The Three Issues
1. *Context window routing error*
- Short-context requests sometimes routed to long-context servers.
- Started small, worsened after load-balancing changes (a rough sketch of this kind of routing follows at the end of this TL;DR).
2. *Output corruption*
- TPU misconfigurations led to weird outputs (wrong language, syntax errors).
3. *Approximate top-k miscompilation*
- A compiler bug in TPU/XLA stack corrupted token probability selection.
- Occasionally dropped the true top token.
Why It Was Hard to Detect
- Bugs were subtle, intermittent, and platform-dependent.
- Benchmarks missed these degradations.
- Privacy/safety rules limited access to real user data for debugging.
Fixes and Next Steps
- More sensitive, continuous evals on production.
- Better tools to debug user feedback safely.
- Stronger validation of routing, output correctness, and token-selection.
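For issue 1 above, here's a minimal sketch of what context-length routing with sticky sessions can look like (the pool names, the threshold, and the in-memory stickiness map are assumptions for illustration, not Anthropic's actual load balancer):

```python
# Hypothetical server pools and cutoff; illustrative values only.
POOLS = {"short": "short-context-pool", "long": "long-context-pool"}
LONG_CONTEXT_THRESHOLD = 200_000   # tokens

_sticky: dict[str, str] = {}       # session id -> pool ("sticky" routing)

def route(session_id: str, prompt_tokens: int) -> str:
    """Choose a server pool for a request, reusing the session's previous pool."""
    if session_id in _sticky:
        # Stickiness: once a session lands on a pool, right or wrong,
        # every follow-up in that conversation goes to the same pool.
        return _sticky[session_id]
    pool = POOLS["long"] if prompt_tokens > LONG_CONTEXT_THRESHOLD else POOLS["short"]
    _sticky[session_id] = pool
    return pool
```

A load-balancing change that sends even a small share of requests to the wrong pool then compounds: affected sessions keep hitting the wrong server type until the sticky entry goes away.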
sebastiennight · 38m ago
> Privacy/safety rules limited access to real user data for debugging.
Do their ToS really limit access to user data (prompt/response)? I don't remember seeing anything to that effect in their terms.
mcintyre1994 · 34m ago
I’d imagine they have a lot of internal controls, even if ultimately someone at the company can read the data within their terms. It makes sense that the teams debugging stuff wouldn’t have this access immediately.
favorited · 24m ago
I know that when you submit a thumbs up/down rating for a response, you need to opt-in to the whole chat conversation being shared with Anthropic.
bravetraveler · 20m ago
> We don't typically share this level of technical detail about our infrastructure, but the scope and complexity of these issues justified a more comprehensive explanation.
Layered in aggrandizing. You host a service, people give you money.
levocardia · 7m ago
No, what that statement means is "we know that if we just say 'we weren't downgrading performance to save money', you won't believe us, so here is a deep dive on the actual reason it happened"
deepdarkforest · 22m ago
Wow. Sneaky. They don't even state the rate of impact for the XLA bug, AFAIK, which affected everyone, not just Claude Code users. Very vague. Interesting.
Claude Code has made almost half a billion so far[1] (>$500M in ARR, and it's only about 9 months old), and 30% of all users have been impacted at least once, just from the first routing bug. Scary stuff.
Their postmortem is basically "evaluations are hard, we relied on vibe checking, now we are going to do even more frequent vibe checking". I believe it was indeed unintentional, but in a future where investors' money won't come down from the skies, serving distilled models will be very tempting. And you cannot hold anyone liable to any SLA currently; it's just vibes. I wonder how enterprise vendors are going to deal with this going forward: you cannot just degrade quality without the client or vendor even being able to really prove it.
[1] https://www.anthropic.com/news/anthropic-raises-series-f-at-...
Is your contention that paying for a service entitles you to zero bugs, ever?
deepdarkforest · 14s ago
Of course not! But usually you can quantify metrics for quality, like uptime, lost transactions, response time, throughput, etc. Then you can have accountability and remediate. Even for other bugs, you can often reproduce them and clearly show the impact. But in this case, other than internal benchmarks, you cannot really prove it. There is no accountability yet.