It looks like it's a central service at Google called Chemist that is down.
"Chemist checks the project status, activation status, abuse status, billing status, service status, location restrictions, VPC Service Controls, SuperQuota, and other policies."
-> This would totally explain the error messages "visibility check (of the API) failed" and "cannot load policy", and the wide range of services affected.
cf. https://cloud.google.com/service-infrastructure/docs/service...
EDIT: Google says "(Google Cloud) is down due to Identity and Access Management Service Issue"
mrGomesDev · 1d ago
I use Expo as an intermediary for notifications, but given this Google context, I imagine that FCM is also suffering - is that possible?
rvnx · 1d ago
Very likely. Firebase Auth is down for sure (though unreported yet), so most likely FCM too
VWWHFSfQ · 1d ago
There are multiple internet services down, not just GCP. It's just possible that this "Chemist" service is especially affected externally, which is why the failures are propagating to their internal GCP network services.
rvnx · 1d ago
Absolutely possible. Though there is something curious:
https://www.cloudflarestatus.com/
At Cloudflare it started with: "Investigating - Cloudflare engineering is investigating an issue causing Access authentication to fail."
So this would somehow validate the theory that auth/quotas started failing right after Google, but what happened after that?! Pure snowballing? That sounds a bit crazy.
terom · 23h ago
From the Cloudflare incident:
> Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency. As a result, certain Cloudflare products that rely on KV service to store and disseminate information are unavailable [...]
Surprising, but not entirely implausible for a GCP outage to spread to CF.
voytec · 22h ago
> outage of a 3rd party service that is a key dependency.
Good to know that Cloudflare has services seemingly based on GCP with no redundancy.
londons_explore · 20h ago
Probably unintentional. "We just read this config from this URL at startup" can easily snowball into "if that URL is unavailable, this service will go down globally, and all running instances will fail to restart when the devops team try to do a pre-emptive rollback"
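A minimal sketch of the usual mitigation (the URL and cache path are hypothetical): fetch the config at startup, but fall back to the last successfully fetched copy on disk instead of refusing to start when the config endpoint is unreachable.

    import json
    import urllib.request

    CONFIG_URL = "https://config.example.internal/service.json"  # hypothetical endpoint
    CACHE_PATH = "/var/cache/service/config.json"                 # last known-good copy

    def load_config() -> dict:
        try:
            with urllib.request.urlopen(CONFIG_URL, timeout=5) as resp:
                raw = resp.read()
            with open(CACHE_PATH, "wb") as f:   # refresh the cache on every successful fetch
                f.write(raw)
            return json.loads(raw)
        except OSError:
            # Config endpoint unreachable: start from the cached copy instead of
            # failing globally and blocking restarts.
            with open(CACHE_PATH, "rb") as f:
                return json.loads(f.read())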
__turbobrew__ · 15h ago
After reading about cloudflare infra in post mortems it has always been surprising how immature their stack is. Like they used to run their entire global control plane in a single failure domain.
I'm not sure who is running the show there, but the whole thing seems kinda shoddy given Cloudflare's position as the backbone of a large portion of the internet.
I personally work at a place with less market cap than Cloudflare and we were hit by the exact same incident (datacenter power went out) and had almost no downtime, whereas the entire Cloudflare API was down for nearly a day.
tibbar · 20h ago
What's the alternative here? Do you want them to replicate their infrastructure across different cloud providers with automatic fail-over? That sounds -- heck -- I don't know if modern devops is really up to that. It would probably cause more problems than it would solve...
arccy · 20h ago
They're a company that has to run their own datacenters, you'd expect them to not fall over when a public cloud does.
hplk · 19h ago
I was really surprised. Dependence on another enterprise's cloud services is, I think, risky in general, but pretty much everyone does it these days; I just didn't expect Cloudflare to be one of them.
calvinmorrison · 19h ago
Well, at some level you can contract to deploy private instances of clouds as well.
UltraSane · 15h ago
AWS has Outpost racks that let you run AWS instances and services in your own datacenter managed like the ones running in AWS datacenters. Neat but incredibly expensive.
voytec · 17h ago
> What's the alternative here? Do you want them to replicate their infrastructure
Cloudflare advertises themselves as _the_ redundancy / CDN provider. Don't ask me for an "alternative" but tell them to get their backend infra shit in order.
ghshephard · 18h ago
There are roughly 20-25 major IaaS providers in the world that should have close to no dependency on each other. I'm almost certain that Cloudflare believed that was their posture, and that the action items coming out of this postmortem will be to make sure that this is the case.
somanyphotons · 19h ago
I would expect them to not rely on GCP at all
arghwhat · 19h ago
Redundancy ≠ immune to failure.
ProAm · 19h ago
Google is an advertising company not a tech company. Do not rely on them performing anything critical that doesn't depend on ad revenue.
dylan604 · 19h ago
What does that make Amazon?
tapoxi · 18h ago
A cloud services company. AWS is much bigger than Amazon retail at this point.
bravetraveler · 21h ago
Content Delivery Thread
whatevertrevor · 1d ago
Doesn't Cloudflare have its own infrastructure? It's wild to me that both of these are down, presumably together, with this size of a blast radius.
derefr · 22h ago
Cloudflare isn't a cloud in the traditional sense; it's a CDN with extra smarts in the CDN nodes. CF's comparative advantage is in doing clever things with just-big-enough shared-nothing clusters deployed at every edge POP imaginable; not in building f-off huge clusters out in the middle of nowhere that can host half the Internet, including all their own services.
As such, I wouldn't be overly surprised if all of CF's non-edge compute (including, for example, their control plane) is just tossed onto a "competitor" cloud like GCP. To CF, that infra is neither a revenue center, nor a huge cost center worth OpEx-optimizing through vertical integration.
whatevertrevor · 21h ago
But then you do expose yourself to huge issues like this if your control plane is dependent on a single cloud provider, especially for a company that wants to be THE reverse proxy and CDN for the internet no?
snowwrestler · 20h ago
Cloudflare does not actually want to reverse proxy and CDN the whole internet. Their business model is B2B; they make most of their revenue from a set of companies who buy at high price points and represent a tiny percentage of the total sites behind CF.
Scale is just a way to keep costs low. In addition to economies of scale, routing tons of traffic puts them in position to negotiate no-cost peering agreements with other bandwidth providers. Freemium scale is good marketing too.
So there is no strategic reason to avoid dependencies on Google or other clouds. If they can save costs that way, they will.
whatevertrevor · 20h ago
Well I mean most of the internet in terms of traffic, not in terms of the corpus of sites. I agree the long-tail of websites is probably not profitable for them.
mbreese · 20h ago
True, but how often do outages like this happen? And when outages do happen, does Cloudflare have any more exposure than Google? I mean, if Google can’t handle it, why should Cloudflare be expected to? It also looks like the Cloudflare services have been somewhat restored, so whatever dependency there is looks like it’s able to be somewhat decoupled.
So long as the outages are rare, I don’t think there is much downside for Cloudflare to be tied to Google cloud. And if they can avoid the cost of a full cloud buildout (with multiple data centers and zones, etc…), even better.
arccy · 20h ago
They're pushing workers more as a compute platform
Latest Cloudflare status update basically confirms that there is a dependency on GCP in their systems:
"Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency. As a result, certain Cloudflare products that rely on KV service to store and disseminate information are unavailable"
Yeah I saw that now too. Interesting, I'm definitely a little surprised that they have this big of an external dependency surface.
smoe · 21h ago
Definitely very surprised to see that so many of the CF products that are there to compete with the big cloud providers have such a dependence on GCP.
cyberpunk · 1d ago
You'd think so wouldn't you?
DownDetector also reports Azure and Oracle Cloud; I can't see them also being dependent on GCP...
I guess down detector isn't a full source of truth though.
Down detector has a problem when whole clouds go down: unexpected dependencies. You see an app on a non-problematic cloud is having trouble, and report it to Down Detector but that cloud is actually fine- their actual stuff is running fine. What is really happening is that the app you are using has a dependency on a different SaaS provider who runs on the problematic cloud, and that is killing them.
It's often things like "we got backpressure like we're supposed to, so we gave the end user an error because the processing queue had built up above threshold, but it was because waiting for the timeout from SaaS X slowed down the processing so much that the queue built up." (Have the scars from this more than once.)
spwa4 · 21h ago
Surely if you build a status detector you realize that colo or dedicated are your only options, no? Obviously you cannot host such a service in the cloud.
mandevil · 21h ago
I'm not even talking about Down Detector's own infra being down, I'm talking about actual legitimate complaints from real users (which is the data that Down Detector collates and displays) because the app they are trying to use on an unaffected cloud is legitimately sending them an error- it's just because of SaaS dependencies and the nature of distributed systems one cloud going down can have a blast radius such that even apps on unaffected clouds will have elevated error rates, and that can end up confusing displays on Down Detector when large enough things go down.
My apps run on AWS, but we use third parties for logging, for auth support, billing, things like that. Some of those could well be on GCP though we didn't see any elevated error rates. Our system is resilient against those being down - after a couple of failed tries to connect it will dump what it was trying to send into a dump file for later re-sending. Most engineers will do that. But I've learned after many bad experiences that after a certain threshold of failures to connect to one of these outside systems, my system should just skip calling out except for once every retryCycleTime, because all it will do is add two connectionTimeouts to every processing loop, building up messages in the processing queue, which eventually creates backpressure up to the user. If you don't have that level of circuit breaker built, you can cause your own systems to give out higher error rates even if you are on an unaffected cloud.
So today a whole lot of systems that are not on GCP discovered the importance of the circuit breaker design pattern.
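For anyone unfamiliar with the pattern, a minimal sketch of that kind of circuit breaker (thresholds, names, and the two helper functions are made up for illustration): after enough consecutive failures, stop calling the dependency for a retry cycle and spool messages for later resending, instead of eating connection timeouts in every processing loop.

    import time

    class CircuitBreaker:
        def __init__(self, failure_threshold=5, retry_cycle_secs=300):
            self.failure_threshold = failure_threshold
            self.retry_cycle_secs = retry_cycle_secs
            self.failures = 0
            self.opened_at = None   # None means closed (calls allowed)

        def allow_call(self) -> bool:
            if self.opened_at is None:
                return True
            # Circuit is open: let one probe call through per retry cycle.
            if time.monotonic() - self.opened_at >= self.retry_cycle_secs:
                self.opened_at = time.monotonic()
                return True
            return False

        def record_success(self):
            self.failures = 0
            self.opened_at = None

        def record_failure(self):
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()

    breaker = CircuitBreaker()

    def send_to_third_party(msg):
        if not breaker.allow_call():
            dump_for_later_resend(msg)    # hypothetical: append to a local spool file
            return
        try:
            call_external_service(msg)    # hypothetical network call that may time out
            breaker.record_success()
        except TimeoutError:
            breaker.record_failure()
            dump_for_later_resend(msg)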
iFred · 23h ago
Down Detector can have a poor signal-to-noise ratio given that, from what I'm assuming, it's users submitting "this is broken" for any particular app. Probably compounded by many people hearing of a GCP issue, checking their own cloud service, and reporting the problem at the same time.
basfo · 1d ago
Using Azure here, no issues reported so far.
gcpman · 20h ago
perhaps the person who maintains Chemist took the buyout
Getting a lot of errors for Claude Sonnet 4 (Cursor) and Gemini Pro.
Nooooo I'm going to have to use my brain again and write 100% of my code like a caveman from December 2024.
burntalmonds · 1d ago
Same here. Getting this in AI Studio: Failed to generate content: user has exceeded quota. Please try again later.
bicx · 1d ago
I was in the middle of testing Cloud Storage file uploads, so I guess this is a good time to go for a walk.
matsemann · 23h ago
A good excuse for adding error handling, which otherwise is often overlooked, heh.
robin-a · 23h ago
Cursor throwing some errors for me in Auto Agent mode too.
cryptonector · 23h ago
Devs before June 12, 2025: "Ai? Pfft, hallucination central. They'll never replace me!"
Devs during June 12, 2025 GCP outage: "What, no AI?! Do you think I'm a slave?!"
atonse · 23h ago
100% agree... I even thought "ok maybe I'll clean up the backlog while I wait" but I'm so used to even using AI to clean up my JIRA backlog (using the Atlassian MCP), so even that feels weird to click into each ticket, just the way I used to do it TWO MONTHS AGO.
This is a good wake-up call on how easily (and quickly) we can all become pretty dependent on these tools.
tough · 21h ago
local llm's would work
sva_ · 22h ago
It appears like "Devs" is not a homogeneous mass.
christianqchung · 15h ago
Goomba fallacy
thefourthchime · 22h ago
So true
crocowhile · 1d ago
openrouter.ai is down for me
sujayakar · 1d ago
switch to auto mode and it should still work!
ashu1461 · 1d ago
GPT is working in agent mode, which kind of confirms that claude is hosted on google and GPT probably on MSFT servers / self hosted.
Update - We are seeing a number of services suffer intermittent failures. We are continuing to investigate this and we will update this list as we assess the impact on a per-service level.
Impacted services:
Access
WARP
Durable Objects (SQLite backed Durable Objects only)
Workers KV
Realtime
Workers AI
Stream
Parts of the Cloudflare dashboard
Jun 12, 2025 - 18:48 UTC
Seems like a major wtf if Cloudflare is using GCP as a key dependency.
a2128 · 20h ago
Some day Cloudflare will depend on GCP and GCP will depend on Cloudflare and AWS will rely on one of the two being online and Cloudflare will also depend on AWS and the internet will go down and no one will know how to restart it
IX-103 · 19h ago
Supposedly something like this already happened inside Google. There's a distributed data store for small configs read frequently. There's another for larger configs that are rarely read. The small data store depends on a service that depends on the large data store. The large data store depends on the small data store.
Supposedly there are plans for how to conduct a "cold" start of the system, but as far as I know it's never actually been tried.
__turbobrew__ · 15h ago
The trick there is you take the relevant configs and serialize them to disk periodically, and then in a bootstrap scenario you use the configs on disk.
Presumably for the infrequently read configs you could do this so the service with frequently read configs can bootstrap without the service for infrequently read configs.
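A minimal sketch of that approach (paths and the config_store client are placeholders): one loop periodically snapshots the configs needed to cold-start, and the bootstrap path prefers the live store but falls back to the snapshot.

    import json
    import os
    import time

    SNAPSHOT_PATH = "/var/lib/bootstrap/config-snapshot.json"   # hypothetical location

    def snapshot_loop(config_store, interval_secs=600):
        # Periodically serialize the configs we would need for a cold start.
        while True:
            configs = config_store.read_all()        # hypothetical client call
            tmp = SNAPSHOT_PATH + ".tmp"
            with open(tmp, "w") as f:
                json.dump(configs, f)
            os.replace(tmp, SNAPSHOT_PATH)           # atomic swap so readers never see a partial file
            time.sleep(interval_secs)

    def bootstrap_configs(config_store):
        # Prefer the live store; fall back to the on-disk snapshot if it is unreachable.
        try:
            return config_store.read_all()
        except Exception:
            with open(SNAPSHOT_PATH) as f:
                return json.load(f)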
Yeah. This service was presenting charts likely probed from inside GCP. I was on a call with a Google rep, someone pointed out that "AWS is also down" and I foolishly said something about "possible BGP attack" out of spite, before checking AWS availability myself. Shame on me.
toast0 · 22h ago
Didn't have the feeling of a BGP issue, most services I was working with were reasonably quickly returning failures, as opposed to lingering death.
yard2010 · 22h ago
I love this kind of fake news. It's like that scene from Scary Movie (can't remember which one) in which someone says "I heard the japs took out one in Kikoman" :')
AlienRobot · 23h ago
Wait, it's all Google?
deepsun · 23h ago
Google was the first to report probably.
bananapub · 21h ago
all cloud
plateng000 · 23h ago
"always has been"
sillypuddy · 1d ago
Well that's interesting. I wouldn't expect AWS or Microsoft 365 to be affected by a Google outage.
(OTOH, it's not always trivial to define/detect an outage.)
peanut-walrus · 21h ago
Downdetector in incidents like this is 100% misinformation.
johanyc · 20h ago
Why
peanut-walrus · 20h ago
Downdetector does not actually monitor the services. It aggregates user reports from socials etc. For large-scale incidents, the reports get really noisy and it will show that basically everything is down.
boneitis · 14h ago
I thought that was the whole premise of Downdetector, no? User reports, because first-party status updates are tightly controlled by those first parties?
Was not basically everything (hyperbolically speaking, of course) practically impacted today?
How much weight really comes from those social media posts? Is there an indirect effect of people reading these posts, then flocking to hit the report button, sight unseen?
Why even have a status page? Someone reported that their org of >100,000 users can't use Google Meet. If corps aren't going to update their status page, might as well just not have one.
Edit: The GCP status page got updated <1 minute after I posted this, showing affected services are Cloud Data Fusion, Cloud Memorystore, Cloud Shell, Cloud Workstations, Google Cloud Bigtable, Google Cloud Console, Google Cloud Dataproc, Google Cloud Storage, Identity and Access Management, Identity Platform, Memorystore for Memcached, Memorystore for Redis, Memorystore for Redis Cluster, Vertex AI Search
SOLAR_FIELDS · 1d ago
There's no situation where the corporation controls the status page where you can trust the status page to have accurate information. None. The incentives will never be aligned in this regard. It's just too tempting and easy for the corp to control the narrative when they maintain their own status page.
The only accurate status pages are provided by third party service checkers.
the8472 · 23h ago
> The incentives will never be aligned in this regard.
Well, yes, incentives, do big customers with wads of cash have an incentive to demand accurate reporting from their suppliers so they can react better rather than trying to identify issues? If there's systematic underreporting, then apparently not. Though in this case they did update their page.
SOLAR_FIELDS · 20h ago
In practice how this plays out is that the big wads-of-cash holders will make demands, and Google (or whoever; Google is just the stand-in for the generic corp here) will give them the actual information privately. It will still never be trusted to be reflected accurately on the public status page.
If you think about it from the corp's perspective, it makes perfect sense. They weigh the risk and reward. Are they going to be rewarded for radical transparency, or suffer fallout by acknowledging how bad of a dumpster fire the situation actually is? Easier for the corp to just lie, obscure and downplay to avoid having to even face that conundrum in the first place.
staplers · 20h ago
If there's systematic underreporting, then apparently not.
You answered your own question.
supportengineer · 1d ago
Who gets a promotion from a working status board?
nikcub · 23h ago
I have zero faith in status pages. It's easier and more reliable to just check twitter.
Heroku was down for _hours_ the other day before there was any mention of an incident - meanwhile there were hundreds of comments across twitter, hn, reddit etc.
fooey · 22h ago
anecdotally, the status pages have been taken away from engineering and are run by customer support and marketing
Yeah, my company of hundreds of people working remotely are having 90%+ failures connecting to Google Meetings - joining a meeting just results in a 504.
ransom1538 · 1d ago
Why can't companies be honest about being down? It helps us all out so we don't spend an hour assuming the problem is on our end.
We are truly in God's hands.
$ prod
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=503, message=Visibility check was unavailable. Please retry the request and contact support if the problem persists
kingstnap · 1d ago
Because they have unrealistic targets so they make up fake uptime numbers. 99.999% would mean not even having an hour of downtime in 10 years.
I remember reddit being down for like a whole day or so and they claimed 99.5% in that month.
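For reference, a quick back-of-the-envelope check of both figures (a sketch, numbers rounded):

    hours_per_10_years = 10 * 365.25 * 24          # ~87,660 hours
    print(hours_per_10_years * (1 - 0.99999))      # ~0.88 hours allowed in 10 years at five nines
    print(30 * 24 * (1 - 0.995))                   # ~3.6 hours allowed per month at 99.5%

So a full day of downtime in a month works out closer to 96.7% than the claimed 99.5%.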
wbl · 23h ago
Ma Bell hit that decently often.
Uehreka · 22h ago
Is that even knowable? Like, I know they called it “The Astonishing, Unfailing, Bell System” but if they had an outage somewhere did they actually have an infrastructure of “canary phones” and such to tell in real time? (As in, they’d know even if service was restored in an hour)
Not trying to snark, I legit got nerdsniped by this comment.
wbl · 22h ago
They absolutely did. Note that the reliability estimates exclude the last mile because of trees falling and the like, but they had a lot of self-repair, reporting, and management facilities.
Engineering and Operations in the Bell System is pretty great for this.
Dylan16807 · 22h ago
Running a much simpler system with much more independent nodes.
It's a lot easier to keep packets flowing than to keep non-self-contained servers serving.
oxymoron · 1d ago
Because a lot of the time, not everyone is impacted, as the systems are designed to contain the "blast radius" of failures using techniques such as cellular architecture and [shuffle sharding](https://aws.amazon.com/builders-library/workload-isolation-u...). So sometimes a service is completely down for some customers and fully unaffected for other customers.
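A toy sketch of the shuffle-sharding idea linked above (worker count and shard size are arbitrary): each customer is deterministically assigned a small subset of workers, so a failure triggered by one customer is contained to that customer's shard while other customers mostly land on different worker combinations.

    import hashlib
    import random

    NUM_WORKERS = 8
    SHARD_SIZE = 2   # each customer is served by 2 of the 8 workers

    def shard_for(customer_id: str) -> list:
        # Seed a PRNG from the customer id so the assignment is deterministic.
        seed = int(hashlib.sha256(customer_id.encode()).hexdigest(), 16)
        return sorted(random.Random(seed).sample(range(NUM_WORKERS), SHARD_SIZE))

    # Different customers get (mostly) different worker pairs, so a poison-pill
    # request from one customer can only take down that customer's pair.
    print(shard_for("customer-a"))
    print(shard_for("customer-b"))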
hnuser123456 · 1d ago
"there is a 5% chance your instance is down" is still a partial outage. A green check should only mean everything (about that service) is working for everyone (in that region) as intended.
Downdetector reports started spiking over an hour ago but there still isn't a single status that isn't a green checkmark on the status page.
deepsun · 22h ago
With highly distributed services there's always something failing, some small percentage.
nijave · 19h ago
Sure but you can still put a message up when it's some <numeric value> over some <threshold value> like errors are 50% higher than normal (maybe the SLO is 99.999% of requests are processed successfully)
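That check doesn't need to be much more than this kind of thing (the SLO and multiplier are invented for illustration):

    SLO_SUCCESS_RATE = 0.99999     # hypothetical SLO: 99.999% of requests succeed
    BANNER_MULTIPLIER = 1.5        # post a banner when errors run 50% above the budget

    def should_post_degradation_banner(requests: int, errors: int) -> bool:
        if requests == 0:
            return False
        error_budget = 1 - SLO_SUCCESS_RATE
        return (errors / requests) > error_budget * BANNER_MULTIPLIER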
deepsun · 19h ago
Just note that with aggregations like that, it might look as if GCP didn't have any issues today at all.
E.g. it was mostly the us-central1 region affected, and within it only some services (e.g. regular instances and GKE Kubernetes were not affected in any region). So if we ask "what percentage of GCP is down", it might well come out below the threshold.
On the other hand, about a month ago, 2025-05-19 there was an 8-hour long incident with Spot VM instances affecting 5 regions, and which was way more important to our company, but it didn't make any headlines.
spwa4 · 23h ago
Just say it: they want to lie to 95% of customers.
Eduard · 1d ago
> Because a lot of the time, not everyone is impacted
then such pages should report a partial failure. Indeed the GCP outage page lists an orange "One or more regions affected" marker, but all services show the green "Available" marker, which apparently is not true.
deepsun · 21h ago
There's always a partial outage in large systems, some very small percentage. All clouds should report all red then.
nijave · 19h ago
It's not rocket science. Put a message up "The service is currently degraded and some users may see errors"
johannes1234321 · 1d ago
They could still show that some issues exist. Their monitoring must know.
The issue is that they don't want to. (For claiming good uptime, which may even be true for the average user, if most outages affect only small groups.)
jobs_throwaway · 1d ago
That is still 100% an outage and should be displayed as such
jeanlucas · 1d ago
Because there are contracts related to uptime :)
rixthefox · 1d ago
The customers with those contracts will be monitoring service availability on their own. If Google can't be honest, you can bet your bottom dollar the companies paying for that SLA are going to hold them accountable whether Google reports the outage properly or not.
datadrivenangel · 1d ago
The real point of SLAs is to give you a reason to break contracts. If a vendor doesn't meet their contractual promises, that gives you a lot of room to get out of contracts.
rustc · 1d ago
Does any service even say they're "down" anymore? All I see is "elevated error rates".
colechristensen · 1d ago
4 to 6 hours after the flames are visible from orbit and management has finally given up on the 37th quick fix you do get that red X
But really not until after it's been on CNN a while.
rapus95 · 1d ago
if half the internet is down, which it apparently is, it's usually not the service in question, but some backbone service like cloudflare. And as internal health monitoring doesn't route to the outside through the backbone to get back in, it won't pick it up. Which is good in some sense, as it means that we can see if it's on the path TO the service or the service itself.
voytec · 1d ago
> Why can't companies be honest with being down
SLA agreements.
organsnyder · 23h ago
Any customer with enough leverage to negotiate meaningful SLA agreements will also have the leverage to insist that uptime is not derived from the absence of incidents on public-facing status pages.
remram · 18h ago
Service level agreements agreements?
9rx · 1d ago
The program that updates the status page is hosted on Google Cloud.
tfsh · 1d ago
It's not. You might be joking, but that comment still isn't helpful.
My understanding is this is part of Google's internal PSD offering (Public Status Board), which uses SCS (Static Content Service) behind GFE (Google Frontend), which is hosted on Borg, the same system that deploys other large-scale apps such as Search, Drive, YouTube, etc.
9rx · 19h ago
How could it not be helpful given that it gave you reason to provide more details that you wouldn't have otherwise shared? You may not have thought this through. There is nothing more helpful. Unless you think your own comment isn't helpful, but then...
refulgentis · 13h ago
Because "It's good to lie because it makes people correct me" is a joke about IRC, not a viable stable game-theoretic optimal position.
9rx · 6h ago
Cunningham's Law emerged in the newsgroups era, well predating the existence of IRC.
Of course, I recognize that you purposefully pulled the Cunningham's Law trigger so that you, too, would gain additional knowledge that nobody would have told you about otherwise, as one logically would. And that you played it off as some kind of derision towards doing that all while doing it yourself made it especially funny. Well done!
refulgentis · 1h ago
I have 0 idea what Cunningham's Law is, so we can both agree that "recognizing purpose" was "mind-reading", in this case. I didn't really bother reading the rest after the first sentence because I saw something about how I was joking and congratulating me in my peripheral vision.
It is what it says on the tin: choosing to lie doesn't mean you want the truth communicated.
I apologize that it comes across as aggro, it's just that I'm not quite as giggly about this as you are. I think I can safely assume you're old enough to recognize some deleterious effects of lying
ashu1461 · 1d ago
So even then, it should have been able to correctly report the status; this somehow shows that the status page is not automated and any change there needs to go through a manual step by someone.
9rx · 1d ago
A program that updates the status page failing does not imply that the status page is manually edited. It is not like you would generate a status page on every request.
ashu1461 · 1d ago
How do we know that the program is failing?
How hard is it for the frontend to detect that the last update to the status page was made a while ago? That by itself implies there is an error and should be reported.
9rx · 17h ago
We don’t.
But why would the frontend have processing logic when all you need is to serve a static HTML document?
Even if it did, what would you do with that information? Throw up a screen with: Call us for service information at 1-HAHA-JUST-KIDDING
It’s not like it really matters if it’s accurate anyway.
rapus95 · 1d ago
the services ARE healthy, status page is correct. The backbone which links YOU to the service isn't healthy. Take a look at cloudflare, they are already working on it
ikiris · 1d ago
Not even close. The status page is manual, and Cloudflare's outage is because of Google, not the other way around.
supportengineer · 1d ago
Nobody gets a promotion, that's why.
rozap · 1d ago
Please, won't somebody think of the KPIs.
DrBenCarson · 1d ago
Whichever product person is in charge of the status page should be ashamed
How could you possibly trust them with your critical workloads? They don't even tell you whether or not their services work (despite obviously knowing)
artooro · 1d ago
What's crazy is that RCS messaging is down as a result of this outage. It shows how poorly the technology or infrastructure was designed.
foota · 23h ago
Isn't RCS basically just instant messaging? I don't know why it's surprising that it would be down.
roywiggins · 23h ago
I'm not sure any single company could have an outage that would take out SMS globally, but RCS is presumably more centralized.
toast0 · 22h ago
SMS is pretty much decentralized, although there's a few companies with a lot of reach. I don't remember any global SMS outages, but it wasn't uncommon for a whole carrier to have an SMS outage and especially for inter-carrier SMS to be broken from time to time (sometimes for days). I've certainly seen some stuff with SMS aggregators: almost all of them claim a majority of direct links, but when you have accounts with 4 large aggregators and one of them has an outage, you find out which of your other accounts use that aggregator for which links (because their deliverability will go to zero to those destinations).
RCS was designed and specced, by GSMA, as a telco run decentralized system that would replace SMS as like for like; but there were only a handful of rollouts. It's really only gotten use as Google pushed it onto Android, using their RCS server; recently iOS started using it although I don't know what server they attach to.
Since RCS is basically the 5th wave Google IM, it's no surprise when they have a major outage, RCS is pretty much broken.
lieuwex · 20h ago
> recently iOS started using it although I don't know what server they attach to.
According to Wikipedia, only the carrier's RCS server is used [1]
All of the major carriers use Google Jibe as their RCS backend anyway though, so it's pretty irrelevant.
watusername · 23h ago
It used to be kind of distributed, but Google has been strong arming carriers to use their hosted Jibe service through a combination of proprietary extensions (e.g., E2E which is finally standard) and bypassing carrier control (if the carrier didn't provision RCS, Google Messages would use their own service iMessage-style).
From the end user's perspective, if the carrier didn't use Jibe RCS, it simply wouldn't work well.
whynotminot · 22h ago
People liked to be utterly pissed at Apple for not supporting RCS. But there were reasons
wbl · 22h ago
That explains why I couldn't get the photo of my parents dog today.
whalesalad · 22h ago
should have used Erlang
dcchambers · 19h ago
Oh my god is that why my RCS chats were failing earlier?!?!
augbog · 22h ago
Cloudflare Outage also just updated
> Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency. As a result, certain Cloudflare products that rely on KV service to store and disseminate information
togume · 6h ago
Is GCP the third party?
kfarr · 1d ago
Yes Firebase auth is down and affecting many apps, on Discord and Slack groups tons of others are corroborating. A bit disappointing that there is no post on the status page for nearly 30 mins:
https://status.firebase.google.com/
kentlyons · 1d ago
It just updated. Maybe affected by their own outage!
ashu1461 · 1d ago
Just proves how shady the status page and sla stuff is
rco8786 · 23h ago
Google is 10 minutes late updating their status page.
"So shady"
It's really, really hard to make a status page realtime.
ashu1461 · 23h ago
What makes you think it’s hard? We have AI generating songs and writing code, but setting up basic health checks is too much?
rco8786 · 20h ago
Yes. “Basic health checks” is not a real thing. I mean that genuinely.
> What makes you think it’s hard?
Being responsible (or rather, on a team of people responsible) for a status page of a big tech co made me think it’s hard.
“Is it down?” Is not a binary question.
jug · 22h ago
An AI generated status page would be the epitome of 2025.
urbandw311er · 21h ago
What makes you think it’s easy?
dgellow · 23h ago
or how difficult it actually is to do that type of thing at scale
0xffany · 1d ago
Does anyone know of a good dashboard to check for such BGP routing anomalies as (apparently) this one? I am currently digging around https://radar.cloudflare.com/routing but it doesn't show which routes were actually leaked.
I would love if anyone has any good tool recommendations!
SparkyMcUnicorn · 23h ago
I don't know if I've seen CF Radar before. That's pretty cool!
Here are some others, although some seem to be experiencing issues due to the current outage I can only presume.
Why would you think this outage is (internet) BGP related?
whalesalad · 22h ago
Cloudflare runs all their own bare metal servers. Seems odd that they would be impacted by Google cloud. Same can be said for all the other issues on downdetector. This points to a broad issue at the core internet which could certainly be related to BGP.
ilkkao · 22h ago
Cloudflare is now saying:
"Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency."
I really hope CF explains this apparent Google dependency in detail in their postmortem.
whalesalad · 22h ago
Imagine it's just a google spanner wrapper lmao
ddtaylor · 23h ago
I am a newb at this too, but is it "normal" for the "Announced IP Address Space" section to have that large jump from addresses like that?
the API has afaik nothing to do with HN itself, it's a read-only copy for API users, not what backs the actual site.
ddtaylor · 1d ago
Smells like BGP since there are services people claim have nothing to do with GCP being affected. OpenRouter is down, Lovable is down, etc.
thallium205 · 23h ago
AWS seems fine though. My bet is Cloudflare.
CSMastermind · 19h ago
AWS and Azure both had outages.
remram · 18h ago
Is that true? I see no direct report about that. Downdetector says so, but it's crowdsourced so it tends to have false positives.
CSMastermind · 15h ago
That's fair, I haven't seen any posts from the companies themselves.
brown9-2 · 1d ago
perhaps Lovable uses GCP somewhere in their stack?
DrBenCarson · 1d ago
npm as well
koito17 · 1d ago
Initially attributed the unresponsiveness of `npm install` to npm (the CLI tool) in general. Tried using bun to install dependencies, saw the same result -- but with actual logs instead of a vague spinner -- and decided to check Hacker News.
Getting 504 errors on anything from registry.npmjs.org that isn't cached on my machine.
yard2010 · 21h ago
I just want to say that bun is a gift. It's just like npm, but backwards. So you imagine how perfect it is. I'm kidding, but really - bun is awesome. If you're using npm you can make the switch as it's mostly compatible.
supportengineer · 1d ago
Interesting how I landed here. I was having trouble with Nest. Then I went to Down Detector. I noticed many sites having a simultaneous uptick. Then I came to HN, and found this link at the top of the front page.
If Google Chat is down per https://www.google.com/appsstatus/dashboard/, the ability for Google engineers to communicate among themselves is impaired, despite SREs having IRC as a backup.
sebzim4500 · 22h ago
TIL Google chat hasn't been killed yet
donalhunt · 21h ago
They have irc services internally (or at least did when I was there 10-ish years ago).
iamdelirium · 23h ago
Google Chat wasn't down for me throughout the entire incident.
bananapub · 21h ago
it at least used to be standard and fairly well known practice for non-sres to use the irc bridge.
the much more disastrous situation would have been the irm fallback.
miohtama · 23h ago
Someone actually uses Google Chat...?
00deadbeef · 23h ago
Google has a chat product?
asadm · 23h ago
it's the best
ZiiS · 22h ago
Well given how many they have decommissioned...
clhodapp · 21h ago
Oh no, that's how you know it's nearing the point of being reaped and thrown in the graveyard!
jppittma · 19h ago
Extremely unlikely. It’s ubiquitous internally.
IX-103 · 19h ago
Don't worry, they're not following the "deprecate and cancel" playbook for that. They seem to be using the "copy a competitor poorly" one. The few features I liked about it, that distinguished it from Slack, disappeared in the latest update.
Also spotify isn't working for me so I assume that's also related.
These are my most important productivity resources! Sad!
0xCAP · 1d ago
> No major incidents
… Proceeds to show worldwide degraded service level alerts.
jimt1234 · 1d ago
Yep. Self-reporting status pages are pretty near worthless. At my former large company (not FAANG), we weren't allowed to update the status page until we got VP approval, which also required approval from both PR and Legal. It would take a lot more time and effort to get those approvals than to just fix the problem and move on.
iFred · 23h ago
SLA contracts, clawbacks, and performance obligations make these pages a bit of a minefield for CSPs. When I was at a top-tier CSP, we had the status page that was public, one that was for a trusted tier of customers, one built for a customer-by-customer basis, and one for internal engineering.
genewitch · 22h ago
When i worked at a top tier speakeasy, we had a book up front for the man, a book in the back for the boss, a book for the trusted accountants...
jschroeder · 1d ago
Status page is showing green because GCP admins can't login to change it ;)
quyleanh · 22h ago
Looks like it affects Cloudflare as well [1]
Update - Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency.
Jun 12, 2025 - 19:57 UTC
Status pages at cloud providers aren't usually based in reality -- usually requires VP level political games to actually get them changed especially for serious outages.
enahs-sf · 23h ago
Would be comedy if one of the progenitors of this took Sundar’s buyout offer yesterday and let the world burn today.
paxys · 1d ago
Kinda funny that the top post on HN titled "GCP Outage" links to the Google Cloud status page which shows...no outage.
I'm sure it's not entirely impossible, but sounds backwards to me. Sure - a lot of the internet relies on Cloudflare, but I'd be very surprised if GCP had a direct dependency on Cloudflare, for a lot of reasons. Maybe I misunderstood your comment?
johnnyApplePRNG · 23h ago
This appears to be continuing to cascade over an hour later... wow... more and more services mentioned as completely down on the outage page.
Kind of nice to not be glued to AI chat prompts for a while to be honest.
tsouth · 1d ago
Everyone is down. Cloudflare has problems too. All auth providers broken.
Jayakumark · 1d ago
Someone must have checked in AI Generated code :-)
ekojs · 1d ago
Super duper frustrating having the status page being green. Why can't Google do this properly?
supportengineer · 1d ago
Those responsible have been sacked.
18172828286177 · 1d ago
Those responsible for sacking the people who have just been sacked, have been sacked.
> Multiple GCP products are experiencing impact due to Identity and Access Management Service Issue
IAM issue huh. The post-mortem should be interesting at least.
yard2010 · 21h ago
Ha. With all this Soviet-style euphemism I'd rather read The Onion instead.
bananapub · 21h ago
It’s not a euphemism - every outage, including the 99.9% that don’t end up on HN gets a postmortem document written about it, which is almost always a fascinating discussion of the technical, cultural and organisational situation that led to an unexpected bad thing happening.
Even a few years ago senior management knew to stay the fuck out except for asking for more info.
madjam002 · 1d ago
Google Maps not loading, thought it was my 4g, go to see if my connection works by loading Hacker News, GCP Outage XD
b0a04gl · 1d ago
console not loading, storage slow, support forms dead, status page green.
no fallback, no real-time alert, was just wondering when it'll start working.
whole stack feels brittle when basic visibility tools fail too.
everyone’s pointing fingers but nobody has root access to truth.
eterm · 1d ago
Cloudflare speedtest is down too, I assume because of this?
I wonder what the damage ($) is for having a good portion of the internet down for an hour or two ;)
tmiku · 1d ago
Looks like I'm about to start learning which of my time-killing websites are hosted on GCP - The Ringer is down, and since Spotify owns them and is a major GCP customer, it looks like they've been hit by this. CRAZY that the GCP status page is still green.
Just our bi-yearly reminder of our over reliance on cloud providers for literally everything. Can't say there's an answer beyond trying to build more independent tech but we know how that goes.
ManBeardPc · 15h ago
Yet migration to the cloud continues, driven by people arguing that doing it yourself is too complicated and expensive. Let’s see how long until one outage takes down the global economy for multiple days or weeks.
ocdtrekkie · 21h ago
Hilariously, I did not know about any outages today during the workday because we discourage cloud service usage and nobody complained about anything breaking. :)
Cloudflare KV is also having an outage. I wonder who is reliant on who here.
dlewis1788 · 1d ago
Looks like more than KV is having an issue. Just tried to load dash.cloudflare.com and no bueno.
hackermondev · 1d ago
seriously doubt Google Cloud is relying on Cloudflare KV lol
pikdum · 22h ago
Was just about to do a demo, but Google Meet was down. Tried to use Jitsi as a fallback, but couldn't log in because Firebase was down too. Ended up using a Slack Huddle, lol.
marcinignac · 19h ago
Can't wait to see how the charts are going to look here on the project we developed for Maintel: https://variable.io/maintel-digital-landscape/. It shows availability across multiple services as a landscape. Expecting to see a lot of spikes tomorrow.
jamesrwhite · 23h ago
Seems like a wider issue at Google than just GCP, the Sheets and Chat APIs are also returning similar "Visibility check was unavailable" errors.
yunwal · 22h ago
Presumably many Google products run on GCP
knuppar · 17h ago
Spotify was not loading, thought my 5G was bad, used YouTube Music instead without issues. Hmmm...
traeregan · 1d ago
For us Cloud SQL instances are toast but App Engine Standard instances are still serving requests. Google Cloud console is borked too, mostly just erroring out.
digest · 1d ago
love how their status page is green with no issues detected!
Haha, I don't ordinarily spend a lot of time in the Google Cloud Console but just now I was debugging a squirrely OAuth issue with reCAPTCHA failing to refresh several days running. I'm getting this weird page error, and I think, "Is this an issue with my organization? [futz futz futz] Hey wait is GCP actually down?" And it turns out to be the top discussion on HN. XD
Brystephor · 1d ago
some core GCP cloud services are down. might be a good time for GCP dependent people to go for a walk, do some stretches, and check back in a couple hours.
niij · 1d ago
Experiencing 504s in Google Meet.
Google Cloud Console won't load.
tiagod · 1d ago
Getting Gateway timeouts on docker hub. Maybe related?
I can pull images.
Does anyone know if instance-to-instance networking has been affected? My Redis instance has been throwing a lot of connection errors.
markbnj · 1d ago
We're not seeing any connectivity issues between pods and vms in our vpc, but your mileage may vary.
Axsuul · 1d ago
Thanks
kgwxd · 1d ago
Sorry, after decades of being hard wired, I just installed a PCIe Wifi6 card on my desktop. Internet took a dive the second I got it connected. Must have done something wrong.
A contact in google mentioned to me that some bad update to Google Cloud Storage service has caused some cascading issues affecting multiple GCP services.
unsupp0rted · 1d ago
The last few times this happened I wouldn't have thought "So this is the day AI takes over".
But this time...
herpderperator · 19h ago
When Google said GCP is "down", did it affect entire availability zones within a region? For people who designed redundant infrastructure, did your backup AZs/regions keep your systems online?
jlhawn · 19h ago
The outage was global. For my team specifically, a global Identity and Access Management outage meant that our internal service accounts could not refresh their short-lived access tokens and so different parts of our infrastructure began to fail over the course of an hour or so, regardless of what region or zone they were in. Services were up, but they could not access critical GCP services because of auth-related issues which resulted in internal service errors for us.
To give an example, our web servers connect to our GCP CloudSQL database via a Cloud SQL Auth Proxy (there's also a connection pooler in between but that also stayed up). The connection to the proxy was always available, but the proxy wasn't able to renew auth tokens it uses to tunnel to the database, regardless of where the webserver or database happened to be located. To mitigate this in the future we're planning to stop using the auth proxy and connect directly via mutual TLS but now it means we have to manage TLS certificates.
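For context, a rough sketch of what that direct mutual-TLS connection looks like with a standard PostgreSQL client; the host, database, user, and certificate file names below are placeholders, and those certificate files are exactly what now has to be issued and rotated.

    import psycopg2

    # Placeholders: the instance address and certificate paths depend on your setup.
    conn = psycopg2.connect(
        host="10.0.0.5",
        dbname="app",
        user="app_user",
        sslmode="verify-ca",           # verify the server certificate against the CA
        sslrootcert="server-ca.pem",   # CA certificate downloaded from the instance
        sslcert="client-cert.pem",     # client certificate presented for mutual TLS
        sslkey="client-key.pem",
    )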
slt2021 · 19h ago
so much for System Design interview and bs gatekeeping...
I doubt gcloud would be affected by an aws-specific cni. Unless maybe enough AWS users have a GCP backup environment that they flipped on all at once, but it seems unlikely
itdependsnet · 22h ago
Good point. I took that as simply the example they had in front of them, with the underlying issue being generic.
biglyburrito · 17h ago
I wonder how many SLAs Google blew out today with this outage.
plerpler · 23h ago
GCP Artifact registry still down... Not accepting image push and showing 500 status code
TN1ck · 1d ago
Cloudbuild completely down for us. Getting "Visibility check was unavailable" errors.
acureau · 1d ago
Well this explains the issues I've been having with Spotify through the last hour.
We're in us-west-1 and seeing issues across Cloud Run, Cloud SQL, Cloud Storage and Compute Engine.
jim180 · 23h ago
Claude Code is down :( too lazy to do manual conversion from Cocoapods dependency to SwiftPM
rcfox · 1d ago
I'm able to login to the GCP dashboard, but it isn't able to find any of my projects.
throwaway7783 · 23h ago
Even though BigQuery is not listed in affected services, we see errors connecting to it
tecleandor · 23h ago
It's listed by regions :(
CrimsonCape · 23h ago
I'm having trouble getting any Street View imagery. Can anyone else confirm?
NameError · 23h ago
Yep, street view is not working at all for me
cyberflame · 23h ago
Root cause has been identified and it's being resolved/monitored now
aetherson · 1d ago
We're experiencing intermittent slowness and timeouts on our GCP everything.
alexcroox · 23h ago
2 hour outage at this point
cyberflame · 23h ago
Everything except us-central1 is back up - it's recovering now though
throwcarsales · 1d ago
My friends and I are even having trouble getting Rcs text messages to send.
ZiyadFarhan · 1d ago
this aint looking good yall
cyrux004 · 1d ago
GPay which is a widely used payment service in India is down as well
edm0nd · 23h ago
India is having a really bad day today
makk · 21h ago
And THAT, Smithers, is why we wear hardhats on the job.
xan_ps007 · 1d ago
Where are the AI agents?
dionys · 1d ago
Poor agents, finally taking a break
kyleee · 20h ago
The AI is over employed
keizo · 1d ago
Yup, intermittent db connection issues and cloud storage problems.
LZ_Khan · 21h ago
Is this the new Y2k?
dpedu · 23h ago
reCAPTCHA affected? I couldn't log into my local utilities website due to a reCAPTCHA error. Downdetector agrees, but I interpret that site as dubious.
marifjeren · 23h ago
Yeah recaptcha is down intermittently
kodisha · 21h ago
> Waiting for downdetector.com to respond...
matdehaast · 1d ago
Not just GCP, most of Google's services are out of action
milesward · 1d ago
I'm on a meet, in cal, editing a dozen docs, in GCP, pushing commits and launching containers; it's not clear yet what exactly is going on but it's certainly intermittent and sparse, at least so far
parpfish · 1d ago
stop it. you're overloading their system by doing three things at once. let the rest of us have a turn.
CyrMeta · 1d ago
If all services are down at once, is no one thinking of or mentioning a potential attack on US cloud providers? (China or Russia) Maybe?
meltyness · 1d ago
Can't upload discord attachments from mobile.
ea016 · 1d ago
Google Cloud Storage seems to be down or very slow
morgandoane · 1d ago
Storage, CloudRun, Firebase...... All down....
dana321 · 1d ago
Auth, GCP, Windsurf, Augment Code, Udio, the list is endless.
Facebook, Reddit and Hacker News are still up, but that's about it
evtothedev · 1d ago
Yarn package registry also appears to be down.
tom1337 · 23h ago
npm is, registry.yarnpkg.com is only a CNAME to npm
First, check that nobody else in your family is making a call on the phone line that your modem is connected to, then make sure to disable your Internet Explorer add-ons before trying again.
If everything is down at the same time - no one is mentioning an attack on US cloud services? (China or Russia) Maybe?
ashwinsundar · 23h ago
Interesting that all Digital Ocean services are fine...
mlb_hn · 1d ago
Our GCP is down
milesward · 1d ago
What region?
ashu1461 · 1d ago
I think multiple regions are down. asia-south and us-east at least are impacted.
a_void_sky · 23h ago
asia-south is working for me
whitedurna · 1d ago
I think it'll be a disaster.
jwatte · 22h ago
Let's say a typical base service (network attached RAM or whatever) has 99.99% reliability.
If you have a dependency on 100 of those, you're suddenly closer to 99% reliability.
So you switch to higher-level dependencies, and only have 10 dependencies, for a 99.9% reliability.
But! It turns out, those dependencies each have dependencies, so they're really already more like 99.9% at best, and you're back at 99% reliability.
"good enough" is, indeed, just good enough to make it not worthwhile to rip out all the upstreams and roll your own everything from scratch, because the cost of the occasional outages is much lower than the cost of reinventing every single wheel, nut, bolt, axle, bearing, and grease formulation.
"All locations except us-central1 have fully recovered. us-central1 is mostly recovered. We do not have an ETA for full recovery in us-central1."
kubectl_h · 22h ago
An hour later and everything is a mess in central-1. They seemed to jump the gun on that one. Doesn't matter if some dinky service like "AutoML Vision" is working, if GCS isn't, then they shouldn't post an optimistic message.
sleepybrett · 1d ago
The npm registry happens to be hosted on GCP, it seems, because that's down as well.
"Firebase Data Connect unavailable due to a known Google Cloud global outage"
Meanwhile the Google Cloud status page https://status.cloud.google.com/ says "No major incidents" and everything is green. So Google Cloud knows there is an outage but just deems it not major enough to show.
Edit to add: within 10 minutes of this post Google updated their status page. More curiously the Firebase page I linked to has been edited to remove mention of Google Cloud in the status and now says "Firebase Data Connect is currently experiencing a service disruption. Please check back for status. ".
shmatt · 1d ago
IIRC status pages drive customer compensation for downtime. Updating it is basically signing the check for their biggest customers, in most similar companies you need a very senior executive to approve the update
On the other side of this, Firebase probably doesn't have money at stake making the update
aiauthoritydev · 1d ago
It is not the status page that drives customer compensation. It is downtime.
camdenreslink · 23h ago
The status page is essentially an admission of guilt. It can require sign-off from the legal department and a high-level official at the company to update it, including the verbiage used on the status page.
hodgesrm · 23h ago
> It can require approval from the legal department and a high level official from the company to approve updating it and the verbiage used on the status page.
Is that true in this case or are you speculating? My company runs a cloud platform. Our strategy is to have outages happen as rarely as possible and to proactively offer rebates based on customer-measured downtime. I don't know why people would trust vendors that do otherwise.
camdenreslink · 17h ago
I don't have any special knowledge about the companies involved in this outage. I do know most (all?) status pages for large companies have to be manually updated and not just anybody can do that. These things impact contracts, so you want to be really sure it is accurate and an actual outage (not just a monitor going off, possibly giving a false positive).
dpkirchner · 23h ago
You are likely right, but it's still gross dishonesty. I'm not ready to let Google and their engineers off the hook for that.
refulgentis · 23h ago
Inter alia, "is essentially", "it can", tell us this is just free-associating.
We should probably avoid punishing them based on free-associating made by a random not-anonymous not-Googler not-Xoogler account on HN. (disclaimer: xoogler)
dijit · 23h ago
then it’s fucking useless.
Let’s crowd source our own
We tried to do that. It didn't work. Too much spam, scams, and abuse.
baggy_trough · 23h ago
You're in the crowdsourced version right now.
hugs · 23h ago
working on it! (valet network)
refulgentis · 23h ago
"It can", this is just free-associating, don't let it get to ya. (disclaimer: xoogler)
refulgentis · 23h ago
Nah, it's just some client-side caching / JS stuff. Clicking the big refresh button fixed it for me, 15 minutes before OP noted it.
(n.b. as much as Google in aggregate is evil, they're smart evil. You can't avoid execs approving every outage because checks without some paper trail, and execs don't want to approve every outage, you'd have to rely on too many engineers and sales people, even as ex-employees, to keep it a secret. disclaimer: xoogler)
(EDIT: for posterity, we're discussing a "overall status" thing with a huge refresh button, right above a huge table chockful of orange triangles that indicate "One or more regions affected" - even when the "overall status" was green, the table was still full of orange and visible immediately underneath. My point being, you gotta suppose a wholeeee bunch of stuff to get to the point there was ever info suppressed, much less suppressed intentionally to avoid cutting checks)
kjuulh · 1d ago
Something must be preventing them from updating the status page at this point. Of course they could still deem it not enough, but just from my limited tests, docker, buf, etc. are outright down (it may not be GCP that is down, but it is quite the coincidence). I'd wager that this is much more widespread.
sss111 · 1d ago
I'm actually on a bridge call with Google Cloud, we're a large customer -- I just learned today that their status page is not automated, instead someone actually manually updates it!
paxys · 1d ago
That's the case with every status page. These pages are managed by business people, not engineers, because their primary purpose is to show customers that the company is meeting contractually defined SLAs.
belter · 1d ago
Surely no SLA will be based on the display of the status page...
phatskat · 23h ago
Maybe or maybe not, but someone with nothing better to do than monitor that page out of boredom might “get on the horn” with lots of people to complain if a green check mark turns to a red X.
paxys · 23h ago
They aren't automatically based on that page, but seeing a red status makes it too easy for customers to point to it and go "see you were down, give us a refund".
Tostino · 23h ago
should* be
redeux · 1d ago
This is actually the norm for status pages. If you look at the various status page offerings you'll see that they're designed around manual updates.
quectophoton · 23h ago
The best way to consistently have good "time to response" metrics is to be the one deciding when an incident "actually" started happening, if at all :)
kjuulh · 1d ago
This feels very much like when Facebook locked themselves out of their datacenters. ;)
It's extra funny that the GCP status page even includes a "last updated" time, which is built exactly to convey a possible failure to update in cases like this
No major incident as of “ Last updated time: 12 Jun 2025, 11:48 PDT”
codergautam · 1d ago
Maybe the outage is preventing them from updating that specific page? Hmm
EDIT: Looks like it has been updated now (6:49 PM UTC)
artooro · 1d ago
Anytime there is an outage that affects App Engine, Google can't seem to get their status page updated for an extended period of time.
alexcroox · 1d ago
Almost an hour to update the page...
devMem · 1d ago
I hope this is the case, or google is super unreliable for production grade work.
samdung · 1d ago
:))))))
Workaccount2 · 1d ago
I asked testing to see if it was up, and it pointed out that Google shows nothing but Nest is showing an outage right now, lol
No notifications for mentions, have to email the mods at the hn@ email address.
cwillu · 1d ago
Do we know if email is still working? kidding-but-not-really-because-gmail…
hambro · 1d ago
I think I was a bit optimistic about the response time from mods. This thread won the popularity contest quite handily...
Thanks for letting me know about emailing the mods, refreshingly explicit to send email.
lawrenceyan · 23h ago
Solana is up ¯\_(ツ)_/¯
estees_ecstacy · 23h ago
seems to be recovering
desktopninja · 23h ago
Borg and K8s were fighting for resources, so Gemini decided to take out DNS. Now a sysAdmin has to step in.
* just trying to add a little humour. pretty stressful outage. grarr!!
ManBeardPc · 20h ago
The cloud enables you to scale. It allows you to distribute systems across multiple regions and data centers. Seems that this is true for outages as well.
The PHP application I wrote as a student, running on a single self-hosted server, had higher uptime than any of the cloud providers or redundant systems I have seen so far. If you don't need the cloud for scalability, do it yourself and save yourself the trouble and money. Most companies would be better off investing in some IT staff instead of putting their systems in the hands of some proprietary and insanely complex cloud environment. You are becoming dependent on someone you don't know, have no control over, and can't talk with directly. Also, the single point of failure just shifts: from your system to whatever system is managing the cloud. Guess one advantage is that you can shift the blame to someone else…
"Chemist checks the project status, activation status, abuse status, billing status, service status, location restrictions, VPC Service Controls, SuperQuota, and other policies."
-> This would totally explain the error messages "visibility check (of the API) failed" and "cannot load policy" and the wide amount of services affected.
cf. https://cloud.google.com/service-infrastructure/docs/service...
EDIT: Google says "(Google Cloud) is down due to Identity and Access Management Service Issue"
No comments yet
https://www.cloudflarestatus.com/
No comments yet
Cloudflare advertises themselves as _the_ redundancy / CDN provider. Don't ask me for an "alternative" but tell them to get their backend infra shit in order.
As such, I wouldn't be overly surprised if all of CF's non-edge compute (including, for example, their control plane) is just tossed onto a "competitor" cloud like GCP. To CF, that infra is neither a revenue center, nor a huge cost center worth OpEx-optimizing through vertical integration.
Scale is just a way to keep costs low. In addition to economies of scale, routing tons of traffic puts them in position to negotiate no-cost peering agreements with other bandwidth providers. Freemium scale is good marketing too.
So there is no strategic reason to avoid dependencies on Google or other clouds. If they can save costs that way, they will.
So long as the outages are rare, I don’t think there is much downside for Cloudflare to be tied to Google cloud. And if they can avoid the cost of a full cloud buildout (with multiple data centers and zones, etc…), even better.
Plus their past outage reports indicate they should be running their own DC: https://blog.cloudflare.com/major-data-center-power-failure-...
"Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency. As a result, certain Cloudflare products that rely on KV service to store and disseminate information are unavailable"
DownDetector also reports Azure and Oracle Cloud; I can't see them also being dependent on GCP...
I guess DownDetector isn't a full source of truth though.
https://ocistatus.oraclecloud.com/#/ https://azure.status.microsoft/en-gb/status
Both green
It's often things like "we got backpressure like we're supposed to, so we gave the end user an error because the processing queue had built up above threshold, but it was because waiting for the timeout from SaaS X slowed down the processing so much that the queue built up." (Have the scars from this more than once.)
My apps run on AWS, but we use third parties for logging, auth support, billing, things like that. Some of those could well be on GCP, though we didn't see any elevated error rates. Our system is resilient against those being down - after a couple of failed tries to connect, it will dump what it was trying to send into a dump file for later re-sending. Most engineers will do that. But I've learned after many bad experiences that after a certain threshold of failures to connect to one of these outside systems, my system should just skip calling out except for once every retryCycleTime, because otherwise all it will do is add two connectionTimeouts to every processing loop, building up messages in the processing queue, which eventually creates backpressure up to the user. If you don't have that level of circuit breaker built, you can cause your own systems to give out higher error rates even if you are on an unaffected cloud.
So today a whole lot of systems that are not on GCP discovered the importance of the circuit breaker design pattern.
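For anyone who hasn't built one of these, here's a minimal sketch of the pattern being described above. It isn't any particular library; the thresholds, timings, and the fallback are placeholders standing in for whatever third-party call is slowing your loop down.

    // Minimal circuit-breaker sketch (TypeScript). After `failureThreshold`
    // consecutive failures the breaker "opens": calls fail fast instead of
    // waiting out a connection timeout, and the dependency is only probed
    // again once per `retryCycleMs`.
    class CircuitBreaker {
      private failures = 0;
      private openedAt = 0; // epoch ms of the failure that opened the breaker

      constructor(
        private readonly failureThreshold = 5,
        private readonly retryCycleMs = 60_000,
      ) {}

      async call<T>(fn: () => Promise<T>, fallback: (err: unknown) => T): Promise<T> {
        const open = this.failures >= this.failureThreshold;
        const probeDue = Date.now() - this.openedAt >= this.retryCycleMs;
        if (open && !probeDue) {
          // Fail fast: don't add another connectionTimeout to the processing loop.
          return fallback(new Error("circuit open"));
        }
        try {
          const result = await fn();
          this.failures = 0; // a success closes the breaker again
          return result;
        } catch (err) {
          this.failures += 1;
          this.openedAt = Date.now();
          return fallback(err);
        }
      }
    }

The fallback here would do exactly what the comment above describes: dump the payload to a local file for later re-sending, so the main processing queue never waits on the broken dependency.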
https://www.businessinsider.com/google-return-office-buyouts...
Nooooo I'm going to have to use my brain again and write 100% of my code like a caveman from December 2024.
No comments yet
Devs during June 12, 2025 GCP outage: "What, no AI?! Do you think I'm a slave?!"
This is a good wake-up call on how easily (and quickly) we can all become pretty dependent on these tools.
No comments yet
Update - We are seeing a number of services suffer intermittent failures. We are continuing to investigate this and we will update this list as we assess the impact on a per-service level.
Impacted services: Access, WARP, Durable Objects (SQLite-backed Durable Objects only), Workers KV, Realtime, Workers AI, Stream, parts of the Cloudflare dashboard. Jun 12, 2025 - 18:48 UTC
Edit: https://news.ycombinator.com/item?id=44261064
Supposedly there are plans for how to conduct a "cold" start of the system, but as far as I know it's never actually been tried.
Presumably you could do this for the infrequently read configs, so the service serving frequently read configs can bootstrap without depending on the service for infrequently read configs.
(Its Finnish inventor is incidentally working for Google in Stockholm, as per https://en.wikipedia.org/wiki/Jarkko_Oikarinen)
Cloudflare going down was really the GCP problem. Most of the others are going to be dependencies on CF or random Google stuff.
Discord, for example, uses GCS for updates, etc.
They have software running in most ISPs around the world:
https://help.speedtest.net/hc/en-us/articles/360039164793-Ho...
(OTOH, it's not always trivial to define/detect an outage.)
Wasn't basically everything (hyperbolically speaking, of course) practically impacted today?
How much weight really comes from those social media posts? Is there an indirect effect of people reading these posts, then flocking to hit the report button, sight unseen?
(downdetector infra also likely affected)
https://www.google.com/appsstatus/dashboard/
https://status.cloud.google.com/index.html
Edit: The GCP status page got updated <1 minute after I posted this, showing affected services are Cloud Data Fusion, Cloud Memorystore, Cloud Shell, Cloud Workstations, Google Cloud Bigtable, Google Cloud Console, Google Cloud Dataproc, Google Cloud Storage, Identity and Access Management, Identity Platform, Memorystore for Memcached, Memorystore for Redis, Memorystore for Redis Cluster, Vertex AI Search
The only accurate status pages are provided by third party service checkers.
Well, yes, incentives: do big customers with wads of cash have an incentive to demand accurate reporting from their suppliers, so they can react better rather than having to identify issues themselves? If there's systematic underreporting, then apparently not. Though in this case they did update their page.
If you think about it from the corp's perspective, it makes perfect sense. They weigh the risk and reward. Are they going to be rewarded for radical transparency, or suffer fallout for acknowledging how bad of a dumpster fire the situation actually is? Easier for the corp to just lie, obscure, and downplay to avoid having to face that conundrum in the first place.
Heroku was down for _hours_ the other day before there was any mention of an incident - meanwhile there were hundreds of comments across twitter, hn, reddit etc.
This is my position.
… I get that PR types probably want to massage the message, but going radio silent is not good PR.
We are truly in God's hands.
$ prod
Fetching cluster endpoint and auth data. ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=503, message=Visibility check was unavailable. Please retry the request and contact support if the problem persists
I remember Reddit being down for like a whole day or so and they claimed 99.5% uptime for that month. (A full 24 hours of downtime in a 30-day month caps uptime at 29/30 ≈ 96.7%, so that figure can't have been honest.)
Not trying to snark, I legit got nerdsniped by this comment.
Engineering and Operations in the Bell System is pretty great for this.
It's a lot easier to keep packets flowing than to keep non-self-contained servers serving.
Downdetector reports started spiking over an hour ago but there still isn't a single status that isn't a green checkmark on the status page.
E.g. it was mostly the us-central1 region that was affected, and within it only some services (e.g. regular instances and GKE Kubernetes were not affected in any region). So if we ask "what percentage of GCP is down", it may well be less than the threshold.
On the other hand, about a month ago, on 2025-05-19, there was an 8-hour-long incident with Spot VM instances affecting 5 regions, which was way more important to our company, but it didn't make any headlines.
Then such pages should report a partial failure. Indeed the GCP outage page lists an orange "One or more regions affected" marker, but all services show the green "Available" marker, which apparently is not true.
The issue is that they don't want to. (For claiming good uptime, which may even be true for average user, if most outages affect only small groups)
But really not until after it's been on CNN a while.
SLA agreements.
My understanding is this is part of Google's internal PSD offering (Public Status Board), which uses SCS (Static Content Service) behind GFE (Google Frontend), which is hosted on Borg and also fronts other large-scale apps such as Search, Drive, YouTube, etc.
Of course, I recognize that you purposefully pulled the Cunningham's Law trigger so that you, too, would gain additional knowledge that nobody would have told you about otherwise, as one logically would. And that you played it off as some kind of derision towards doing that all while doing it yourself made it especially funny. Well done!
It is what it says on the tin: choosing to lie doesn't mean you want the truth communicated.
I apologize that it comes across as aggro; it's just that I'm not quite as giggly about this as you are. I think I can safely assume you're old enough to recognize some deleterious effects of lying.
How hard is it for the frontend to detect that the last update to the status page was made a while ago? That by itself implies there is an error that should be reported.
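For what it's worth, the check itself would be tiny. This is a hypothetical sketch assuming the page exposes its last-updated time as a parseable timestamp; the real dashboard's markup may not make that easy.

    // Hypothetical staleness check: treat the dashboard as suspect if its own
    // "last updated" timestamp is older than some threshold. Assumes an
    // ISO-8601 timestamp is available; the actual page may differ.
    const STALE_AFTER_MS = 30 * 60 * 1000; // 30 minutes

    function isStatusStale(lastUpdatedIso: string, now: Date = new Date()): boolean {
      const lastUpdated = Date.parse(lastUpdatedIso);
      if (Number.isNaN(lastUpdated)) return true; // unparseable counts as stale
      return now.getTime() - lastUpdated > STALE_AFTER_MS;
    }

    if (isStatusStale("2025-06-12T11:48:00-07:00")) {
      console.warn("Status page hasn't been updated recently; treat it as stale.");
    }

Whether anyone would actually surface that warning is, as the replies point out, a different question.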
But why would the frontend have processing logic when all you need is to serve a static HTML document?
Even if it did, what would you do with that information? Throw up a screen with: Call us for service information at 1-HAHA-JUST-KIDDING
It’s not like it really matters if it’s accurate anyway.
How could you possibly trust them with your critical workloads? They don't even tell you whether or not their services work (despite obviously knowing)
RCS was designed and specced by the GSMA as a telco-run decentralized system that would replace SMS like for like, but there were only a handful of rollouts. It's really only gotten use as Google pushed it onto Android using their RCS server; recently iOS started supporting it, although I don't know what server they attach to.
Since RCS is basically the fifth wave of Google IM, it's no surprise that when they have a major outage, RCS is pretty much broken.
According to Wikipedia, only the carrier's RCS server is used [1]
[1]: https://en.wikipedia.org/wiki/Rich_Communication_Services#So...
From the end user's perspective, if the carrier didn't use Jibe RCS, it simply wouldn't work well.
> Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency. As a result, certain Cloudflare products that rely on KV service to store and disseminate information
"So shady"
It's really, really hard to make a status page realtime.
> What makes you think it’s hard?
Being responsible (or rather, on a team of people responsible) for a status page of a big tech co made me think it’s hard.
“Is it down?” is not a binary question.
No comments yet
I would love if anyone has any good tool recommendations!
Here are some others, although some seem to be experiencing issues due to the current outage I can only presume.
- https://atlas.ripe.net/probes/public
- https://www.ihr.live/en/global-report
- https://www.ihr.live/en/network
- https://bgp.he.net/
- https://ioda.inetintel.cc.gatech.edu/dashboard/asn
Why would you think this outage is (internet) BGP related?
"Cloudflare’s critical Workers KV service went offline due to an outage of a 3rd party service that is a key dependency."
I really hope CF explains this apparent Google dependency in detail in their post mortem.
As per their API documentation [1], it might be linked to Firebase, which might explain this?
[1] https://github.com/HackerNews/API
Getting 504 errors on anything from registry.npmjs.org that isn't cached on my machine.
https://status.npmjs.org/incidents/dn5mcp85737y
the much more disastrous situation would have been the irm fallback.
Also Spotify isn't working for me, so I assume that's also related.
These are my most important productivity resources! Sad!
… Proceeds to show worldwide degraded service level alerts.
I'm sure it's not entirely impossible, but sounds backwards to me. Sure - a lot of the internet relies on Cloudflare, but I'd be very surprised if GCP had a direct dependency on Cloudflare, for a lot of reasons. Maybe I misunderstood your comment?
Kind of nice to not be glued to AI chat prompts for a while to be honest.
> Multiple GCP products are experiencing impact due to Identity and Access Management Service Issue
IAM issue huh. The post-mortem should be interesting at least.
Even a few years ago senior management knew to stay the fuck out except for asking for more info.
Google Cloud Console won't load.
Example: https://hub.docker.com/layers/library/eclipse-mosquitto/late...
A contact in google mentioned to me that some bad update to Google Cloud Storage service has caused some cascading issues affecting multiple GCP services.
But this time...
To give an example, our web servers connect to our GCP CloudSQL database via a Cloud SQL Auth Proxy (there's also a connection pooler in between, but that also stayed up). The connection to the proxy was always available, but the proxy wasn't able to renew the auth tokens it uses to tunnel to the database, regardless of where the webserver or database happened to be located. To mitigate this in the future we're planning to stop using the auth proxy and connect directly via mutual TLS, but that means we now have to manage TLS certificates.
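For readers wondering what that direct connection looks like in practice, here is a rough sketch with node-postgres. The host, cert paths, and credentials are illustrative, and it assumes a PostgreSQL instance with client certificates issued from the Cloud SQL console, not necessarily the commenter's exact setup.

    // Rough sketch: connect to Cloud SQL directly over mutual TLS with
    // node-postgres instead of tunnelling through the Cloud SQL Auth Proxy.
    // All names and paths below are illustrative.
    import { readFileSync } from "node:fs";
    import { Pool } from "pg";

    const pool = new Pool({
      host: "10.0.0.5", // the instance's private IP (illustrative)
      port: 5432,
      database: "app",
      user: "app_user",
      password: process.env.DB_PASSWORD,
      ssl: {
        ca: readFileSync("/etc/db-certs/server-ca.pem"),     // verify the server
        cert: readFileSync("/etc/db-certs/client-cert.pem"), // our client identity
        key: readFileSync("/etc/db-certs/client-key.pem"),
        rejectUnauthorized: true,
      },
    });

The tradeoff is the one named above: nothing in this path calls out to IAM at connection time, but certificate rotation becomes your problem.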
EDIT: Updated link to point to the specific incident.
Seeing how everything seems to be broken everywhere, I'm very much looking forward to the post-mortem.
Ask HN: Is Firebase Down? - https://news.ycombinator.com/item?id=44260669
Crossing my fingers for a quick resolution.
Thankfully we use AWS at work for everything critical
https://downdetector.com/
Anyone know what tech stack they use and where they host?
Cloud console does nothing.
They should host their support services on AWS and vice-versa.
No comments yet
Facebook, Reddit, and Hacker News are still up, but that's about it
Good luck out there!
https://nitter.net/Google/status/1933246051512644069
Seems obvious.
"good enough" is, indeed, just good enough to make it not worthwhile to rip out all the upstreams and roll your own everything from scratch, because the cost of the occasional outages is much lower than the cost of reinventing every single wheel, nut, bolt, axle, bearing, and grease formulation.
But no, tech bros, just keep following your ketamine-addled edgelord, who did this with Twitter...
https://en.wikipedia.org/wiki/Bart_Gets_Famous
https://status.cloud.google.com/
File that in the "status pages worth ~0" category.
Did someone screw up BGP again?
sslv3 alert bad certificate:../deps/openssl/openssl/ssl/record/rec_layer_s3
"Firebase Data Connect unavailable due to a known Google Cloud global outage"
Meanwhile the Google Cloud status page https://status.cloud.google.com/ says "No major incidents" and everything is green. So Google Cloud knows there is an outage but just deems it not major enough to show.
Edit to add: within 10 minutes of this post Google updated their status page. More curiously, the Firebase page I linked to has been edited to remove the mention of Google Cloud and now says "Firebase Data Connect is currently experiencing a service disruption. Please check back for status."
On the other side of this, Firebase probably doesn't have money at stake making the update
Is that true in this case or are you speculating? My company runs a cloud platform. Our strategy is to have outages happen as rarely as possible and to proactively offer rebates based on customer-measured downtime. I don't know why people would trust vendors that do otherwise.
We should probably avoid punishing them based on free-association by a random not-anonymous not-Googler not-Xoogler account on HN. (disclaimer: xoogler)
https://downdetector.com/
* https://www.datacenterdynamics.com/en/news/facebook-blames-m...
I at least have no issues on their services across a few regions, and their console works fine.
https://health.aws.amazon.com/health/status
Perhaps CF is dependent on some GCP services?
> https://health.aws.amazon.com/health/status
Historically, the worst place to figure out if AWS is up or down is Amazon's own status page.
At least some of the information has to be.
The weird part is that it took them almost a full hour to update it.
https://status.nest.com/posts/dashboard
say it's not so!