My SRE brain, reading between the lines: they've been a feature factory and the tech debt has finally caught up with them.
My guess for why they've been down so long: they don't have a good rollback path, so they're attempting to fix forward with limited success.
esafak · 2h ago
If so, it's probably a good time to apply for an SRE position there, unless they really don't get it!
acedTrex · 3h ago
An outage of this magnitude is almost ALWAYS the direct and immediate fault of senior leadership's priorities and focus: pushing too hard in some areas, not listening to engineers on needed maintenance tasks, etc.
AsmodiusVI · 1h ago
And engineers are never the cause of mistakes? There can't possibly be any data to back up the claim that major outages are more often caused by leadership. I've been in major incidents simply because someone pushed a bad change out to a switch network. Statements like these only go to show how much we have to learn, humble ourselves, and stop blaming others all the time.
acedTrex · 41m ago
PROLONGED outages are a failure mode that, more often than not, requires organizational dysfunction to happen.
AlotOfReading · 1h ago
Leadership can include engineers responsible for technical priorities. If you're down for that long, though, it's usually an organizational fuck-up, because the priorities didn't include identifying and mitigating systemic failure modes. The proximate cause isn't all that important, and the people who set organizational priorities are by and large not engineers.
nusl · 4h ago
My sympathy for those in the mud dealing with this. Never a fun place to be. Hope y'all figure it out and manage to de-stress :)
Edit: an outage of this length smells of bad systems architecture...
hinkley · 3h ago
Prediction: Someone confidently broke something, then confidently 'fixed' it, with the consequence of breaking more things instead. And now they have either been pulled off of the cleanup work or they wish they had been.
bravesoul2 · 3h ago
Wow, >31h. I'm surprised they couldn't rebuild their entire system in parallel on new infra in that time. Can be hard if data loss is involved though (a guess). Would love to see the postmortem so we can all learn.
stackskipton · 3h ago
I doubt it's an infra failure so much as a software failure. Their bad design has caught up with them and they can't just throw more hardware at it for some reason. Most companies have this https://xkcd.com/2347/ somewhere in their stack, and it's fallen over.
> 99.99%+ uptime is the standard we need to meet, and lately, we haven’t.
Four nines is not what I would be citing at this point. (That's less than an hour of downtime per year, so they've just burned that budget for the next three decades.)
Maybe aim for 99% first.
Otherwise a pretty honest and solid response, kudos for that!
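For anyone who wants to sanity-check the downtime math, here's a rough back-of-the-envelope sketch in Python (my own numbers, approximate, using the 31h outage length mentioned above):

    # Rough downtime budgets for common availability targets.
    HOURS_PER_YEAR = 365.25 * 24  # ~8766 hours

    outage_hours = 31                             # length of this incident so far
    targets = [0.99, 0.999, 0.9999, 0.99999]      # two to five nines

    for sla in targets:
        budget_hours = HOURS_PER_YEAR * (1 - sla)   # allowed downtime per year
        years_burned = outage_hours / budget_hours  # how much budget a 31h outage eats
        print(f"{sla * 100:g}%: {budget_hours:8.2f} h/year allowed, "
              f"a {outage_hours}h outage burns ~{years_burned:.1f} years of budget")

At 99% that's roughly 87.7 hours of allowed downtime per year (a bit under three outages this size), while four nines allows only about 53 minutes per year, which is where the "next three decades" figure comes from.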
zamadatix · 3h ago
One could have nearly 3 such incidents per year and still have hit 99%.
I always strive for 7 9s myself, just not necessarily consecutive digits.
manquer · 1h ago
It could be consecutive too, and even start with a 9 and be all nines. Here you go: 9.9999999%
Spivak · 3h ago
I strive for one 9, thank you. No need to overcomplicate. We use Lambda on top of Glacier.
jeeyoungk · 3h ago
why go for 9's when you can go for 8s? you can aim for 88.8888888!
hinkley · 3h ago
There's an old rant I cannot find at the moment that argued that most systems that believe they are 5 9's are really more like 5 8's.
hnlmorg · 3h ago
Hit that and you also master time travel.
theideaofcoffee · 3h ago
Lots of folks get starry-eyed and aim for five nines right out of the gate when they should have been targeting nine fives and learning from that. Walk before you run.
edoceo · 3h ago
Interesting that the phrase "I'm sorry" was in there. Almost feels like someone in the Big Chair taking a bit of responsibility. Cheers to that.
thih9 · 3h ago
> Change controls are tighter, and we’re investing in long-term performance improvements, especially in the CMS.
This reads as if overall performance was an afterthought, which doesn't seem practical; it should be a business metric, since it is important to the users, after all.
Then again, it’s easy to comment like this in hindsight. We’ll see what happens long term.
newZWhoDis · 3h ago
As a former webflow customer I can assure you performance was always an afterthought.
stackskipton · 3h ago
I mean, if customers don't leave them over this, the higher-ups likely won't care after the dust settles.
willejs · 3h ago
Hugops to the people working on this for the last 31+ hours.
Running incidents of this significance is hard and draining and requires a lot of effort; one that goes on for this long must be very difficult for all involved.
sangeeth96 · 3h ago
Hugs to the ones dealing with this and the users of Webflow who invested in them for their clientele. Hoping they'll release a full postmortem once the sky clears up.
dangoodmanUT · 4h ago
Hugs for their SREs sweating bullets rn
betaby · 3h ago
I'm more surprised that WordPress-like platforms are profitable businesses in 2025.
bogzz · 3h ago
Why? Genuinely asking. Did you mean because there are free alternatives to self-host? I don't think that it would be so easy for someone in the market for a WYSIWYG blog builder to set everything up themselves.
betaby · 3h ago
Exactly. Given the abundance of one-click-deploy WordPress offerings from value providers like OVH/Hetzner, I would think margins are very low for WYSIWYG site builders.
esseph · 2h ago
Decent demand, just awful margins.
And most non-tech people (and many in tech) have never heard of OVH/Hetzner.
newZWhoDis · 3h ago
We moved away from Webflow because it was slow (it earned the nickname "web-slow" internally).
Plus, despite marketing begging for the WYSIWYG interface, they actually weren't creative enough to generate new content at a pace that required it.
We massively increased conversion rates by going fully native and having one engineer churn out parts kits / kitbash LPs from said kits.
Scale for reference: ~$10M/month
plutaniano · 3h ago
Will the company survive long enough to produce a postmortem?
chupchap · 2h ago
Bring back Failwhale
wewewedxfgdf · 3h ago
Companies get very good at handling disasters - after the disaster has happened.
dylan604 · 3h ago
The problem is they get good at that specific disaster. They can only plug a hole in the dike after the hole exists; then they look at the hole and make a plug the exact shape of that hole. The next hole starts the process over for it specifically, each time. There's no generic plug that can be used every time. So sure, they get very good at making specific plugs. They never get to the point of building a better dike that doesn't spring so many leaks.
wewewedxfgdf · 3h ago
It is the job of the CTO to ensure the company has anticipated as many such situations as possible.
It's not a very interesting thing to do, however.
dylan604 · 3h ago
okay, and? the CTO isn't the last word in anything. if they're overruled in favor of releasing new features, acquiring new users/clients, and sales-forward dev cycles, then the whole thing can collapse under its own weight.
It's actually the job of the CEO to keep all of the c-suite people doing their jobs. Doesn't seem to stop the CEO salary explosions.
wewewedxfgdf · 3h ago
I think we are agreed.
Companies, after a disaster, focus lots of effort on that particular disaster, leaving all the other potential disasters unplanned for.
If you work at Webflow, you can anticipate LOTS of work in disaster recovery in the next 12 months. This has magically become a high priority for the CEO, who previously wanted features more than disaster recovery planning.
They will wait to focus massive resources on their security until after they get hacked. (Which is also why security is always a losing battle.)
Wow, that whole page does not inspire confidence. It’s 99% LLM slop.
What We’re Doing:
- We are making ongoing adjustments to our infrastructure to improve stability and ensure reliable scaling under elevated load
- Analyzing system patterns and optimizing backend processes where resource contention is highest
- Implementing protective measures to safeguard platform integrity
esseph · 2h ago
Expect everything you read from here on out to be "AI slop".
It's not going to get better in any way.
Marciplan · 2h ago
y’all relax, they are vibe coding the fix right now
ActionHank · 2h ago
So now they’re Webno?
pton_xd · 3h ago
Claude, here is the bug, fix it. This is the new log output, fix the error. Fix the bug. Try a different approach. Reimplement the tests you modified. The bug is still happening, fix it. Fix the error.
We're out of credits, create a new account. We've been API rate limited? When did that start happening? When are we going to get access again?
Good luck engineers of the future!
lgl · 3h ago
Comment of the year 2025! Thanks for that :D
ed_mercer · 3h ago
You forgot to add “think hard!” :)
esafak · 3h ago
And a subtle threat: "... or else".
zvmaz · 3h ago
How do you know?
troyvit · 3h ago
More like "Good luck users of the future" that have to wade through failing infrastructure and tools that were vibe coded to begin with, rate limits notwithstanding.
xyst · 3h ago
I have no clue what "webflow" is for based on its marketing/buzzword-filled landing page, but it seems to be just a "no code" abstraction on top of HTML/CSS?
Yet another SaaS that really does not need to be online 24/7. It could have been a simple app where you "no code" on your local machine and sync state asynchronously with Webflow's servers.
dylan604 · 3h ago
if you have a web-based SaaS, everyone gets the updates. if you have a "simple app", then you are dependent on all of the users being up to date, which you just cannot guarantee. also, what is a "simple app" that does not care about differences among the various OSes found in the wild? how large a team do you need for each of those OSes to support as wide a user base as a web-only app?
esseph · 2h ago
Cost of having a reliable product with some self-determination for the customer.
dylan604 · 2h ago
the customer can self-determine just fine using a web-based SaaS no-code website builder. it's not like this is a different type of app; the thing is making a website that, moreover, is also hosted by the maker of the app. if you want to make a website to host on your own servers, then you are not the target audience of the web app.
you're like the person complaining that the hammer isn't very useful for driving in a screw. you need a different tool/app if you want to make a site you host yourself.