My fear of this sort of thing happening is why I don't use github or gitlab.com for primary hosting of my source code; only mirrors. I do primary source control in house, and keep backups on top of that.
It's also why nothing in my AWS account is "canonical storage". If I need, say, a database in AWS, it is live-mirrored to somewhere within my control, on hardware I own, even if that thing never sees any production traffic beyond the mirror itself. Plus backups.
That way, if this ever happens, I can recover fairly easily. The backups protect me from my own mistakes, and the local canonical copies and backups protect me from theirs.
Granted, it gets harder and more expensive with increasing scale, but it's a necessary expense if you care at all about business continuity issues. On a personal level, it's much cheaper though, especially these days.
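The live-mirror-plus-backups idea above can be sketched mechanically. Here is a toy version for a Postgres database, assuming `pg_dump` is installed and run from cron on hardware you own; the endpoint, database name, and paths are all hypothetical:

```python
import datetime
import shlex
import subprocess

def build_dump_command(host: str, db: str, out_dir: str) -> list[str]:
    """Build a pg_dump invocation that writes a timestamped, compressed
    dump of `db` on `host` into `out_dir` on local storage."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    outfile = f"{out_dir}/{db}-{stamp}.dump"
    return ["pg_dump", "--host", host, "--format=custom",
            "--file", outfile, db]

if __name__ == "__main__":
    # Hypothetical RDS endpoint; the dump lands on a disk you control.
    cmd = build_dump_command("mydb.example.rds.amazonaws.com",
                             "production", "/backups/aws-mirror")
    print(shlex.join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually take the dump
```

This only covers the periodic-snapshot half; a true live mirror would use the database's own replication, but the principle (cloud copy pulled onto owned hardware) is the same.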
wewewedxfgdf · 10m ago
I once asked the CTO of the company I worked for, "Do we back up our source code?"

He said, "no, it's on github".
I said no more.
Dylan16807 · 2m ago
If nobody has the repo checked out, what are the odds it's important?
simondotau · 46s ago
If you have multiple developers with up-to-date git repositories on their local computers, plus a copy on GitHub, the “3-2-1” principle of backups is already satisfied.
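The "3-2-1" rule (at least 3 copies, on at least 2 kinds of storage, with at least 1 offsite) is mechanical enough to check. A toy sketch, with made-up copy descriptions matching the scenario above:

```python
def satisfies_321(copies: list[dict]) -> bool:
    """Check the 3-2-1 backup rule: >=3 copies, >=2 distinct media
    types, >=1 copy offsite. Each copy is a dict with 'media' and
    'offsite' keys."""
    if len(copies) < 3:
        return False
    media_types = {c["media"] for c in copies}
    offsite = any(c["offsite"] for c in copies)
    return len(media_types) >= 2 and offsite

# Two developer laptops plus GitHub, as described above:
repo_copies = [
    {"media": "laptop-ssd", "offsite": False},
    {"media": "laptop-ssd", "offsite": False},
    {"media": "cloud",      "offsite": True},
]
print(satisfies_321(repo_copies))  # True: 3 copies, 2 media, 1 offsite
```

Note that if GitHub is the *only* offsite copy, losing it drops you back below 3-2-1 until a new offsite copy exists.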
spiralcoaster · 49s ago
The self-aggrandizement and lack of self-awareness tell me this author is going to do all of this again. This post could be summed up as "I should have had backups. Lesson learned," but instead they deflect to whining about how their local desktop is a mess and they NEED to store everything remotely to stay organized.
They're going to dazzle you with all of their hardened bunker this, and multiple escape route that, not realizing all of their complex machinery is metaphorically running off of a machine with no battery backup. One power outage and POOF!
slashdave · 6m ago
Put your valuables into a safe deposit box. Or, buy some stocks.
Some accident occurs. You don't pay your bill, address changes, etc. You have at least two entire years to contact the holder and claim your property. After that point, it is passed to the state as unclaimed property. You still have an opportunity to claim it.
Digital data? Screw that! One mistake, everything deleted.
ProofHouse · 4m ago
This has happened to me too; it destroyed five years of my life, no joke. It wasn't just the setup and pipelines, which alone took 4 to 6 months, but the chain reaction that collapsed the entire startup. It was so unexpected. Lesson learned.
akerl_ · 15m ago
They’ve really buried the lede here: this reads like the person paying for the account was not the post author, and AWS asked the payer (who from their perspective is the owner of the account) for information.
That person wasn’t around to respond.
blargey · 9m ago
The lede buried under that lede is that (according to an insider?) some AWS employee accidentally wiped everything immediately (contrary to typical practice in such situations of retaining data while things get sorted out), leading to a chain of brushing-off / covering-up percolating through whatever support chain the OP was talking to.
akerl_ · 3m ago
That does seem to be a mistake on their part. And the comms we’re seeing look bad.
But the overall post and the double buried ledes make me question the degree to which we’re getting the whole story.
floating-io · 11m ago
While that is certainly true, the idea that they can so rapidly destroy your data with no possibility of restoring it is still terrifying if that's your only copy.
They should have to hold it for at least 90 days. In my opinion, it should be more like six months to a year.
In my mind, it's exactly equivalent to a storage facility destroying your car five days after you miss a payment. They effectively stole and destroyed the data when they should have been required to return it to its actual owner.
Of course, that's my opinion of how it should be. AFAIK there is no real legal framework, and what actually happens is entirely up to your provider, which is one reason I never trust them.
akerl_ · 19s ago
The post suggests that even if AWS’s policy had been to hold the data for a year, the same thing would have happened, because they deleted the data due to operator error.
Similarly, a physical storage company can totally make a mistake and accidentally destroy your stuff if they mix up their bookkeeping, and your remedy is generally either to reach an amicable settlement with them or sue them for your damages.
adastra22 · 8m ago
It sounds like it wasn’t OP’s data though, which is an important distinction.
yardie · 4m ago
Cloud user here. Read your contracts: it doesn't matter which cloud service you use, they all have the same section on Shared Responsibility (https://aws.amazon.com/compliance/shared-responsibility-mode...).
You, the customer, are responsible for your data. AWS is only responsible for the infrastructure it resides on.
huksley · 22m ago
If you can be given only 5 days to comply with some request, that's how complicated your infra at AWS should be: simple enough that you can migrate to another provider in that time.
Just use EC2 and basic primitives which are easy to migrate (ie S3, SES)
S0y · 18m ago
>Just use EC2 and basic primitives which are easy to migrate (ie S3, SES)
If that's your whole infra you really shouldn't be on AWS in the first place.
adastra22 · 8m ago
A bit ironic when that entire stack was invented at AWS.
jasonvorhe · 12m ago
The number of people shilling for a multi-billion-dollar corporation is baffling.
whstl · 6m ago
You know the quote: It is difficult to get a man to understand something, when his salary depends on his not understanding it.
A lot of people in this industry have near-zero operations knowledge that doesn't involve AWS, and it's frightening.
saltysalt · 7m ago
This is why I use a local NAS for offline backups.
dboreham · 3m ago
This is good but not really enough. You need another backup to cover the case where this backup is burned to a crisp when your house catches fire. And that second backup needs to be in another geographic region to guard against regional disasters such as meteor impact, super volcano eruption (possibly not a concern for you but it is for me), etc.
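The point about geographic separation can be phrased as a property you can actually test: no single site failing should take out every copy. A minimal sketch, with invented site names:

```python
def survives_any_single_site_loss(copy_sites: list[str]) -> bool:
    """Given the site where each backup copy lives, return True if
    losing any one site still leaves at least one surviving copy,
    i.e. the copies span at least two distinct sites."""
    return len(set(copy_sites)) >= 2

# NAS plus an external drive in the same house: one fire loses both.
print(survives_any_single_site_loss(["home", "home"]))          # False
# NAS plus a copy in another region: any single disaster is survivable.
print(survives_any_single_site_loss(["home", "other-region"]))  # True
```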
dmead · 3m ago
Reddit deleted my 20 year old account with several warnings
lightedman · 13m ago
Lesson to learn: Never use Amazon or anyone else.
0x696C6961 · 11m ago
Or just pay your bills.
3eb7988a1663 · 6m ago
Billing mistakes happen all the time. Even (especially?) at multinationals with dedicated payment departments.
tchbnl · 30m ago
>Before anyone says “you put all your eggs in one basket,” let me be clear: I didn’t. I put them in one provider
Ah, but that's still one basket.
Bratmon · 27m ago
Does... Does the writer of this piece think the phrase only applies to literal baskets?
jsiepkes · 22m ago
Wild that people don't realize that these "separate" systems in AWS all share things like the same control plane.
PartiallyTyped · 16m ago
That is wrong in every way possible. Each service is isolated, has its own control plane, and is often even split into multiple cells. Source: I worked there. (https://docs.aws.amazon.com/wellarchitected/latest/reducing-...)
You’ve missed the point by leaning into your pedantry. The point is that from the perspective of vendor/platform risk (things like account closure, billing disputes, etc), all of AWS is the same basket, even if you use multiple regions and services.
senderista · 7m ago
Not sure why you’re being downvoted, this is 100% correct from someone else who worked there.
ImPostingOnHN · 1m ago
If you're not sure why someone is being downvoted, there's a good chance there is a misunderstanding somewhere, and waiting for people to comment is a good way for you to understand.
Alternatively, and in a more general sense, if you're not sure of something, a good way to learn is to ask questions.
stefan_ · 5m ago
Do all these cells and control planes run different software built by different teams?
I mean, sure, every Ford Pinto is strictly its own vehicle, but each will predictably burst into flames when you impact its fuel tank, and I don't want to ride with a company operating an all-Pinto fleet.
wewewedxfgdf · 29m ago
So you restored from your off cloud backup, right?
Tell me you have off-cloud backups? If not, then I know it's brutal, but AWS is responsible for their part in the disaster, and you are responsible for yours: not being able to recover at all.
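The core of a pull-based off-cloud backup is just diffing what the bucket has against what you hold locally. A sketch of that planning step; the listings here are fake, and in real use you'd feed it an actual bucket listing (e.g. from boto3) and your local inventory:

```python
def plan_downloads(remote: dict[str, str], local: dict[str, str]) -> list[str]:
    """Return the object keys that must be fetched: present remotely
    but missing locally, or present with a different checksum/ETag.
    Both arguments map object key -> checksum."""
    return sorted(k for k, etag in remote.items() if local.get(k) != etag)

remote_listing = {"db/dump1": "aaa", "logs/jan": "bbb", "logs/feb": "ccc"}
local_listing  = {"db/dump1": "aaa", "logs/jan": "old"}
print(plan_downloads(remote_listing, local_listing))
# ['logs/feb', 'logs/jan']  (one changed object, one missing object)
```

Run on a schedule from hardware you own, this gives you the "their mistake doesn't end you" property the thread is about.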
stefan_ · 8m ago
This is really the longest, most self-aggrandizing sermon yet on "I stored my data on this other computer I don't own", complete with conspiracy theory and all.
Store your data on your own disks; then at least you will blame yourself, not... Java command-line parsers?
averrois · 2h ago
AWS has perfected the art of killing startups...
reactordev · 17m ago
My only question is: where the hell was your support rep? Every org that works with AWS in any enterprise capacity has an enterprise agreement and an account rep. They should have been the one to guide you through this.
If you were just yolo’ing it on your own identification without a contract, well, that’s that. You should have converted over to an enterprise agreement so they couldn’t fuck you over. And they will fuck you over.