> One way is to ensure that machines that must be backed up via "push" [..] can only access their own space. More importantly, the backup server, for security reasons, should maintain its own filesystem snapshots for a certain period. In this way, even in the worst-case scenario (workload compromised -> connection to backup server -> deletion of backups to demand a ransom), the backup server has its own snapshots
My preferred solution is to let clients only write new backups, never delete them. Deletion is handled separately (manually or via cron on the target).
You can do this with rsync/ssh via the forced-command feature (command="...") in .ssh/authorized_keys.
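For example (a sketch, not a drop-in config; the paths, key, and helper location are assumptions), an entry like this in the backup user's .ssh/authorized_keys on the target pins the key to rsync into one directory. Newer rsync releases ship an rrsync helper that also accepts -wo for write-only access:
command="/usr/bin/rrsync -wo /srv/backups/client1",restrict ssh-ed25519 AAAA... client1-backup-key
# The client then pushes with plain rsync; the destination path is relative to the restricted root,
# and anything other than the forced command is refused:
rsync -a /var/www/ backup@backuphost:www/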
godelski · 3h ago
Another thing you can do is run a container or a dedicated backup user. Something like systemd-nspawn can give you a pretty lightweight chroot "jail", and you can ensure that anyone inside that jail can't run any rm commands.
pacman -S arch-install-scripts # Need this package (for debian you need debootstrap)
pacstrap -c /mnt/backups/TestSpawn base # Makes chroot
systemd-nspawn -D /mnt/backups/TestSpawn # Logs in
passwd # Set the root password. Do whatever else you need then exit
sudo ln -s /mnt/backups/TestSpawn /var/lib/machines/TestSpawn
sudo machinectl start TestSpawn # Congrats, you can now control with machinectl
Configs work like normal systemd stuff. So you can limit access controls, restrict file paths, make the service boot only at certain times or activate based on listening to a port, make only accessible via 192.168.1.0/24 (or 100.64.0.0/10), limit memory/CPU usage, or whatever you want. (I also like to use BTRFS subvolumes) You could also go systemd-vmspawn for a full VM if you really wanted to.
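For instance (an illustrative sketch; the paths and port numbers here are made up), a drop-in .nspawn file plus a couple of resource properties covers most of that:
# /etc/systemd/nspawn/TestSpawn.nspawn
[Exec]
PrivateUsers=yes
[Files]
Bind=/mnt/backups/incoming:/backups   # only expose the backup landing area
[Network]
VirtualEthernet=yes
Port=tcp:2222:22                      # forward host port 2222 to the container's sshd
# Resource limits on the running machine:
sudo systemctl set-property systemd-nspawn@TestSpawn.service MemoryMax=2G CPUQuota=50%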
Extra nice, you can use importctl to then replicate.
3eb7988a1663 · 3h ago
I fall into the "pull" camp so this is less of a worry. The server to be backed-up should have no permissions to the backup server. If an attacker can root your live server (with more code/services to exploit), they do not automatically also gain access to the backup system.
amelius · 2h ago
I also implemented my backup scheme using "pull" as it is easier to do than an append-only system, and therefore probably more secure as there is less room for mistakes. The backup server can only be accessed through a console directly, which is a bit annoying sometimes, but at least it writes summaries back to the network.
haiku2077 · 4h ago
This is also why I use rclone copy instead of rclone sync for my backups, using API keys without permission to delete objects.
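For reference, the difference is roughly this (remote and bucket names are placeholders):
rclone copy /srv/data b2:my-backup-bucket/data               # adds/updates objects, never deletes
# rclone sync would also delete remote objects that are missing locally - exactly what a
# delete-less API key is meant to prevent.
rclone copy --immutable /srv/data b2:my-backup-bucket/data   # optionally refuse to modify existing objects too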
bambax · 6h ago
It's endlessly surprising how people don't care / don't think about backups. And not just individuals! Large companies too.
I'm consulting for a company that makes around €1 billion annual turnover. They don't make their own backups. They rely on disk copies made by the datacenter operator, which happen randomly, and which they don't test themselves.
Recently a user error caused the production database to be destroyed. The most recent "backup" was four days old. Then we had to replay all transactions that happened during those four days. It's insane.
But the most insane part was, nobody was shocked or terrified about the incident. "Business as usual" it seems.
polishdude20 · 6h ago
If it doesn't affect your bottom line enough to do it right, then I guess it's ok?
rapfaria · 3h ago
I'd go even a step further: For the big corp, having a point of failure that lives outside its structure can be a feature, and not a bug.
"Oh there goes Super Entrepise DB Partner again" turns into a product next fiscal year, that shutdowns the following year because the scope was too big, but at least they tried to make things better.
treetalker · 6h ago
Possibly for legal purposes? Litigation holds are a PITA and generators of additional liability exposure, and backups can come back to bite you.
haiku2077 · 4h ago
Companies that big have legal requirements to keep much of their data around for 5-7 years anyway.
tguvot · 2h ago
This is a side effect of SOC 2 auditor-approved disaster recovery policies.
The company where I worked had something similar. I spent a couple of months going through all the teams, figuring out how the disaster recovery policies were implemented (all of them approved by SOC auditors).
The outcome of my analysis was that in case of a major disaster it would be easier to shut down the company and go home than to try to recover to a working state within a reasonable amount of time.
daneel_w · 5h ago
It's also endlessly surprising how people over-think the process and requirements.
binwiederhier · 4h ago
Thank you for sharing. A curious read. I am looking forward to the next post.
I've been working on backup and disaster recovery software for 10 years. There's a common phrase in our realm that I feel obligated to share, given the nature of this article.
> "Friends don't let friends build their own Backup and Disaster Recovery (BCDR) solution"
Building BCDR is notoriously difficult and has many gotchas. The author hinted at some of them, but maybe let me try to drive some of them home.
- Backup is not disaster recovery: In case of a disaster, you want to be up and running near-instantly. If you cannot get back up and running in a few minutes/hours, your customers will lose your trust and your business will hurt. Being able to restore a system (file server, database, domain controller) with minimal data loss (<1 hr) is vital for the survival of many businesses. See Recovery Time Objective (RTO) and Recovery Point Objective (RPO).
- Point-in-time backups (crash consistent vs application consistent): A proper backup system should support point-in-time backups. An "rsync copy" of a file system is not a point-in-time backup (unless the system is offline), because the system changes constantly. A point-in-time backup is a backup in which each block/file/.. maps to the same exact timestamp. We typically differentiate between "crash consistent backups", which are similar to pulling the plug on a running computer, and "application consistent backups", which involve asking all important applications to persist their state to disk and freeze operations while the backup is happening. Application consistent backups (which are provided by Microsoft's VSS, as mentioned by the author) significantly reduce the chances of corruption. You should never trust an "rsync copy" or even crash consistent backups.
- Murphy's law is really true for storage media: My parents put their backups on external hard drives, and all of r/DataHoarder seems to buy only 12T HDDs and put them in a RAID0. In my experience, hard drives of all kinds fail all the time (though NVMe SSD > other SSD > HDD), so having backups in multiple places (3-2-1 backup!) is important.
(I have more stuff I wanted to write down, but it's late and the kids will be up early.)
poonenemity · 1h ago
Ha. That quote made me chuckle; it reminded me of a performance by the band Alice in Chains, where a similar quote appeared.
Re: BCDR solutions, they also sell trust among B2B companies. Collectively, these solutions protect billions, if not trillions, of dollars' worth of data, and no CTO in their right mind would ever allow an open-source approach to backup and recovery. This is also largely because backups need to be highly available. Scrolling through a snapshot list is one of the most tedious tasks I've had to do as a sysadmin. Although most of these solutions are bloated and violate userspace like nobody's business, it is ultimately the company's reputation that allows them to sell products. While I respect Proxmox's attempt at capitalizing on the Broadcom fallout, I could go on at length about why it may not be able to permeate the B2B market, but it boils down to a simple formula (not educational, but rather from years of field experience):
> A company's IT spend grows linearly with valuation up to a threshold, then increases exponentially between a certain range, grows polynomially as the company invests in vendor-neutral and anti-lock-in strategies, though this growth may taper as thoughtful, cost-optimized spending measures are introduced.
- Ransomware Protection: Immutability and WORM (Write Once Read Many) backups are critical components of snapshot-based backup strategies. In my experience, legal issues have arisen from non-compliance in government IT systems. While "ransomware" is often used as a buzzword by BCDR vendors to drive sales, true immutability depends on the resiliency and availability of the data across multiple locations. This is where the 3-2-1 backup strategy truly proves its value.
Would like to hear your thoughts on more backup principles!
sebmellen · 1h ago
Also if you have a NAS, don’t use the same hard drive type for both.
sandreas · 5h ago
Nice writeup... Although I'm missing a few points...
In my opinion a backup (system) is only good if it has been tested to be restorable as fast as possible and the procedure is clear (as in documented).
How often have I heard or seen backups that "work great" and "oh, no problem, we have them", only to see them fail or take ages to restore once the disaster has actually happened (2 days can be an expensive amount of time in a production environment). Quite often only parts could be restored.
Another missing aspect is within the snapshots section... I like restic, which provides repository based backup with deduplicated snapshots for FILES (not filesystems). It's pretty much what you want if you don't have ZFS (or other reliable snapshot based filesystems) to keep different versions of your files that have been deleted on the filesystem.
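A minimal restic flow looks something like this (repository location and retention numbers are just examples):
restic -r sftp:backup@backuphost:/srv/restic-repo init                    # one-time repository setup
restic -r sftp:backup@backuphost:/srv/restic-repo backup /home /etc       # deduplicated snapshot of files
restic -r sftp:backup@backuphost:/srv/restic-repo snapshots               # list restorable points in time
restic -r sftp:backup@backuphost:/srv/restic-repo forget --keep-daily 7 --keep-weekly 5 --prune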
The last aspect is partly mentioned: the better-PULL-than-PUSH part. Ransomware is really clever these days, and if you PUSH your backups, it can also encrypt or delete all of them, because it has access to everything... So either you use read-only media (like Blu-rays) or PULL is mandatory. It is also helpful to have auto-snapshotting on ZFS via zfs-auto-snapshot, zrepl or sanoid, so you can go back in time to before the ransomware started its journey.
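On the ZFS side, the nice part is that every auto-snapshot stays browsable read-only, so you can walk back to a pre-infection state (pool, dataset and snapshot names below are hypothetical):
zfs list -t snapshot tank/backups                    # see what zfs-auto-snapshot/sanoid has kept
ls /tank/backups/.zfs/snapshot/                      # each snapshot is a read-only view of the dataset
cp -a /tank/backups/.zfs/snapshot/autosnap_2025-05-01_00:00/important /restore/   # pull clean data out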
sgc · 5h ago
Since you mentioned restic, is there something wrong with using restic append-only with occasional on-server pruning instead of pulling? I thought this was the recommended way of avoiding ransomware problems using restic.
> Ransomware is really clever these days and if you PUSH your backups, it can also encrypt or delete all your backups, because it has access to everything
That depends on how you have access to your backup servers configured. I'm comfortable with append-only backup enforcement for push backups[0] with Borg and Restic via SSH, although I do use offline backup drive rotation as a last line of defense for my local backup set. YMMV.
0 - https://marcusb.org/posts/2024/07/ransomware-resistant-backu...
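For Borg specifically, the enforcement can live in the server-side forced command (a sketch; key and paths are placeholders):
command="borg serve --append-only --restrict-to-repository /srv/borg/client1",restrict ssh-ed25519 AAAA... client1
# The client can now only add data; pruning/compaction is run from the server side on its own schedule.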
> So you could either use readonly media (like Blurays) or PULL is mandatory.
Or, as someone already commented, you can use a server that allows push but doesn't allow messing with older files. You can, for example, restrict SSH to only the scp command, and the SSH server can additionally offer a chroot'ed environment into which scp copies the backups. The server can then, for example, rotate that chroot daily.
The pushing client can then do exactly one thing: push daily backups. It cannot log in. It cannot overwrite older backups.
Short of a serious SSH exploit where the ransomware could both re-configure the server to accept all ssh (and not just scp) and escape the chroot box, the ransomware is simply not destroying data from before the ransomware found its way on the system.
My backup procedure does that for the one backup server that I have on a dedicated server: a chroot'ed ssh server that only accepts scp and nothing else. It's of course just one part of the backup procedure, not the only thing I rely on for backups.
P.S: it's not incompatible with also using read-only media
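A rough sshd_config sketch of that kind of setup (user and path names invented; note that OpenSSH chroots are usually paired with internal-sftp rather than a classic scp shell):
Match User pushbackup
    ChrootDirectory /srv/backup-chroot/%u    # must be root-owned and not writable by others
    ForceCommand internal-sftp
    AllowTcpForwarding no
    X11Forwarding no
    PermitTTY no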
anonymars · 4h ago
I don't understand why this is dead... Is it wrong advice? Is there some hidden flaw? Is it simply because the content is repeated elsewhere?
On the face of it "append-only access (no changes)" seems sound to me
quesera · 4h ago
TacticalCoder's comments appear to be auto-deaded for the last week or so.
I did not see a likely reason in a quick review of their comment history.
You can view a comment directly by following the "... ago" link, and from there you can use the "vouch" link to revive the comment. I vouched for a few of TacticalCoder's recent comments.
I built a disaster recovery system using python and borg. It snapshots 51 block devices on a SAN and then uses borg to back up 71 file systems from these snapshots. The entire data set is then synced to S3. And yes, I've tested the result offsite: recovering file systems to entirely different block storage and booting VMs, so I'm confident that it would work if necessary, although not terribly quickly, because the recovery automation is complex and incomplete.
I can't share it. But if you contemplate such a thing, it is possible, and the result is extremely low cost. Borg is pretty awesome.
For my archlinux setup, configuration and backup strategy: https://github.com/gchamon/archlinux-system-config
For the backup system, I've cooked an automation layer on top of borg: https://github.com/gchamon/borg-automated-backups
rr808 · 6h ago
I don't need a backup system. I just need a standardized way to keep 25 years of photos for a family of 4 with their own phones, cameras, downloads, scans, etc. I still haven't found anything good.
Jedd · 2h ago
Is '25 years of photos' a North American measure of data I was previously unfamiliar with?
As bambax noted, you do in fact need a backup system -- you just don't realise that yet.
And you want a way of sharing data between devices. Without knowing what you've explored, and constraints imposed by your vendors of choice, it's hard to be prescriptive.
FWIW I use syncthing on gnu/linux, microsoft windows, android, in a mesh arrangement, for several collections of stuff, anchored back to two dedicated archive targets (small memory / large storage debian VMs) running at two different sites, and then perform regular snapshots on those using borgbackup. This gives me backups and archives. My RPO is 24h but could easily be reduced to whatever figure I want.
I believe this method won't work if Apple phones / tablets are involved, as you are not allowed to run background tasks (for syncthing) on your devices.
(I have ~500GB of photos, and several 10-200GB collections of docs and miscellaneous files, as unique repositories - none of these experience massive changes, it's mostly incremental differences, so it is pretty frugal with diff-based backup systems.)
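The snapshot step on those archive targets can be as small as a couple of borg invocations per repository (not my literal commands; repo paths and retention here are illustrative):
borg create --stats --compression zstd /srv/borg/photos::'{hostname}-{now:%Y-%m-%d}' /srv/sync/photos
borg prune --keep-daily 14 --keep-weekly 8 --keep-monthly 12 /srv/borg/photos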
BirdieNZ · 2h ago
I'm trialing a NAS with Immich, and then backing up the media and Immich DB dump daily to AWS S3 Deep Archive. It has Android and iOS apps, and enough of the feature set of Google Photos to keep me happy.
You can also store photos/scans on desktops in the same NAS and make sure Immich is picking them up (and then the backup script will catch them if they get imported to Immich). For an HN user it's pretty straight-forward to set up.
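The nightly job can be as small as a database dump plus a sync with the Deep Archive storage class (a sketch; the container name, bucket and paths are assumptions):
docker exec immich_postgres pg_dumpall -U postgres > /volume1/backups/immich-db-$(date +%F).sql
aws s3 sync /volume1/photos s3://my-photo-archive/photos --storage-class DEEP_ARCHIVE
aws s3 cp /volume1/backups/immich-db-$(date +%F).sql s3://my-photo-archive/db/ --storage-class DEEP_ARCHIVE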
xandrius · 6h ago
Downloads and scans are generally trash unless deemed important.
For the phones and cameras, set up Nextcloud and have it automatically sync to your own home network. Then have a nightly backup to another disk with a health check after it finishes.
After that you can either pick a cloud host which you trust, or put another drive of yours into someone else's server to have another location for your 2nd backup, and you're golden.
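The nightly disk-to-disk copy plus health check can be a one-line cron job (paths and the ping URL are placeholders; the ping assumes something like healthchecks.io):
0 3 * * * rsync -a --delete /srv/nextcloud/data/ /mnt/backupdisk/nextcloud/ && curl -fsS https://hc-ping.com/REPLACE-WITH-YOUR-UUID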
ethan_smith · 2h ago
PhotoPrism or Immich are solid self-hosted options that handle deduplication and provide good search/tagging for family photos. For cloud, Backblaze B2 + Cryptomator can give you encrypted storage at ~$1/TB/month with DIY scripts for uploads.
sandreas · 5h ago
I use syncthing... it's great for that purpose. Android is not officially supported, but there is a fork that works fine. Maybe you want to combine it with either ente.io or immich (also available self-hosted) for photo backup.
I would also distinguish between documents (like PDF and TIFF) and photos - there is also paperless ngx.
bravesoul2 · 4h ago
Isn't that like a Dropbox approach? If you have 2tb photos this means you need 2tb storage on everything?
setopt · 5h ago
I like Syncthing but it's not a great option on iOS.
https://mobiussync.com/
It's an option... But still beholden to the arbitrary restrictions Apple has on data access.
bravesoul2 · 4h ago
Struggling too.
For me: one Win/Mac machine with Backblaze. Dump everything to that machine. A second external drive backup just in case.
rsolva · 5h ago
Check out ente.io - it is really good!
nor-and-or-not · 4h ago
I second that, and you can even self-host it.
palata · 5h ago
I recently found that Nextcloud is good enough to "collect" the photos from my family onto my NAS. And my NAS makes encrypted backups to a cloud using restic.
haiku2077 · 4h ago
A NAS running Immich, maybe?
bambax · 6h ago
You do need a backup. But before that, you need a family NAS. There are plenty of options. (But a NAS is not a backup.)
firesteelrain · 1h ago
I run a system that has multi site replication to multiple Artifactory instances all replicating from one single Master to all Spokes. Each one can hold up to 2PB. While Artifactory supports writing to a backup location, given the size of our artifacts, we chose to not have an actual backup. Just live replication to five different sites. Never have tried to restore or replicate back to main. I am not even sure how that would work if the spokes are all “*-cache”.
Backend storage for each Artifactory instance is Dell Isilon.
kayson · 2h ago
The thing that always gets me about backup consistency is that it's impossibly difficult to ensure that application data is in a consistent state without bringing everything down. You can create a disk snapshot, but there's no guarantee that some service isn't mid-write or mid-procedure at the point of the snapshot. So if you were to restore the backup from the snapshot you would encounter some kind of corruption.
Database dumps help with this, to a large extent, especially if the application itself is making the dumps at an appropriate time. But often you have to make the dump outside the application, meaning you could hit it in the middle of a sequence of queries.
Curious if anyone has useful tips for dealing with this.
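One common answer (a sketch, not a universal fix): let the database engine produce the transactionally consistent view itself, and back up the resulting dump file rather than the live data directory.
mysqldump --single-transaction --quick --all-databases > /var/backups/mysql-$(date +%F).sql
# --single-transaction gives a consistent InnoDB snapshot without locking out writers
pg_dump -Fc mydb > /var/backups/mydb-$(date +%F).dump
# pg_dump runs inside a single MVCC snapshot, so the dump is consistent even while queries run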
booi · 2h ago
I think generally speaking, databases are resilient to this so taking a snapshot of the disk at any point is sufficient as a backup. The only danger is if you're using some sort of on-controller disk cache with no battery backup, then basically you're lying to the database about what has flushed and there can be inconsistencies on "power failure" (i.e. live snapshot).
But for the most part, and especially in the cloud, this shouldn't be an issue.
daneel_w · 5h ago
My valuable data is less than 100 MiB. I just tar+compress+encrypt a few select directories/files twice a week and keep a couple of months of rotation. No incremental hassle necessary. I store copies at home and I store copies outside of home. It's a no-frills setup that costs nothing, is just a few lines of *sh script, takes care of itself, and never really needed any maintenance.
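For scale, the whole thing can be on the order of (paths, recipient and retention invented for illustration):
STAMP=$(date +%F)
tar -czf - ~/Documents ~/.ssh ~/projects | gpg --encrypt --recipient backup@example.org > /backups/daily-$STAMP.tar.gz.gpg
find /backups -name 'daily-*.tar.gz.gpg' -mtime +60 -delete    # keep roughly two months of rotation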
mavilia · 5h ago
This comment made me rethink what I have that is actually valuable data. My photos alone even if culled down to just my favorites would probably be at least a few gigs. Contacts from my phone would be small. Other than that I guess I wouldn't be devastated if I lost anything else. Probably should put my recovery keys somewhere safer but honestly the accounts most important to me don't have recovery keys.
Curious what you consider valuable data?
Edit: I should say for pictures I have around 2 TB right now (downside of being a hobby photographer)
daneel_w · 4h ago
With valuable I should've elaborated that it's my set of constantly changing daily-use data. Keychain, documents and notes, e-mail, bookmarks, active software projects, those kinds of things.
I have a large amount of memories and "mathom" as well, in double copies, but I connect and add to this data so rarely that it absolutely does not have to be part of any ongoing backup plan.
inopinatus · 2h ago
Perhaps Part 1 ought to be headlined, “design the restore system”, this being the part of backup that actually matters.
bob1029 · 4h ago
I think the cleanest, most compelling backup strategies are those employed by RDBMS products. [A]sync log replication is really powerful at taking any arbitrary domain and making sure it exists in the other sites exactly.
You might think this is unsuitable for your photo/music/etc. collection, but there's no technical reason you couldn't use the database as the primary storage mechanism. SQLite will take you to ~281 terabytes with a 64k page size. MSSQL supports something crazy like 500 petabytes. The blob data types will choke on your 8k avengers rip, but you could store it in 1 gig chunks - There are probably other benefits to this anyways.
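To make the chunking idea concrete (a toy sketch with the sqlite3 CLI; file names are made up, and pieces are kept under SQLite's ~1 GB default blob limit):
sqlite3 media.db "CREATE TABLE IF NOT EXISTS chunks(name TEXT, seq INTEGER, data BLOB, PRIMARY KEY(name, seq));"
split -d -b 900M movie.mkv part_                        # part_00, part_01, ...
sqlite3 media.db "INSERT INTO chunks VALUES('movie.mkv', 0, readfile('part_00'));"   # readfile() is built into the sqlite3 shell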
gmuslera · 6h ago
How data changes, and what changes it, matters when trying to optimize backups.
A full OS installation may not change a lot, or change with security updates that anyway are stored elsewhere.
Configurations have their own lifecycle, actors, and good practices on how to keep and backup them. Same with code.
Data is what matters if you have saved everything else somewhere. And file tree backups could get a different treatment from, e.g., database backups.
Logs are something that changes frequently, but you can have a proper log server for which logs are data.
Things can be this granular, or you can go for whole-storage backup. But the granularity, while it may add complexity, may lower costs and increase how much of what matters you can store for longer periods of time.
o11c · 5h ago
Other things that matter (some overlap):
* Is the file userland-compressed, filesystem-or-device-compressed, or uncompressed?
* What are you going to do about secret keys?
* Is the file immutable, replace-only (most files), append-only (not limited to logs; beware the need to defrag these), or fully mutable (rare - mostly databases or dangerous archive software)?
* Can you rely on page size for (some) chunking, or do you need to rely entirely on content-based chunking?
* How exactly are you going to garbage-collect the data from no-longer-active backups?
* Does your filesystem expose an accurate "this file changed" signal, or better an actual hash? Does it support chunk sharing? Do you know how those APIs work?
* Are you crossing a kernel version that is one-way incompatible?
* Do you have control of the raw filesystem at the other side? (e.g. the most efficient backup for btrfs is only possible with this; a sketch follows below)
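For that last point, the btrfs-native path looks roughly like this (host and subvolume names are hypothetical):
btrfs subvolume snapshot -r /data /data/.snap-2025-05-02          # read-only snapshot to send from
btrfs send -p /data/.snap-2025-05-01 /data/.snap-2025-05-02 | ssh backuphost btrfs receive /srv/backups/data
# only blocks changed since the parent snapshot cross the wire, and the receiver gets a real subvolume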
Shank · 3h ago
> Security: I avoid using mainstream cloud storage services like Dropbox or Google Drive for primary backups. Own your data!
What does this have to do with security? You shouldn't be backing up data in a way that's visible to the server. Use something like restic. Do not rely on the provider having good security.
kernc · 4h ago
Make your own backup system: that's exactly what I did. https://kernc.github.io/myba/
I felt git porcelain had a stable-enough API to accommodate this popular use case.
I might be grandfathered on the old price, not sure.
Jedd · 2h ago
Feels weird to talk about strategy for your backups without mentioning RPO, RTO, or even RCO - even though some of those concepts are nudged up against in TFA.
Those terms are handy for anyone not familiar with the space to go do some further googling.
Also odd to not note the distinction between backups and archives - at least in terms of what users' expectations are around the two terms / features - or even mention archiving.
(How fast can I get back to the most recent fully-functional state, vs how can I recover a file I was working on last Tuesday but deleted last Wednesday.)
godelski · 2h ago
> without mentioning RPO, RTO, or even RCO
> Those terms are handy for anyone not familiar with the space to go do some further googling.
You should probably get people started
RPO: Recovery Point Objective
RTO: Recovery Time Objective
RCO: Recovery Consistency Objective
I'm pretty sure they aren't mentioned because these aren't really necessary for doing self-hosted backups. Do we really care much about how fast we recover files? Probably not. At least not more than that they exist and we can restore them. For a business, yeah, recovery time is critical as that's dollars lost.
FWIW, I didn't know these terms until you mentioned them, so I'm not an expert. Please correct me if I'm misunderstanding or being foolishly naive (very likely considering the previous statement). But as I'm only in charge of personal backups, should I really care about this stuff? My priorities are that I have backups and that I can restore. A long running rsync is really not a big issue. At least not for me.
https://francois-encrenaz.net/what-is-cloud-backup-rto-rpo-r...
Fair that I should have spelled them out, though my point was that TFA touched on some of the considerations that are covered by those fundamental and well known concepts / terms.
Knowing the jargon for a space makes it easier to find more topical information. Searching on those abbreviations would be sufficient, anyway.
TFA talks about the right questions to consider when planning backups (but not archives) - eg 'What downtime can I tolerate in case of data loss?' (that's your RTO, effectively).
I'd argue the concepts encapsulated in those TLAs - even if they sound a bit enterprisey - are important for planning your backups, with 'self-hosted' not being an exception per se, just having different numbers.
Sure, as you say 'Do we really care about how fast we recover files?' - perhaps you don't need things back in an hour, but you do have an opinion about how long that should take, don't you?
You also ask 'should I really care about this stuff?'
I can't answer that for you, other than turn it back to 'What losses are you happy to tolerate, and what costs / effort are you willing to incur to mitigate?'. (That'll give you a rough intersection of two lines on your graph.)
This pithy aphorism exists for a good reason : )
> There are two types of people: those who have lost data,
> and those who do backups.