https://github.com/restic/restic

https://github.com/restic/rest-server

The latter has to be started with --append-only; I run it from a systemd unit. I also use nginx with HTTPS + HTTP authentication in front of it, with a separate username/password combination for each server. This makes rest-server completely inaccessible to the rest of the internet, and you don't have to trust it to be properly protected against being hammered by malicious traffic.
Been using this for about five years, it saved my bacon a few times, no problems so far.
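The systemd unit itself isn't shown; a minimal sketch of what such a unit could look like (binary path, user, data directory, and port are all assumptions, not from the comment) is below. Binding to localhost matches the nginx-in-front setup:

```ini
[Unit]
Description=restic rest-server (append-only)
After=network.target

[Service]
# Paths, user, and listen address are illustrative.
User=restserver
ExecStart=/usr/local/bin/rest-server --append-only --private-repos \
  --path /srv/restic --listen 127.0.0.1:8000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```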
rsync · 1h ago
You can achieve append-only without exposing a rest server provided that 'rclone' can be called on the remote end:
rclone serve restic --stdio
You add something like this to ~/.ssh/authorized_keys:
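The authorized_keys entry is elided here; based on rclone's documented restic-over-SSH mode, a typical forced-command line (repository path and key are placeholders) would look something like:

```
command="rclone serve restic --stdio --append-only /srv/backups/client1",restrict ssh-ed25519 AAAA... client1
```

With the `restrict` option set, the key can only ever run that one command.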
We just started deploying this on rsync.net servers - which is to say, we maintain an arguments allowlist for every binary you can execute here and we never allowed 'rclone serve' ... but now we do, IFF it is accompanied by --stdio.
This has been replaced with a permissions feature that still provides both delete and overwrite protections. The difference is the underlying store needs to implement it rather than running a server that understands the permission differences. You can read more about this change here: https://github.com/borgbackup/borg/issues/8823#issuecomment-...
bayindirh · 12m ago
This comment needs to be pinned, alongside what the developers say [0], since the change is widely misunderstood.

> The "no-delete" permission disallows deleting objects as well as overwriting existing objects.

[0]: https://github.com/borgbackup/borg/pull/8798#issuecomment-...
My current approach is restic, but I'd prefer to have asymmetric passwords, essentially the backup machine only having write access (while maintaining deduplication). This way if the backup machine were compromised, and therefore the password it needs to write, the backup repo itself would still be secure since it would use a different password for reading.
Is this what append-only achieved for Borg?
dblitt · 3h ago
It seems the suggested solution is to use server credentials that lack delete permissions (and use credentials that do have delete permission for compacting the repo), but does that protect against a compromised client simply overwriting files without deleting them?
throwaway984393 · 2h ago
No. Delete and overwrite are different. You need overwrite protection in addition to delete protection. The solution will vary depending on the storage system and the use case. (The comment in the PR is not an exhaustive description of potential solutions)
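The storage-system-specific part can be illustrated with S3-style credentials (bucket name and policy are hypothetical, not from the thread): enable bucket versioning so an overwrite creates a new object version instead of destroying the old data, and give the backup client a policy that denies deletes:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example-backups" },
    { "Effect": "Allow", "Action": ["s3:PutObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::example-backups/*" },
    { "Effect": "Deny",
      "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
      "Resource": "arn:aws:s3:::example-backups/*" }
  ]
}
```

Without versioning, the deny on deletes alone would not stop a PutObject from clobbering an existing backup object.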
qeternity · 2h ago
Append-only would imply yes. There is no overwriting in append-only. There is only truncate and append.
mosselman · 2h ago
I think you have misread.
There used to be append-only, they've removed it and suggest using a credential that has no 'delete' permission. The question asked here is whether this would protect against data being overwritten instead of deleted.
aborsy · 3h ago
Borg2 has been in beta testing for a very long time.
Does anyone know when it will come out of beta?
nathants · 2h ago
Do something simpler. Backups shouldn't be complex. This should be simpler still: https://github.com/nathants/backup
I don't see what value this provides that rsync, tar, and `aws s3 cp` (or the AWS SDK equivalent) don't.
nathants · 1h ago
How do you version your rsync backups?
iforgotpassword · 1h ago
Dirvish
nathants · 1h ago
Perl still exists?
yread · 1h ago
Uh, who has the money to store backups in AWS?!
PunchyHamster · 38m ago
Support for S3 means you can just have a minio server somewhere acting as backup storage (and minio is pretty easy to replicate). I have local S3 on my NAS replicated to a cheapo OVH server for backup.
nathants · 1h ago
Depends how big they are. My high value backups go into S3, R2, and a local x3 disk mirror[1].
My low value backups go into a cheap usb hdd from Best Buy.
Glacier Deep Archive is the cheapest cloud backup option at about $1 USD/TB/month.
Google Cloud Storage's Archive class is a tiny bit more.

1. https://github.com/nathants/mirror
mananaysiempre · 41m ago
Both would be pretty expensive to actually restore from, though, IIRC.
ikiris · 36m ago
To quote the old mongodb video: if you don't care about restores, /dev/null is even cheaper, and it's webscale.
puffybuf · 2h ago
I've been using device mapper + encryption to back up my files to an encrypted filesystem on regular files (cryptsetup on Linux, vnconfig+bioctl on OpenBSD). Is there a reason for me to use borgbackup? Maybe to save space?
I even wrote python scripts to automatically cleanup and unmount if something goes wrong (not enough space etc).
On OpenBSD I can even double-encrypt with Blowfish (vnconfig -K) and then a different algorithm for bioctl.
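A sketch of the cleanup-on-failure idea those scripts implement (the command lists are placeholders; real ones would be along the lines of `cryptsetup open` / rsync / `cryptsetup close`):

```python
import subprocess

def run(cmd):
    """Run a command, raising CalledProcessError if it fails."""
    subprocess.run(cmd, check=True)

def backup_with_cleanup(attach_cmd, backup_cmd, detach_cmd):
    """Attach the encrypted container, run the backup, and always
    detach/unmount afterwards, even if the backup fails midway
    (e.g. because the container ran out of space)."""
    run(attach_cmd)
    try:
        run(backup_cmd)
    finally:
        run(detach_cmd)  # cleanup runs on success *and* failure
```

The try/finally guarantees the container is never left mounted after a failed run.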
anyfoo · 1h ago
Does your solution do incremental backups at all? I have backups going back years, because through incremental backups each delta is not very large.
Every once in a while things get thinned out, so that for example I have daily backups for the recent past, but only monthly and then even yearly backups further back.
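This daily/monthly/yearly thinning is essentially a grandfather-father-son retention policy. A minimal sketch (the function name and keep-counts are illustrative, not any particular tool's flags):

```python
from datetime import date

def thin(snapshots, keep_daily=7, keep_monthly=12, keep_yearly=10):
    """Given snapshot dates, return the subset to keep: the newest
    keep_daily snapshots, plus the newest snapshot in each of the
    most recent keep_monthly months and keep_yearly years."""
    snapshots = sorted(snapshots, reverse=True)  # newest first
    keep = set(snapshots[:keep_daily])
    seen_months, seen_years = set(), set()
    for s in snapshots:
        month = (s.year, s.month)
        if len(seen_months) < keep_monthly and month not in seen_months:
            seen_months.add(month)
            keep.add(s)  # newest snapshot of this month
        if len(seen_years) < keep_yearly and s.year not in seen_years:
            seen_years.add(s.year)
            keep.add(s)  # newest snapshot of this year
    return keep
```

Everything not in the returned set would be pruned.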
neilv · 2h ago
I used to have a BorgBackup server at home that used append-only and restricted-SSH.
It wasn't perfect, but it did protect against some scenarios in which a device could be majorly messed up, yet the server was more resistant to losing the data.
For work, the backup schemes include separate additional protection of the data server or media, so append-only added to that would be nice, as redundant protection, but not as necessary.
TheFreim · 3h ago
I've been using Borg for a while, but I've been thinking about looking at the backup utility space again to see what's out there. What backup utilities do you all use and recommend?
singhrac · 1h ago
I spent too long looking into this and settled on restic. I'm satisfied with the performance for our large repo and datasets, though we'll probably supplement it with filesystem-based backups at some point.
Borg has the issue that it is in limbo, i.e. all the new features (including object storage support) are in Borg2, but there's no clear date when that will be stable. I also did not like that it was written in Python, because backups are not always IO blocked (we have some very large directories, etc.).
I really liked borgmatic on Borg, but we found resticprofile, which is pretty much the same thing (it is underdiscussed). As a tip: after some testing I think it is important to set the GOGC and read-concurrency parameters. All the GUIs are very ugly, but we're fine with a CLI.
If rustic matures enough and is worth a switch we might consider it.
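For reference, a hypothetical invocation along those lines (the values are placeholders, not recommendations):

```shell
# Lower GOGC makes Go's GC run more often, trading CPU for a smaller heap;
# --read-concurrency raises the number of files restic reads in parallel.
GOGC=20 restic backup --read-concurrency 4 /data
```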
Saris · 3h ago
Restic is nice. Backrest if you like a webUI.
TiredOfLife · 3h ago
Kopia
conception · 2h ago
Kopia is surprisingly good. I use it with a B2 backend, it has percentage-based restore verification for regulatory items, and it is super fast. The only downside is the lack of enterprise features/centralized management.
jbverschoor · 2h ago
Moved to duplicacy. Works great for me
mrtesthah · 3h ago
FYI for those using restic, you can use rest-server to achieve a server-side-enforced append-only setup. The purpose is to protect against ransomware and other malicious client-side operations.
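A sketch of such a setup (host, port, and paths are placeholders, not from the comment):

```shell
# Server side: rest-server refuses deletes when started with --append-only.
rest-server --path /srv/restic --append-only --listen :8000

# Client side: back up over the REST backend; prune/forget from a
# compromised client will be rejected by the server.
restic -r rest:http://backup.example.com:8000/myrepo backup /home
```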
seymon · 2h ago
Borg vs Restic vs Kopia ?
They are so similar in features. How do they compare? Which to choose?
aborsy · 2h ago
Restic is the winner. It talks directly to many backends, is a static binary (so you can drop the executable in operating systems which don’t allow package installation like a NAS OS) and has a clean CLI.
Kopia is a bit newer and less tested.
All three have a lot of commands for working with repositories. Each one of them is much better than the closed-source proprietary backup software that I have dealt with, like Synology's Hyper Backup nonsense.
If you want a better solution, the next level is ZFS.
PunchyHamster · 36m ago
Kopia is VERY similar to Restic; the main difference is that Kopia has a half-decent UI while Restic is a bit more scripting-friendly.
> If you want a better solution, the next level is ZFS.
Not a backup by itself. Not a bad choice as storage for a backup server, though.
seymon · 2h ago
I am already using zfs on my NAS where I want my backups to be. But I didn't consider it for backups till now
aeadio · 1h ago
You can consider something like syncthing to get the important files onto your NAS, and then use ZFS snapshots and replication via syncoid/sanoid to do the actual backing up.
aborsy · 34m ago
Or install ZFS also on end devices, and do ZFS replication to NAS, which is what I do. I have ZFS on my laptop, snapshot data every 30 minutes, and replicate them. Those snapshots are very useful, as sometimes I accidentally delete data.
With ZFS, the whole filesystem is replicated, so the backup is consistent, which is not the case with file-level backup. With the latter, you also have to worry about lock files, permissions, etc. Restores are also more natural and quick with ZFS.
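A sketch of the snapshot-and-replicate flow (pool, dataset, and host names are made up):

```shell
# Take periodic snapshots of the dataset.
zfs snapshot tank/home@2024-01-01_0030
zfs snapshot tank/home@2024-01-01_0100

# Send only the delta between the previous and the new snapshot
# to the NAS; -u leaves the received dataset unmounted.
zfs send -i tank/home@2024-01-01_0030 tank/home@2024-01-01_0100 | \
    ssh nas zfs receive -u backup/home
```

Tools like syncoid/sanoid automate exactly this loop, including snapshot pruning.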
the_angry_angel · 2h ago
Kopia is awesome, with the exception of its retention policies, which work like no other backup software that I've experienced to date. I don't know if it's just my stupidity, being stuck in 20-year-old thinking, or just the fact that it's different. But for me, it feels like a footgun.
The fact that Kopia has a UI is awesome for non-technical users.
I migrated off restic to Kopia due to memory usage. I am currently debating switching back to restic purely because of how retention works.
zargon · 16m ago
I’m confused. Is Kopia awesome or is it a footgun? (Or are words missing?)
spiffytech · 2h ago
I picked Kopia when I needed something that worked on Windows and came with a GUI.
I was setting up PCs for unsophisticated users who needed to be able to do their own restores. Most OSS choices are only appropriate for technical users, and some like Borg are *nix-only.
LeoPanthera · 3h ago
Is that a big deal? You should probably be doing this with zfs immutable snapshots anyway. Or equivalent feature for your filesystem.
philsnow · 3h ago
The purpose of the append-only feature of borgbackup is to prevent an attacker from being able to overwrite your existing backups if they compromise the device being backed up.
Are you talking about using ZFS snapshots on the remote backup target? Trying to solve the same problem with local snapshots wouldn't work because the attack presumes that the device that's sending the backups is compromised.
LeoPanthera · 3h ago
> Are you talking about using ZFS snapshots on the remote backup target?
Yes.
homebrewer · 3h ago
There's not much sense in using these advanced backup tools if you're already on ZFS, even if it's just on the backup server; I would stick with something simpler. Their whole point is reliable checksums, incremental backups, deduplication, and snapshotting on top of a 'simple' classical filesystem. Sound familiar to any ZFS user?
nijave · 3h ago
Dedupe is efficient in Borg. The target needs almost no RAM
PunchyHamster · 34m ago
Well, until lightning fries your server. Or you fat-finger a command and fuck something up.
globular-toast · 2h ago
Are there any good options for an off-site zfs backup server besides a colo?
Would be interested to know what others have set up as I'm not really happy with how I do it. I have zfs on my NAS running locally. I backup to that from my PC via rsync triggered by anacron daily. From my NAS I use rclone to send encrypted backups to Backblaze.
I'd be happier with something more frequent from PC to NAS. Syncthing maybe? Then just zfs send to some off-site ZFS server.
aeadio · 1h ago
Aside from rsync.net which was mentioned in a sibling comment, there’s also https://zfs.rent, or any VPS with Linux or FreeBSD installed.
gaadd33 · 1h ago
I think Rsync.net supports zfs send/receive
topato · 3h ago
I'm also completely confused why this was at the top of my hacki, seems completely innocuous
ajb · 3h ago
Ideally, a backup system should be implementable in such a way that no credential on the machines being backed up enables the deletion or modification of existing backups. That's so that, if your machines are hacked, a) the backups can't be deleted or encrypted in a ransom attack, and b) if you can figure out when the first compromise occurred, you know the backup data from before that date is not compromised.
I guess some people might have been relying on this feature of borgbackup to implement that requirement