A reverse-delta backup strategy – obvious idea or bad idea?

6 points by datastack · 9 comments · 6/28/2025, 10:50:19 PM
I recently came up with a backup strategy that seems so simple I assume it must already exist — but I haven’t seen it in any mainstream tools.

The idea is:

- The latest backup (timestamped) always contains a full copy of the current source state.

- Previous backups are stored as reverse deltas: each delta folder holds the files that were deleted or modified relative to the next (newer) version.

- There are no version numbers, just timestamps, so new backups slot naturally into the timeline.

Each time you back up (a rough sketch follows the steps):

1. Compare the current source with the latest backup.

2. For files that changed or were deleted: move their old copies from the latest snapshot into a new, timestamped delta folder.

3. For new or changed files: copy the current versions from the source into the latest snapshot (only what actually changed).

4. Optionally rotate old deltas to keep history manageable.
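Here is a minimal sketch of these steps in Python, assuming the layout described above: a latest/ folder holding the full snapshot, with reverse deltas in sibling folders named backup-<ISO timestamp>. The folder names and the backup() helper are illustrative, not taken from any existing tool.

    import filecmp
    import shutil
    from datetime import datetime
    from pathlib import Path

    def backup(source: Path, dest: Path) -> None:
        latest = dest / "latest"
        latest.mkdir(parents=True, exist_ok=True)
        # The delta folder receives the *old* copies. (Colon-separated timestamps
        # match the example below but are not valid folder names on Windows.)
        delta = dest / f"backup-{datetime.now():%Y-%m-%dT%H:%M:%S}"

        # Steps 1-2: compare against the latest snapshot and move the old copies of
        # deleted or modified files out of latest/ into the new delta folder.
        for old in list(latest.rglob("*")):
            if old.is_dir():
                continue
            rel = old.relative_to(latest)
            src = source / rel
            if not src.exists() or not filecmp.cmp(src, old, shallow=False):
                target = delta / rel
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.move(str(old), str(target))

        # Step 3: copy new or changed files from the source into the latest snapshot.
        for src in source.rglob("*"):
            if src.is_dir():
                continue
            cur = latest / src.relative_to(source)
            if not cur.exists():
                cur.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, cur)

        # Step 4: optional rotation of old backup-* folders would go here.

The delta folder is only created when an old copy actually has to be moved into it, so a run with no changes leaves nothing behind.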

This means:

- The latest backup is always a usable full snapshot (fast restore).

- Previous versions can be reconstructed by applying reverse deltas.

- If the source is intact, the system self-heals: corrupted files in the latest snapshot are replaced on the next run.

- Only one full copy is needed, like a versioned rsync mirror.

- Losing old versions is low-impact, since they matter less as time goes by.

- It's user-friendly, since the latest backup can be browsed with a regular file explorer.

Example:

Initial backup:

latest/ a.txt # "Hello" b.txt # "World"

Next day, a.txt is changed and b.txt is deleted:

latest/ a.txt # "Hi" backup-2024-06-27T14:00:00/ a.txt # "Hello" b.txt # "World"

The newest version is always in latest/, and previous versions can be reconstructed by applying the deltas in reverse.
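A rough restore sketch under the same assumed layout (again, the names are illustrative): start from latest/ and overlay the delta folders from newest to oldest until the requested timestamp is reached.

    import shutil
    from pathlib import Path

    def restore(dest: Path, as_of: str, out: Path) -> None:
        # Start from the full snapshot; out must not exist yet.
        shutil.copytree(dest / "latest", out)

        # ISO timestamps in the folder names sort chronologically as plain strings.
        deltas = sorted(p for p in dest.iterdir() if p.name.startswith("backup-"))

        # Walk backwards in time, overlaying the old copies stored in each delta,
        # and stop once we are past the requested point.
        for delta in reversed(deltas):
            if delta.name < f"backup-{as_of}":
                break
            for old in delta.rglob("*"):
                if old.is_file():
                    target = out / old.relative_to(delta)
                    target.parent.mkdir(parents=True, exist_ok=True)
                    shutil.copy2(old, target)

Given the example above (and assuming the backups live under /backups), restore(Path("/backups"), "2024-06-27T14:00:00", Path("/tmp/day1")) would write out a.txt containing "Hello" and b.txt containing "World".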

I'm curious: has this been done before under another name? Are there edge cases I’m overlooking that make it impractical in real-world tools?

Would love your thoughts.

Comments (9)

dr_kiszonka · 2h ago
It sounds like this method is I/O intensive as you are writing the complete image at every backup time. Theoretically, it could be problematic when dealing with large backups in terms of speed, hardware longevity, and write errors, and I am not sure how you would recover from such errors without also storing the first image. (Or I might be misunderstanding your idea. It is not my area.)
compressedgas · 5h ago
It works. Already implemented: https://rdiff-backup.net/ https://github.com/rdiff-backup/rdiff-backup

There are also other tools that implement reverse incremental backup, or backup with reverse deduplication, storing the most recent backup in contiguous form and fragmenting the older backups.

ahazred8ta · 3h ago
For reference: a comprehensive backup + security plan for individuals https://nau.github.io/triplesec/
rawgabbit · 1h ago
What happens if, in the process of all this reading, writing, and rewriting, data gets corrupted?
codingdave · 4h ago
The low-likelihood / high-impact edge case this does not handle is: "Oops, our data center blew up." An extreme scenario, but this method turns your most recent backup into a single point of failure, because you cannot restore from the other backups without it.
jiggawatts · 57m ago
The more common approach now is incrementals forever with occasional synthetic full backups computed at the storage end. This minimises backup time and data movement.
wmf · 5h ago
It seems like ZFS/Btrfs snapshots would do this.
HumanOstrich · 1h ago
No, they work the opposite way using copy-on-write.
wmf · 1h ago
"For files that changed or were deleted: move them into a new delta folder. For new/changed files: copy them into the latest snapshot folder." is just redneck copy-on-write. It's the same result but less efficient under the hood.