Ask HN: Should I implement my own integrity checks on my fileserver?

jacobwilliamroy · 5/14/2025, 4:00:51 PM
So I have an SFTP server [Ubuntu 24.04.2 LTS (GNU/Linux 6.8.0-55-generic x86_64)] and an SFTP client (Windows 11): the client downloads files from the SFTP server. I had an idea to implement integrity checks by hashing the file on the server side, hashing the client's copy after it is downloaded, and comparing the two hashes to verify the data was transferred correctly. My question is: should I spend time implementing this, or does the OS already do it? This seems like something an OS should do.
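
Roughly what I have in mind, as a minimal sketch: it assumes OpenSSH's ssh client is available on the Windows box and sha256sum exists on the server, and the host and file paths are placeholders, not my real ones.

    import hashlib
    import subprocess

    def local_sha256(path):
        # Hash the downloaded copy in chunks so large files don't exhaust memory.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def remote_sha256(host, remote_path):
        # Run sha256sum on the server over SSH; stdout is "<hex>  <path>".
        out = subprocess.run(["ssh", host, "sha256sum", "--", remote_path],
                             check=True, capture_output=True, text=True).stdout
        return out.split()[0]

    if local_sha256(r"C:\Downloads\report.bin") == remote_sha256("sftp.example.com", "/srv/data/report.bin"):
        print("match")
    else:
        print("MISMATCH - re-download")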

Comments (3)

PaulHoule · 6h ago
It could be expensive for the OS to update a secure hash for a file every time you write() to it.
daveguy · 6h ago
SFTP runs over SSH, which applies a MAC (message authentication code) to every packet and encrypts the channel, so in-transit corruption is very unlikely.

On the write-to-disk side, you are probably best off using ZFS or btrfs as the filesystem. Both checksum data blocks on write, and with redundant storage they can also detect and automatically repair (self-heal) corrupted blocks.
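
For example, a scrub makes ZFS re-read every block and verify it against its stored checksum. A rough sketch, wrapping the standard zpool CLI from Python (the pool name "tank" is a placeholder):

    import subprocess

    POOL = "tank"  # placeholder pool name; substitute your own

    # Start a scrub: ZFS re-reads every block and checks it against its checksum.
    subprocess.run(["zpool", "scrub", POOL], check=True)

    # Show scrub progress and any checksum errors found so far.
    print(subprocess.run(["zpool", "status", POOL],
                         check=True, capture_output=True, text=True).stdout)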

What is your threat model? Are you concerned about adversarial changes to the data, or just preventing accidental corruption? Either way, an adversary would have to be deep inside your system, or running a man-in-the-middle attack, to get around the transport protections, and SSH's per-packet integrity checks should handle random network corruption.

jacobwilliamroy · 2h ago
I am only trying to ensure the data on the local system is the same as the data on the server. There is no adversary in the middle modifying data, so this is strictly about detecting corruption.