Comparing the MD5 results of split files against the original file

Posted 2019-02-20 14:45

Question:

I have a situation where I have one VERY large file that I'm breaking into smaller parts with the Linux "split" command. Later I use the Linux "cat" command to join the parts back together.
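For concreteness, here is a minimal sketch of that round trip; the filenames and the 100 MB chunk size are placeholders, not from the question:

    # split bigoldfile.txt into 100 MB pieces named partaa, partab, ...
    split -b 100M bigoldfile.txt part
    # later, join the pieces back together (the shell glob expands in sorted order)
    cat part* > restored.txt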

In the interim, however, I'm curious...

If I compute an MD5 fingerprint of the large file before splitting it, and later compute MD5 fingerprints of each of the independent parts produced by the split command, is there some operation that takes the parts' fingerprints and deduces, from their sum or average (or whatever you like to call it), the fingerprint of the single large file?

As a (very) loose example...

bigoldfile.txt MD5 = 737da789
smallfile1.txt MD5 = 23489a89
smallfile2.txt MD5 = 1238g89d
smallfile3.txt MD5 = 01234cd7

someoperator(23489a89,1238g89d,01234cd7) = 737da789 (the fingerprint of the original file)

Answer 1:

You likely can't do that. MD5 is not composable: it processes its input as a chain of blocks, and the hash of each block depends both on the actual data and on the internal state left behind by every block before it, so the digests of independently hashed parts can't be combined into the digest of the whole file.

You could instead generate "incremental" hashes: the hash of the first part, the hash of the first plus the second part, and so on, and compare them against the same running hashes computed over the original file. A sketch follows.
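A minimal shell sketch of that idea, assuming the parts were produced by "split" with a "part" prefix (an assumption, not from the answer):

    cat partaa | md5sum                 # hash of part 1
    cat partaa partab | md5sum          # hash of parts 1 + 2
    cat partaa partab partac | md5sum   # hash of parts 1 + 2 + 3

The last of these, run over all the parts, equals the MD5 of the original file, which is also the trick Answer 2 uses.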



Answer 2:

Not exactly, but the next best thing is to hash the parts as a single stream: cat filepart1 filepart2 | md5sum, or cat filepart* | md5sum.

Be sure to cat them back together in the correct order. By piping the output of cat, you don't have to worry about creating a combined file that is too large.
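A quick check along those lines, reusing the hypothetical filenames from the question; the digests should match if the parts are intact and in order:

    md5sum bigoldfile.txt
    # md5sum prints "-" in place of a filename when reading stdin;
    # only the digest field needs to match the line above
    cat smallfile1.txt smallfile2.txt smallfile3.txt | md5sum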