I'm looking for a solution for hashing large file content (files may be over 2 GB on a 32-bit OS). Is there any easy solution for that? Or just reading part by part and loading into a buffer?
Use `TransformBlock` and `TransformFinalBlock` to calculate the hash block by block, so you won't need to read the entire file into memory. (There is a nice example in the first link - and another one in this previous question.)
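A minimal sketch of that approach, assuming SHA-256 (any other `HashAlgorithm` works the same way); the class and method names are placeholders:

```csharp
using System.IO;
using System.Security.Cryptography;

static class LargeFileHasher // hypothetical helper, not from the answer
{
    public static byte[] HashFile(string path)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            var buffer = new byte[81920]; // 80 KB chunks; the size is arbitrary
            int bytesRead;
            while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Feed each chunk to the hash; the output buffer may be null.
                sha.TransformBlock(buffer, 0, bytesRead, null, 0);
            }
            // Finalize with an empty block; sha.Hash is valid afterwards.
            sha.TransformFinalBlock(new byte[0], 0, 0);
            return sha.Hash;
        }
    }
}
```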
Driis's solution sounds more flexible, but `HashAlgorithm.ComputeHash` will also accept `Stream`s as parameters.
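For example (a minimal sketch; the path is a placeholder and SHA-256 is an assumption - every `HashAlgorithm` exposes the same overload):

```csharp
using System.IO;
using System.Security.Cryptography;

// ComputeHash(Stream) reads the stream in internal chunks,
// so the whole file is never held in memory at once.
using (var sha = SHA256.Create())
using (var stream = File.OpenRead(@"C:\data\large.bin")) // placeholder path
{
    byte[] hash = sha.ComputeHash(stream);
}
```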
If you choose to use `TransformBlock`, you can safely ignore the last parameter and set the `outputBuffer` to `null`. `TransformBlock` will copy from the input to the output array - but why would you want to simply copy bits for no good reason?

Furthermore, all mscorlib `HashAlgorithm` implementations work as you might expect, i.e. the block size doesn't seem to affect the hash output; whether you pass the data in one array and hash it in chunks by changing the `inputOffset`, or hash it by passing smaller, separate arrays, doesn't matter. I verified this using the following code (this is slightly long, just here so people can verify for themselves that `HashAlgorithm` implementations are sane).
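(The answer's original snippet is not reproduced here; below is a minimal sketch of an equivalent check, assuming SHA-256 and deliberately uneven chunk sizes.)

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;

static class HashSanityCheck
{
    static void Main()
    {
        var data = new byte[1000003]; // deliberately not a multiple of the block size
        new Random(42).NextBytes(data);

        // Reference hash: everything in one call.
        byte[] expected;
        using (var sha = SHA256.Create())
            expected = sha.ComputeHash(data);

        // Same data fed via TransformBlock in uneven chunks.
        using (var sha = SHA256.Create())
        {
            int offset = 0, chunk = 1;
            while (offset < data.Length)
            {
                int count = Math.Min(chunk, data.Length - offset);
                sha.TransformBlock(data, offset, count, null, 0); // outputBuffer may be null
                offset += count;
                chunk = chunk * 2 + 1; // vary the chunk size on purpose
            }
            sha.TransformFinalBlock(data, 0, 0); // empty final block
            Console.WriteLine(expected.SequenceEqual(sha.Hash)
                ? "Chunked hash matches the one-shot hash."
                : "MISMATCH - something is wrong.");
        }
    }
}
```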