The definition of SHA-256 appears to be such that the input consisting of a single "1" bit has a well-defined hash value, distinct from that of the "01" byte (since the padding is done based on the input's length in bits).
However, due to endianness issues and the fact that I can't find any implementation that supports feeding in single bits, I can't quite figure out what this correct value is.
So, what is the correct hash of the 1-bit long input consisting of the bit "1"? (not the 8-bit long byte[] { 1 } input).
OK, according to my own implementation:
1-bit string "1":
1-bit string "0":
I have tested this implementation on several standard inputs whose lengths are multiples of 8 bits, including the 0-bit (empty) string, and the results were correct.
(of course the point of this question was to validate the above outputs in the first place, so use with care...)
Not sure if I understand your question correctly.
SHA-256 operates on blocks of 64 bytes (= 512 bits). This means shorter inputs must be padded first. The result of the padding looks like this:
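Sketching it from the FIPS 180-2 rule (append a single "1" bit, then zero bits until the length is congruent to 448 mod 512, then the message length in bits as a 64-bit big-endian integer), the padded 512-bit blocks for the two inputs should look roughly like this:

    1-bit message "1"  : c0 00 00 ... 00 | 00 00 00 00 00 00 00 01
    8-bit message 0x01 : 01 80 00 ... 00 | 00 00 00 00 00 00 00 08

In the first case the leading byte 0xc0 is the message bit "1" followed by the padding bit "1"; in the second case the message byte 0x01 is followed by the padding byte 0x80. The final 8 bytes encode the bit length, 1 and 8 respectively.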
As these padded results are distinct, the results of the subsequent compression functions will be too, and therefore so are the hash values. The standard document describes the padding in detail: http://csrc.nist.gov/publications/fips/fips180-2/fips180-2.pdf
There is C code available in section 8 of RFC 4634 to compute the hash of data whose length is not necessarily a multiple of 8 bits. See the functions named SHA*FinalBits(...).
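A minimal usage sketch for hashing the single bit "1", assuming you compile against the sha.h header and SHA-256 source from that RFC's reference code, and that SHA256FinalBits() expects the trailing bits left-aligned in the byte (so a lone "1" bit is passed as 0x80):

    #include <stdio.h>
    #include <stdint.h>
    #include "sha.h"   /* header from the RFC 4634 reference code */

    int main(void)
    {
        SHA256Context ctx;
        uint8_t digest[SHA256HashSize];   /* 32 bytes */
        int i;

        SHA256Reset(&ctx);
        /* The message contains no whole bytes, so SHA256Input() is not
           called at all.  Feed the single trailing bit "1"; assuming the
           reference code takes trailing bits in the high-order positions
           of the byte, that is 0x80 with a bit count of 1. */
        SHA256FinalBits(&ctx, 0x80, 1);
        SHA256Result(&ctx, digest);

        for (i = 0; i < SHA256HashSize; i++)
            printf("%02x", digest[i]);
        printf("\n");
        return 0;
    }

The same pattern works for any bit length: feed the whole bytes with SHA256Input() and the leftover 1-7 bits with SHA256FinalBits().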