What are some simple ways to hash a 32-bit integer (e.g. IP address, e.g. Unix time_t, etc.) down to a 16-bit integer?
E.g. hash_32b_to_16b(0x12345678) might return 0xABCD.
Let's start with this as a horrible but functional example solution:
function hash_32b_to_16b(val32b) {
  return val32b % 0xffff;
}
Question is specifically about JavaScript, but feel free to add any language-neutral solutions, preferably without using library functions.
The context for this question is generating unique IDs (e.g. a 64-bit ID might be composed of several 16-bit hashes of various 32-bit values). Avoiding collisions is important.
Simple = good. Wacky+obfuscated = amusing.
I think this is the best you're going to get. You could compress the code to a single line, but the vars are there for now as documentation:
function hash_32b_to_16b(val32b) {
  var rightBits = val32b & 0xffff;     // Right-most (low) 16 bits
  var leftBits = val32b & 0xffff0000;  // Left-most (high) 16 bits
  leftBits = leftBits >>> 16;          // Shift the left-most 16 bits down to a 16-bit value
  return rightBits ^ leftBits;         // XOR the left-most and right-most bits
}
Given the parameters of the problem, the best solution would have each 16-bit hash correspond to exactly 2^16 32-bit numbers. It would also IMO hash sequential 32-bit numbers differently. Unless I'm missing something, I believe this solution does those two things.
I would argue that security cannot be a consideration in this problem, as the hashed value is just too few bits. I believe that the solution I gave provides an even distribution of 32-bit numbers to 16-bit hashes.
The key to maximizing the preservation of entropy of some original 32-bit 'signal' is to ensure that each of the 32 input bits has an independent and equal ability to alter the value of the 16-bit output word.
Since the OP is requesting a bit size which is exactly half of the original, the simplest way to satisfy this criterion is to XOR the upper and lower halves, as others have mentioned. Using XOR is optimal because—as is obvious from the definition of XOR—independently flipping any one of the 32 input bits is guaranteed to change the value of the 16-bit output.
The problem becomes more interesting when you need further reduction beyond just half-the-size, say from a 32-bit input to, let's say, a 2-bit output. Remember, the goal is to preserve as much entropy from the source as possible, so solutions which involve naively masking off the two lowest bits with (i & 3) are generally heading in the wrong direction; doing that guarantees that there's no way for any bits except the unmasked bits to affect the result, and that generally means there's an arbitrary, possibly valuable part of the runtime signal which is being summarily discarded without principle.
Following from the earlier paragraph, you could of course iterate with XOR three additional times to produce a 2-bit output with the desired property of being equally influenced by each/any of the input bits. That solution is still optimally correct of course, but involves looping or multiple unrolled operations which, as it turns out, aren't necessary!
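As a sketch of that iterated approach in JavaScript (the function name is mine), each fold XORs the upper half of the word into the lower half, so every output bit remains the parity of a disjoint set of input bits:

```javascript
// Repeatedly XOR-fold the word in half: 32 -> 16 -> 8 -> 4 -> 2 bits.
// After each fold, flipping any single input bit still flips exactly one
// output bit, so the "equal influence" property is preserved throughout.
function xorFold32to2(val32b) {
  var h = (val32b >>> 16) ^ (val32b & 0xffff); // 32 -> 16
  h = ((h >>> 8) ^ h) & 0xff;                  // 16 -> 8
  h = ((h >>> 4) ^ h) & 0x0f;                  //  8 -> 4
  return ((h >>> 2) ^ h) & 0x03;               //  4 -> 2
}
```

Because each 2-bit output is the parity of 16 distinct input bits, flipping any one input bit always changes the result.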
Fortunately, there is a nice technique of only two operations which gives the provably-optimal result for this situation. As with XOR, it not only ensures that, for any given 32-bit value, twiddling any single one of the input bits results in a change to the (e.g.) 2-bit output value, but also that the distribution of 2-bit output values is perfectly uniform. In other words, over the 4,294,967,296 possible input values, there will be exactly 1,073,741,824 of each of the four possible 2-bit hash results { 0, 1, 2, 3 }.
The method I mention here uses specific magic values that I discovered via exhaustive search, and which don't seem to be discussed very much elsewhere on the internet, at least for the particular use under discussion here (i.e., ensuring a uniform hash distribution that's maximally entropy-preserving). Curiously, according to this same exhaustive search, the magic values are in fact unique, meaning that for each of the target bit-widths { 16, 8, 4, 2 }, the magic value I show below is the only value that, when used as I show here, satisfies the perfect hashing criteria outlined above.
Without further ado, the unique and mathematically optimal procedure for hashing 32 bits to n = { 16, 8, 4, 2 } is to multiply by the magic value corresponding to n (unsigned, discarding overflow), and then take the n highest bits of the result. To isolate those result bits as a hash value in the range [0 ... 2ⁿ - 1], simply right-shift (unsigned!) the multiplication result by 32 - n bits.
The "magic" values, in C-like expression syntax, are as follows:
Maximally entropy-preserving hash for reducing from 32-bits to...
Target Bits Multiplier Right Shift Expression
----------- ------------ ----------- -----------------------
16 0x80008001 16 (i * 0x80008001) >> 16
8 0x80808081 24 (i * 0x80808081) >> 24
4 0x88888889 28 (i * 0x88888889) >> 28
2 0xAAAAAAAB 30 (i * 0xAAAAAAAB) >> 30
Notes:
- Use unsigned 32-bit multiply and discard any overflow (64-bit multiply is not needed).
- If isolating the result using right-shift (as shown), be sure to use an unsigned shift operation.
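Since the question is specifically about JavaScript, note that a plain `*` on Numbers loses low-order bits once the product exceeds 2⁵³, so the 32-bit table needs Math.imul (32-bit truncating multiply) plus the unsigned shift `>>>`. A minimal sketch (helper names are mine):

```javascript
// Multiply modulo 2^32 via Math.imul, then keep the top n bits unsigned.
function hashTopBits(i, multiplier, shift) {
  return Math.imul(i, multiplier) >>> shift;
}

var hash_32b_to_16b = function (i) { return hashTopBits(i, 0x80008001, 16); };
var hash_32b_to_8b  = function (i) { return hashTopBits(i, 0x80808081, 24); };
var hash_32b_to_4b  = function (i) { return hashTopBits(i, 0x88888889, 28); };
var hash_32b_to_2b  = function (i) { return hashTopBits(i, 0xAAAAAAAB, 30); };
```

Math.imul coerces both operands to 32-bit integers and discards overflow, which is exactly the unsigned-modulo-2³² multiply the table calls for, and `>>>` keeps the shift unsigned.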
[edit: added table for 64-bit input values]
Maximally entropy-preserving hash for reducing a 64-bit value to...
Target Bits Multiplier Right Shift Expression
----------- ------------------ ----------- -------------------------------
32 0x8000000080000001 32 (i * 0x8000000080000001) >> 32
16 0x8000800080008001 48 (i * 0x8000800080008001) >> 48
8 0x8080808080808081 56 (i * 0x8080808080808081) >> 56
4 0x8888888888888889 60 (i * 0x8888888888888889) >> 60
2 0xAAAAAAAAAAAAAAAB 62 (i * 0xAAAAAAAAAAAAAAAB) >> 62
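In JavaScript the 64-bit table requires BigInt, since Numbers cannot hold a 64-bit product; a sketch under that assumption (names are mine):

```javascript
// 64-bit variant using BigInt: multiply modulo 2^64, keep the top n bits.
const MASK64 = (1n << 64n) - 1n;
function hash64TopBits(i, multiplier, shift) {
  return Number(((i * multiplier) & MASK64) >> shift);
}

const hash_64b_to_32b = i => hash64TopBits(i, 0x8000000080000001n, 32n);
const hash_64b_to_16b = i => hash64TopBits(i, 0x8000800080008001n, 48n);
const hash_64b_to_2b  = i => hash64TopBits(i, 0xAAAAAAAAAAAAAAABn, 62n);
```

Masking with MASK64 discards the multiplication overflow, matching the "unsigned, discarding overflow" rule above.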
Further discussion
I found all this quite cool. In practical terms, the key information-theoretical requirement is the guarantee that, for any m-bit input value and its corresponding n-bit hash result, flipping any one of the m source bits always causes some change in the n-bit result value. Now although there are 2ⁿ possible result values in total, one of them is already "in-use", since switching the result to that one would be no change at all. This leaves only 2ⁿ - 1 result values that are eligible to be used by the entire set of m input values produced by a single bit-flip.
Let's consider an example; in fact, to show how this technique might seem to border on spooky or downright magical, we'll consider a more extreme case, where m = 64 and n = 2. With 2 output bits there are four possible result values, { 0, 1, 2, 3 }. Assuming an arbitrary 64-bit input value 0x7521d9318fbdf523, we obtain its 2-bit hash value of 1:
(0x7521d9318fbdf523 * 0xAAAAAAAAAAAAAAAB) >> 62 // result --> '1'
But this result entails that no value in the set of 64 values where a single bit of 0x7521d9318fbdf523 is toggled may have that same result value. That is, none of those 64 other results can use value 1, and all must instead use either 0, 2, or 3. When every one of the 2⁶⁴ input values is selfishly hogging one-quarter of the output space for itself from 64 of its peers, does a simultaneously satisfying solution over them all even exist?
Well sure enough, to show that (exactly?) one does, here are the hash result values, listed in order, for the inputs obtained by flipping a single bit of 0x7521d9318fbdf523 (one at a time), from MSB (position 63) down to LSB (0).
3 2 0 3 3 3 3 3 3 0 0 0 3 0 3 3 0 3 3 3 0 0 3 3 3 0 0 3 3 0 3 3
0 0 3 0 0 3 0 3 0 0 0 3 0 3 3 3 0 3 0 3 3 3 3 3 3 0 0 0 3 0 0 3 // <-- no '1' values
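The table above can be reproduced with a few lines of BigInt JavaScript (a sketch; the variable names are mine):

```javascript
// Hash each of the 64 single-bit-flip neighbors of the example input,
// MSB first, and confirm the original's own result value never appears.
const MASK64 = (1n << 64n) - 1n;
const hash2 = i => Number(((i * 0xAAAAAAAAAAAAAAABn) & MASK64) >> 62n);

const x = 0x7521d9318fbdf523n;
const own = hash2(x);              // the worked example above: 1
const neighbors = [];
for (let b = 63n; b >= 0n; b--) {  // MSB (position 63) down to LSB (0)
  neighbors.push(hash2(x ^ (1n << b)));
}
console.log(neighbors.join(" "), "contains own value:", neighbors.includes(own));
```

Any other 64-bit input can be substituted for x to dump its own "missing-value" table.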
As you can see, there are no 1 values, which entails that every bit in the source "as-is" must be contributing to influence the result (or, if you prefer, the de facto state of each and every bit in 0x7521d9318fbdf523 is essential to keeping the result from being "not-1"): no matter what single-bit change you make to the 64-bit input, the 2-bit result value will no longer be 1.
Keep in mind that the "missing-value" table shown above was dumped from the analysis of just the one randomly-chosen example value 0x7521d9318fbdf523; every other possible input value has a similar table of its own, each one eerily missing its owner's actual result value while yet somehow being globally consistent across its set-membership. This property essentially corresponds to maximally preserving the available entropy during the (inherently lossy) bit-width reduction task.
So we see that every one of the 2⁶⁴ possible source values independently imposes, on exactly 64 other source values, the constraint of excluding one of the possible result values. What defies my intuition about this is that there are untold quadrillions of these 64-member sets, each of whose members also belongs to 63 other, seemingly unrelated bit-twiddling sets. Yet somehow despite this most confounding puzzle of intertwined constraints, it is nevertheless trivial to exploit the one (I surmise) resolution which simultaneously satisfies them all exactly.
All this seems related to something you may have noticed in the tables above: namely, I don't see any obvious way to extend the technique to the case of compressing down to a 1-bit result. In this case, there are only two possible result values { 0, 1 }, so if any given (e.g.) 64-bit input value still summarily excludes its own result from being the result for all 64 of its single-bit-flip neighbors, then that now essentially imposes the other, only remaining value on those 64. The breakdown we see at that size seems to signal that a simultaneous solution under such conditions is a bridge too far.
In other words, the special 'information-preserving' characteristic of XOR (that is, its luxuriously reliable guarantee that, as opposed to AND, OR, etc., it c̲a̲n̲ and w̲i̲l̲l̲ always change a bit) not surprisingly exacts a certain cost, namely, a fiercely non-negotiable demand for a certain amount of elbow room—at least 2 bits—to work with.