The book Numerical Recipes offers a method to calculate 64-bit hash codes in order to reduce the number of collisions.
The algorithm is shown at http://www.javamex.com/tutorials/collections/strong_hash_code_implementation_2.shtml and is copied here for reference:
private static final long[] byteTable = createLookupTable();

private static final long[] createLookupTable() {
    long[] byteTable = new long[256];
    long h = 0x544B2FBACAAF1684L;
    for (int i = 0; i < 256; i++) {
        // scramble h with 31 rounds of shift-xor mixing per table entry
        for (int j = 0; j < 31; j++) {
            h = (h >>> 7) ^ h;
            h = (h << 11) ^ h;
            h = (h >>> 10) ^ h;
        }
        byteTable[i] = h;
    }
    return byteTable;
}
// HSTART (the initial hash value) and HMULT (the multiplier) are
// 64-bit constants defined in the linked article.
public static long hash(CharSequence cs) {
    long h = HSTART;
    final long hmult = HMULT;
    final long[] ht = byteTable;
    final int len = cs.length();
    for (int i = 0; i < len; i++) {
        // mix in the low byte and the high byte of each char separately
        char ch = cs.charAt(i);
        h = (h * hmult) ^ ht[ch & 0xff];
        h = (h * hmult) ^ ht[(ch >>> 8) & 0xff];
    }
    return h;
}
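A quick usage sketch (my own, assuming HSTART, HMULT, and byteTable are defined as in the linked article):

    long h1 = hash("apple");
    long h2 = hash("orange");
    // h1 == h2 would signal a (very unlikely) 64-bit collision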
My questions:
1) Is there a formula to estimate the probability of collisions taking into account the so-called Birthday Paradox?
2) Can you estimate the probability of a collision (i.e. two keys that hash to the same value)? Let's say with 1,000 keys and with 10,000 keys?
EDIT: rephrased/corrected question 3
3) Is it safe to assume that a collision among a reasonable number of keys (say, fewer than 10,000) is so improbable that, if two hash codes are the same, we can say the keys are the same without any further checking? e.g.
static boolean equals(Key key1, Key key2) {
    // probability of a collision is so low that we skip comparing the keys
    return key1.hash64() == key2.hash64();
}
This is not for security, but execution speed is imperative, so avoiding further checks of the keys will save time. If the probability is sufficiently low, say less than 1 in 1 billion for 100,000 keys, it will probably be acceptable.
TIA!
I'll provide a rough approximation to the exact formulas provided in the other answers; the approximation may help you answer #3. The rough approximation is that the probability of a collision occurring with k keys and n possible hash values, with a good hashing algorithm, is approximately k^2/(2n) for k << n. For 100,000 keys with a 64-bit hash, that's 10^10 / 3.7x10^19, or about 1 in 3.7 billion.
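As a quick illustration (my own sketch, not from the answer), here is how one might evaluate the k^2/(2n) approximation in Java for the key counts mentioned in the question:

    // Evaluates the birthday-bound approximation p ~ k^2 / (2n)
    // for a 64-bit hash (n = 2^64 possible values).
    public class BirthdayApprox {
        public static void main(String[] args) {
            double n = Math.pow(2, 64);
            for (long k : new long[] {1_000L, 10_000L, 100_000L}) {
                double p = (double) k * k / (2.0 * n);
                System.out.printf("k=%,d  p ~ %.2e  (about 1 in %.1e)%n", k, p, 1.0 / p);
            }
        }
    }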
However, I suspect that if you go with not checking the actual key values on collision, there is a larger chance you'll find out the hashing algorithm is not "good" enough, after all.
See: Birthday attack.
Assuming the distribution of hashes is uniform, the probability of a collision for n keys is approximately n^2/2^65. It's only safe when you use a cryptographic hash function. Even if you can tolerate a mistake every 3x10^11 times, you may have to consider the possibility that the input is specifically built to create a hash collision, as an attack on your program.
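For example (my own arithmetic, to make that tolerance figure concrete): with n = 10,000 keys, n^2/2^65 = 10^8 / 3.7x10^19, or about 2.7x10^-12, i.e. roughly 1 in 3.7x10^11, which is where the figure above comes from.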
The probability of a single collision depends on the key set that is generated. Since the hash function is uniform, we can calculate the probability that no collision occurs when k keys are generated as follows:

P(no collision) = (1 - 1/2^64) * (1 - 2/2^64) * ... * (1 - (k-1)/2^64) ~ e^(-k^2/2^65)

Hence once about sqrt(2^64) keys, that is 2^32 keys, have been generated, there is a significant chance of at least one collision.

This is a very interesting question because it depends on the size of the key space. Suppose your keys are generated at random from a space of size s, and the hash space is x = 2^64 as you mentioned. The probability of a hash collision is Pc(k=n|x) = 1 - e^(-n^2/2x), while the probability of picking the same key twice in the key space is P(k=n|s) = 1 - e^(-n^2/2s). For it to be safe to conclude that the keys are the same whenever the hashes are the same, we need P(k=n|s) >= Pc(k=n|x), which holds when s <= x. Hence the key-set size must be smaller than about 2^64; otherwise there is a greater chance of a collision in the hash than in the key set. The result is independent of the number of keys generated.
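To make the key-space comparison concrete, here is a small Java sketch (my own illustration; the key-space size s = 2^32 is an arbitrary example, not from the answer):

    // Compares the chance of a duplicate key in a key space of size s
    // with the chance of a hash collision in a space of size x = 2^64,
    // using the 1 - e^(-n^2/2s) approximation from the answer above.
    public class KeySpaceVsHashSpace {
        static double pCollision(double n, double space) {
            return 1.0 - Math.exp(-n * n / (2.0 * space));
        }
        public static void main(String[] args) {
            double n = 10_000;           // number of keys generated
            double s = Math.pow(2, 32);  // assumed key-space size (example)
            double x = Math.pow(2, 64);  // hash-space size
            System.out.printf("P(duplicate key)  ~ %.2e%n", pCollision(n, s));
            System.out.printf("P(hash collision) ~ %.2e%n", pCollision(n, x));
        }
    }

With s = 2^32, a duplicate key is far more likely than a hash collision, consistent with the s <= x condition above.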
Using the Birthday Paradox formula simply tells you at what point you need to start worrying about a collision happening. This is at around Sqrt[n], where n is the total number of possible hash values. In this case n = 2^64, so the Birthday Paradox formula tells you that as long as the number of keys is significantly less than Sqrt[n] = Sqrt[2^64] = 2^32, or approximately 4 billion, you don't need to worry about collisions. The higher the n, the more accurate this estimation; in fact, the probability p(k) that a collision will occur with k keys approaches a step function as n gets larger, where the step occurs at k = Sqrt[n].

Assuming the hash function is uniformly distributed, it's straightforward to derive the formula.
That formula directly follows from starting with 1 key: the probability of no collision with 1 key is of course 1. The probability of no collision with 2 keys is 1 * (n-1)/n. And so on for all k keys. Conveniently, Mathematica has a Pochhammer[] function for this purpose to express this succinctly:

pNoCollision[n_, k_] := Pochhammer[n - k + 1, k] / n^k

Then, to calculate the probability that there is at least 1 collision for k keys, subtract it from 1:

pCollision[n_, k_] := 1 - Pochhammer[n - k + 1, k] / n^k

Using Mathematica, one can calculate for n = 2^64 that pCollision is about 2.7x10^-14 for 1,000 keys and about 2.7x10^-12 for 10,000 keys.
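For readers without Mathematica, here is a rough Java equivalent (my own sketch, not part of the original answer) that evaluates the same product in log space to avoid underflow:

    // Computes 1 - Product_{i=1}^{k-1} (n - i)/n, the exact probability
    // of at least one collision among k keys, by summing logarithms.
    public class ExactCollision {
        static double pCollision(double n, long k) {
            double logNoCollision = 0.0;
            for (long i = 1; i < k; i++) {
                logNoCollision += Math.log1p(-i / n); // log((n - i)/n)
            }
            return -Math.expm1(logNoCollision);       // 1 - e^(logNoCollision)
        }
        public static void main(String[] args) {
            double n = Math.pow(2, 64);
            System.out.println(pCollision(n, 1_000L));  // ~2.7e-14
            System.out.println(pCollision(n, 10_000L)); // ~2.7e-12
        }
    }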
To answer this precisely depends upon the probability that two of the 10,000 keys were identical. What we are looking for is

p(a=b | h(a)=h(b))

where a and b are keys (possibly identical) and h() is the hashing function. We can apply Bayes' theorem directly:

p(a=b | h(a)=h(b)) = p(h(a)=h(b) | a=b) * p(a=b) / p(h(a)=h(b))

We immediately see that p(h(a)=h(b) | a=b) = 1 (if a=b then of course h(a)=h(b)), so we get

p(a=b | h(a)=h(b)) = p(a=b) / p(h(a)=h(b))

As you can see, this depends upon p(a=b), which is the probability that a and b are actually the same key. That depends upon how the group of 10,000 keys was selected in the first place. The calculations for the previous two questions assume all keys are distinct, so more information on this scenario is needed to answer it fully.
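As a hypothetical worked example (my own numbers, not from this answer): if the keys are drawn uniformly at random from a space of size 2^32, then for a given pair p(a=b) = 2^-32 and p(h(a)=h(b)) = p(a=b) + (1 - p(a=b)) * 2^-64 ~ 2^-32, so p(a=b | h(a)=h(b)) ~ 1 - 2^-32; in that scenario a hash match almost certainly means the keys really are equal.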