Why does this class use a 48-bit seed in its linear congruence formula? I would have expected 32 or 64...
I know it takes the higher-order bits when asked for 32-bit values. But why only 16 additional bits? Was it a "random" choice?
You need more bits of state than bits of output, because the nature of an LCG is such that the low-order bits of state are not very random at all: with a power-of-two modulus, bit k of the state repeats with a period of at most 2^(k+1), so the lowest bit simply alternates. If you want 32 good output bits, you need more than 32 bits of state.
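You can see how non-random the low bits are with a small sketch that iterates the raw 48-bit recurrence, using the constants documented for java.util.Random, and prints only the lowest bit of the state:

```java
// Demonstrates why the low-order bits of an LCG are weak: with a
// power-of-two modulus, bit k of the state cycles with period at most 2^(k+1).
// Uses the LCG constants documented for java.util.Random (multiplier
// 0x5DEECE66D, increment 0xB, modulus 2^48).
public class LowBitsDemo {
    public static void main(String[] args) {
        long seed = 12345L;
        StringBuilder lowestBits = new StringBuilder();
        for (int i = 0; i < 16; i++) {
            seed = (seed * 0x5DEECE66DL + 0xBL) & ((1L << 48) - 1);
            lowestBits.append(seed & 1);  // keep only the bottom bit of the state
        }
        // Prints a strictly alternating pattern, 0101010101010101: the bottom
        // bit has period 2, which is why Random returns only the high-order
        // bits of the 48-bit state.
        System.out.println(lowestBits);
    }
}
```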
Why use 48 rather than 64? Because 48 is enough, and you're designing this decades ago, so there are good reasons to want to avoid using any more resources than are strictly necessary.
A Linear Congruential Generator (LCG) is characterized by three parameters a, c and m. Only certain combinations give the maximum period, and not all are equally well-studied. The choice was probably influenced by the usual trade-off between complexity and intended use. Fortunately, the class is reasonably well-designed for inheritance, so other implementations are possible.
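For reference, the Javadoc for java.util.Random documents the exact parameters (a = 0x5DEECE66D, c = 0xB, m = 2^48) and the extraction of high-order bits. A minimal re-implementation of that documented step (a sketch for illustration, not the actual JDK source) looks like this:

```java
// A minimal sketch of the LCG step documented in the java.util.Random
// Javadoc: seed' = (seed * 0x5DEECE66D + 0xB) mod 2^48, with the requested
// output taken from the HIGH-order bits of the 48-bit state.
public class Lcg48 {
    private static final long MULTIPLIER = 0x5DEECE66DL;
    private static final long INCREMENT  = 0xBL;
    private static final long MASK       = (1L << 48) - 1;  // modulus 2^48

    private long seed;

    public Lcg48(long seed) {
        // The documented setSeed scrambles the user-supplied seed the same way.
        this.seed = (seed ^ MULTIPLIER) & MASK;
    }

    // Equivalent to Random.next(bits): advance the state, then keep only the
    // top `bits` bits of the 48-bit state, discarding the weak low bits.
    public int next(int bits) {
        seed = (seed * MULTIPLIER + INCREMENT) & MASK;
        return (int) (seed >>> (48 - bits));
    }

    public int nextInt() {
        return next(32);  // 32 output bits drawn from 48 state bits
    }
}
```

Because the algorithm is specified in the Javadoc for portability, this sketch should produce the same values as `new java.util.Random(seed)` for the same seed.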
The math behind it comes from number theory and the mathematical definition of pseudorandom number generators. It is certainly not a "random" (interpreted as arbitrary) choice.
A random number generator on a computer is actually a pseudorandom number generator attempting to approximate a truly random one.
You can think of a pseudorandom number generator as an expansion function that takes an input seed and outputs a number stream G(seed). Ideally you would want your pseudorandom number generator to be indistinguishable from a true random number generator, but you must also realize that it has to be efficiently sampled (polynomial time) and deterministic (meaning it outputs exactly the same stream given the same input seed).
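The determinism requirement is easy to see in practice: two java.util.Random instances constructed from the same seed produce exactly the same stream, which is the G(seed) behavior described above.

```java
import java.util.Random;

// Determinism in practice: two generators built from the same seed
// produce exactly the same output stream G(seed).
public class DeterminismDemo {
    public static void main(String[] args) {
        Random a = new Random(2024L);
        Random b = new Random(2024L);
        for (int i = 0; i < 5; i++) {
            int x = a.nextInt();
            int y = b.nextInt();
            System.out.printf("%d == %d : %b%n", x, y, x == y);  // always true
        }
    }
}
```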
So having only a 32-bit seed space means that an adversary wishing to determine whether your stream is truly random (or to break an encryption algorithm that depends on the generator) only has to go through a 32-bit keyspace (seed space), sampling the output of the generator for each candidate and comparing it against your provided "random" stream to see if it matches. Adding 16 more bits adds significantly more range to the key (seed) space, making it much more difficult to enumerate all possible keys (seeds).
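To make the enumeration argument concrete, here is a sketch against a hypothetical toy generator with only 32 bits of state (using the well-known Numerical Recipes LCG constants, not java.util.Random's): given a few observed outputs, a simple loop over all 2^32 candidate seeds recovers the seed. Against a 48-bit state, the same loop would be 2^16 = 65536 times more expensive.

```java
// Sketch of why a small seed space is a problem: with only 2^32 possible
// seeds, an adversary who sees a few outputs can simply try every seed.
// Hypothetical toy generator with 32 bits of state (NOT java.util.Random).
public class SeedSearch {
    // Toy LCG: state' = state * 1664525 + 1013904223 (mod 2^32, via int wraparound)
    static int step(int state) {
        return state * 1664525 + 1013904223;
    }

    // Returns the first candidate seed consistent with the observed outputs.
    static long recoverSeed(int[] observed) {
        for (long candidate = 0; candidate <= 0xFFFFFFFFL; candidate++) {
            int state = (int) candidate;
            boolean match = true;
            for (int out : observed) {
                state = step(state);
                if (state != out) { match = false; break; }
            }
            if (match) return candidate;
        }
        throw new IllegalStateException("no seed matched");
    }

    public static void main(String[] args) {
        int secret = 0xBADC0DE;        // the "unknown" seed to recover
        int s = secret;
        int[] observed = new int[3];   // a few outputs leaked to the adversary
        for (int i = 0; i < 3; i++) { s = step(s); observed[i] = s; }
        System.out.println(recoverSeed(observed) == (secret & 0xFFFFFFFFL));  // true
    }
}
```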
As to why they did not go for the full 64 bits: when the algorithm was being implemented, the hardware of the day probably did not support 64-bit operations as efficiently as modern x64-based processors do, so they stopped at 48.