Question:
I'm talking about this surprisingly simple implementation of rand()
from the C standard:
static unsigned long int next = 1;

int rand(void) /* RAND_MAX assumed to be 32767. */
{
    next = next * 1103515245 + 12345;
    return (unsigned)(next / 65536) % 32768;
}
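For context (not quoted in the question, but, as far as I recall, part of the same sample code in the standard), seeding is done by an srand() that simply overwrites next:

void srand(unsigned int seed)
{
    next = seed;    /* the next rand() call restarts the sequence from this state */
}

Calling srand() with the same seed therefore reproduces exactly the same sequence of rand() values.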
From this Wikipedia article we know that the multiplier a (in the above code a = 1103515245) should fulfill only 2 conditions:

1. a - 1 is divisible by all prime factors of m. (In our case m = 2^32, the size of the int, so m has only one prime factor, 2.)
2. a - 1 is a multiple of 4 if m is a multiple of 4. (32768 is a multiple of 4, and 1103515244 is too.)
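Both conditions are trivial to check mechanically; here is a minimal sketch of my own (not part of the question) that verifies them for a = 1103515245 and m = 2^32:

#include <stdio.h>

int main(void)
{
    unsigned long long a = 1103515245ULL;
    /* m = 2^32, whose only prime factor is 2 */
    printf("condition 1, (a - 1) divisible by 2: %s\n", (a - 1) % 2 == 0 ? "yes" : "no");
    /* m is a multiple of 4, so a - 1 must be as well */
    printf("condition 2, (a - 1) divisible by 4: %s\n", (a - 1) % 4 == 0 ? "yes" : "no");
    return 0;
}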
Why did they choose such a strange, hard-to-remember, "man, I'm fed up with these random numbers, write whatever" number like 1103515245?
Maybe there are some wise reasons why this number is somehow better than the others?
For example, why not set a = 20000000001? It's bigger, cool-looking and easier to remember.
Answer 1:
If you use an LCG to draw points in d-dimensional space, they will lie on at most (d!m)^(1/d) hyperplanes. This is a known defect of LCGs.
If you don't carefully choose a and m (beyond the condition for full periodicity), they may lie on much fewer planes than that. Those numbers have been selected by what is called the spectral test.
The "spectral test" (the name comes from number theory) is the maximum distance between consecutive hyperplanes on which d-dimensional joint distributions lie. You want it to be as small as possible for as many d as you can test.
See this paper for a historical review of the topic. Note that the generator you quote is mentioned in the paper (as ANSIC) and determined not to be very good. The high-order 16 bits are acceptable, but many applications will need more than 32768 distinct values (as you point out in the comments, the period is indeed 2^31 -- the conditions for full periodicity in Wikipedia's link are probably only necessary).
The original source code in the ANSI document did not take the high-order 16 bits, yielding a very poor generator which is easy to misuse (rand() % n is what people first think of to draw a number between 0 and n, and this yields something very non-random in this case).
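As an aside, a common way around the rand() % n pitfall is to reject the few values that would make the modulo uneven. This is only a sketch of mine (uniform_below is a made-up name, and it assumes 0 < n), not anything from the answer:

#include <stdlib.h>

/* Return a value in [0, n), assuming 0 < n.  Plain rand() % n is biased
   whenever (RAND_MAX + 1) is not a multiple of n; rejecting the top few
   values removes that bias. */
int uniform_below(int n)
{
    /* limit + 1 is the largest multiple of n that fits in [0, RAND_MAX + 1] */
    int limit = RAND_MAX - ((RAND_MAX % n) + 1) % n;
    int r;
    do {
        r = rand();
    } while (r > limit);
    return r % n;
}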
See also the discussion on LCGs in Numerical Recipes. Quoting:
Even worse, many early generators happened to make particularly bad
choices for m and a. One infamous such routine, RANDU, with a = 65539
and m = 2^31, was widespread on IBM mainframe computers for many years,
and widely copied onto other systems. One of us recalls as a graduate
student producing a “random” plot with only 11 planes and being told
by his computer center’s programming consultant that he had misused
the random number generator: “We guarantee that each number is random
individually, but we don’t guarantee that more than one of them is
random.” That set back our graduate education by at least a year!
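The RANDU defect in that anecdote is easy to reproduce numerically. A sketch of my own (not from the book): since 65539 = 2^16 + 3, squaring it modulo 2^31 gives 65539^2 ≡ 6*65539 - 9, so every three consecutive outputs satisfy x[k+2] = 6*x[k+1] - 9*x[k] (mod 2^31), which confines all triples to a few parallel planes:

#include <stdio.h>

int main(void)
{
    const unsigned long long m = 1ULL << 31;   /* RANDU modulus */
    unsigned long long x0 = 1, x1, x2;         /* RANDU needs an odd seed */
    long k;
    int holds = 1;

    x1 = (65539ULL * x0) % m;
    for (k = 0; k < 1000000; k++) {
        x2 = (65539ULL * x1) % m;
        /* check x2 == 6*x1 - 9*x0 (mod 2^31), written without negative values */
        if (x2 != (6 * x1 + 9 * (m - x0)) % m) {
            holds = 0;
            break;
        }
        x0 = x1;
        x1 = x2;
    }
    printf("three-term relation holds for every triple tested: %s\n", holds ? "yes" : "no");
    return 0;
}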
Answer 2:
Remember that rand() is an approximation of a uniform distribution. Those numbers are used because they have been tested and shown to generate a more uniform-looking distribution.
Given the multitude of pairs of unsigned integers in the representable range, I doubt anyone has tried them all with all valid seeds. If you think you have a better choice of parameters, just try it out! You have the code, just factor out the parameters of the LCG and run tests. Generate a bunch of numbers (say 10 million), compute a histogram of the generated numbers and plot that to look at the distribution.
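A minimal sketch of that experiment (my own; the function names and bucket count are arbitrary choices): factor out a and c, generate a few million values, and bucket them to eyeball the distribution:

#include <stdio.h>

#define NBUCKETS 16
#define NSAMPLES 10000000UL

static unsigned long state = 1;

/* same output step as the quoted rand(), but with a and c as parameters */
static unsigned lcg(unsigned long a, unsigned long c)
{
    state = state * a + c;
    return (unsigned)(state / 65536) % 32768;
}

int main(void)
{
    unsigned long hist[NBUCKETS] = {0};
    unsigned long i;

    for (i = 0; i < NSAMPLES; i++)
        hist[lcg(1103515245UL, 12345UL) / (32768 / NBUCKETS)]++;   /* swap in other a, c here */

    for (i = 0; i < NBUCKETS; i++)
        printf("bucket %2lu: %lu\n", i, hist[i]);
    return 0;
}

A flat histogram only rules out gross problems; the hyperplane defects described in the first answer need d-dimensional tests on top of this.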
Edit: If you are interested in developing a pseudo-random number generator for use in real applications, I recommend that you read up on the considerable literature on the subject. The "advice" given above is only meant to show that choosing arbitrary "bigger, cool-looking and easier to remember" LCG parameters will give a very poor distribution.
Besides, it's a library function; a program that uses the standard library version of rand() never needs to remember its LCG parameters anyway.
Answer 3:
Early computing tended to concern itself with bits and bytes, playing tricks with the registers to minimize the bytes of code (before lines of code were counted, bytes were).
I have only found one reasonable clue below:
The output of this generator isn’t very random. If we use the sample generator listed above, then the sequence of 16 key bytes will be highly non-random. For instance, it turns out that the low bit of each successive output of rand() will alternate (e.g., 0,1,0,1,0,1, ...). Do you see why? The low bit of x * 1103515245 is the same as the low bit of x, and then adding 12345 just flips the low bit. Thus the low bit alternates. This narrows down the set of possible keys to only 2^113 possibilities; much less than the desired value of 2^128.
http://inst.eecs.berkeley.edu/~cs161/fa08/Notes/random.pdf
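The alternation is easy to see if you use the low-bits variant the note is talking about (returning next % 32768 instead of the high-order bits). A small sketch of mine:

#include <stdio.h>

static unsigned long state = 1;

/* naive variant that returns the LOW 15 bits of the state */
static int low_bits_rand(void)
{
    state = state * 1103515245 + 12345;
    return (int)(state % 32768);
}

int main(void)
{
    int i;
    /* multiplying by an odd number preserves the low bit of the state,
       and adding the odd constant 12345 then flips it, so it alternates */
    for (i = 0; i < 16; i++)
        printf("%d", low_bits_rand() & 1);
    printf("\n");   /* prints 0101010101010101 */
    return 0;
}

The version quoted in the question throws these weakest low bits away by dividing by 65536 first.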
And two reasonable answers:
Improving a poor random number generator (1976)
by Carter Bays and S. D. Durham
http://en.wikipedia.org/wiki/TRNG
Answer 4:
That number seems special; it sits right between two primes :P.
Now talking seriously, to see if it's a good choice, just look at the output. You should see very different results even if you flip a single bit.
Also, consider how much predictability you expect... that implementation is terrible; you might consider a more robust yet simple alternative, like FNV-1a.
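If you want to play with that suggestion, here is a minimal sketch of mine using the published 32-bit FNV-1a constants; it only illustrates the "flip a single bit, get a very different result" point and is not a drop-in replacement for rand():

#include <stdint.h>
#include <stdio.h>

/* 32-bit FNV-1a over an arbitrary buffer */
static uint32_t fnv1a(const void *buf, size_t len)
{
    const unsigned char *p = buf;
    uint32_t h = 2166136261u;          /* FNV offset basis */
    size_t i;
    for (i = 0; i < len; i++) {
        h ^= p[i];
        h *= 16777619u;                /* FNV prime */
    }
    return h;
}

int main(void)
{
    uint32_t seed = 12345u;
    uint32_t flipped = seed ^ 1u;      /* differs from seed in a single bit */
    printf("fnv1a(seed)    = %08x\n", (unsigned)fnv1a(&seed, sizeof seed));
    printf("fnv1a(flipped) = %08x\n", (unsigned)fnv1a(&flipped, sizeof flipped));
    return 0;
}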