I am playing around with PRNGs (such as the Mersenne Twister and stdlib's rand()) and I would like a good test to help me ascertain the quality of the random data they produce.
I have estimated the value of Pi using random numbers from each PRNG, and rand() and the Mersenne Twister come out too close to tell apart (would I need to scrutinize digits beyond the 10th decimal place?).
I do not know much about Monte Carlo simulations; please point me to an algorithm or application (ideally something simple that still supports solid inferences) that would help me distinguish the two in terms of quality.
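For concreteness, here is a minimal sketch of the Pi estimate described above (assumptions: std::mt19937 stands in for the reference genrand_real1(), rand()/RAND_MAX supplies the other uniform draws, and the seed and sample count are arbitrary):

    // Estimate pi by sampling points in the unit square and counting
    // how many land inside the quarter circle of radius 1.
    #include <cstdio>
    #include <cstdlib>
    #include <random>

    template <typename NextUnit>
    double estimate_pi(long n, NextUnit next_unit) {
        long inside = 0;
        for (long i = 0; i < n; ++i) {
            double x = next_unit(), y = next_unit();
            if (x * x + y * y <= 1.0) ++inside;  // inside the quarter circle
        }
        return 4.0 * inside / n;  // 4 times the area ratio approximates pi
    }

    int main() {
        const long n = 10000000;  // arbitrary sample count
        std::mt19937 mt(12345);   // arbitrary seed
        std::uniform_real_distribution<double> u01(0.0, 1.0);
        double pi_mt   = estimate_pi(n, [&] { return u01(mt); });
        double pi_rand = estimate_pi(n, [] { return std::rand() / (double)RAND_MAX; });
        std::printf("mt19937: %.6f   rand(): %.6f\n", pi_mt, pi_rand);
    }

Note that the standard error of this estimator is roughly 1.64/sqrt(n), so each additional correct decimal digit costs about 100 times more samples; agreement to 10 decimal places would take on the order of 10^20 points. That is why the two generators look identical here, and why this estimate is a weak discriminator of quality.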
EDIT 1: I didn't notice before, but there is a similar thread: How to test random numbers?
EDIT 2: I am not able to interpret the results of the NIST suite, as mentioned in one of the comments. I got the idea of visually inspecting the output for patterns (if any) from random.org and am following it because of its simplicity. I would be very glad if someone could comment on my testing process:
- Generate N random numbers in [0,1] using both rand() and MT19937
- For each pair of draws, if round(genrand_real1() / rand_0_1()) is nonzero, plot a red pixel, else a black one
I understand that this is not a very precise approach, but if it provides a reasonable estimate, I can live with it for the moment; a concrete version is sketched below.
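Here is a sketch of that procedure, taken as literally as I read it, writing the pixels to an ASCII PPM image (assumptions: rand_0_1() is rand()/RAND_MAX, std::mt19937 with a uniform distribution stands in for genrand_real1(), and the image size and seed are arbitrary):

    // Plot the round(genrand_real1() / rand_0_1()) rule described above
    // as a 512x512 red/black image in ASCII PPM format.
    #include <cmath>
    #include <cstdio>
    #include <cstdlib>
    #include <random>

    int main() {
        const int w = 512, h = 512;
        std::mt19937 mt(42);  // arbitrary seed
        std::uniform_real_distribution<double> u01(0.0, 1.0);
        std::FILE* f = std::fopen("pattern.ppm", "w");
        if (!f) return 1;
        std::fprintf(f, "P3\n%d %d\n255\n", w, h);
        for (int i = 0; i < w * h; ++i) {
            double a = u01(mt);                          // "genrand_real1()"
            double b = std::rand() / (double)RAND_MAX;   // "rand_0_1()"
            // Guard b == 0 to avoid dividing by zero; treat that case as red.
            bool red = (b == 0.0) || std::round(a / b) != 0.0;
            std::fprintf(f, red ? "255 0 0\n" : "0 0 0\n");
        }
        std::fclose(f);
    }

One caveat: because the rule divides one stream by the other, a defect in either generator shows up only indirectly. The more common form of this visual test, and the form of the bitmaps on random.org, plots each generator separately with one random bit per pixel, so each image reflects a single PRNG.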
You're best off looking into Volume 2 of Knuth's The Art of Computer Programming (Seminumerical Algorithms).
For a shorter read, look up the corresponding chapter of Numerical Recipes.
If you're only interested in a baseline for a Monte Carlo simulation: linear congruential generators are best avoided, and the Mersenne Twister is good enough in the vast majority of cases.
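As a taste of what those chapters cover, here is a minimal sketch of the classic chi-square frequency (equidistribution) test, one of the simplest tests Knuth describes (the bin count, sample count, and seed are arbitrary choices):

    // Chi-square frequency test: bucket n uniform draws into k equal bins
    // and compare the observed counts with the expected n/k per bin.
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        const int k = 100;        // number of bins
        const long n = 1000000;   // number of draws
        std::mt19937 mt(1);
        std::uniform_real_distribution<double> u01(0.0, 1.0);  // [0,1)
        std::vector<long> count(k, 0);
        for (long i = 0; i < n; ++i)
            ++count[(int)(u01(mt) * k)];  // safe: u01 never returns 1.0
        const double expected = (double)n / k;
        double chi2 = 0.0;
        for (long c : count) {
            double d = c - expected;
            chi2 += d * d / expected;
        }
        // For a good generator, chi2 follows a chi-square distribution with
        // k - 1 = 99 degrees of freedom: mean 99, standard deviation about
        // sqrt(2 * 99) ~ 14. Values far outside that range are suspicious.
        std::printf("chi2 = %.2f (expect ~%d)\n", chi2, k - 1);
    }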
There are two standard test suites for testing random numbers.
There is an R interface to the Dieharder library, called RDieHarder; it provides access to both the NIST and Diehard test suites.
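These suites can also be fed the raw output of an arbitrary generator. A minimal sketch of the usual approach: write raw 32-bit words to standard output and pipe them in (if I recall dieharder's flags correctly, the invocation is ./dump | dieharder -a -g 200, where -g 200 selects the raw-stdin input and -a runs every test):

    // Dump an endless stream of raw 32-bit words to stdout, for piping
    // into a statistical test suite.
    #include <cstdint>
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 mt(2023);  // swap in whatever generator is under test
        // Note: on Windows, stdout would need to be switched to binary mode.
        for (;;) {
            std::uint32_t word = (std::uint32_t)mt();
            std::fwrite(&word, sizeof word, 1, stdout);
        }
    }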
There are several statistical test suites available. I wrote, copied, and otherwise gathered together 120 PRNGs, and tested each with a variety of test suites, giving 4 hours per PRNG per test suite:
How many of those detections were in PRNGs that every other test suite missed?
No other test suite found any unique biases.
In short, only PractRand, TestU01, and possibly RaBiGeTe are worth using.
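(For anyone who wants to reproduce this kind of run: if I read its usage correctly, PractRand's bundled RNG_test tool reads raw data from standard input, e.g. ./dump | ./RNG_test stdin32 to treat the stream as 32-bit words, so a small dumper like the one sketched in the earlier answer is all the glue required.)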
Full disclosure: I wrote PractRand, so the choice of PRNGs tested, or other non-quantitative factors, could be biased in its favor.
The set of PRNGs tested:

- 1 large GFSR, 1 large LFSR, 4 xorshift-type PRNGs, 2 xorwow-type PRNGs, and 3 other not-quite-LFSR PRNGs.
- 10 simple power-of-2 modulus LCGs (which discard low bits to reach acceptable quality levels; see the sketch after this list), 10 power-of-2 modulus not-quite-LCGs, and 9 combination generators based primarily around LCGs and not-quite-LCGs.
- 19 reduced-strength versions of CSPRNGs, plus one full-strength CSPRNG. Of those, 14 were based upon indirection / dynamic s-boxes (e.g. RC4, ISAAC), 4 were ChaCha/Salsa parameterizations, and the remaining 2 were Trivium variants.
- 11 PRNGs broadly classifiable as LFib-type or similar, not counting the LFSRs/GFSRs.
- The rest (about 35) were small-state chaotic PRNGs, of which 10 used multiplication and the others were limited to arithmetic and bitwise logic.
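For illustration, here is a minimal sketch of the "power-of-2 modulus LCG that discards low bits" construction mentioned in the list (the MMIX constants from Knuth are one standard parameter choice, not necessarily the ones tested here):

    // Truncated LCG: 64-bit state with modulus 2^64 (via unsigned
    // wraparound); only the high 32 bits are returned, because the low
    // bits of a power-of-2 modulus LCG have very short periods.
    #include <cstdint>
    #include <cstdio>

    struct TruncLcg64 {
        std::uint64_t state;
        std::uint32_t next() {
            state = state * 6364136223846793005ULL + 1442695040888963407ULL;
            return (std::uint32_t)(state >> 32);  // discard the weak low half
        }
    };

    int main() {
        TruncLcg64 g{12345};
        for (int i = 0; i < 4; ++i)
            std::printf("%08x\n", g.next());
    }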
Edit: There is also the test set in gjrand, which is very obscure and a little odd, but actually does extremely well.
Also, all of the PRNGs tested are included as non-recommended PRNGs in PractRand.