I need a reasonable supply of high-quality random data for an application I'm writing. Linux provides the /dev/random device for this purpose, which is ideal; however, because my server is a single-service virtual machine, it has very limited sources of entropy, meaning /dev/random quickly becomes exhausted.
I've noticed that if I read from /dev/random, I will only get 16 or so random bytes before the device blocks while it waits for more entropy:
[duke@poopz ~]# hexdump /dev/random
0000000 f4d3 8e1e 447a e0e3 d937 a595 1df9 d6c5
<process blocks...>
If I terminate this process, go away for an hour and repeat the command, again only 16 or so bytes of random data are produced.
However, if I instead leave the command running for the same amount of time, much, much more random data is collected. I assume from this that over a given time period the system produces plenty of entropy, but Linux only utilises it if you are actually reading from /dev/random, and discards it if you are not. If this is the case, my question is:
Is it possible to configure Linux to buffer /dev/random so that reading from it yields much larger bursts of high-quality random data?
It wouldn't be difficult for me to buffer /dev/random as part of my program, but I feel doing this at a system level would be more elegant. I also wonder whether having Linux buffer its random data in memory would have security implications.
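To illustrate the kind of in-program buffering I mean, a rough sketch (the pool size and names here are arbitrary) would be to poll /dev/random with a non-blocking read and stash whatever comes back:

/* Sketch of application-level buffering of /dev/random:
 * a non-blocking read drains whatever entropy is currently available
 * into an in-memory pool, which the application then consumes.
 * POOL_SIZE and the function names are placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE 4096

static unsigned char pool[POOL_SIZE];
static size_t pool_len;                 /* bytes currently buffered */

/* Called periodically: append whatever /dev/random has right now. */
static void top_up_pool(void)
{
    int fd = open("/dev/random", O_RDONLY | O_NONBLOCK);
    if (fd < 0)
        return;

    while (pool_len < POOL_SIZE) {
        ssize_t n = read(fd, pool + pool_len, POOL_SIZE - pool_len);
        if (n <= 0)                     /* EAGAIN: no entropy available yet */
            break;
        pool_len += (size_t)n;
    }
    close(fd);
}

int main(void)
{
    /* Poll every few seconds so available entropy is captured rather than missed. */
    for (int i = 0; i < 5; i++) {
        top_up_pool();
        printf("buffered %zu bytes so far\n", pool_len);
        sleep(5);
    }
    return 0;
}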
Have you got, or can you buy, a Linux-compatible hardware random number generator? That could be a solution to your underlying problem. See http://www.linuxcertified.com/hw_random.html
Use /dev/urandom. It is seeded from the same kernel entropy pool as /dev/random, but it does not block when the entropy estimate runs low.
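For example, something like the following sketch (the read_urandom helper name is just illustrative) returns immediately rather than blocking:

/* Minimal sketch: fill a buffer with random bytes from /dev/urandom.
 * Unlike /dev/random, reads here do not block waiting for entropy. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Fill buf with len random bytes; returns 0 on success, -1 on error. */
static int read_urandom(unsigned char *buf, size_t len)
{
    int fd = open("/dev/urandom", O_RDONLY);
    if (fd < 0)
        return -1;

    size_t total = 0;
    while (total < len) {
        ssize_t n = read(fd, buf + total, len - total);
        if (n <= 0) {                   /* error or unexpected EOF */
            close(fd);
            return -1;
        }
        total += (size_t)n;
    }
    close(fd);
    return 0;
}

int main(void)
{
    unsigned char key[64];
    if (read_urandom(key, sizeof key) != 0) {
        perror("read_urandom");
        return 1;
    }
    printf("got %zu random bytes without blocking\n", sizeof key);
    return 0;
}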
Sounds like you need an entropy daemon that feeds the entropy pool from other sources.
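To give a rough idea of what such a daemon does under the hood, here is a minimal sketch in C: it credits bytes to the kernel pool via the RNDADDENTROPY ioctl on /dev/random. The sample bytes below are placeholders (a real daemon such as rngd or haveged takes them from a hardware RNG or other genuine source), and the program must run as root:

/* Sketch of crediting entropy to the kernel pool via RNDADDENTROPY.
 * The sample bytes are placeholders standing in for data from a real
 * external entropy source. Requires root (CAP_SYS_ADMIN). */
#include <fcntl.h>
#include <linux/random.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    unsigned char sample[32];
    /* Placeholder: pretend these bytes came from an external entropy source. */
    memset(sample, 0xAB, sizeof sample);

    /* rand_pool_info ends in a flexible array, so allocate room for the data. */
    struct rand_pool_info *info = malloc(sizeof(*info) + sizeof(sample));
    if (!info) { perror("malloc"); return 1; }

    info->entropy_count = sizeof(sample) * 8;   /* bits of entropy being credited */
    info->buf_size      = sizeof(sample);       /* size of the data in bytes */
    memcpy(info->buf, sample, sizeof(sample));

    int fd = open("/dev/random", O_WRONLY);
    if (fd < 0) { perror("open /dev/random"); return 1; }

    if (ioctl(fd, RNDADDENTROPY, info) < 0) {
        perror("ioctl RNDADDENTROPY");
        return 1;
    }

    printf("credited %zu bytes of entropy to the pool\n", sizeof(sample));
    close(fd);
    free(info);
    return 0;
}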