My program needs to generate many random integers in some range (int min, int max). Each call will have a different range. What is a good (preferably thread-safe) way to do this? The following is not thread-safe (and uses rand(), which people seem to discourage):
int intRand(const int & min, const int & max)
{
    return (rand() % (max + 1 - min)) + min;
}
This is much slower, but uses <random>:
int intRand(const int & min, const int & max) {
    // a new engine is constructed (and default-seeded) on every call
    std::default_random_engine generator;
    std::uniform_int_distribution<int> distribution(min, max);
    return distribution(generator);
}
Something like this is what I'm going for (the changeParameters function doesn't exist though):
int intRand(const int & min, const int & max) {
    static std::default_random_engine generator;
    static std::uniform_int_distribution<int> distribution(0, 10);
    distribution.changeParameters(min, max);
    return distribution(generator);
}
Another option would be to make a wide range on the uniform_int_distribution and then use mod like in the first example. However, I'm doing statistical work, so I want the numbers to come from as unbiased a distribution as possible (e.g., if the range of the wide distribution is not a multiple of (max - min + 1), the result will be slightly biased). This is an option, but again, I would like to avoid it.
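To make that bias concrete, here is a small illustrative sketch (the numbers are made up for illustration): a source of 10 equally likely values reduced mod 3 produces counts of 4, 3, 3, so the low values are slightly over-represented.

#include <cstdio>

int main() {
    // Illustration only: a source with 10 equally likely values (0..9),
    // reduced mod 3 into the range 0..2. Since 10 is not a multiple of 3,
    // the counts come out 4, 3, 3: value 0 is slightly over-represented.
    int counts[3] = {0, 0, 0};
    for (int source = 0; source < 10; ++source)
        ++counts[source % 3];
    std::printf("%d %d %d\n", counts[0], counts[1], counts[2]);  // prints "4 3 3"
    return 0;
}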
SOLUTION This solution comes from the answers by @konrad-rudolph, @mark-ransom, and @mathk. The seeding of the random number generator is done to suit my particular needs. A more common approach would be to seed with time(NULL), but threads created within the same second would then get the same seed. Even with clock() this is an issue, so the thread id is mixed in as well. A drawback: this leaks memory, one generator per thread.
#if defined (_MSC_VER)  // Visual Studio
    #define thread_local __declspec( thread )
#elif defined (__GNUC__)  // GCC
    #define thread_local __thread
#endif

#include <random>
#include <time.h>
#include <thread>

using namespace std;

/* Thread-safe function that returns a random number between min and max (inclusive).
This function takes ~142% the time that calling rand() would take. For this extra
cost you get a better uniform distribution and thread-safety. */
int intRand(const int & min, const int & max) {
    static thread_local mt19937* generator = nullptr;
    if (!generator)
        generator = new mt19937(clock() + hash<thread::id>()(this_thread::get_id()));
    uniform_int_distribution<int> distribution(min, max);
    return distribution(*generator);
}
Have you tried this?
int intRand(const int & min, const int & max) {
    static thread_local std::mt19937 generator;
    std::uniform_int_distribution<int> distribution(min, max);
    return distribution(generator);
}
Distributions are extremely cheap (they will be completely inlined by the optimiser so that the only remaining overhead is the actual random number rescaling). Don’t be afraid to regenerate them as often as you need – in fact, resetting them would conceptually be no cheaper (which is why that operation doesn’t exist).
The actual random number generator, on the other hand, is a heavy-weight object carrying a lot of state and requiring quite some time to be constructed, so that should only be initialised once per thread (or even across threads, but then you’d need to synchronise access which is more costly in the long run).
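For reference, a hypothetical usage sketch of the intRand above being called from several threads (the thread count and ranges are made-up values). No locking is needed because each thread owns its generator; note that as written above the generator is default-seeded, so per-thread seeding, discussed in the other answers, may still be wanted.

#include <cstdio>
#include <thread>
#include <vector>

int intRand(const int & min, const int & max);  // the function defined above

int main() {
    // Hypothetical usage: four worker threads each draw a few values.
    // Each thread lazily gets its own thread_local generator, so the
    // calls do not need any synchronisation.
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t) {
        workers.emplace_back([] {
            for (int i = 0; i < 5; ++i)
                std::printf("%d\n", intRand(1, 6));  // e.g. dice rolls
        });
    }
    for (auto& w : workers)
        w.join();
    return 0;
}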
Make the generator static, so it's only created once. This is more efficient, since good generators typically have a large internal state; more importantly, it means you are actually getting the pseudo-random sequence it generates, not the (much less random) initial values of separate sequences.
Create a new distribution each time; these are typically lightweight objects with little state, especially one as simple as uniform_int_distribution.
For thread safety, options are to make the generator thread_local, with a different seed for each thread, or to guard it with a mutex. The former is likely to be faster, especially if there's a lot of contention, but will consume more memory.
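A minimal sketch of the mutex-guarded alternative (the function and variable names are illustrative, not from this answer):

#include <mutex>
#include <random>

// Sketch of the mutex-guarded option: one shared generator, serialised
// with a lock. Simpler than per-thread state, but every call contends
// on the same mutex.
int intRandLocked(int min, int max) {
    static std::mt19937 generator{std::random_device{}()};
    static std::mutex generator_mutex;
    std::uniform_int_distribution<int> distribution(min, max);
    std::lock_guard<std::mutex> lock(generator_mutex);
    return distribution(generator);
}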
You can use one default_random_engine per thread using Thread Local Storage. I cannot tell you how to use TLS correctly here since it is OS dependent; your best resource is to search the internet for your platform's documentation.
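For what it's worth, C++11's thread_local keyword gives portable TLS. A minimal sketch of this answer's idea follows; seeding from std::random_device is an added assumption, not something the answer specifies.

#include <random>

// Sketch: one engine per thread via C++11 thread_local storage.
int intRandPerThread(int min, int max) {
    thread_local std::default_random_engine engine{std::random_device{}()};
    std::uniform_int_distribution<int> distribution(min, max);
    return distribution(engine);
}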
I am a person from the future with the same problem. The accepted answer won't compile on MSVC 2013, because it doesn't implement thread_local (and using __declspec(thread) doesn't work because it doesn't like constructors).
The memory leak in your solution can be moved off the heap by modifying everything to use placement new.
Here's my solution (combined from a header and source file):
#include <new>
#include <random>

#ifndef BUILD_COMPILER_MSVC
thread_local std::mt19937 _generator;
#else
// MSVC 2013's __declspec(thread) can't run constructors, so reserve raw
// per-thread storage and construct the engine into it with placement new.
__declspec(thread) char _generator_backing[sizeof(std::mt19937)];
__declspec(thread) std::mt19937* _generator;
#endif

template <typename type_float> inline type_float get_uniform(void) {
    std::uniform_real_distribution<type_float> distribution;
#ifdef BUILD_COMPILER_MSVC
    static __declspec(thread) bool inited = false;
    if (!inited) {
        _generator = new(_generator_backing) std::mt19937();
        inited = true;
    }
    return distribution(*_generator);
#else
    return distribution(_generator);
#endif
}
Write a simple LCG (or whatever) PRNG for yourself, which will produce numbers up to the maximum possible required. Use a single static copy of the built-in RNG to seed a new local copy of your own PRNG for each new thread you generate. Each thread-local PRNG will have its own local storage, and never needs to refer to the central RNG again.
This assumes that a statistically good RNG is fine for you and that cryptographic security is not an issue.
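A minimal sketch of that scheme, assuming a 64-bit LCG with Knuth's MMIX constants; the central seeding generator, the mutex, and all names are illustrative, not part of this answer:

#include <cstdint>
#include <mutex>
#include <random>

// Central generator, used only to seed each thread's private LCG.
static std::mt19937_64 seed_source{std::random_device{}()};
static std::mutex seed_mutex;

// Per-thread LCG (Knuth's MMIX constants). Statistically adequate for
// many simulations, but not cryptographically secure.
struct Lcg {
    std::uint64_t state;
    std::uint64_t next() {
        state = state * 6364136223846793005ULL + 1442695040888963407ULL;
        return state;
    }
};

int intRandLcg(int min, int max) {
    // Seed this thread's LCG once from the central generator, under a lock;
    // after that the thread never touches the shared state again.
    thread_local Lcg lcg = [] {
        std::lock_guard<std::mutex> lock(seed_mutex);
        return Lcg{seed_source()};
    }();
    const std::uint64_t span = static_cast<std::uint64_t>(max) - min + 1;
    // Note: plain modulo reduction, so this keeps the slight bias the
    // question wanted to avoid; shown only to illustrate the structure.
    return min + static_cast<int>(lcg.next() % span);
}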