I have a loop which should be nicely parallelizable by inserting one OpenMP pragma:
boost::normal_distribution<double> ddist(0, pow(retention, i - 1));
boost::variate_generator<gen &, BOOST_TYPEOF(ddist)> dgen(rng, ddist);
// Diamond
const std::uint_fast32_t dno = 1 << (i - 1);
// #pragma omp parallel for
for (std::uint_fast32_t x = 0; x < dno; x++)
for (std::uint_fast32_t y = 0; y < dno; y++)
{
const std::uint_fast32_t diff = size/dno;
const std::uint_fast32_t x1 = x*diff, x2 = (x + 1)*diff;
const std::uint_fast32_t y1 = y*diff, y2 = (y + 1)*diff;
double avg =
(arr[x1][y1] + arr[x1][y2] + arr[x2][y1] + arr[x2][y2])/4;
arr[(x1 + x2)/2][(y1 + y2)/2] = avg + dgen();
}
(Unless I have made a mistake, the iterations do not depend on each other at all. Sorry that not all of the code is included.)
My question, however, is: are the Boost RNGs thread-safe? The documentation seems to defer to the underlying gcc code when compiled with gcc, so even if that code happens to be thread-safe, the same may not hold on other platforms.
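
If they are not thread-safe, I suppose the workaround would be to give every thread its own engine. Below is a rough sketch of what I have in mind (assuming boost::mt19937 as the engine and a vector-of-vectors for arr; the function name diamond_step and the naive seed-plus-thread-number seeding are just placeholders I made up for illustration):

#include <boost/random/mersenne_twister.hpp>
#include <boost/random/normal_distribution.hpp>
#include <boost/random/variate_generator.hpp>
#include <omp.h>
#include <cmath>
#include <cstdint>
#include <vector>

void diamond_step(std::vector<std::vector<double> > &arr,
                  std::uint_fast32_t size, unsigned i, double retention)
{
    // One engine per thread, each with its own seed, so no RNG state is shared.
    std::vector<boost::mt19937> engines;
    for (int t = 0; t < omp_get_max_threads(); ++t)
        engines.push_back(boost::mt19937(12345u + t));

    const std::uint_fast32_t dno = 1u << (i - 1);
    const std::uint_fast32_t diff = size / dno;

    #pragma omp parallel
    {
        // Each thread wraps its own engine in its own distribution/generator,
        // so dgen() never touches another thread's generator state.
        boost::normal_distribution<double> ddist(0, std::pow(retention, i - 1.0));
        boost::variate_generator<boost::mt19937 &, boost::normal_distribution<double> >
            dgen(engines[omp_get_thread_num()], ddist);

        // Diamond step, same body as above, but now safe to split across threads.
        #pragma omp for
        for (std::uint_fast32_t x = 0; x < dno; x++)
            for (std::uint_fast32_t y = 0; y < dno; y++)
            {
                const std::uint_fast32_t x1 = x*diff, x2 = (x + 1)*diff;
                const std::uint_fast32_t y1 = y*diff, y2 = (y + 1)*diff;
                const double avg =
                    (arr[x1][y1] + arr[x1][y2] + arr[x2][y1] + arr[x2][y2])/4;
                arr[(x1 + x2)/2][(y1 + y2)/2] = avg + dgen();
            }
    }
}

The point of the sketch is only that no generator state is shared between threads; the output will of course differ from the serial version, and seeding each engine with seed + thread number is probably too naive to guarantee independent streams. Is something like this necessary, or are the Boost generators already safe to call concurrently?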