Generate N random numbers within a range with a constant sum

Posted 2019-01-11 20:10

Question:

I want to generate N random numbers drawn from a specific distribution (e.g. uniform) between [a,b] which sum to a constant C. I have tried a couple of solutions I could think of myself, and some proposed on similar threads, but most of them either work only for a limited form of the problem or I can't prove that the outcome still follows the desired distribution.

What I have tried: Generage N random numbers, divide all of them by the sum of them and multiply by the desired constant. This seems to work but the result does not follow the rule that the numbers should be within [a:b].

Generate N-1 random numbers, add 0 and the desired constant C, and sort them. Then calculate the difference between each two consecutive numbers; the differences are the result. This again sums to C but has the same problem as the last method (the range can be bigger than [a,b]).
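And a sketch of this second attempt (again, the name is mine):

#include <algorithm>
#include <random>
#include <vector>

// Sketch of attempt 2: sort n-1 uniform cut points together with 0 and c,
// then return consecutive differences. They sum to c, but a single
// difference can exceed b or fall below a.
std::vector<double> sorted_differences(int n, double c, std::mt19937 &rng) {
    std::uniform_real_distribution<double> unif(0.0, c);
    std::vector<double> cuts = {0.0, c};
    for (int i = 0; i < n - 1; i++) cuts.push_back(unif(rng));
    std::sort(cuts.begin(), cuts.end());
    std::vector<double> diffs(n);
    for (int i = 0; i < n; i++) diffs[i] = cuts[i + 1] - cuts[i];
    return diffs;
}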

I also tried generating random numbers while always keeping track of the min and max in a way that preserves the desired sum and range, and came up with this code:

#include <functional>
#include <vector>

bool generate(std::function<int(int,int)> randomGenerator, int min, int max,
              int len, int sum, std::vector<int> &output) {
    // Not possible to produce such a sequence
    if (min * len > sum)
        return false;
    if (max * len < sum)
        return false;

    int curSum = 0;
    int left = sum - curSum;                 // amount still to distribute
    int leftIndexes = len - 1;               // numbers left to draw after the current one
    int curMax = left - leftIndexes * min;   // largest value keeping the rest feasible
    int curMin = left - leftIndexes * max;   // smallest value keeping the rest feasible

    for (int i = 0; i < len; i++) {
        int num = randomGenerator((curMin < min) ? min : curMin,
                                  (curMax > max) ? max : curMax);
        output.push_back(num);
        curSum += num;
        left = sum - curSum;
        leftIndexes--;
        curMax = left - leftIndexes * min;
        curMin = left - leftIndexes * max;
    }

    return true;
}

This seems to work, but the results are sometimes very skewed and I don't think they follow the original distribution (e.g. uniform). E.g.:

//10 numbers within [1:10] which sum to 50:
generate(uniform,1,10,10,50,output);
//result:
2,7,2,5,2,10,5,8,4,5 => sum=50
//This looks reasonable for uniform, but let's change to 
//10 numbers within [1:25] which sum to 50:
generate(uniform,1,25,10,50,output);
//result:
24,12,6,2,1,1,1,1,1,1 => sum= 50

Notice how many ones exist in the output. This might sound reasonable because the range is larger, but it really doesn't look like a uniform distribution. I am not even sure it is possible to achieve what I want; maybe the constraints make the problem unsolvable.

Answer 1:

If you want the sample to follow a uniform distribution, the problem reduces to generating N random numbers with sum = 1. This, in turn, is a special case of the Dirichlet distribution, but it can also be computed more easily using the exponential distribution. Here is how:

  1. Take a uniform sample v1 … vN with all vi between 0 and 1.
  2. For all i, 1<=i<=N, define ui := -ln vi (notice that ui > 0).
  3. Normalize the ui as pi := ui/s where s is the sum u1+...+uN.

The p1..pN are uniformly distributed (in the simplex of dim N-1) and their sum is 1.

You can now multiply these pi by the constant C you want, and translate them by adding some other constant A, like this:

qi := A + pi*C.

EDIT 3

In order to address some issues raised in the comments, let me add the following:

  • To ensure that the final random sequence falls in the interval [a,b] choose the constants A and C above as A := a and C := b-a, i.e., take qi = a + pi*(b-a). Since pi is in the range (0,1) all qi will be in the range [a,b].
  • One cannot take the (negative) logarithm -ln(vi) if vi happens to be 0, because ln() is not defined at 0. The probability of such an event is extremely low. However, to ensure that no error is signaled, the generation of v1 ... vN in item 1 above must treat any occurrence of 0 in a special way: consider -ln(0) as +infinity (remember: ln(x) -> -infinity when x -> 0). Thus the sum s = +infinity, which means that pi = 1 and all other pj = 0. Without this convention the sequence (0...1...0) would never be generated (many thanks to @Severin Pappadeux for this interesting remark).
  • As explained in the 4th comment attached to the question by @Neil Slater it is logically impossible to fulfill all the requirements of the original framing. Therefore any solution must relax the constraints to a proper subset of the original ones. Other comments by @Behrooz seem to confirm that this would suffice in this case.
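Putting steps 1-3 and the first bullet of EDIT 3 together, a minimal C++ sketch might look like this (the function name and signature are mine, not from the answer; the -ln(0) corner case is sidestepped by redrawing, which departs slightly from the convention in the second bullet):

#include <cmath>
#include <random>
#include <vector>

// Illustrative sketch, not the answer's own code: draw v_i in (0,1),
// take u_i = -ln(v_i), normalize to p_i = u_i/s, and map to
// q_i = a + p_i*(b-a) so every q_i lands in [a,b].
std::vector<double> uniform_simplex_sample(int n, double a, double b,
                                           std::mt19937 &rng) {
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    std::vector<double> u(n);
    double s = 0.0;
    for (int i = 0; i < n; i++) {
        double v;
        do { v = unif(rng); } while (v == 0.0);  // step 1: v_i in (0,1)
        u[i] = -std::log(v);                     // step 2: u_i = -ln(v_i) > 0
        s += u[i];
    }
    for (int i = 0; i < n; i++)
        u[i] = a + (u[i] / s) * (b - a);         // step 3 + EDIT 3
    return u;
}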

EDIT 2

One more issue has been raised in the comments:

Why does rescaling a uniform sample not suffice?

In other words, why should I bother to take negative logarithms?

The reason is that if we just rescale, the resulting sample won't be uniformly distributed across the segment (0,1) (or [a,b] for the final sample).

To visualize this, let's think in 2D, i.e., let's consider the case N=2. A uniform sample (v1,v2) corresponds to a random point in the square with origin (0,0) and corner (1,1). Now, when we normalize such a point by dividing it by the sum s=v1+v2, what we are doing is projecting the point onto the diagonal, as shown in the picture (keep in mind that the diagonal is the line x + y = 1).

But given that the green lines, which are closer to the principal diagonal from (0,0) to (1,1), are longer than the orange ones, which are closer to the axes x and y, the projections tend to accumulate more around the center of the projection line (in blue), where the scaled sample lives. This shows that simple scaling won't produce a uniform sample on the depicted diagonal. On the other hand, it can be proven mathematically that the negative logarithms do produce the desired uniformity. So, instead of copy-pasting a mathematical proof, I would invite everyone to implement both algorithms and check that the resulting plots behave as this answer describes.

(Note: here is a blog post on this interesting subject with an application to the Oil & Gas industry)
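Taking up that invitation, here is a quick self-contained check for N=2 (this harness is mine, not part of the answer): it bins p1 = v1/(v1+v2) against p1 = u1/(u1+u2) with ui = -ln(vi). The rescaled counts bulge around the middle bins, while the log-based counts come out flat:

#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(12345);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    // Redraw on exact zeros so -log() stays finite
    auto draw = [&] { double v; do { v = unif(rng); } while (v == 0.0); return v; };
    const int bins = 10, samples = 1000000;
    int scaled[bins] = {0}, logged[bins] = {0};
    for (int k = 0; k < samples; k++) {
        double v1 = draw(), v2 = draw();
        scaled[std::min(bins - 1, (int)(bins * v1 / (v1 + v2)))]++;
        double u1 = -std::log(v1), u2 = -std::log(v2);
        logged[std::min(bins - 1, (int)(bins * u1 / (u1 + u2)))]++;
    }
    for (int i = 0; i < bins; i++)
        std::printf("bin %2d: scaled=%7d logged=%7d\n", i, scaled[i], logged[i]);
}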



Answer 2:

Let's try to simplify the problem. By subtracting the lower bound a from every number, we can reduce it to finding N numbers in [0,b-a] such that their sum is C-Na.

Renaming the parameters, we can look for N numbers in [0,m] whose sum is S.

Now the problem is akin to partitioning a segment of length S into N sub-segments, each of length in [0,m].

I think the problem is simply not solvable.

If S=1, N=1000, and m is anything above 0, the only possible repartition is one 1 and 999 zeroes, which is nothing like a random spread.

There is a correlation between N, m and S, and even picking random values will not make it disappear.

For the most uniform repartition, the lengths of the sub-segments will follow a Gaussian curve with a mean value of S/N.

If you tweak your random numbers differently, you will just end up with a different bias, but in the end you will never have both a uniform repartition over [a,b] and a total of C, unless the mean C/N happens to coincide with the midpoint of [a,b], i.e., unless b = 2C/N - a.
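To make this concrete with the question's own numbers: for a=1, b=10, N=10, C=50 the required mean C/N = 5 sits close to the interval midpoint (1+10)/2 = 5.5, so the output can look plausibly uniform; for a=1, b=25 the midpoint jumps to 13 while C/N stays at 5, which is exactly why the generator ends up padding with ones.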



Answer 3:

For my answer I'll assume that we have a uniform distribution.

Since we have a uniform distribution, every tuple summing to C has the same probability of occurring. For example, for a = 2, b = 4, C = 12, N = 5 there are 15 possible tuples. Of them, 10 start with 2, 4 start with 3, and 1 starts with 4. This gives the idea of selecting a random number from 1 to 15 in order to choose the first element: from 1 to 10 we select 2, from 11 to 14 we select 3, and for 15 we select 4. Then we continue recursively.

#include <time.h>
#include <random>
#include <iostream>

std::default_random_engine generator(time(0));
int a = 2, b = 4, n = 5, c = 12, numbers[5];

// Calculate how many n-tuples of values in [a,b] have sum c
int calc_combinations(int n, int c) {
    if (n == 1) return (c >= a) && (c <= b);
    int sum = 0;
    for (int i = a; i <= b; i++) sum += calc_combinations(n - 1, c - i);
    return sum;
}

// Choose a uniformly random array of n elements in [a,b] having sum c
void choose(int n, int c, int *numbers) {
    if (n == 1) { numbers[0] = c; return; }

    int combinations = calc_combinations(n, c);
    std::uniform_int_distribution<int> distribution(0, combinations - 1);
    int s = distribution(generator);
    int sum = 0;
    for (int i = a; i <= b; i++) {
        // Take the first element i whose block of tuple counts contains s
        if ((sum += calc_combinations(n - 1, c - i)) > s) {
            numbers[0] = i;
            choose(n - 1, c - i, numbers + 1);
            return;
        }
    }
}

int main() {
    choose(n, c, numbers);
    for (int i = 0; i < n; i++) std::cout << numbers[i] << '\n';
}

Possible outcome:

2
2
3
2
3

This algorithm won't scale well for large N because of overflows in the calculation of combinations (unless we use a big-integer library), the time needed for this calculation, and the need for arbitrarily large random numbers.
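As an aside (not part of the original answer), the repeated recursive work can at least be reduced by memoizing calc_combinations on the pair (n, c); the overflow and big-random-number issues remain:

#include <map>
#include <utility>

// Hypothetical variant: cache tuple counts keyed on (n, c). Relies on
// the globals a and b from the code above.
std::map<std::pair<int, int>, long long> cache;

long long calc_combinations_memo(int n, int c) {
    if (n == 1) return (c >= a) && (c <= b);
    auto key = std::make_pair(n, c);
    auto it = cache.find(key);
    if (it != cache.end()) return it->second;
    long long sum = 0;
    for (int i = a; i <= b; i++) sum += calc_combinations_memo(n - 1, c - i);
    return cache[key] = sum;
}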



Answer 4:

Well, for n=10000, can't we have a small number in there that is not random?

Maybe generate the sequence until sum > C-max is reached, and then just put in one plain number to top up the sum (see the sketch below).

One value in 10000 is just a very small amount of noise in the system.
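One way to read this suggestion as code (the name and the retry policy are my assumptions, not the poster's):

#include <random>
#include <vector>

// Hypothetical sketch: draw n-1 values uniformly, then close the
// sequence with the exact remainder. If the remainder falls outside
// [a,b], the caller can simply retry.
bool generate_with_closer(int n, int a, int b, int c,
                          std::vector<int> &out, std::mt19937 &rng) {
    std::uniform_int_distribution<int> unif(a, b);
    int sum = 0;
    for (int i = 0; i < n - 1; i++) {
        int x = unif(rng);
        out.push_back(x);
        sum += x;
    }
    int rest = c - sum;                       // the one non-random element
    if (rest < a || rest > b) { out.clear(); return false; }
    out.push_back(rest);
    return true;
}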



Answer 5:

Although this is an old topic, I think I've got an idea. Say we want N random numbers whose sum is C, each random number between a and b. To solve the problem, we create N holes and prepare C balls; each round, we ask each hole "Do you want another ball?". If no, we pass to the next hole; if yes, we put a ball into the hole. Each hole has a cap value of b-a; once a hole reaches the cap value, we always pass to the next hole.
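A minimal sketch of this balls-and-holes loop (my code, not the poster's; it assumes a=0 as in the example below, and balls <= n*cap so the loop terminates):

#include <random>
#include <vector>

// Round-robin over the holes, flipping a fair coin to decide whether the
// current hole accepts a ball; full holes (cap = b-a) always pass.
std::vector<int> balls_into_holes(int n, int cap, int balls, std::mt19937 &rng) {
    std::vector<int> holes(n, 0);
    std::bernoulli_distribution wants_ball(0.5);  // "Do you want another ball?"
    int i = 0;
    while (balls > 0) {
        if (holes[i] < cap && wants_ball(rng)) {
            holes[i]++;                           // put a ball into the hole
            balls--;
        }
        i = (i + 1) % n;                          // pass to the next hole
    }
    return holes;
}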

Example:
3 random numbers between 0 and 2 whose sum is 5.

simulation result:
1st run: -+-
2nd run: ++-
3rd run: ---
4th run: +*+
final:221

-:refuse ball
+:accept ball
*:full pass