Algorithm to select a single, random combination of values

Posted 2018-12-31 18:09

Say I have y distinct values and I want to select x of them at random. What's an efficient algorithm for doing this? I could just call rand() x times (rejecting values already chosen), but the performance would be poor if x and y were large.

Note that combinations are needed here: each value should have the same probability to be selected, but their order in the result is not important. Sure, any algorithm generating a random permutation would qualify, but I wonder if it's possible to do this more efficiently without the random-order requirement.

The question "How do you efficiently generate a list of K non-repeating integers between 0 and an upper bound N" covers this case for permutations.

7 Answers
栀子花@的思念
#2 · 2018-12-31 18:28

Robert Floyd invented a sampling algorithm for just such situations. It's generally superior to shuffling then grabbing the first x elements since it doesn't require O(y) storage. As originally written it assumes values from 1..N, but it's trivial to produce 0..N and/or use non-contiguous values by simply treating the values it produces as subscripts into a vector/array/whatever.

In pseudocode, the algorithm runs like this (stealing from Jon Bentley's Programming Pearls column "A Sample of Brilliance").

initialize set S to empty
for J := N-M + 1 to N do
    T := RandInt(1, J)
    if T is not in S then
        insert T in S
    else
        insert J in S

That last bit (inserting J if T is already in S) is the tricky part. The bottom line is that it assures the correct mathematical probability of inserting J so that it produces unbiased results.
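
For illustration, a minimal C# sketch of the same algorithm (a 0-based variant; the class and method names below are mine, not from Bentley's column):

using System;
using System.Collections.Generic;

static class FloydSample
{
    // Floyd's sampling in 0-based form: returns m distinct values drawn
    // uniformly from 0..n-1; the set S of the pseudocode becomes a HashSet.
    public static HashSet<int> Sample(int n, int m, Random rng)
    {
        var s = new HashSet<int>();
        for (int j = n - m; j < n; j++)
        {
            int t = rng.Next(j + 1);   // T := RandInt(0, J), inclusive
            if (!s.Add(t))             // T was already in S...
                s.Add(j);              // ...so insert J instead
        }
        return s;
    }
}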

It's O(x)¹ in time and O(1) with regard to y, with O(x) storage.

Note that, in accordance with the tag in the question, the algorithm only guarantees equal probability of each element occurring in the result, not of their relative order in it.


¹ O(x²) in the worst case for the hash map involved, which can be neglected since it's a virtually nonexistent pathological case where all the values have the same hash.

初与友歌
#3 · 2018-12-31 18:30

If you really only need to generate combinations - where the order of elements does not matter - you may use combinadics as they are implemented e.g. here by James McCaffrey.

Contrast this with k-permutations, where the order of elements does matter.

In the former, (1,2,3), (1,3,2), (2,1,3), (2,3,1), (3,1,2) and (3,2,1) are all considered the same; in the latter, they are considered distinct, though they contain the same elements.

If you need combinations, you may really only need to generate one random number (albeit it can be a bit large) - that number can be used directly to find the m-th combination. Since this random number represents the index of a particular combination, it should lie in the range [0, C(n,k)). Calculating combinadics can take some time as well.
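
For illustration only (this is not McCaffrey's implementation, just a sketch of the standard unranking step with names of my own choosing): given an index m drawn uniformly from [0, C(n,k)), the corresponding combination can be recovered like this.

using System;
using System.Numerics;

static class Combinadic
{
    static BigInteger Choose(int n, int k)
    {
        if (k < 0 || k > n) return 0;
        BigInteger r = 1;
        for (int i = 1; i <= k; i++)
            r = r * (n - k + i) / i;   // exact at every step
        return r;
    }

    // Unrank: map an index m in [0, C(n,k)) to a k-element subset of
    // {0, ..., n-1}, returned in decreasing order.
    public static int[] Unrank(int n, int k, BigInteger m)
    {
        var result = new int[k];
        int c = n;
        for (int i = k; i >= 1; i--)
        {
            c--;
            while (Choose(c, i) > m)   // find the largest c with C(c, i) <= m
                c--;
            result[k - i] = c;
            m -= Choose(c, i);
        }
        return result;
    }
}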

It might just not be worth the trouble - besides, Jerry's and Federico's answers are certainly simpler than implementing combinadics. However, if you really only need a combination and you are bugged about generating exactly the number of random bits that are needed and none more... ;-)

While it is not clear whether you want combinations or k-permutations, here is C# code for the latter (yes, we could generate only the complement if x > y/2, but then we would be left with a combination that would have to be shuffled to get a real k-permutation):

using System;
using System.Collections.Generic;
using System.Linq;

static class TakeHelper
{
    // Partial Fisher-Yates shuffle: lazily yields 'count' distinct elements
    // of 'source' in random order.
    public static IEnumerable<T> TakeRandom<T>(
        this IEnumerable<T> source, Random rng, int count)
    {
        T[] items = source.ToArray();

        count = count < items.Length ? count : items.Length;

        for (int i = items.Length - 1; count-- > 0; i--)
        {
            int p = rng.Next(i + 1);     // pick from the not-yet-consumed prefix 0..i
            yield return items[p];
            items[p] = items[i];         // move the unconsumed items[i] into the freed slot
        }
    }
}

class Program
{
    static void Main(string[] args)
    {
        Random rnd = new Random(Environment.TickCount);
        int[] numbers = new int[] { 1, 2, 3, 4, 5, 6, 7 };
        foreach (int number in numbers.TakeRandom(rnd, 3))
        {
            Console.WriteLine(number);
        }
    }
}

Here is another, more elaborate implementation that generates k-permutations, which I had lying around and which I believe is, in a way, an improvement over existing algorithms if you only need to iterate over the results. While it also needs to generate x random numbers, it uses only O(min(y/2, x)) memory in the process:

    /// <summary>
    /// Generates unique random numbers
    /// <remarks>
    /// Worst case memory usage is O(min((emax-imin)/2, num))
    /// </remarks>
    /// </summary>
    /// <param name="random">Random source</param>
    /// <param name="imin">Inclusive lower bound</param>
    /// <param name="emax">Exclusive upper bound</param>
    /// <param name="num">Number of integers to generate</param>
    /// <returns>Sequence of unique random numbers</returns>
    public static IEnumerable<int> UniqueRandoms(
        Random random, int imin, int emax, int num)
    {
        int dictsize = num;
        long half = (emax - (long)imin + 1) / 2;
        if (half < dictsize)
            dictsize = (int)half;
        // 'trans' stores only the positions whose value differs from the identity
        // mapping after the virtual swaps performed so far: position -> value there.
        Dictionary<int, int> trans = new Dictionary<int, int>(dictsize);
        for (int i = 0; i < num; i++)
        {
            int current = imin + i;                // position being fixed in this round
            int r = random.Next(current, emax);    // its random swap partner
            int right;
            if (!trans.TryGetValue(r, out right))
            {
                right = r;                         // position r still holds its own value
            }
            int left;
            if (trans.TryGetValue(current, out left))
            {
                trans.Remove(current);             // 'current' is never examined again
            }
            else
            {
                left = current;                    // position 'current' still holds its own value
            }
            if (r > current)
            {
                trans[r] = left;                   // the value displaced from 'current' moves to r
            }
            yield return right;                    // the value that ends up at position 'current'
        }
    }

The general idea is to do a Fisher-Yates shuffle and memorize the transpositions in the permutation. It was not published anywhere nor has it received any peer review whatsoever. I believe it is a curiosity rather than something of practical value. Nonetheless, I am very open to criticism and would generally like to know if you find anything wrong with it - please consider this (and adding a comment) before downvoting.
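
A hypothetical usage, assuming the method above sits in some static class (here called Sampling, a name chosen only for this example):

var rng = new Random();
foreach (int v in Sampling.UniqueRandoms(rng, 0, 100, 5))
    Console.WriteLine(v);   // five distinct values from [0, 100)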

时光乱了年华
#4 · 2018-12-31 18:39

A little suggestion: if x >> y/2, it's probably better to select y - x elements at random and then take the complementary set (see the sketch below).
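
A minimal sketch of that idea (my own code, not the poster's; it draws the y - x excluded indices by rejection sampling, which is cheap precisely when x >> y/2):

using System;
using System.Collections.Generic;

static class ComplementSample
{
    // Pick x of the values by drawing the smaller complement of size y - x
    // and returning everything that was not drawn.
    public static List<T> PickViaComplement<T>(IList<T> values, int x, Random rng)
    {
        int y = values.Count;
        var excluded = new HashSet<int>();
        while (excluded.Count < y - x)
            excluded.Add(rng.Next(y));   // duplicates are simply ignored by the set

        var result = new List<T>(x);
        for (int i = 0; i < y; i++)
            if (!excluded.Contains(i))
                result.Add(values[i]);
        return result;
    }
}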

姐姐魅力值爆表
#5 · 2018-12-31 18:39

If, for example, you have 2^64 distinct values, you can use a symmetric-key algorithm with a 64-bit block (for example, Blowfish) to quickly produce a pseudo-random permutation of all of them.

for(i=0; i<x; i++)
   e[i] = encrypt(key, i)

This is not random in the pure sense, but it can be useful for your purpose. If you want to work with an arbitrary number of distinct values, similar cryptographic techniques can be used, but it's more complex.
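
To sketch the idea: the toy Feistel rounds below merely stand in for a real 64-bit block cipher such as Blowfish and are not cryptographically vetted; the point is that any fixed key defines a fixed permutation of the 2^64 indices, so the first x encrypted indices are x distinct values.

using System;

static class PermutedIndices
{
    // Toy 4-round Feistel network over 64-bit values, standing in for a real
    // 64-bit block cipher. For a fixed key this is a bijection on 0..2^64-1,
    // so distinct inputs always give distinct outputs.
    static ulong Encrypt(ulong key, ulong value)
    {
        uint left = (uint)(value >> 32), right = (uint)value;
        for (int round = 0; round < 4; round++)
        {
            uint f = (right * 2654435761u) ^ (uint)(key >> (round * 16));
            (left, right) = (right, left ^ f);
        }
        return ((ulong)left << 32) | right;
    }

    static void Main()
    {
        ulong key = 0x0123456789ABCDEF;
        for (ulong i = 0; i < 5; i++)          // e[i] = encrypt(key, i)
            Console.WriteLine(Encrypt(key, i));
    }
}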

刘海飞了
#6 · 2018-12-31 18:44

The trick is to use a variation of the shuffle, in other words a partial shuffle.

function random_pick( a, n ) 
{
  N = len(a);
  n = min(n, N);
  picked = array_fill(0, n, 0); backup = array_fill(0, n, 0);
  // partially shuffle the array, and generate unbiased selection simultaneously
  // this is a variation on fisher-yates-knuth shuffle
  for (i=0; i<n; i++) // O(n) times
  { 
    selected = rand( 0, --N ); // unbiased sampling N * N-1 * N-2 * .. * N-n+1
    value = a[ selected ];
    a[ selected ] = a[ N ];
    a[ N ] = value;
    backup[ i ] = selected;
    picked[ i ] = value;
  }
  // restore partially shuffled input array from backup
  // optional step, if needed it can be ignored
  for (i=n-1; i>=0; i--) // O(n) times
  { 
    selected = backup[ i ];
    value = a[ N ];
    a[ N ] = a[ selected ];
    a[ selected ] = value;
    N++;
  }
  return picked;
}

NOTE: the algorithm is strictly O(n) in both time and space, produces unbiased selections (it is a partial unbiased shuffle) and, unlike a plain partial shuffle, is non-destructive on the input array, though the restore step is optional.

adapted from here

Update

Another approach uses only a single call to the PRNG (pseudo-random number generator) in [0,1]: see Ivan Stojmenovic, "On Random and Adaptive Parallel Generation of Combinatorial Objects" (section 3), which has O(N) worst-case complexity.


与风俱净
#7 · 2018-12-31 18:46

Assuming that you want the order to be random too (or don't mind it being random), I would just use a truncated Fisher-Yates shuffle. Start the shuffle algorithm, but stop once you have selected the first x values, instead of "randomly selecting" all y of them.

Fisher-Yates works as follows:

  • Select an element at random, and swap it with the element at the end of the array.
  • Recurse (or more likely iterate) on the remainder of the array, excluding the last element.

Steps after the first do not modify the last element of the array. Steps after the first two don't affect the last two elements. Steps after the first x don't affect the last x elements. So at that point you can stop - the top of the array contains uniformly randomly selected data. The bottom of the array contains somewhat randomized elements, but the permutation you get of them is not uniformly distributed.

Of course this means you've trashed the input array - if that means you'd need to take a copy of it before starting, and x is small compared with y, then copying the whole array is not very efficient. Do note, though, that if all you're going to use it for in the future is further selections, the fact that it's in somewhat-random order doesn't matter; you can just use it again. If you're doing the selection multiple times, therefore, you may be able to make only one copy at the start and amortise the cost.
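
For concreteness, a minimal sketch of the truncated shuffle (essentially the same idea as the TakeRandom extension in an earlier answer, but operating in place on an array you are willing to reorder; the names are illustrative):

using System;

static class TruncatedShuffle
{
    // Partially shuffle 'a' in place; afterwards the last x slots hold a
    // uniformly random sample of size x. Assumes 0 <= x <= a.Length.
    public static void SampleToEnd<T>(T[] a, int x, Random rng)
    {
        for (int i = a.Length - 1; i >= a.Length - x; i--)
        {
            int j = rng.Next(i + 1);        // pick from the not-yet-fixed region 0..i
            (a[j], a[i]) = (a[i], a[j]);    // move the pick into its final slot
        }
    }
}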
