Random sample from a very long iterable, in Python

Posted 2019-01-26 17:18

I have a long python generator that I want to "thin out" by randomly selecting a subset of values. Unfortunately, random.sample() will not work with arbitrary iterables. Apparently, it needs something that supports the len() operation (and perhaps non-sequential access to the sequence, but that's not clear). And I don't want to build an enormous list just so I can thin it out.

As a matter of fact, it is possible to sample from a sequence uniformly in one pass, without knowing its length: there's a nice algorithm in Programming Perl that does just that (edit: "reservoir sampling", thanks @user2357112!). But does anyone know of a standard Python module that provides this functionality?

Demo of the problem (Python 3)

>>> import itertools, random
>>> random.sample(iter("abcd"), 2)
...
TypeError: Population must be a sequence or set.  For dicts, use list(d).

On Python 2, the error is more transparent:

Traceback (most recent call last):
  File "<pyshell#12>", line 1, in <module>
    random.sample(iter("abcd"), 2)
  File "/usr/local/Cellar/python/2.7.9/Frameworks/Python.framework/Versions/2.7/lib/python2.7/random.py", line 321, in sample
    n = len(population)
TypeError: object of type 'iterator' has no len()

If there's no alternative to random.sample(), I'd try my luck with wrapping the generator into an object that provides a __len__ method (I can find out the length in advance). So I'll accept an answer that shows how to do that cleanly.

5 Answers

欢心 · 2019-01-26 17:22

Use the itertools.compress() function, with a random selector function:

itertools.compress(long_sequence, (random.randint(0, 100) < 10 for x in itertools.repeat(1)))
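
For instance, a minimal runnable sketch of this approach (the helper name thin_out and the 25% keep rate are my own choices, not part of the original answer):

import itertools
import random

def thin_out(iterable, p=0.1):
    # keep each item independently with probability p; the selector
    # generator is infinite, but compress() stops when the data runs out
    selectors = (random.random() < p for _ in itertools.repeat(None))
    return itertools.compress(iterable, selectors)

print(list(thin_out("abcdefghijklmnopqrstuvwxyz", 0.25)))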
We Are One · 2019-01-26 17:25

Use the O(n) Algorithm R (https://en.wikipedia.org/wiki/Reservoir_sampling) to select k random elements from an iterable:

import itertools
import random

def reservoir_sample(iterable, k):
    it = iter(iterable)
    if not (k > 0):
        raise ValueError("sample size must be positive")

    sample = list(itertools.islice(it, k)) # fill the reservoir
    random.shuffle(sample) # if there are fewer than *k* items,
                           #   return them all in random order.
    for i, item in enumerate(it, start=k+1):
        j = random.randrange(i) # random [0..i)
        if j < k:
            sample[j] = item # replace item with gradually decreasing probability
    return sample

Example:

>>> reservoir_sample(iter('abcdefghijklmnopqrstuvwxyz'), 5)
['w', 'i', 't', 'b', 'e']

reservoir_sample() code is from this answer.

爷的心禁止访问 · 2019-01-26 17:34

One possible method is to build a generator around the iterator to select random elements:

import random

def random_wrap(iterator, threshold):
    # yield each item independently with probability `threshold`
    for item in iterator:
        if random.random() < threshold:
            yield item

This method is useful when you don't know the length in advance and materializing the whole iterator would be prohibitive. Note, however, that it cannot guarantee the size of the final list.

Some sample runs:

>>> list(random_wrap(iter('abcdefghijklmnopqrstuvwxyz'), 0.25))
['f', 'h', 'i', 'r', 'w', 'x']

>>> list(random_wrap(iter('abcdefghijklmnopqrstuvwxyz'), 0.25))
['j', 'r', 's', 'u', 'x']

>>> list(random_wrap(iter('abcdefghijklmnopqrstuvwxyz'), 0.25))
['c', 'e', 'h', 'n', 'o', 'r', 'z']

>>> list(random_wrap(iter('abcdefghijklmnopqrstuvwxyz'), 0.25))
['b', 'c', 'e', 'h', 'j', 'p', 'r', 's', 'u', 'v', 'x']
唯我独甜 · 2019-01-26 17:36

Since you know the length of the data returned by your iterable, you can use xrange() to quickly generate indices into it. Then you just run the iterable until you've grabbed all of the sampled data:

import random

def sample(it, length, k):
    indices = random.sample(xrange(length), k)
    result = [None]*k
    for index, datum in enumerate(it):
        if index in indices:
            result[indices.index(index)] = datum
    return result

print sample(iter("abcd"), 4, 2)
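
A variant of the same idea, as a sketch (the names sample_known_length and wanted are mine, not from the original answer): mapping each chosen index to its slot in the result makes membership tests O(1) and lets the loop stop as soon as the last chosen index has been seen. It assumes `length` matches the true number of items.

import random

def sample_known_length(it, length, k):
    # map each chosen index to its slot in the output list
    wanted = {index: slot for slot, index in enumerate(random.sample(range(length), k))}
    result = [None] * k
    remaining = k
    for index, datum in enumerate(it):
        if index in wanted:
            result[wanted[index]] = datum
            remaining -= 1
            if remaining == 0:
                break  # every chosen index has been seen; no need to exhaust the iterator
    return result

print(sample_known_length(iter("abcd"), 4, 2))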

Alternatively, here is an implementation of reservoir sampling using "Algorithm R":

import random

def R(it, k):
    '''https://en.wikipedia.org/wiki/Reservoir_sampling#Algorithm_R'''
    it = iter(it)
    result = []
    for i, datum in enumerate(it):
        if i < k:
            result.append(datum)
        else:
            j = random.randint(0, i)  # inclusive upper bound: keep this item with probability k/(i+1)
            if j < k:
                result[j] = datum
    return result

print R(iter("abcd"), 2)

Note that Algorithm R doesn't return the results in a random order: in the example given, 'b' will never precede 'a' in the results.
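
If a uniformly random order among the sampled items matters, one option (not part of the original answer) is to shuffle the reservoir afterwards:

import random

picked = R(iter("abcdefghijklmnopqrstuvwxyz"), 5)  # R() as defined above
random.shuffle(picked)  # the order of the k sampled items is now uniformly random
print(picked)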

Explosion°爆炸 · 2019-01-26 17:43

If you needed a subset of the original iterator with a fixed frequency (i.e., if the generator yields 10000 numbers you want roughly 100 of them, and if it yields 1000000 numbers you want roughly 10000 of them: always 1%), you would wrap the iterator in a construct that yields each item with a probability of 1%.

So I guess you want instead a fixed number of samples from a source of unknown cardinality, as in the Perl algorithm you mention.

You can wrap the iterator in a construct that holds a small memory of its own to keep track of the reservoir, replacing its entries with decreasing probability.

import itertools
import random

def reservoir(iterator, size):
    it = iter(iterator)
    R = list(itertools.islice(it, size))  # fill the reservoir with the first `size` items
    n = size
    for e in it:
        n += 1
        j = random.randrange(n)  # random index in [0, n)
        if j < size:
            R[j] = e             # replace with probability size/n
    return R

So

print reservoir(range(1, 1000), 3)

might print out

[656, 774, 828]

I have tried generating one million rounds as above, and comparing the distributions of the three columns with this filter (I expected a Gaussian distribution).

#                get first column and clean it
python file.py | cut -f 1 -d " " | tr -cd "0-9\n" \
    | sort | uniq -c | cut -b1-8 | tr -cd "0-9\n" | sort | uniq -c

and while not (yet) truly Gaussian, it looks good enough to me.
