Fast random weighted selection across all columns of a stochastic matrix

Published 2020-02-05 06:59

Question:

numpy.random.choice allows for weighted selection from a vector, i.e.

arr = numpy.array([1, 2, 3])
weights = numpy.array([0.2, 0.5, 0.3])
choice = numpy.random.choice(arr, p=weights) 

selects 1 with probability 0.2, 2 with probability 0.5, and 3 with probability 0.3.

What if we want to do this quickly, in a vectorized fashion, for a 2D array (matrix) in which each column is a vector of probabilities? That is, how do we get a vector of choices from a stochastic matrix? This is the super slow way:

import numpy as np

m = 10
n = 100 # Or some very large number

items = np.arange(m)
prob_weights = np.random.rand(m, n)
prob_matrix = prob_weights / prob_weights.sum(axis=0, keepdims=True)

choices = np.zeros((n,))
# This is slow, because of the loop in Python
for i in range(n):
    choices[i] = np.random.choice(items, p=prob_matrix[:,i])

print(choices)

array([ 4.,  7.,  8.,  1.,  0.,  4.,  3.,  7.,  1.,  5.,  7.,  5.,  3.,
        1.,  9.,  1.,  1.,  5.,  9.,  8.,  2.,  3.,  2.,  6.,  4.,  3.,
        8.,  4.,  1.,  1.,  4.,  0.,  1.,  8.,  5.,  3.,  9.,  9.,  6.,
        5.,  4.,  8.,  4.,  2.,  4.,  0.,  3.,  1.,  2.,  5.,  9.,  3.,
        9.,  9.,  7.,  9.,  3.,  9.,  4.,  8.,  8.,  7.,  6.,  4.,  6.,
        7.,  9.,  5.,  0.,  6.,  1.,  3.,  3.,  2.,  4.,  7.,  0.,  6.,
        3.,  5.,  8.,  0.,  8.,  3.,  4.,  5.,  2.,  2.,  1.,  1.,  9.,
        9.,  4.,  3.,  3.,  2.,  8.,  0.,  6.,  1.])

This post suggests that cumsum and bisect could be a fast approach. But while numpy.cumsum(arr, axis=1) can compute cumulative sums along one axis of a NumPy array, the bisect.bisect function only works on a single array at a time. Similarly, numpy.searchsorted only searches 1D arrays.
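For a single probability vector, the cumsum-and-search idea looks like this (a minimal sketch, not from the post being referenced):

import numpy as np

p = np.array([0.2, 0.5, 0.3])
cdf = np.cumsum(p)            # array([0.2, 0.7, 1.0])
r = np.random.rand()          # one uniform draw in [0, 1)
k = np.searchsorted(cdf, r)   # smallest k with cdf[k] >= r
# k == 0 with probability 0.2, k == 1 with 0.5, k == 2 with 0.3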

Is there a quick way to do this using only vectorized operations?

Answer 1:

Here's a fully vectorized version that's pretty fast:

def vectorized(prob_matrix, items):
    s = prob_matrix.cumsum(axis=0)
    r = np.random.rand(prob_matrix.shape[1])
    k = (s < r).sum(axis=0)
    return items[k]

In theory, searchsorted is the right function for looking up the random value in the cumulatively summed probabilities, but with m relatively small, k = (s < r).sum(axis=0) ends up being much faster. Its time complexity is O(m), while the searchsorted approach is O(log m), but that only matters for much larger m. Since cumsum is itself O(m), both vectorized and @perimosocordiae's improved are O(m) overall. (If your m is in fact much larger, you'll have to run some tests to see how large it can be before this method becomes slower.)
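To see why k = (s < r).sum(axis=0) gives the right index: for each column it counts how many cumulative sums still fall below the random draw, and that count is exactly the chosen index. A small worked example with made-up values:

s = np.array([0.2, 0.7, 1.0])  # cumulative sums of one column
r = 0.65                       # a hypothetical uniform draw
k = (s < r).sum()              # [True, False, False].sum() == 1
# index 1 covers the interval [0.2, 0.7), i.e. probability 0.5

One caveat not raised in the answer: floating-point rounding can leave s[-1] slightly below 1.0, in which case a draw above it yields k == m, one past the last valid index; k = np.minimum(k, len(items) - 1) is a cheap guard.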

Here's the timing I get with m = 10 and n = 10000 (using the functions original and improved from @perimosocordiae's answer):

In [115]: %timeit original(prob_matrix, items)
1 loops, best of 3: 270 ms per loop

In [116]: %timeit improved(prob_matrix, items)
10 loops, best of 3: 24.9 ms per loop

In [117]: %timeit vectorized(prob_matrix, items)
1000 loops, best of 3: 1 ms per loop

The full script where the functions are defined is:

import numpy as np


def improved(prob_matrix, items):
    # transpose here for better data locality later
    cdf = np.cumsum(prob_matrix.T, axis=1)
    # derive n from the input instead of relying on a global
    n = prob_matrix.shape[1]
    # random numbers are expensive, so we'll get all of them at once
    ridx = np.random.random(size=n)
    # the one loop we can't avoid, made as simple as possible
    idx = np.zeros(n, dtype=int)
    for i, r in enumerate(ridx):
        idx[i] = np.searchsorted(cdf[i], r)
    # fancy indexing all at once is faster than indexing in a loop
    return items[idx]


def original(prob_matrix, items):
    n = prob_matrix.shape[1]
    choices = np.zeros((n,))
    # This is slow, because of the loop in Python
    for i in range(n):
        choices[i] = np.random.choice(items, p=prob_matrix[:, i])
    return choices


def vectorized(prob_matrix, items):
    s = prob_matrix.cumsum(axis=0)
    r = np.random.rand(prob_matrix.shape[1])
    k = (s < r).sum(axis=0)
    return items[k]


m = 10
n = 10000 # Or some very large number

items = np.arange(m)
prob_weights = np.random.rand(m, n)
prob_matrix = prob_weights / prob_weights.sum(axis=0, keepdims=True)
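As a quick sanity check (not part of the original answer), tiling a single column many times and sampling once should produce empirical frequencies close to that column's probabilities:

col = prob_matrix[:, 0]
tiled = np.tile(col[:, None], (1, 100000))   # the same column, 100000 times
samples = vectorized(tiled, items)
freq = np.bincount(samples, minlength=m) / samples.size
print(np.abs(freq - col).max())              # should be on the order of 1e-2 or smaller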


Answer 2:

I don't think it's possible to completely vectorize this, but you can still get a decent speedup by vectorizing as much as you can. Here's what I came up with:

def improved(prob_matrix, items):
    # transpose here for better data locality later
    cdf = np.cumsum(prob_matrix.T, axis=1)
    # derive n from the input instead of relying on a global
    n = prob_matrix.shape[1]
    # random numbers are expensive, so we'll get all of them at once
    ridx = np.random.random(size=n)
    # the one loop we can't avoid, made as simple as possible
    idx = np.zeros(n, dtype=int)
    for i, r in enumerate(ridx):
        idx[i] = np.searchsorted(cdf[i], r)
    # fancy indexing all at once is faster than indexing in a loop
    return items[idx]

Testing against the version in the question:

def original(prob_matrix, items):
    n = prob_matrix.shape[1]
    choices = np.zeros((n,))
    # This is slow, because of the loop in Python
    for i in range(n):
        choices[i] = np.random.choice(items, p=prob_matrix[:, i])
    return choices

Here's the speedup (using the setup code given in the question):

In [45]: %timeit original(prob_matrix, items)
100 loops, best of 3: 2.86 ms per loop

In [46]: %timeit improved(prob_matrix, items)
The slowest run took 4.15 times longer than the fastest. This could mean that an intermediate result is being cached
10000 loops, best of 3: 157 µs per loop

I'm not sure why there's a big discrepancy in timings for my version, but even the slowest run (~650 µs) is still over 4x faster.