Find cumsum of subarrays split by indices for numpy

Posted 2019-02-24 13:52

Question:

Given an array 'array' and a set of indices 'indices', how do I find the cumulative sum of the sub-arrays formed by splitting the array along those indices in a vectorized manner? To clarify, suppose I have:

>>> array = np.arange(20)
>>> array
array([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19])
>>> indices = np.array([3, 8, 14])

The operation should output:

array([0, 1, 3, 3, 7, 12, 18, 25, 8, 17, 27, 38, 50, 63, 14, 29, 45, 62, 80, 99])

Please note that the array is very big (100,000 elements), so I need a vectorized answer; using any loops would slow it down considerably. Also, if I had the same problem but with a 2D array and corresponding per-row indices, and needed to do the same thing for each row of the array, how would I do it?
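
To make the expected behaviour completely explicit, here is a rough loop-based reference (far too slow for my actual arrays, but it pins down the output I'm after):

import numpy as np

array = np.arange(20)
indices = np.array([3, 8, 14])

# Loop-based reference: the cumulative sum restarts at every index in `indices`.
out = np.empty_like(array)
bounds = np.r_[0, indices, len(array)]
for start, stop in zip(bounds[:-1], bounds[1:]):
    out[start:stop] = array[start:stop].cumsum()
# out -> the expected array shown above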

For the 2D version:

>>> array = np.arange(12).reshape((3,4))
>>> array
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
>>> indices = np.array([[2], [1, 3], [1, 2]])

The output would be:

array([[ 0,  1,  3,  3],
       [ 4,  9,  6, 13],
       [ 8, 17, 10, 11]])

To clarify: Every row will be split.
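
And a similar slow, row-by-row reference sketch for the 2D case (note that, to match the expected output above, each row's running sum restarts just after each listed index):

import numpy as np

array = np.arange(12).reshape((3, 4))
indices = [[2], [1, 3], [1, 2]]

# Row-by-row reference: split each row at its own indices (shifted by one
# position, as in the expected output above) and restart the cumsum per segment.
out = array.copy()
for row, idx in zip(out, indices):
    bounds = np.r_[0, np.asarray(idx) + 1, len(row)]
    for start, stop in zip(bounds[:-1], bounds[1:]):
        row[start:stop] = row[start:stop].cumsum()
# out -> the expected 3x4 array shown above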

Answer 1:

You can take the cumulative sum of the original array and then "differentiate" it at the indices positions to create a boundary-like effect at those places, so that when the adjusted array is cumulatively summed again, it gives the indices-stopped cumulative sums we want. This might feel a bit contrived at first look, but stick with it, try it with other samples, and hopefully it will make sense! The idea is very similar to the one applied in this other MATLAB solution. Following that philosophy, here's one approach using numpy.diff along with cumulative summation -

import numpy as np

# Get linear (flattened) indices of all split positions across the 2D array
n = array.shape[1]
lidx = np.hstack([r*n + np.asarray(item) for r, item in enumerate(indices)])

# Row-wise cumulative totals accumulated before each split position
diffs = array.cumsum(1).ravel()[lidx] - array.ravel()[lidx]

# Convert those pre-split totals into offsets: each split position should only
# cancel what has accumulated since the previous split of the same row, so take
# successive differences and keep the full total at each row's first split
_,idx = np.unique(lidx//n,return_index=True)
offsetted_diffs = np.diff(np.append(0,diffs))
offsetted_diffs[idx] = diffs[idx]

# Place these offsets at the split positions of a copy of the input and
# subtract them there. The final cumulative sum then cancels the carried-over
# totals at those positions, creating the boundary-like effect that restarts
# the summation at each split.
arrayc = array.copy()
arrayc.ravel()[lidx] -= offsetted_diffs
out = arrayc.cumsum(1)

This should be an almost vectorized solution; almost, because even though the linear indices are computed in a loop, that is not the computationally intensive part here, so its effect on the total runtime should be minimal. Also, you can replace arrayc with array if you don't mind mutating the input, to save memory.
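
To see the boundary trick in isolation, here is a minimal 1D sketch of the same idea on the question's sample array (the np.unique bookkeeping above is only needed to handle multiple rows):

import numpy as np

array = np.arange(20)
indices = np.array([3, 8, 14])

# Totals accumulated before each split position
carry = array.cumsum()[indices] - array[indices]
# Each split should only cancel what accumulated since the previous split
offsets = np.diff(np.append(0, carry))
# Subtracting the offsets at the split positions plants the "boundaries";
# the final cumulative sum then restarts at each split.
arrayc = array.copy()
arrayc[indices] -= offsets
out = arrayc.cumsum()
# out -> array([ 0,  1,  3,  3,  7, 12, 18, 25,  8, 17, 27, 38, 50, 63,
#               14, 29, 45, 62, 80, 99])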

Sample input, output -

In [75]: array
Out[75]: 
array([[ 0,  1,  2,  3,  4,  5,  6,  7],
       [ 8,  9, 10, 11, 12, 13, 14, 15],
       [16, 17, 18, 19, 20, 21, 22, 23]])

In [76]: indices
Out[76]: array([[3, 6], [4, 7], [5]], dtype=object)

In [77]: out
Out[77]: 
array([[ 0,  1,  3,  3,  7, 12,  6, 13],
       [ 8, 17, 27, 38, 12, 25, 39, 15],
       [16, 33, 51, 70, 90, 21, 43, 66]])
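
For reuse, the same steps could be wrapped into a small helper, shown here only as a sketch (the name cumsum_split is just for illustration); the original 1D question is then handled by viewing the array as a single row:

import numpy as np

def cumsum_split(array, indices):
    # Per-row restarted cumulative sums, following the steps shown above
    array = np.asarray(array)
    n = array.shape[1]
    lidx = np.hstack([r*n + np.asarray(item) for r, item in enumerate(indices)])
    diffs = array.cumsum(1).ravel()[lidx] - array.ravel()[lidx]
    _, idx = np.unique(lidx//n, return_index=True)
    offsets = np.diff(np.append(0, diffs))
    offsets[idx] = diffs[idx]
    arrayc = array.copy()
    arrayc.ravel()[lidx] -= offsets
    return arrayc.cumsum(1)

# Original 1D problem, viewed as a single row:
out = cumsum_split(np.arange(20)[None, :], [np.array([3, 8, 14])])[0]
# out -> array([ 0,  1,  3,  3,  7, 12, 18, 25,  8, 17, 27, 38, 50, 63,
#               14, 29, 45, 62, 80, 99])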


Answer 2:

You can use np.split to split your array along the indices, then use Python's built-in map to apply np.cumsum to each sub-array, and finally use np.hstack to join the results back into a single array:

>>> np.hstack(list(map(np.cumsum, np.split(array, indices))))
array([ 0,  1,  3,  3,  7, 12, 18, 25,  8, 17, 27, 38, 50, 63, 14, 29, 45,
       62, 80, 99])

Note that since map is a built-in function in Python, implemented in C inside the Python interpreter, it performs better than a regular loop.

Here is an alternative for 2D arrays:

>>> def func(array, indices):
...     return np.hstack(list(map(np.cumsum, np.split(array, indices))))
... 
>>> 
>>> array
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
>>> 
>>> indices
array([[2], [1, 3], [1, 2]], dtype=object)

>>> np.array([func(arr,ind) for arr,ind in np.array((array,indices)).T])
array([[ 0,  1,  2,  5],
       [ 4,  5, 11,  7],
       [ 8,  9, 10, 21]])

Note that your expected output does not follow the way np.split interprets the indices.

If you want such results, you need to add 1 to your indices:

>>> indices = np.array([[3], [2, 4], [2, 3]], dtype=object)
>>> 
>>> np.array([func(arr,ind) for arr,ind in np.array((array,indices)).T])
array([[  0.,   1.,   3.,   3.],
       [  4.,   9.,   6.,  13.],
       [  8.,  17.,  10.,  11.]])

Following a comment which claimed there is no performance difference between using a generator expression and the map function, I ran a benchmark which demonstrates the difference:

# Use map
~$ python -m timeit --setup "import numpy as np;array = np.arange(20);indices = np.array([3, 8, 14])" "np.hstack(map(np.cumsum,np.split(array,indices)))"
10000 loops, best of 3: 72.1 usec per loop
# Use generator expression
~$ python -m timeit --setup "import numpy as np;array = np.arange(20);indices = np.array([3, 8, 14])" "np.hstack(np.cumsum(a) for a in np.split(array,indices))"
10000 loops, best of 3: 81.2 usec per loop

Note that this doesn't mean that using map, which iterates at C speed, makes this code run at C speed. That's because the mapped function is itself implemented in Python, and calling it and applying it to each item of the iterable still takes time.