I am looking for a fast way to do numerical binning of a 2D numpy array. By binning I mean computing submatrix averages or cumulative values. For example, x = numpy.arange(16).reshape(4, 4) would be split into 4 submatrices of 2x2 each, giving numpy.array([[2.5, 4.5], [10.5, 12.5]]), where 2.5 = numpy.average([0, 1, 4, 5]), etc.
How can I perform such an operation efficiently? I don't really have any idea how to do this.
Many thanks.
I assume that you only want to know how to generally build a function that performs well and does something with arrays, just like `numpy.reshape` in your example. So if performance really matters and you're already using numpy, you can write your own C code for that, just as numpy does. For example, the implementation of `arange` is entirely in C. Almost everything in numpy that matters in terms of performance is implemented in C.
However, before doing so you should try to implement the code in Python and see if the performance is good enough. Try to make the Python code as efficient as possible. If it still doesn't suit your performance needs, go the C way.
You may read about that in the docs.
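As a starting point for the "Python first" route, a plain-loop implementation could look like this (a sketch; the function name `bin2d_loop` and its signature are my own):

```python
def bin2d_loop(x, a, b):
    """Average non-overlapping a x b blocks of a 2D sequence using plain loops.

    Assumes a divides the number of rows and b divides the number of columns.
    """
    rows, cols = len(x), len(x[0])
    out = []
    for i in range(0, rows, a):
        row = []
        for j in range(0, cols, b):
            # Collect one a x b block and average it.
            block = [x[i + di][j + dj] for di in range(a) for dj in range(b)]
            row.append(sum(block) / (a * b))
        out.append(row)
    return out

x = [[4 * r + c for c in range(4)] for r in range(4)]
bin2d_loop(x, 2, 2)  # [[2.5, 4.5], [10.5, 12.5]]
```

If profiling shows this loop is too slow, that is the point at which a vectorized numpy formulation or a C extension becomes worth the effort.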
You can use a higher dimensional view of your array and take the average along the extra dimensions:
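For the 4x4 example from the question, a minimal sketch of this view-based approach (the variable name `a_view` is my own) is:

```python
import numpy as np

x = np.arange(16).reshape(4, 4)

# View the array as (row blocks, rows per block, column blocks, cols per block),
# then average over the two within-block axes.
a_view = x.reshape(2, 2, 2, 2)
result = a_view.mean(axis=3).mean(axis=1)
# array([[ 2.5,  4.5],
#        [10.5, 12.5]])
```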
In general, if you want bins of shape `(a, b)` for an array of shape `(rows, cols)`, you should reshape it with `.reshape(rows // a, a, cols // b, b)`. Note also that the order of the `.mean` calls is important: e.g. `a_view.mean(axis=1).mean(axis=3)` will raise an error, because `a_view.mean(axis=1)` only has three dimensions; `a_view.mean(axis=1).mean(axis=2)` will work fine, but it makes it harder to understand what is going on.

As is, the above code only works if an integer number of bins fits inside your array, i.e. if `a` divides `rows` and `b` divides `cols`. There are ways to deal with other cases, but then you will have to define the behavior you want.

See the SciPy Cookbook on rebinning, which provides this snippet: