Is it possible to map a NumPy array in place? If yes, how?
Given a_values, a 2D array, this is the bit of code that does the trick for me at the moment:
for row in range(len(a_values)):
    for col in range(len(a_values[0])):
        a_values[row][col] = dim(a_values[row][col])
But it's so ugly that I suspect that somewhere within NumPy there must be a function that does the same with something looking like:
a_values.map_in_place(dim)
but if something like the above exists, I've been unable to find it.
It's only worth trying to do this in place if you are under significant space constraints. If that's the case, it is possible to speed up your code a little by iterating over a flattened view of the array. Since reshape returns a new view when possible, the data itself isn't copied (unless the original has unusual structure). I don't know of a better way to achieve bona fide in-place application of an arbitrary Python function.
Some timings:
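The original timing figures aren't preserved in this copy of the answer; a minimal harness along these lines (again with a stand-in dim) can reproduce the comparison between the nested-loop and flat-view versions:

```python
import timeit
import numpy as np

def dim(x):
    return x * 2  # stand-in for the question's function

def nested_loops():
    a = np.arange(10000, dtype=np.int64).reshape(100, 100)
    for row in range(len(a)):
        for col in range(len(a[0])):
            a[row][col] = dim(a[row][col])

def flat_view():
    a = np.arange(10000, dtype=np.int64).reshape(100, 100)
    flat = a.reshape(-1)
    for i, v in enumerate(flat):
        flat[i] = dim(v)

for f in (nested_loops, flat_view):
    print(f'{f.__name__}: {timeit.timeit(f, number=10):.3f}s')
```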
It's about twice as fast as the nested loop version:
Of course vectorize is still faster, so if you can make a copy, use that:
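A sketch of the vectorize version (dim is again a stand-in); note that it allocates and returns a new array rather than working in place:

```python
import numpy as np

def dim(x):
    return x * 2  # stand-in for the question's function

a = np.arange(6, dtype=np.int64).reshape(2, 3)

dim_v = np.vectorize(dim)
b = dim_v(a)  # b is a new array; a itself is untouched
```

If reusing a's buffer matters, you can still write the copy back with a[:] = dim_v(a).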
And if you can rewrite dim using built-in ufuncs, then please, please, don't vectorize: numpy does operations like += in place, just as you might expect, so you can get the speed of a ufunc with in-place application at no cost. Sometimes it's even faster! See here for an example.

By the way, my original answer to this question, which can be viewed in its edit history, is ridiculous: it involved vectorizing over indices into a. Not only did it have to do some funky stuff to bypass vectorize's type-detection mechanism, it turned out to be just as slow as the nested loop version. So much for cleverness!

This is just an updated version of mac's write-up, adapted for Python 3.x, with numba and numpy.frompyfunc added.
numpy.frompyfunc takes an arbitrary Python function and returns a function which, when applied to a numpy array, applies it elementwise. However, it changes the dtype of the array to object, so the operation is not truly in place, and future calculations on the array will be slower. To avoid this drawback, the test calls numpy.ndarray.astype, casting the dtype back to int.
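A sketch of that frompyfunc-then-astype sequence, with a stand-in dim:

```python
import numpy as np

def dim(x):
    return x * 2  # stand-in for the tested function

a = np.arange(6, dtype=np.int64).reshape(2, 3)

dim_f = np.frompyfunc(dim, 1, 1)  # 1 input argument, 1 return value
b = dim_f(a)                      # applied elementwise, but dtype is now object
a = b.astype(np.int64)            # cast back to int for fast follow-up math
```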
As a side note: numba isn't included in Python's standard library and has to be installed separately if you want to test it. In this test it actually does nothing, and had it been called with @jit(nopython=True), it would have raised an error saying that it can't optimize anything there. Since numba can often speed up code written in a functional style, though, it is included for completeness.
And finally, the results:
Q: Is it possible to map a numpy array in place?
A: Yes, but not with a single array method. You have to write your own code.
Below is a script that compares the various implementations discussed in the thread:
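The script itself is missing from this copy of the answer; a reconstruction along these lines, with a stand-in dim that doubles its argument, compares the approaches discussed in the thread (the numba and frompyfunc variants are omitted so it runs on a bare numpy install):

```python
import timeit
import numpy as np

def dim(x):
    return x * 2  # stand-in for the tested function

N = 100

def nested_loops():
    a = np.arange(N * N, dtype=np.int64).reshape(N, N)
    for row in range(len(a)):
        for col in range(len(a[0])):
            a[row][col] = dim(a[row][col])
    return a

def flat_view():
    a = np.arange(N * N, dtype=np.int64).reshape(N, N)
    flat = a.reshape(-1)
    for i, v in enumerate(flat):
        flat[i] = dim(v)
    return a

def vectorized():
    a = np.arange(N * N, dtype=np.int64).reshape(N, N)
    return np.vectorize(dim)(a)

def ufunc():
    a = np.arange(N * N, dtype=np.int64).reshape(N, N)
    a *= 2  # in-place ufunc equivalent of the stand-in dim
    return a

for f in (nested_loops, flat_view, vectorized, ufunc):
    print(f'{f.__name__}: {timeit.timeit(f, number=10):.3f}s')
```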
The output of the above script, at least on my system, is:
As you can observe, using numpy's ufunc increases speed by more than 2 and almost 3 orders of magnitude compared with the second-best and worst alternatives, respectively.

If using a ufunc is not an option, here's a comparison of the other alternatives only:

HTH!
If ufuncs are not possible, you should maybe consider using cython. It is easy to integrate and gives big speedups on specific uses of numpy arrays.
Why not use the numpy implementation, and the out= trick?
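Presumably this refers to the out parameter that numpy ufuncs accept, which writes the result into an existing array instead of allocating a new one. A minimal sketch, assuming dim amounts to doubling:

```python
import numpy as np

a = np.arange(6, dtype=np.int64).reshape(2, 3)

# np.multiply is a ufunc; out=a stores the result back into a,
# so no new array is allocated and the mapping is truly in place
np.multiply(a, 2, out=a)
```

This only works when dim can be expressed through numpy's built-in ufuncs; an arbitrary Python function cannot be passed this way.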