When summing an array over a specific axis, the dedicated array method array.sum(ax)
may actually be slower than a for-loop:
v = np.random.rand(3, 10000)            # C-contiguous array: 3 rows of 10000 elements
timeit v.sum(0)                         # vectorized method
1000 loops, best of 3: 183 us per loop
timeit for row in v[1:]: v[0] += row    # python loop, accumulates the rows into v[0] in place
10000 loops, best of 3: 39.3 us per loop
The vectorized method is more than 4 times slower than an ordinary for-loop! What is going wrong here? Can't I trust vectorized methods in numpy to be faster than for-loops?
No, you can't. As your interesting example points out, numpy.sum can be suboptimal, and a better layout of the operations via explicit for loops can be more efficient. Let me show another example:
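Since v is C-contiguous, its last axis is the fast-running index in memory. A minimal sketch of such a comparison, using the timeit module instead of the IPython timeit magic (the shape here is chosen only for illustration), could look like this:

import timeit
import numpy as np

v = np.random.rand(3, 10000)   # C-contiguous: the last axis is the fast-running index

for ax in (0, 1):
    # axis 1 reduces along contiguous memory, axis 0 along strided memory
    per_call = timeit.timeit(lambda: v.sum(ax), number=10000) / 10000
    print("v.sum(%d): %.1f us per call" % (ax, per_call * 1e6))

For v.sum(0), the elements being added together are 10000 entries apart in memory, while v.sum(1) reads them sequentially.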
Here you clearly see that numpy.sum is optimal when summing over the fast-running index (v is C-contiguous, so that is the last axis) and suboptimal when summing over the slow-running axis. Interestingly enough, the opposite pattern is true for for loops.
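As a rough sketch of that opposite pattern (the loop bodies below are my own reconstruction, not the original benchmark), one can time a row-wise and a column-wise accumulation loop:

import timeit
import numpy as np

v = np.random.rand(3, 10000)

def loop_sum_axis0(a):
    # few Python iterations, each adding a whole contiguous row
    out = a[0].copy()
    for row in a[1:]:
        out += row
    return out

def loop_sum_axis1(a):
    # many Python iterations, each adding a tiny 3-element column
    out = a[:, 0].copy()
    for col in a.T[1:]:
        out += col
    return out

print("loop, axis 0: %.1f us per call"
      % (timeit.timeit(lambda: loop_sum_axis0(v), number=10000) / 10000 * 1e6))
print("loop, axis 1: %.1f us per call"
      % (timeit.timeit(lambda: loop_sum_axis1(v), number=100) / 100 * 1e6))

The row-wise loop issues only two additions over large contiguous blocks, whereas the column-wise loop pays Python and numpy call overhead roughly ten thousand times, so here the loop is fast on the slow axis and slow on the fast axis.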
I had no time to inspect the numpy code, but I suspect that what makes the difference is contiguous versus strided memory access. As these examples show, when implementing a numerical algorithm, a correct memory layout is of great significance: vectorized code does not necessarily solve every problem.
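One hedged way to test the layout hypothesis is to give the same data a Fortran (column-major) layout, so that the axis-0 reduction traverses contiguous memory; whether and how much this helps depends on the numpy version, but it isolates the layout effect from the rest of the computation:

import timeit
import numpy as np

v = np.random.rand(3, 10000)    # C order: axis 1 is the contiguous one
vf = np.asfortranarray(v)       # same values, column-major: axis 0 is the contiguous one

for name, a in (("C order", v), ("F order", vf)):
    per_call = timeit.timeit(lambda: a.sum(0), number=10000) / 10000
    print("%s, sum(0): %.1f us per call" % (name, per_call * 1e6))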