When summing an array over a specific axis, the dedicated array method array.sum(ax)
may actually be slower than a for-loop:
v = np.random.rand(3, 10**4)
timeit v.sum(0) # vectorized method
1000 loops, best of 3: 183 us per loop
timeit for row in v[1:]: v[0] += row # python loop
10000 loops, best of 3: 39.3 us per loop
The vectorized method is more than four times slower than an ordinary for-loop! What is going (wr)on(g) here? Can't I trust vectorized methods in numpy to be faster than for-loops?
No, you can't. As your interesting example points out, numpy.sum
can be suboptimal, and a better arrangement of the operations via explicit for loops can be more efficient.
Let me show another example:
>>> import numpy as np
>>> import timeit
>>> N, M = 10**4, 10**4
>>> v = np.random.randn(N,M)
>>> r = np.empty(M)
>>> timeit.timeit('v.sum(axis=0, out=r)', 'from __main__ import v,r', number=1)
1.2837879657745361
>>> r = np.empty(N)
>>> timeit.timeit('v.sum(axis=1, out=r)', 'from __main__ import v,r', number=1)
0.09213519096374512
Here you clearly see that numpy.sum
is optimal when summing along the fast-running index (v
is C-contiguous, so elements within a row are adjacent in memory) and suboptimal when summing along the slow-running axis. Interestingly enough, the opposite pattern holds for for
loops:
>>> r = np.zeros(M)
>>> timeit.timeit('for row in v[:]: r += row', 'from __main__ import v,r', number=1)
0.11945700645446777
>>> r = np.zeros(N)
>>> timeit.timeit('for row in v.T[:]: r += row', 'from __main__ import v,r', number=1)
1.2647287845611572
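One way to see what "fast running index" means concretely is to inspect the array's flags and strides. This is a small illustrative sketch (tiny shapes chosen just for readability, not the 10**4-sized arrays timed above):

```python
import numpy as np

v = np.random.randn(4, 5)  # small illustrative array

# C-contiguous: elements of a row sit next to each other in memory,
# so the stride along axis 1 is just the item size (8 bytes for float64).
print(v.flags['C_CONTIGUOUS'])  # True
print(v.strides)                # (40, 8): rows are 5*8 bytes apart

# The transpose is only a strided view of the same buffer: walking a
# "row" of v.T jumps 40 bytes per element, defeating sequential access.
print(v.T.flags['C_CONTIGUOUS'])  # False
print(v.T.strides)                # (8, 40)
```

The for-loop timings above follow the same logic: iterating `for row in v` hands numpy whole contiguous rows to add, while iterating over `v.T` hands it strided slices.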
I have not had time to inspect the numpy
source, but I suspect that what makes the difference is contiguous memory access versus strided access.
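That hypothesis can be probed directly, without going through numpy's reduction code at all: sum the same number of elements once from a contiguous slice and once from a strided view of the same buffer. A rough sketch (array size and stride are arbitrary choices; the exact ratio will depend on your machine and cache sizes):

```python
import numpy as np
import timeit

a = np.random.randn(10**7)

contig = a[: len(a) // 16]   # 625000 contiguous float64 values
strided = a[::16]            # 625000 values spread over the whole buffer

# Both sums touch the same number of elements, but the strided view
# reads a fresh cache line for every element (stride is 128 bytes).
t_contig = timeit.timeit(lambda: contig.sum(), number=10)
t_strided = timeit.timeit(lambda: strided.sum(), number=10)
print(t_contig, t_strided)
```

On typical hardware the strided sum comes out several times slower, consistent with the axis-dependent timings above.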
As these examples show, choosing the right memory layout is of great significance when implementing a numerical algorithm. Vectorized code does not necessarily solve every problem.
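If an algorithm must repeatedly reduce along what would otherwise be the slow axis, one option (a sketch of the general idea, not something benchmarked above) is to store the data in Fortran (column-major) order so that axis becomes the contiguous one:

```python
import numpy as np

vc = np.random.randn(1000, 1000)   # C order: rows are contiguous
vf = np.asfortranarray(vc)         # F order: columns are contiguous

# Same values, same result; only the memory layout differs, so a
# column-wise reduction walks vf sequentially but vc with a stride.
assert np.allclose(vc.sum(axis=0), vf.sum(axis=0))
print(vc.flags['C_CONTIGUOUS'], vf.flags['F_CONTIGUOUS'])  # True True
```

The copy costs time and memory up front, so this only pays off when the reduction is performed many times on the same data.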