Why is a.dot(b) faster than a@b although NumPy recommends a@b?


Question:

According to the answers to this question, and also according to the NumPy documentation, matrix multiplication of 2-D arrays is best done via a @ b or numpy.matmul(a,b), rather than a.dot(b).

If both a and b are 2-D arrays, it is matrix multiplication, but using matmul or a @ b is preferred.

I did the following benchmark and found contrary results.

Questions: Is there an issue with my benchmark? If not, why doesn't NumPy recommend a.dot(b) when it is faster than a@b or numpy.matmul(a,b)?

The benchmark used Python 3.5 and NumPy 1.15.0:

$ pip3 list | grep numpy
numpy                         1.15.0
$ python3 --version
Python 3.5.2

Benchmark code:

import timeit

setup = '''
import numpy as np
a = np.arange(16).reshape(4,4)
b = np.arange(16).reshape(4,4)
''' 
test = '''
for i in range(1000):
    a @ b
'''
test1 = '''
for i in range(1000):
    np.matmul(a,b)
'''
test2 = '''
for i in range(1000):
    a.dot(b)
'''

print( timeit.timeit(test, setup, number=100) )
print( timeit.timeit(test1, setup, number=100) )
print( timeit.timeit(test2, setup, number=100) )

Results:

test  : 0.11132473500038031
test1 : 0.10812476599676302
test2 : 0.06115105600474635

Additional results:

>>> a = np.arange(16).reshape(4,4)
>>> b = np.arange(16).reshape(4,4)
>>> a@b
array([[ 56,  62,  68,  74],
       [152, 174, 196, 218],
       [248, 286, 324, 362],
       [344, 398, 452, 506]])
>>> np.matmul(a,b)
array([[ 56,  62,  68,  74],
       [152, 174, 196, 218],
       [248, 286, 324, 362],
       [344, 398, 452, 506]])
>>> a.dot(b)
array([[ 56,  62,  68,  74],
       [152, 174, 196, 218],
       [248, 286, 324, 362],
       [344, 398, 452, 506]])

Answer 1:

Your premise is incorrect. You should use larger matrices to measure performance, so that function-call overhead does not dwarf the insignificant cost of the calculation itself.

Using Python 3.6.0 / NumPy 1.11.3 you will find, as explained here, that @ calls np.matmul and both outperform np.dot:

import numpy as np

n = 500
a = np.arange(n**2).reshape(n, n)
b = np.arange(n**2).reshape(n, n)

%timeit a.dot(b)        # 134 ms per loop
%timeit a @ b           # 71 ms per loop
%timeit np.matmul(a,b)  # 70.6 ms per loop
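
One way to see the overhead effect directly is to compare per-call times at a tiny size and a large one. Below is a minimal sketch using the standard-library timeit module instead of IPython's %timeit; the absolute numbers depend on your NumPy/BLAS build, but the pattern is that the gap favouring dot shrinks or reverses as n grows.

import timeit
import numpy as np

# At n=4 the per-call overhead (argument parsing, dispatch) dominates;
# at n=500 the actual matrix-multiply kernel dominates.
for n, reps in [(4, 100000), (500, 10)]:
    a = np.arange(n * n).reshape(n, n)
    b = np.arange(n * n).reshape(n, n)
    t_dot = timeit.timeit(lambda: a.dot(b), number=reps) / reps
    t_at = timeit.timeit(lambda: a @ b, number=reps) / reps
    print("n={:3d}  dot: {:.2e} s/call   @: {:.2e} s/call".format(n, t_dot, t_at))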

Also note that, as explained in the docs, np.dot is functionally different from @ / np.matmul. In particular, they differ in their treatment of arrays with more than two dimensions.
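
A short sketch of that difference with stacked matrices (the shapes here are just illustrative): @ / matmul broadcast over the leading "stack" axis, while dot contracts the last axis of the first operand with the second-to-last axis of the second operand for every combination of the remaining axes.

import numpy as np

a = np.ones((2, 3, 4))
b = np.ones((2, 4, 5))

# matmul / @ treat the arrays as a stack of 2 matrices and multiply
# corresponding pairs: (3, 4) @ (4, 5) -> (3, 5), stacked twice.
print((a @ b).shape)       # (2, 3, 5)

# dot instead produces a result for every combination of the leading
# axes of a and b, giving a 4-D output.
print(np.dot(a, b).shape)  # (2, 3, 2, 5)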



Answer 2:

matmul and dot don't do the same thing: they behave differently with 3-D arrays and with scalars. The documentation presumably recommends matmul because it is clearer and more general, not necessarily for performance reasons. It would be nice if the documentation were more explicit about why one is preferred over the other.
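
For example, the scalar case (a small sketch): dot accepts a scalar operand and simply scales the array, whereas matmul (and @) reject scalars outright.

import numpy as np

a = np.arange(4).reshape(2, 2)

print(np.dot(a, 2))   # dot allows scalars: same as a * 2
try:
    np.matmul(a, 2)   # matmul does not accept scalar operands
except ValueError as e:
    print("matmul raised:", e)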

As @jpp has pointed out, it isn't necessarily true that the performance of matmul is actually worse.