According to the answers to this question, and also according to the NumPy documentation, matrix multiplication of 2-D arrays is best done via a @ b or numpy.matmul(a, b) rather than a.dot(b):
If both a and b are 2-D arrays, it is matrix multiplication, but using matmul or a @ b is preferred.
I ran the following benchmark and found the opposite.
Questions: Is there an issue with my benchmark? If not, why does NumPy not recommend a.dot(b) when it is faster than a @ b and numpy.matmul(a, b)?
The benchmark used Python 3.5.2 and NumPy 1.15.0:
$ pip3 list | grep numpy
numpy 1.15.0
$ python3 --version
Python 3.5.2
Benchmark code:
import timeit
setup = '''
import numpy as np
a = np.arange(16).reshape(4,4)
b = np.arange(16).reshape(4,4)
'''
test = '''
for i in range(1000):
    a @ b
'''
test1 = '''
for i in range(1000):
    np.matmul(a, b)
'''
test2 = '''
for i in range(1000):
    a.dot(b)
'''
print( timeit.timeit(test, setup, number=100) )
print( timeit.timeit(test1, setup, number=100) )
print( timeit.timeit(test2, setup, number=100) )
Results:
test : 0.11132473500038031
test1 : 0.10812476599676302
test2 : 0.06115105600474635
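One thing worth checking (a guess on my part, not something the numbers above establish) is whether the gap is fixed per-call overhead rather than a difference in multiplication speed: on 4x4 matrices the actual arithmetic is tiny, so dispatch overhead dominates. Rerunning the same three expressions on larger matrices should shrink any overhead-driven gap:

```python
import timeit

# Same benchmark, but with 100x100 float matrices, where the
# multiplication work should dwarf any per-call overhead.
setup = '''
import numpy as np
a = np.arange(10000, dtype=float).reshape(100, 100)
b = np.arange(10000, dtype=float).reshape(100, 100)
'''

for label, stmt in [('a @ b',            'a @ b'),
                    ('np.matmul(a, b)',  'np.matmul(a, b)'),
                    ('a.dot(b)',         'a.dot(b)')]:
    t = timeit.timeit(stmt, setup, number=1000)
    print(label, t)
```

If the three timings converge at this size, the difference seen above is call overhead rather than a faster algorithm in dot.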
Additional results (all three give the same answer):
>>> a = np.arange(16).reshape(4,4)
>>> b = np.arange(16).reshape(4,4)
>>> a@b
array([[ 56, 62, 68, 74],
[152, 174, 196, 218],
[248, 286, 324, 362],
[344, 398, 452, 506]])
>>> np.matmul(a,b)
array([[ 56, 62, 68, 74],
[152, 174, 196, 218],
[248, 286, 324, 362],
[344, 398, 452, 506]])
>>> a.dot(b)
array([[ 56, 62, 68, 74],
[152, 174, 196, 218],
[248, 286, 324, 362],
[344, 398, 452, 506]])