I have two 3d arrays A and B with shape (N, 2, 2) that I would like to multiply along the N-axis, taking the matrix product of each pair of 2x2 matrices. With a loop implementation, it looks like
C[i] = dot(A[i], B[i])
Is there a way I could do this without using a loop? I've looked into tensordot, but haven't been able to get it to work. I think I might want something like tensordot(a, b, axes=([1,2], [2,1]))
but that's giving me an NxN matrix.
It seems you are doing matrix-multiplications for each slice along the first axis. For the same, you can use np.einsum, like so -
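A minimal sketch of that einsum call (the random A and B here just stand in for the (N, 2, 2) arrays from the question):

```python
import numpy as np

N = 5
A = np.random.rand(N, 2, 2)
B = np.random.rand(N, 2, 2)

# For each i: C[i] = A[i] dot B[i], i.e. C[i,j,l] = sum_k A[i,j,k] * B[i,k,l]
C = np.einsum('ijk,ikl->ijl', A, B)
```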
We can also use np.matmul -
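Under the same assumptions about A and B, matmul broadcasts over the leading axis and matrix-multiplies the trailing 2x2 blocks:

```python
# Same result as the einsum version above
C = np.matmul(A, B)
```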
On Python 3.x, this matmul operation simplifies with the @ operator -
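Again with A and B as above:

```python
# Python 3.5+ matrix-multiplication operator; equivalent to np.matmul(A, B)
C = A @ B
```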
Benchmarking

Approaches -
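A sketch of how the approaches could be packaged for timing; the function names and the loopy baseline here are made up for illustration:

```python
def loopy_dot(A, B):
    # Baseline: explicit Python loop over the first axis
    C = np.empty_like(A)
    for i in range(A.shape[0]):
        C[i] = np.dot(A[i], B[i])
    return C

def einsum_based(A, B):
    return np.einsum('ijk,ikl->ijl', A, B)

def matmul_based(A, B):
    return np.matmul(A, B)
```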
Timings -
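Actual numbers depend on N and the machine; a sketch of how the approaches above could be timed:

```python
from timeit import timeit

N = 10000  # a larger N makes the Python-loop overhead visible
A = np.random.rand(N, 2, 2)
B = np.random.rand(N, 2, 2)

for name, fn in [('loopy', loopy_dot), ('einsum', einsum_based), ('matmul', matmul_based)]:
    print(name, timeit(lambda: fn(A, B), number=100))
```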
You just need to perform the operation on the first dimension of your tensors, which is labeled by 0, and this will work as you wish. Also, you don't need a list of axes, because you're performing the operation along just one dimension. With axes=([1,2],[2,1]) you're cross multiplying the 2nd and 3rd dimensions. If you write it in index notation (Einstein summation convention) this corresponds to c[i,j] = a[i,k,l]*b[j,l,k], thus you're contracting indices you actually want to keep.
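A small sketch checking that reading of the axes argument (the names a, b and the tiny size are just for illustration):

```python
import numpy as np

a = np.random.rand(4, 2, 2)
b = np.random.rand(4, 2, 2)

c = np.tensordot(a, b, axes=([1, 2], [2, 1]))           # shape (4, 4), not (4, 2, 2)
print(np.allclose(c, np.einsum('ikl,jlk->ij', a, b)))   # True
```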
EDIT: Ok, the problem is that the tensor product of two 3d objects is a 6d object. Since contractions involve pairs of indices, there's no way you'll get a 3d object by a single tensordot operation. The trick is to split your calculation in two: first you do the tensordot on the index that carries the matrix operation, and then you take a tensor diagonal in order to reduce your 4d object to 3d. In one command:
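A sketch of one way to write that tensordot-plus-diagonal combination (assuming A and B are the (N, 2, 2) arrays from the question):

```python
import numpy as np

N = 5
A = np.random.rand(N, 2, 2)
B = np.random.rand(N, 2, 2)

# Contract A's last axis with B's middle axis: c[i,j,m,k] = sum_l A[i,j,l] * B[m,l,k]
c = np.tensordot(A, B, axes=(2, 1))                       # shape (N, 2, N, 2)

# Keep only the i == m entries and move the diagonal axis back to the front
D = np.diagonal(c, axis1=0, axis2=2).transpose(2, 0, 1)   # shape (N, 2, 2)

print(np.allclose(D, np.einsum('ijl,ilk->ijk', A, B)))    # True
```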
In tensor notation, d[i,j,k] = c[i,j,i,k] = a[i,j,l]*b[i,l,k].