Element-wise dot product of matrices and vectors

Posted 2019-07-14 20:25


There are really similar questions here, here, and here, but I don't quite understand how to apply them to my case.

I have an array of matrices and an array of vectors, and I need their element-wise dot product. Illustration:

In [1]: matrix1 = np.eye(5)

In [2]: matrix2 = np.eye(5) * 5

In [3]: matrices = np.array((matrix1,matrix2))

In [4]: matrices
Out[4]: 
array([[[ 1.,  0.,  0.,  0.,  0.],
        [ 0.,  1.,  0.,  0.,  0.],
        [ 0.,  0.,  1.,  0.,  0.],
        [ 0.,  0.,  0.,  1.,  0.],
        [ 0.,  0.,  0.,  0.,  1.]],

       [[ 5.,  0.,  0.,  0.,  0.],
        [ 0.,  5.,  0.,  0.,  0.],
        [ 0.,  0.,  5.,  0.,  0.],
        [ 0.,  0.,  0.,  5.,  0.],
        [ 0.,  0.,  0.,  0.,  5.]]])

In [5]: vectors = np.ones((5,2))

In [6]: vectors
Out[6]: 
array([[ 1.,  1.],
       [ 1.,  1.],
       [ 1.,  1.],
       [ 1.,  1.],
       [ 1.,  1.]])

In [9]: np.array([m @ v for m,v in zip(matrices, vectors.T)]).T
Out[9]: 
array([[ 1.,  5.],
       [ 1.,  5.],
       [ 1.,  5.],
       [ 1.,  5.],
       [ 1.,  5.]])

This last line is my desired output. Unfortunately, it is very inefficient: even matrices @ vectors, which computes unwanted dot products due to broadcasting (if I understand correctly, it returns the first matrix dotted with both vectors and the second matrix dotted with both vectors), is actually faster.
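For the record, here is a minimal sketch of what the broadcast version actually computes and how the wanted pairs could be pulled back out of it; the fancy-indexing step is just my own illustration, not a proposed solution:

import numpy as np

matrices = np.array((np.eye(5), np.eye(5) * 5))  # shape (2, 5, 5)
vectors = np.ones((5, 2))                        # shape (5, 2)

# Broadcasting: each (5, 5) matrix is multiplied with the whole
# (5, 2) vectors array, so every matrix meets every vector.
full = matrices @ vectors                        # shape (2, 5, 2)

# Only the pairs (matrix i, vector i) are wanted; advanced indexing
# picks that "diagonal", and .T restores the (5, 2) layout.
wanted = full[np.arange(2), :, np.arange(2)].T   # shape (5, 2)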

I guess np.einsum or np.tensordot might be helpful here, but all my attempts have failed:

In [30]: np.einsum("i,j", matrices, vectors)
ValueError: operand has more dimensions than subscripts given in einstein sum, but no '...' ellipsis provided to broadcast the extra dimensions.

In [34]: np.tensordot(matrices, vectors, axes=(0,1))
Out[34]: 
array([[[ 6.,  6.,  6.,  6.,  6.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.,  0.],
        [ 6.,  6.,  6.,  6.,  6.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 6.,  6.,  6.,  6.,  6.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 6.,  6.,  6.,  6.,  6.],
        [ 0.,  0.,  0.,  0.,  0.]],

       [[ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 0.,  0.,  0.,  0.,  0.],
        [ 6.,  6.,  6.,  6.,  6.]]])

NB: my real-case scenario uses more complicated matrices than matrix1 and matrix2.

2 Answers
Summer.凉城 · 2019-07-14 20:58

With np.einsum, you might use:

np.einsum("ijk,ki->ji", matrices, vectors)

#array([[ 1.,  5.],
#       [ 1.,  5.],
#       [ 1.,  5.],
#       [ 1.,  5.],
#       [ 1.,  5.]])
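To unpack the subscripts: i indexes the matrix/vector pair, k is the axis that gets summed over, and j is the remaining row axis; writing ->ji lays the output out as (row, pair). A quick sanity check against the loop version from the question (a sketch, redefining the same toy arrays so it runs standalone):

import numpy as np

matrices = np.array((np.eye(5), np.eye(5) * 5))  # as in the question
vectors = np.ones((5, 2))

# The einsum result matches the explicit per-pair loop.
np.allclose(np.einsum("ijk,ki->ji", matrices, vectors),
            np.array([m @ v for m, v in zip(matrices, vectors.T)]).T)
# True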
Aperson · 2019-07-14 21:20

You can use @ as follows:

matrices @ vectors.T[..., None]
# array([[[ 1.],
#         [ 1.],
#         [ 1.],
#         [ 1.],
#         [ 1.]],

#        [[ 5.],
#         [ 5.],
#         [ 5.],
#         [ 5.],
#         [ 5.]]])

As we can see, it computes the right values but arranges them in the wrong shape. Therefore:

(matrices @ vectors.T[..., None]).squeeze().T
# array([[ 1.,  5.],
#        [ 1.,  5.],
#        [ 1.,  5.],
#        [ 1.,  5.],
#        [ 1.,  5.]])
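If any other axis could ever have length 1, indexing the trailing axis explicitly is a safer variant than squeeze(), which drops all size-1 axes; this sketch is equivalent for the shapes used here:

# Select the single trailing column instead of squeezing everything.
(matrices @ vectors.T[..., None])[..., 0].T
# array([[ 1.,  5.],
#        [ 1.,  5.],
#        [ 1.,  5.],
#        [ 1.,  5.],
#        [ 1.,  5.]])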