tensorflow einsum vs. matmul vs. tensordot

Posted 2020-02-06 05:39

Question:

In TensorFlow, the functions tf.einsum, tf.matmul, and tf.tensordot can all be used for the same tasks. (I realize that tf.einsum and tf.tensordot have more general definitions; I also realize that tf.matmul has batch functionality.) In a situation where any of the three could be used, does one function tend to be fastest? Are there other rules of thumb for choosing among them?

For example, suppose that A is a rank-2 tensor, b is a rank-1 tensor, and you want to compute the matrix-vector product c_i = A_ij b_j. Of the three options:

c = tf.einsum('ij,j->i', A, b)

c = tf.matmul(A, tf.expand_dims(b, 1))  # note: result has shape (n, 1), not (n,)

c = tf.tensordot(A, b, 1)

is any generally preferable to the others?
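The three options above can be checked against each other directly. This is a minimal sketch with made-up values (the 3x2 matrix and length-2 vector are illustrative, not from the question); note that the matmul variant needs its trailing axis squeezed to match the other two:

```python
import numpy as np
import tensorflow as tf

# Hypothetical example data: a rank-2 tensor A and rank-1 tensor b.
A = tf.constant([[1., 2.], [3., 4.], [5., 6.]])
b = tf.constant([10., 20.])

# Option 1: einsum spells out the contraction c_i = A_ij b_j.
c_einsum = tf.einsum('ij,j->i', A, b)

# Option 2: matmul needs b as a column vector, and the (n, 1)
# result must be squeezed back down to shape (n,).
c_matmul = tf.squeeze(tf.matmul(A, tf.expand_dims(b, 1)), axis=1)

# Option 3: tensordot contracts the last axis of A with b.
c_tensordot = tf.tensordot(A, b, 1)
```

All three produce the same rank-1 result; the choice is a matter of readability and, as the answer below discusses, graph overhead.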

Answer 1:

Both tf.tensordot() and tf.einsum() are syntactic sugar that wrap one or more invocations of tf.matmul() (although in some special cases tf.einsum() can reduce to the simpler elementwise tf.multiply()).
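One such special case is an einsum expression with no summed index, which is just an elementwise product. A minimal sketch (example values are my own):

```python
import numpy as np
import tensorflow as tf

x = tf.constant([1., 2., 3.])
y = tf.constant([4., 5., 6.])

# 'i,i->i' repeats the index on both inputs and the output,
# so nothing is summed: this is elementwise multiplication.
via_einsum = tf.einsum('i,i->i', x, y)
via_multiply = tf.multiply(x, y)
```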

For sufficiently large matrices, I'd expect all three functions to have equivalent performance for the same computation, since the work is dominated by the underlying matrix multiply. However, for smaller matrices it may be more efficient to use tf.matmul() directly, because it yields a simpler TensorFlow graph with fewer operations, and hence the per-operation invocation overhead is lower.
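If you want to check this on your own hardware, a rough eager-mode timing sketch using Python's timeit is below. The shapes and repetition count are arbitrary choices of mine, and absolute numbers (and even the ranking at small sizes) will vary by machine and TensorFlow version:

```python
import timeit
import tensorflow as tf

# Arbitrary problem size for illustration.
A = tf.random.normal([256, 256])
b = tf.random.normal([256])
b_col = tf.expand_dims(b, 1)  # column vector for matmul

# Time each variant over repeated eager calls.
t_einsum = timeit.timeit(lambda: tf.einsum('ij,j->i', A, b), number=100)
t_matmul = timeit.timeit(lambda: tf.matmul(A, b_col), number=100)
t_tensordot = timeit.timeit(lambda: tf.tensordot(A, b, 1), number=100)

print(f"einsum:    {t_einsum:.4f}s")
print(f"matmul:    {t_matmul:.4f}s")
print(f"tensordot: {t_tensordot:.4f}s")
```

Note that eager-mode timings fold in Python dispatch overhead; inside a tf.function, the graph-simplicity argument above matters more than raw eager numbers.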