My question is in two connected parts:
How do I calculate the max along a certain axis of a tensor? For example, if I have
    x = tf.constant([[1,220,55],[4,3,-1]])

I want something like

    x_max = tf.max(x, axis=1)
    print sess.run(x_max)
    # output: [220, 4]
I know there is a tf.argmax and a tf.maximum, but neither gives the maximum value along an axis of a single tensor. For now I have a workaround:

    x_max = tf.slice(x, begin=[0,0], size=[-1,1])
    for a in range(1,3):
        x_max = tf.maximum(x_max, tf.slice(x, begin=[0,a], size=[-1,1]))
But it looks less than optimal. Is there a better way to do this?
Given the indices of an argmax of a tensor, how do I index into another tensor using those indices? Using the example of x above, how do I do something like the following:

    ind_max = tf.argmax(x, dimension=1)  # output is [1, 0]
    y = tf.constant([[1,2,3], [6,5,4]])
    y_ = y[:, ind_max]  # y_ should be [2, 6]
I know slicing like in the last line does not exist in TensorFlow yet (#206).
My question is: what is the best workaround for my specific case (maybe using other methods like gather, select, etc.)?
Additional information: I know x and y are going to be two-dimensional tensors only!
As of TensorFlow 1.10.0-dev20180626, tf.reduce_max accepts axis and keepdims keyword arguments, offering functionality similar to numpy.max. To get a resulting tensor with the same number of dimensions as the input tensor, use keepdims=True.
If the axis argument is not explicitly specified, then the maximum over the entire tensor is returned (i.e. all axes are reduced).
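For example (a minimal sketch, assuming TensorFlow 1.x session-style execution as in the question):

    import tensorflow as tf

    x = tf.constant([[1, 220, 55], [4, 3, -1]])

    row_max = tf.reduce_max(x, axis=1)                      # -> [220, 4]
    row_max_keep = tf.reduce_max(x, axis=1, keepdims=True)  # -> [[220], [4]]
    global_max = tf.reduce_max(x)                           # no axis: all axes reduced -> 220

    with tf.Session() as sess:
        print(sess.run(row_max))
        print(sess.run(row_max_keep))
        print(sess.run(global_max))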
The tf.reduce_max() operator provides exactly this functionality. By default it computes the global maximum of the given tensor, but you can specify a list of reduction_indices, which has the same meaning as axis in NumPy. To complete your example:
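A minimal sketch of this completion, assuming a TF 1.x session (reduction_indices is the older name for what is now axis):

    import tensorflow as tf

    x = tf.constant([[1, 220, 55], [4, 3, -1]])

    # Per-row maximum: reduce over the column axis (axis 1).
    x_max = tf.reduce_max(x, reduction_indices=[1])

    with tf.Session() as sess:
        print(sess.run(x_max))  # ==> [220   4]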
If you compute the argmax using tf.argmax(), you could obtain the values from a different tensor y by flattening y using tf.reshape(), converting the argmax indices into vector indices as follows, and using tf.gather() to extract the appropriate values:
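A minimal sketch of that approach, assuming a TF 1.x session (note the cast: tf.argmax() returns int64 indices, while tf.range() yields int32):

    import tensorflow as tf

    x = tf.constant([[1, 220, 55], [4, 3, -1]])
    y = tf.constant([[1, 2, 3], [6, 5, 4]])

    ind_max = tf.argmax(x, 1)  # per-row argmax -> [1, 0]

    # Flatten y and turn each (row, argmax-column) pair into an index
    # into the flat vector: row * num_cols + col.
    flat_y = tf.reshape(y, [-1])
    y_shape = tf.shape(y)
    offsets = tf.cast(tf.range(y_shape[0]) * y_shape[1], tf.int64)
    y_ = tf.gather(flat_y, ind_max + offsets)

    with tf.Session() as sess:
        print(sess.run(y_))  # ==> [2 6]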