Given a set of training examples for training a neural network, we want to weight individual examples differently during training: each example is assigned a weight between 0.0 and 1.0 based on some criterion for its "value" (e.g. validity or confidence). How can this be implemented in TensorFlow, in particular when using tf.nn.sparse_softmax_cross_entropy_with_logits()?
Answer 1:
In the most common case, where you call tf.nn.sparse_softmax_cross_entropy_with_logits with logits of shape [batch_size, num_classes] and labels of shape [batch_size], the function returns a tensor of shape [batch_size]. You can multiply this tensor by a weight tensor before reducing it to a single loss value:
weights = tf.placeholder(name="loss_weights", shape=[None], dtype=tf.float32)
# labels and logits must be passed as keyword arguments
loss_per_example = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
# Scale each example's loss by its weight, then average over the batch
loss = tf.reduce_mean(weights * loss_per_example)
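For context, here is a minimal end-to-end sketch of the same idea, assuming TensorFlow 1.x-style graph execution (under TensorFlow 2.x the same symbols are available via tf.compat.v1). The names logits, labels and example_weights are illustrative placeholders for this sketch, not fixed API names:

import numpy as np
import tensorflow as tf  # assumes TF 1.x; under TF 2.x use tf.compat.v1

batch_size, num_classes = 4, 3

# Placeholders for a batch of logits, integer class labels,
# and one weight in [0.0, 1.0] per example
logits = tf.placeholder(tf.float32, shape=[None, num_classes])
labels = tf.placeholder(tf.int32, shape=[None])
example_weights = tf.placeholder(tf.float32, shape=[None], name="loss_weights")

# Per-example cross-entropy, shape [batch_size]
loss_per_example = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)

# Weighted loss: an example with weight 0.0 contributes nothing
loss = tf.reduce_mean(example_weights * loss_per_example)

with tf.Session() as sess:
    feed = {
        logits: np.random.randn(batch_size, num_classes),
        labels: np.random.randint(num_classes, size=batch_size),
        example_weights: [1.0, 0.5, 0.0, 1.0],  # e.g. the third example is ignored
    }
    print(sess.run(loss, feed_dict=feed))

Note that tf.reduce_mean divides by the batch size, not by the sum of the weights; if you want a true weighted average, divide tf.reduce_sum(example_weights * loss_per_example) by tf.reduce_sum(example_weights). TensorFlow 1.x also provides tf.losses.sparse_softmax_cross_entropy, which accepts a weights argument and applies this kind of per-example weighting for you.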