I am starting to use TensorFlow (coming from Caffe), and I am using the loss sparse_softmax_cross_entropy_with_logits. The function accepts labels like 0, 1, ..., C-1 instead of one-hot encodings. Now I want to apply a weighting that depends on the class label; I know this could probably be done with a matrix multiplication if I used softmax_cross_entropy_with_logits (one-hot encoding). Is there any way to do the same with sparse_softmax_cross_entropy_with_logits?
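Roughly, this is what I have in mind for the one-hot case (class_weights here is just an illustrative per-class weight vector, not something from my actual code):

```python
import tensorflow as tf

num_classes = 10
class_weights = tf.constant([1.0] * num_classes)  # illustrative per-class weights
logits = tf.placeholder(tf.float32, [None, num_classes])
onehot_labels = tf.placeholder(tf.float32, [None, num_classes])

# "Matrix multiplication" idea: the one-hot labels pick out each sample's class weight
example_weights = tf.reduce_sum(onehot_labels * class_weights, axis=1)
losses = tf.nn.softmax_cross_entropy_with_logits(labels=onehot_labels, logits=logits)
weighted_loss = tf.reduce_mean(example_weights * losses)
```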
Specifically for binary classification, there is weighted_cross_entropy_with_logits, which computes weighted cross entropy.

sparse_softmax_cross_entropy_with_logits is tailored for a highly efficient non-weighted operation (see SparseSoftmaxXentWithLogitsOp, which uses SparseXentEigenImpl under the hood), so it's not "pluggable".

In the multi-class case, your options are either to switch to one-hot encoding or to use the tf.losses.sparse_softmax_cross_entropy loss function in a hacky way, as already suggested, where you pass the weights depending on the labels in the current batch.

The class weights are multiplied by the logits, so that still works for sparse_softmax_cross_entropy_with_logits. Refer to this solution for "Loss function for class imbalanced binary classifier in TensorFlow."
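A minimal sketch of that "hacky" per-label weighting (class_weights, logits, and labels are assumed names used only for illustration):

```python
import tensorflow as tf

num_classes = 3
class_weights = tf.constant([1.0, 2.0, 0.5])  # assumed per-class weights
logits = tf.placeholder(tf.float32, [None, num_classes])
labels = tf.placeholder(tf.int32, [None])

# Look up one weight per example from its integer class label
example_weights = tf.gather(class_weights, labels)

# Scale the per-example cross-entropy losses by those weights
losses = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
loss = tf.reduce_mean(example_weights * losses)
```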
As a side note, you can pass weights directly into sparse_softmax_cross_entropy.

This method computes the cross-entropy loss using tf.nn.sparse_softmax_cross_entropy_with_logits under the hood.
weights acts as a coefficient for the loss. If a scalar is provided, then the loss is simply scaled by the given value. If weights is a tensor of shape [batch_size], then the loss weights apply to each corresponding sample.
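For example, a sketch of combining a per-label lookup with that weights argument (again, class_weights is an assumed per-class weight vector, not part of the API):

```python
import tensorflow as tf

num_classes = 3
class_weights = tf.constant([1.0, 2.0, 0.5])  # assumed per-class weights
logits = tf.placeholder(tf.float32, [None, num_classes])
labels = tf.placeholder(tf.int32, [None])

example_weights = tf.gather(class_weights, labels)  # one weight per sample
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels,
                                              logits=logits,
                                              weights=example_weights)
```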