When computing the cross-entropy with a sigmoid activation function, there is a difference between
loss1 = -tf.reduce_sum(p*tf.log(q), 1)
loss2 = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q),1)
But they are the same with a softmax activation function.
Here is the sample code:
import tensorflow as tf
sess = tf.InteractiveSession()
p = tf.placeholder(tf.float32, shape=[None, 5])
logit_q = tf.placeholder(tf.float32, shape=[None, 5])
q = tf.nn.sigmoid(logit_q)
sess.run(tf.global_variables_initializer())
feed_dict = {p: [[0, 0, 0, 1, 0], [1,0,0,0,0]], logit_q: [[0.2, 0.2, 0.2, 0.2, 0.2], [0.3, 0.3, 0.2, 0.1, 0.1]]}
loss1 = -tf.reduce_sum(p*tf.log(q),1).eval(feed_dict)
loss2 = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q),1).eval(feed_dict)
print(p.eval(feed_dict), "\n", q.eval(feed_dict))
print("\n",loss1, "\n", loss2)
You're confusing the cross-entropy for binary and multi-class problems.
Multi-class cross-entropy
The formula that you use is correct, and it directly corresponds to tf.nn.softmax_cross_entropy_with_logits: p and q are expected to be probability distributions over N classes. In particular, N can be 2, as in the following example:
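A minimal two-class sketch (the feed values are made up purely for illustration; it reuses the import and interactive session from the question's snippet and rebinds p, logit_q and q to a 2-class setup):

p = tf.placeholder(tf.float32, shape=[None, 2])
logit_q = tf.placeholder(tf.float32, shape=[None, 2])
q = tf.nn.softmax(logit_q)  # a proper probability distribution over the 2 classes

feed_dict = {p: [[0, 1], [1, 0]],
             logit_q: [[0.2, 0.8], [0.7, 0.3]]}

prob1 = -tf.reduce_sum(p * tf.log(q), axis=1)  # hand-written multi-class cross-entropy
prob2 = tf.nn.softmax_cross_entropy_with_logits(labels=p, logits=logit_q)  # built-in equivalent
print(prob1.eval(feed_dict))  # the two prints should show the same values
print(prob2.eval(feed_dict))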
Note that q is computed with tf.nn.softmax, i.e. it outputs a probability distribution. So it's still the multi-class cross-entropy formula, only for N = 2.

Binary cross-entropy
This time the correct formula is
p * -tf.log(q) + (1 - p) * -tf.log(1 - q)
Though mathematically it's a special case of the multi-class formula, the meaning of p and q is different. In the simplest case, each p and q is a single number, corresponding to the probability of class A.
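A quick sanity check of that scalar reading (plain Python; the label and probability values are made up purely for illustration):

import math
p_val, q_val = 1.0, 0.8  # label is 1, predicted probability of class A is 0.8
loss = p_val * -math.log(q_val) + (1 - p_val) * -math.log(1 - q_val)
print(loss)  # -log(0.8), roughly 0.223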
Important: Don't get confused by the common p * -tf.log(q) part and the sum. The previous p was a one-hot vector; now it's a number, zero or one. Same for q: it was a probability distribution, now it's a number (a probability).
If p is a vector, each individual component is considered an independent binary classification. See this answer, which outlines the difference between the softmax and sigmoid functions in TensorFlow. So the definition p = [0, 0, 0, 1, 0] doesn't mean a one-hot vector, but 5 different features, 4 of which are off and 1 is on. The definition q = [0.2, 0.2, 0.2, 0.2, 0.2] means that each of the 5 features is on with 20% probability.

This explains the use of the sigmoid function before the cross-entropy: its goal is to squash the logit into the [0, 1] interval.

The formula above still holds for multiple independent features, and that's exactly what tf.nn.sigmoid_cross_entropy_with_logits computes:
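A sketch of that comparison, run in the same interactive session and redefining the 5-feature placeholders and feed values exactly as in the question (the names prob1 through prob4 are just labels for the four variants, not part of any API):

p = tf.placeholder(tf.float32, shape=[None, 5])
logit_q = tf.placeholder(tf.float32, shape=[None, 5])
q = tf.nn.sigmoid(logit_q)

feed_dict = {p: [[0, 0, 0, 1, 0], [1, 0, 0, 0, 0]],
             logit_q: [[0.2, 0.2, 0.2, 0.2, 0.2], [0.3, 0.3, 0.2, 0.1, 0.1]]}

prob1 = -p * tf.log(q)                             # only the p * -log(q) part
prob2 = p * -tf.log(q) + (1 - p) * -tf.log(1 - q)  # full binary cross-entropy, from probabilities
prob3 = p * -tf.log(tf.sigmoid(logit_q)) + (1 - p) * -tf.log(1 - tf.sigmoid(logit_q))  # same, from logits
prob4 = tf.nn.sigmoid_cross_entropy_with_logits(labels=p, logits=logit_q)  # built-in version
print(prob1.eval(feed_dict))
print(prob2.eval(feed_dict))
print(prob3.eval(feed_dict))
print(prob4.eval(feed_dict))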
You should see that the last three tensors are equal, while prob1 is only a part of the cross-entropy, so it contains the correct value only when p is 1.

Now it should be clear that taking a sum of -p * tf.log(q) along axis=1 doesn't make sense in this setting, though it would be a valid formula in the multi-class case.
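To tie this back to the question: with independent features, the per-example loss is the sum of the full binary cross-entropy over the features, which is exactly the question's loss2; summing only -p * tf.log(q) (the question's loss1) drops the (1 - p) * -tf.log(1 - q) term. A short sketch, reusing the tensors defined just above:

loss1 = tf.reduce_sum(-p * tf.log(q), axis=1)  # incomplete: ignores the terms where p == 0
loss2 = tf.reduce_sum(prob4, axis=1)           # per-example sum of the full binary cross-entropy
print(loss1.eval(feed_dict))
print(loss2.eval(feed_dict))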