I'm trying to learn how to use tensorflow and tensorboard. I have a test project based on the MNIST neural net tutorial.
In my code, I construct a node that calculates the fraction of digits in a data set that are correctly classified, like this:
correct = tf.nn.in_top_k(self._logits, labels, 1)
correct = tf.to_float(correct)
accuracy = tf.reduce_mean(correct)
Here, self._logits is the inference part of the graph, and labels is a placeholder that contains the correct labels.
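For reference, the placeholders are set up roughly like this (the shapes follow the MNIST tutorial; the inference graph behind self._logits is omitted here):
images = tf.placeholder(tf.float32, shape=[None, 784], name="images")  # flattened 28x28 images
labels = tf.placeholder(tf.int32, shape=[None], name="labels")         # integer class labels
# self._logits is computed from `images` by the network's layers and has shape [None, 10].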
Now, what I would like to do is evaluate the accuracy for both the training set and the validation set as training proceeds. I can do this by running the accuracy node twice, with different feed_dicts:
# sess is the tf.Session used for training
train_acc = sess.run(accuracy, feed_dict={images: training_set.images, labels: training_set.labels})
valid_acc = sess.run(accuracy, feed_dict={images: validation_set.images, labels: validation_set.labels})
This works as intended: I can print the values and see that both accuracies increase at first, and that eventually the validation accuracy flattens out while the training accuracy keeps increasing.
However, I would also like to get graphs of these values in tensorboard, and I cannot figure out how to do this. If I simply add a scalar_summary to accuracy, the logged values will not distinguish between the training set and the validation set.
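To be concrete, this is what I mean by a single summary (the writer setup is only a sketch; the log directory and step counter are placeholders):
acc_summary = tf.scalar_summary("accuracy", accuracy)
writer = tf.train.SummaryWriter("/tmp/mnist_logs")

# Whichever feed_dict I evaluate with, the value is logged under the same
# "accuracy" tag, so training and validation points end up on the same curve.
summary_str = sess.run(acc_summary, feed_dict={images: training_set.images,
                                               labels: training_set.labels})
writer.add_summary(summary_str, step)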
I also tried creating two identical accuracy nodes with different names, running one on the training set and one on the validation set, and adding a scalar_summary to each of them. This does give me two graphs in tensorboard, but instead of one graph showing the training set accuracy and one showing the validation set accuracy, they both show identical values that match neither of the values printed to the terminal.
I am probably misunderstanding how to solve this problem. What is the recommended way of separately logging the output from a single node for different inputs?