I have a TensorFlow model, and one part of this model evaluates the accuracy. The accuracy is just another node in the TensorFlow graph that takes in logits and labels.
When I want to plot the training accuracy, this is simple: I have something like:
tf.scalar_summary("Training Accuracy", accuracy)
tf.scalar_summary("SomethingElse", foo)
summary_op = tf.merge_all_summaries()
writer = tf.train.SummaryWriter('/me/mydir/', graph=sess.graph)
Then, during my training loop, I have something like:
    for n in xrange(1000):
        ...
        summary, ..., ... = sess.run([summary_op, ..., ...], feed_dict)
        writer.add_summary(summary, n)
        ...
Also inside that for loop, every, say, 100 iterations, I want to evaluate the validation accuracy. I have a separate feed_dict for this, and I am able to evaluate the validation accuracy very nicely in Python.
However, here is my problem: I want to make another summary for the validation accuracy, using the accuracy node. I am not clear on how to do this, though. Since I have the accuracy node, it makes sense that I should be able to reuse it, but I am unsure how to do this exactly such that I can also get the validation accuracy written out as a separate scalar_summary...
How might this be possible?
You can reuse the accuracy node, but you need to use two different SummaryWriters: one for the training runs and one for the test data. You also have to assign the scalar summary for accuracy to a variable.
Then, in your training loop, you do the normal training and record your summaries with the train_writer. In addition, every 100th iteration you run the graph on the test set and record only the accuracy summary with the test_writer.
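For example, here is a minimal sketch of that setup, using the same pre-1.0 API as your question; summaries_dir, train_step, train_feed, and test_feed are illustrative names for things defined elsewhere in your code:

    accuracy_summary = tf.scalar_summary("accuracy", accuracy)
    summary_op = tf.merge_all_summaries()
    train_writer = tf.train.SummaryWriter(summaries_dir + '/train', graph=sess.graph)
    test_writer = tf.train.SummaryWriter(summaries_dir + '/test')

    for n in xrange(1000):
        # Normal training step: record the merged summaries with train_writer.
        summary, _ = sess.run([summary_op, train_step], feed_dict=train_feed)
        train_writer.add_summary(summary, n)
        if n % 100 == 0:
            # Same accuracy node, different feed_dict, separate writer.
            summary = sess.run(accuracy_summary, feed_dict=test_feed)
            test_writer.add_summary(summary, n)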
You can then point TensorBoard to the parent directory (summaries_dir) and it will load both sets of data as separate runs.
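Assuming summaries_dir is, say, /tmp/summaries, that would be:

    tensorboard --logdir=/tmp/summaries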
This is also covered in the TensorFlow HowTos: https://www.tensorflow.org/versions/r0.11/how_tos/summaries_and_tensorboard/index.html
To run the same operation but get summaries with different feed_dict data, simply attach two summary ops to that op. Say you want to run the accuracy op on both validation and test data and want summaries for both:
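A sketch of that idea, again with the pre-1.0 API; validation_feed_dict, test_feed_dict, and the step counter n are assumed to be defined as in your training loop:

    # Two summary ops attached to the same accuracy node, each with its own tag.
    validation_acc_summary = tf.scalar_summary('validation_accuracy', accuracy)
    test_acc_summary = tf.scalar_summary('test_accuracy', accuracy)

    # Evaluate the same accuracy node under two different feeds.
    validation_summary_str = sess.run(validation_acc_summary,
                                      feed_dict=validation_feed_dict)
    writer.add_summary(validation_summary_str, n)

    test_summary_str = sess.run(test_acc_summary, feed_dict=test_feed_dict)
    writer.add_summary(test_summary_str, n)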
Also, remember that you can always pull the raw (scalar) data out of the protobuf summary_str and do your own logging, for example:
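A minimal sketch of that, parsing the serialized Summary protobuf that sess.run returns; the tag here is illustrative and should match whatever name you passed to tf.scalar_summary:

    summary_proto = tf.Summary()
    summary_proto.ParseFromString(summary_str)
    for value in summary_proto.value:
        if value.tag == 'validation_accuracy':
            # simple_value holds the raw scalar that was logged.
            print(value.simple_value)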