I created a few summary ops throughout my graph like so:
tf.summary.scalar('cross_entropy', cross_entropy)
tf.summary.scalar('accuracy', accuracy)
and of course I merged them all and created a writer:
sess = tf.InteractiveSession()
summaries = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(TENSORBOARD_TRAINING_DIR, sess.graph)
tf.global_variables_initializer().run()
and I write these in each training iteration:
summary, acc = sess.run([summaries, accuracy], feed_dict={...})
train_writer.add_summary(summary, i)
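For reference, here is a minimal, self-contained repro of the pattern above (TF 1.x). The placeholder "metrics" and the fed values are hypothetical stand-ins for my real model:

```python
import tensorflow as tf

# Hypothetical stand-ins: scalar placeholders instead of real model tensors.
cross_entropy = tf.placeholder(tf.float32, shape=[], name='cross_entropy_value')
accuracy = tf.placeholder(tf.float32, shape=[], name='accuracy_value')
tf.summary.scalar('cross_entropy', cross_entropy)
tf.summary.scalar('accuracy', accuracy)

TENSORBOARD_TRAINING_DIR = '/tmp/tv_train'
sess = tf.InteractiveSession()
summaries = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(TENSORBOARD_TRAINING_DIR, sess.graph)
tf.global_variables_initializer().run()

for i in range(100):
    # Fake metric values just to produce plottable curves.
    summary = sess.run(summaries, feed_dict={cross_entropy: 1.0 / (i + 1),
                                             accuracy: i / 100.0})
    # The second argument is the global step; TensorBoard uses it as the
    # x-coordinate, so repeated step values make plots fold back on themselves.
    train_writer.add_summary(summary, i)
train_writer.flush()
```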
when I load TensorBoard, I get some weird results:
this is weird for a couple of reasons:
- the y-axis on the cross_entropy chart doesn't have increasing (or even distinct) tick marks
- the line plots appear to fold back on themselves, as if going back in time
I did check - there are a few previous event files in my training summaries folder:
$ ls /tmp/tv_train/
events.out.tfevents.1517210066.xxxxxxx.local
events.out.tfevents.1517210097.xxxxxxx.local
...
events.out.tfevents.1517210392.xxxxxxx.local
I think I must have restarted the training loop at some point, which would have logged multiple summaries at the same step indices (0, 1, etc.).
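To check that theory, I figure one could dump the step numbers recorded in each event file using `tf.train.summary_iterator` from the TF 1.x API (the directory matches the `ls` output above):

```python
import glob
import tensorflow as tf

for path in sorted(glob.glob('/tmp/tv_train/events.out.tfevents.*')):
    # Keep only events that actually carry summary values
    # (skipping graph defs, file version markers, etc.).
    steps = [event.step for event in tf.train.summary_iterator(path)
             if event.summary.value]
    print(path, '->', steps[:5], '...' if len(steps) > 5 else '')
```

If each file's steps start over at 0, that would explain the folded-back plots.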
How can I append to old training logs? Can I point my writer to a specific tfevents file to "start back where I left off"?