TensorFlow - Importing data from a TensorBoard TFEvent file

Published 2020-02-02 04:40

Question:

I've run several training sessions with different graphs in TensorFlow. The summaries I set up show interesting results in the training and validation. Now, I'd like to take the data I've saved in the summary logs and perform some statistical analysis and in general plot and look at the summary data in different ways. Is there any existing way to easily access this data?

More specifically, is there any built in way to read a TFEvent record back into Python?

If there is no simple way to do this, TensorFlow states that all its file formats are protobuf files. From my (limited) understanding of protobufs, I think I'd be able to extract this data if I had the TFEvent protocol specification. Is there an easy way to get ahold of this? Thank you very much.

Answer 1:

As Fabrizio says, TensorBoard is a great tool for visualizing the contents of your summary logs. However, if you want to perform a custom analysis, you can use the tf.train.summary_iterator() function to loop over all of the tf.Event and tf.Summary protocol buffers in the log:

for summary in tf.train.summary_iterator("/path/to/log/file"):
    # Perform custom processing here.
    pass

UPDATE for TF2:

from tensorflow.python.summary.summary_iterator import summary_iterator

You need to import it explicitly; as of 2.0.0-rc2, that module is not imported by default.
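For example, a minimal sketch (the event-file path is a placeholder) that iterates over the Event protos with the imported function:

# Iterate over the Event protos in an event file; the path is a placeholder.
for event in summary_iterator("/path/to/log/file"):
    for value in event.summary.value:
        print(event.step, value.tag)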



Answer 2:

To read a TFEvent file, you can get a Python iterator that yields Event protocol buffers.

# This example assumes that the events file contains summaries with a
# summary value tag 'loss'.  These could have been added by calling
# `add_summary()`, passing the output of a scalar summary op created
# with `tf.scalar_summary(['loss'], loss_tensor)`.
for e in tf.train.summary_iterator(path_to_events_file):
    for v in e.summary.value:
        if v.tag == 'loss' or v.tag == 'accuracy':
            print(v.simple_value)

More info: summary_iterator



Answer 3:

You can simply use:

tensorboard --inspect --event_file=myevents.out

or if you want to filter a specific subset of events of the graph:

tensorboard --inspect --event_file=myevents.out --tag=loss

If you want to create something more custom, you can dig into

/tensorflow/python/summary/event_file_inspector.py 

to understand how to parse the event files.



Answer 4:

Here is a complete example for obtaining values from a scalar. You can see the message specification for the Event protobuf message here.

import tensorflow as tf


for event in tf.train.summary_iterator('runs/easy_name/events.out.tfevents.1521590363.DESKTOP-43A62TM'):
    for value in event.summary.value:
        print(value.tag)
        if value.HasField('simple_value'):
            print(value.simple_value)


Answer 5:

The following works as of TensorFlow version 2.0.0-beta1:

import os

import tensorflow as tf
from tensorflow.python.framework import tensor_util

summary_dir = 'tmp/summaries'
summary_writer = tf.summary.create_file_writer(summary_dir)

with summary_writer.as_default():
    tf.summary.scalar('loss', 0.1, step=42)
    tf.summary.scalar('loss', 0.2, step=43)
    tf.summary.scalar('loss', 0.3, step=44)
    tf.summary.scalar('loss', 0.4, step=45)

# Make sure the pending summaries are flushed to disk before reading them back.
summary_writer.flush()


from tensorflow.core.util import event_pb2
from tensorflow.python.lib.io import tf_record

def my_summary_iterator(path):
    for r in tf_record.tf_record_iterator(path):
        yield event_pb2.Event.FromString(r)

for filename in os.listdir(summary_dir):
    path = os.path.join(summary_dir, filename)
    for event in my_summary_iterator(path):
        for value in event.summary.value:
            t = tensor_util.MakeNdarray(value.tensor)
            print(value.tag, event.step, t, type(t))

The code for my_summary_iterator is copied from tensorflow/python/summary/summary_iterator.py; there was no way to import it at runtime in 2.0.0-beta1.



Answer 6:

You can use the serialize_tensorboard script, which will take in a logdir and write out all the data in JSON format.

You can also use an EventAccumulator for a convenient Python API (this is the same API that TensorBoard uses).
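For example, a minimal sketch (the log path and the 'loss' tag are placeholders, and the tensorboard package is assumed to be installed):

from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Load everything the accumulator finds under the given path (an event file or a log directory).
ea = EventAccumulator('/path/to/log/dir')
ea.Reload()

print(ea.Tags())  # e.g. the available scalar, histogram, and image tags
for scalar_event in ea.Scalars('loss'):
    print(scalar_event.step, scalar_event.value)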



Answer 7:

I've been using this. It assumes that you only want to see tags that you've logged more than once and whose values are floats, and it returns the results as a pd.DataFrame. Just call metrics_df = parse_events_file(path).

from collections import defaultdict
import pandas as pd
import tensorflow as tf

def is_interesting_tag(tag):
    if 'val' in tag or 'train' in tag:
        return True
    else:
        return False


def parse_events_file(path: str) -> pd.DataFrame:
    metrics = defaultdict(list)
    for e in tf.train.summary_iterator(path):
        for v in e.summary.value:

            if isinstance(v.simple_value, float) and is_interesting_tag(v.tag):
                metrics[v.tag].append(v.simple_value)
            if v.tag == 'loss' or v.tag == 'accuracy':
                print(v.simple_value)
    metrics_df = pd.DataFrame({k: v for k,v in metrics.items() if len(v) > 1})
    return metrics_df