TensorBoard scalars are missing

Posted 2019-09-14 16:26

Question:

I've written a TensorFlow program, and now I want to see what it's doing using TensorBoard. Here is the relevant part of the code:

def train_neural_network(x):
    prediction = neuronal_network_model(x)
    tf.summary.scalar('Prediction', prediction)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=prediction, labels=y))
    tf.summary.scalar('cost', cost)
    # default learning rate of the optimizer is 0.001
    optimizer = tf.train.AdamOptimizer(learning_rate=1e-3).minimize(cost)
    # one epoch = one feed-forward pass + backpropagation (adjusting the weights and the biases)
    number_of_epochs = 200

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        for epoch in range(number_of_epochs):
            shuffle_data()
            epoch_loss = 0
            for j in range(len(train_data) - 1):
                if np.shape(train_labels[j]) == (batch_size, n_classes):
                    epoch_x = train_data[j]
                    epoch_y = train_labels[j]
                    _, c = sess.run([optimizer, cost], feed_dict={x: epoch_x, y: np.reshape(epoch_y, (batch_size, n_classes))})
                    epoch_loss += c
                    correct = tf.equal(tf.argmax(prediction, 1), tf.argmax(y, 1))
                    accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
                    tf.summary.scalar('Prediction', prediction)
                    # print('Epoch', epoch, 'complete out of ', number_of_epochs, 'loss', epoch_loss)
                    loss_vector.append(epoch_loss)
            for i in range(len(test_data) - 1):
                if np.shape(test_labels[i]) == (batch_size, n_classes):
                    print('Accuracy', accuracy.eval({x: test_data[i], y: test_labels[i]}))
                    accuracy_vector.append(accuracy.eval({x: test_data[i], y: test_labels[i]}))
            merged = tf.summary.merge_all()

            train_writer = tf.summary.FileWriter('Tensorboard/DNN', sess.graph)

When I run TensorBoard I can see the graph, but the scalars tab is empty. Why?

Update:

Here is the input placeholder declaration:

x = tf.placeholder('float', [None, len(Training_Data[0])],name='input_values')
y = tf.placeholder('float', name='prediction')

Answer 1:

You need to evaluate your merged summaries the same way you evaluate any other tensor in order for them to dump data:

_, c, smry = sess.run([optimizer, cost, merged], feed_dict={x: epoch_x, y: np.reshape(epoch_y, (batch_size, n_classes))})
train_writer.add_summary(smry, j)

where j is your training step index. Obviously, this has to take place inside the training loop. You may want to write summaries only every n-th value of j, to reduce both summary-writing overhead and clutter in the visualization, as in the sketch below.
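Putting it together, here is a minimal sketch of how the loop could be restructured. It assumes the same placeholders, data, and ops as in the question; log_every is a made-up name for the summary interval. Note that merge_all() and the FileWriter are created once, before the loop, rather than inside it, and that tf.summary.scalar only accepts scalar tensors such as cost:

merged = tf.summary.merge_all()   # collect all tf.summary.* ops defined so far
log_every = 10                    # hypothetical interval: write one summary every 10 steps

with tf.Session() as sess:
    train_writer = tf.summary.FileWriter('Tensorboard/DNN', sess.graph)
    sess.run(tf.global_variables_initializer())

    for epoch in range(number_of_epochs):
        for j in range(len(train_data) - 1):
            feed = {x: train_data[j], y: train_labels[j]}
            if j % log_every == 0:
                # evaluate the merged summary alongside the training ops ...
                _, c, smry = sess.run([optimizer, cost, merged], feed_dict=feed)
                # ... and dump it to disk, tagged with a global step index
                train_writer.add_summary(smry, epoch * (len(train_data) - 1) + j)
            else:
                _, c = sess.run([optimizer, cost], feed_dict=feed)

    train_writer.close()  # flush any pending events to disk

You can then point TensorBoard at the log directory with tensorboard --logdir Tensorboard/DNN, and the scalars tab should be populated.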

More details here.