With TensorFlow it is possible to monitor quantities during training using tf.summary.
Is it possible to do the same with Keras? Could you include an example, e.g. by modifying the code at https://github.com/fchollet/keras/blob/master/examples/variational_autoencoder.py to monitor the KL loss (defined at line 53)?
Thank you in advance!
Have you tried the TensorBoard callback? [1]
tensorboard = keras.callbacks.TensorBoard(log_dir='./logs',
                                          histogram_freq=1,
                                          write_graph=True,
                                          write_images=False)
vae.fit(x_train,
        shuffle=True,
        epochs=epochs,
        batch_size=batch_size,
        validation_data=(x_test, x_test),
        callbacks=[tensorboard])
Then run:
tensorboard --logdir=./logs
You could write a modified version of the callback to handle the specific items you are interested in.
[1] https://keras.io/callbacks/#tensorboard
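A minimal sketch of such a modified callback, under some assumptions: the class below mirrors the on_epoch_end hook of the keras.callbacks.Callback interface but is written as a standalone Python class so the idea is clear without a TensorFlow install, and the key name 'kl_loss' is a hypothetical example. In real Keras you would subclass keras.callbacks.Callback and write the value out with tf.summary instead of keeping it in a list.

```python
class ScalarMonitor:
    """Stand-in for a custom Keras callback that records one scalar
    from the `logs` dict at the end of each epoch."""

    def __init__(self, key):
        self.key = key      # name of the quantity to monitor, e.g. 'kl_loss'
        self.history = []   # (epoch, value) pairs, one per epoch

    def on_epoch_end(self, epoch, logs=None):
        # Keras calls this hook with the metrics computed for the epoch.
        logs = logs or {}
        if self.key in logs:
            self.history.append((epoch, logs[self.key]))

monitor = ScalarMonitor('kl_loss')
monitor.on_epoch_end(0, {'loss': 1.2, 'kl_loss': 0.8})
monitor.on_epoch_end(1, {'loss': 0.9, 'kl_loss': 0.5})
print(monitor.history)  # [(0, 0.8), (1, 0.5)]
```

In actual use you would pass the real callback in `callbacks=[...]` to `vae.fit(...)`, exactly like the TensorBoard callback above.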
Actually, a workaround is to add the quantities you want to monitor as extra metrics when compiling the model.
For instance, I wanted to monitor the KL divergence (in the context of variational autoencoders), so I wrote this:
def kl_loss(y_true, y_pred):
    # z_mean_0 and z_var_0 are the latent mean/variance tensors from the encoder;
    # the metric ignores y_true/y_pred and reads them from the enclosing scope.
    kl_loss = - 0.5 * K.sum(1 + K.log(z_var_0 + 1e-8) - K.square(z_mean_0) - z_var_0, axis=-1)
    return kl_loss

vae.compile(optimizer='rmsprop', loss=vae_loss, metrics=['accuracy', kl_loss])
And it does what I need.
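As a sanity check on that expression (a plain-Python sketch, not Keras code): for a diagonal Gaussian posterior with mean mu and variance var, the per-dimension term -0.5 * (1 + log(var) - mu^2 - var) is the closed-form KL to a standard-normal prior, so it should be (near) zero exactly when mu = 0 and var = 1:

```python
import math

def kl_to_standard_normal(mu, var, eps=1e-8):
    """KL(N(mu, diag(var)) || N(0, I)) summed over dimensions, matching
    -0.5 * K.sum(1 + K.log(z_var_0 + 1e-8) - K.square(z_mean_0) - z_var_0, axis=-1)."""
    return -0.5 * sum(1 + math.log(v + eps) - m * m - v
                      for m, v in zip(mu, var))

# Posterior equal to the prior: KL is ~0 (exactly 0 up to the eps term).
print(kl_to_standard_normal([0.0, 0.0], [1.0, 1.0]))
# Shifting one mean to 1 costs 0.5 nat of KL.
print(kl_to_standard_normal([1.0, 0.0], [1.0, 1.0]))
```

This also makes the role of the 1e-8 visible: it only guards the log against a zero variance and perturbs the value by a negligible amount.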