No easy way to add TensorBoard output to pre-defined estimator

Posted 2019-03-31 13:51

Question:

I have been using the estimator interface in TF 1.3 including the creation of the data input function:

training_input_fn = tf.estimator.inputs.pandas_input_fn(x=training_data, y=training_label, batch_size=64, shuffle=True, num_epochs=None)

and building the NN:

dnnclassifier = tf.estimator.DNNClassifier(
  feature_columns=dnn_features,
  hidden_units=[1024, 500, 100],
  n_classes=2,
  model_dir='./tmp/ccsprop',
  optimizer=tf.train.ProximalAdagradOptimizer(
    learning_rate=0.001,
    l1_regularization_strength=0.01))

and executing it

dnnclassifier.train(input_fn=training_input_fn, steps=1500)

After much searching, I see no easy way to add TensorBoard output without resorting to recreating the model from scratch, as indicated here: https://www.tensorflow.org/extend/estimators

And even then I can find no good examples to follow that create a simple DNNClassifier with TensorBoard output. Any guidance?

I have the basic model working, but I need to examine it much more closely for tuning, eventually using Experiments as well, and I don't see how.

Answer 1:

When calling DNNClassifier.train, it accepts a hooks parameter; you can create a SummarySaverHook and add it to hooks, as sketched below.
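A minimal sketch of that, assuming the dnnclassifier and training_input_fn from the question (the save_steps value and output_dir are illustrative):

import tensorflow as tf

# Save merged summaries every 100 steps; the Scaffold supplies
# tf.summary.merge_all() as the summary op once the graph is built.
summary_hook = tf.train.SummarySaverHook(
    save_steps=100,
    output_dir='./tmp/ccsprop',
    scaffold=tf.train.Scaffold())

dnnclassifier.train(input_fn=training_input_fn,
                    steps=1500,
                    hooks=[summary_hook])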

Update

When adding a metric (accuracy, for example) to TensorBoard, you should follow several steps (a fuller sketch follows the list):

  1. Define a Tensor which calculates the accuracy: acc_op = ...;

  2. Register the Tensor with tf.summary.scalar: tf.summary.scalar('acc', acc_op);

  3. There can be multiple tf.summary ops in a tf.Graph, so define merge_summary_op = tf.summary.merge_all() to get a single op that merges all the metric Tensors.

  4. Write the merged summaries with a summary_writer = tf.summary.FileWriter(logdir);

  5. Pass the summary op to a SummarySaverHook, or drive the summary_writer from your own code.
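A minimal sketch of those steps inside a custom model_fn (TF 1.x; the toy model and the names my_model_fn and acc_op are illustrative, not from the question):

import tensorflow as tf

def my_model_fn(features, labels, mode):
    # A toy two-class model; replace with your own network.
    logits = tf.layers.dense(features['x'], 2)
    loss = tf.losses.sparse_softmax_cross_entropy(labels, logits)

    # Step 1: a tensor that computes the metric.
    predictions = tf.argmax(logits, axis=1)
    acc_op = tf.reduce_mean(
        tf.cast(tf.equal(tf.cast(predictions, labels.dtype), labels),
                tf.float32))

    # Step 2: register it as a scalar summary.
    tf.summary.scalar('acc', acc_op)

    train_op = tf.train.AdagradOptimizer(0.05).minimize(
        loss, global_step=tf.train.get_global_step())

    # Steps 3-5: merge all summaries and save them via a hook.
    summary_hook = tf.train.SummarySaverHook(
        save_steps=100,
        output_dir='./tmp/summaries',
        summary_op=tf.summary.merge_all())

    return tf.estimator.EstimatorSpec(
        mode, loss=loss, train_op=train_op,
        training_hooks=[summary_hook])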



Answer 2:

See this GitHub issue for an extended discussion: https://github.com/tensorflow/tensorflow/issues/12974#issuecomment-339856673

This does the trick to get a full set of TensorBoard output from canned models:

dnnclassifier = tf.estimator.DNNClassifier(
  feature_columns=dnn_features,
  hidden_units=[1024, 500, 100],
  n_classes=2, 
  model_dir='./tmp/ccsprop',
  optimizer=tf.train.ProximalAdagradOptimizer(
    learning_rate=0.001,
    l1_regularization_strength=0.01),
  config=tf.estimator.RunConfig().replace(save_summary_steps=10)
)

Note the last line and be observant of where you need parentheses!
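Once training has written event files into model_dir, you can view them by pointing TensorBoard at that directory, e.g. tensorboard --logdir ./tmp/ccsprop (the path here matches the model_dir above).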



Answer 3:

Hello, does this solution still work with the current version of tf.estimator? I implemented the code structure and I get "No scalar data was found" and "No graph definition files were found." Is there another way I should do this? Below is my code:

import shutil
import tensorflow as tf

# featcols, df_train, BATCH_SIZE, train_input_fn, eval_input_fn and
# print_rmse are defined elsewhere in my notebook.
outdir = './gp_trained'
shutil.rmtree(outdir, ignore_errors=True)  # start fresh each time

myopt = tf.train.AdamOptimizer(learning_rate=0.01, beta1=0.9, beta2=0.999)
model = tf.estimator.DNNRegressor(
    model_dir=outdir,
    hidden_units=[50],
    feature_columns=featcols.values(),
    optimizer=myopt,
    dropout=0.1,
    config=tf.estimator.RunConfig().replace(save_summary_steps=10))

NSTEPS = (100 * len(df_train)) // BATCH_SIZE  # integer division: steps must be an int
model.train(input_fn=train_input_fn, steps=NSTEPS)
print_rmse(model, 'eval', eval_input_fn)

Thanks.