We're trying to translate old training code into code that is compliant with tf.estimator.Estimator. In the original code we fine-tune a base model on a target dataset: only some layers are loaded from the checkpoint before training starts, using a combination of variables_to_restore and init_fn with MonitoredTrainingSession. How can one achieve this kind of weight loading with the tf.estimator.Estimator approach?
Tags:
tensorflow
You have two options; the first one is simpler:

1- Call tf.train.init_from_checkpoint in your model_fn. It takes an assignment map, so you can restore just a subset of the variables, much like variables_to_restore did.

2- Your model_fn returns an EstimatorSpec, and EstimatorSpec accepts a scaffold argument. Pass a tf.train.Scaffold whose init_fn restores the variables you want, exactly as you did with MonitoredTrainingSession.
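A minimal sketch of both options, written against the TF 1.x API (via tf.compat.v1). The "base_net" variable scope, the "finetune_ckpt" params key, and the feature/label shapes are hypothetical placeholders for your own model:

```python
import tensorflow.compat.v1 as tf

def model_fn(features, labels, mode, params):
    # Layers under "base_net" are the ones we want restored from the old checkpoint.
    with tf.variable_scope("base_net"):
        net = tf.layers.dense(features["x"], 64, activation=tf.nn.relu)
    logits = tf.layers.dense(net, params["n_classes"])  # new head, trained from scratch

    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.AdamOptimizer().minimize(
        loss, global_step=tf.train.get_or_create_global_step())

    # Option 1: init_from_checkpoint with an assignment map restores only the
    # listed scope before training starts; everything else keeps its initializer.
    tf.train.init_from_checkpoint(params["finetune_ckpt"],
                                  {"base_net/": "base_net/"})
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)


def model_fn_with_scaffold(features, labels, mode, params):
    # Option 2: same partial restore, but via EstimatorSpec's scaffold argument,
    # mirroring the variables_to_restore + init_fn pattern of MonitoredTrainingSession.
    with tf.variable_scope("base_net"):
        net = tf.layers.dense(features["x"], 64, activation=tf.nn.relu)
    logits = tf.layers.dense(net, params["n_classes"])
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.AdamOptimizer().minimize(
        loss, global_step=tf.train.get_or_create_global_step())

    # Select only the variables to load, build a Saver over them, and restore
    # inside the Scaffold's init_fn once the session exists.
    to_restore = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="base_net")
    saver = tf.train.Saver(to_restore)

    def init_fn(scaffold, session):
        saver.restore(session, params["finetune_ckpt"])

    scaffold = tf.train.Scaffold(init_fn=init_fn)
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss,
                                      train_op=train_op, scaffold=scaffold)
```

Either model_fn can then be handed to tf.estimator.Estimator(model_fn=..., params=...) as usual; the restore happens automatically when the first training session is created.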