I am trying to deploy a retrained version of the Inception model on Google Cloud ML Engine. Gathering information from the SavedModel documentation, this reference, and this post of rhaertel80, I successfully exported my retrained model to a SavedModel, uploaded it to a bucket, and tried to deploy it as an ml-engine version.
This last task actually creates a version, but it outputs this error:
Create Version failed. Bad model detected with error: "Error loading the model: Unexpected error when loading the model"
And when I try to get predictions from the model via the command line, I get this error message:
"message": "Field: name Error: Online prediction is unavailable for this version. Please verify that CreateVersion has completed successfully."
I have made several attempts, trying different method_name and tag options, but none worked.
The code added to the original Inception code is:

import tensorflow as tf
from tensorflow.python.saved_model import builder as saved_model_builder

### DEFINE SAVED MODEL SIGNATURE

# Input: the raw JPEG bytes fed to the existing decode node;
# output: the retrained classification layer.
in_image = graph.get_tensor_by_name('DecodeJpeg/contents:0')
inputs = {'image_bytes': tf.saved_model.utils.build_tensor_info(in_image)}

out_classes = graph.get_tensor_by_name('final_result:0')
outputs = {'prediction': tf.saved_model.utils.build_tensor_info(out_classes)}

signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs=inputs,
    outputs=outputs,
    method_name='tensorflow/serving/predict'
)

### SAVE OUT THE MODEL

b = saved_model_builder.SavedModelBuilder('new_export_dir')
b.add_meta_graph_and_variables(sess,
                               [tf.saved_model.tag_constants.SERVING],
                               signature_def_map={'predict_images': signature})
b.save()
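Before uploading, the export can be inspected locally to confirm which tags and signature keys it actually contains; a minimal sketch using the TF 1.x SavedModel loader:

import tensorflow as tf

# Load the export into a fresh graph and print its signature map, so the
# tag ('serve') and signature key names can be verified before deploying.
with tf.Session(graph=tf.Graph()) as sess:
    meta_graph_def = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], 'new_export_dir')
    print(meta_graph_def.signature_def)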
Another consideration that might help: I have used an exported trained_graph.pb (written with graph_def.SerializeToString()) to get predictions locally, and it works fine, but when I substitute it with the saved_model.pb it fails.
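For context, the local path loads the frozen GraphDef directly, roughly like this (standard TF 1.x pattern, file name as above):

import tensorflow as tf

# Read the serialized GraphDef and import it into the default graph.
with tf.gfile.FastGFile('trained_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

Note that saved_model.pb is a SavedModel protocol buffer, not a plain GraphDef, which may be why substituting it into this loading code fails.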
Any suggestions on what the issue might be?
In your signature_def_map, use the key 'serving_default', which is defined in signature_constants as DEFAULT_SERVING_SIGNATURE_DEF_KEY:
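Applied to the export code in the question, only the signature_def_map key changes (a minimal sketch):

b.add_meta_graph_and_variables(
    sess,
    [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            signature
    })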