How to deploy a locally trained TensorFlow graph file

Posted 2020-02-06 03:54

I've followed the TensorFlow for Poets tutorial and replaced the stock flower_photos with a few classes of my own. Now I've got my labels.txt file and my graph.pb saved on my local machine.

Is there a way for me to deploy this pre-trained model to Google Cloud Platform? I've been reading the docs and all I can find are instructions on how to create, train, and deploy models from within their ML Engine. But I don't want to spend money training my model on Google's servers when I only need them to host my model so I can call it for predictions.

Anyone else run into the same problem?

2 Answers
Deceive 欺骗
#2 · 2020-02-06 04:11

Only a partial answer, unfortunately: I have been able to accomplish this, but with some ongoing issues I have not yet resolved. I ported the trained .pb and .txt files over to my server, installed TensorFlow, and I'm calling the trained model via HTTP requests. It works perfectly on the first run, then fails every time after that.
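For context, here's a minimal sketch of that kind of setup (assuming Flask and TensorFlow 1.x; the tensor names are the ones the Poets tutorial produces). Loading the graph and creating a single long-lived session at import time, rather than per request, is the usual way to avoid per-request graph re-import problems under servers like gunicorn:

import tensorflow as tf  # TensorFlow 1.x
from flask import Flask, request, jsonify

app = Flask(__name__)

# Load the graph and open one long-lived session at import time,
# shared by all requests.
graph = tf.Graph()
with graph.as_default():
    with tf.gfile.FastGFile('graph.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
sess = tf.Session(graph=graph)

@app.route('/predict', methods=['POST'])
def predict():
    # Expects raw JPEG bytes in the request body.
    scores = sess.run('final_result:0',
                      {'DecodeJpeg/contents:0': request.data})
    return jsonify(prediction=scores[0].tolist())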

The failures are described here: tensorflow deployment on openshift, errors with gunicorn and mod_wsgi

I'm surprised there aren't more people out there running into this general issue.

家丑人穷心不美
#3 · 2020-02-06 04:31

Deploying a locally trained model is a supported use case; the instructions are essentially the same regardless of where you trained it:

To deploy a model version you'll need:

A TensorFlow SavedModel saved on Google Cloud Storage (see the upload sketch after this list). You can get a model by:

  • Following the Cloud ML Engine training steps to train in the cloud.

  • Training elsewhere and exporting to a SavedModel.
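For the first requirement, here is a sketch of uploading the exported SavedModel directory (produced by the converter script below) using the google-cloud-storage client; the bucket name is hypothetical:

import os
from google.cloud import storage

client = storage.Client()
bucket = client.bucket('my-bucket')  # hypothetical bucket name

# Recursively upload the SavedModel directory, preserving its layout.
for root, _, files in os.walk('my_model'):
    for filename in files:
        local_path = os.path.join(root, filename)
        bucket.blob(local_path).upload_from_filename(local_path)

Equivalently, gsutil cp -r my_model gs://my-bucket/ does the same from the command line.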

Unfortunately, TensorFlow for Poets does not show how to export a SavedModel (I've filed a feature request to address that). In the meantime, you can write a "converter" script like the following (you could alternatively do this at the end of training instead of saving out graph.pb and reading it back in):

import tensorflow as tf  # TensorFlow 1.x

input_graph = 'graph.pb'
saved_model_dir = 'my_model'

with tf.Graph().as_default() as graph:
  # Read in the exported graph.
  with tf.gfile.FastGFile(input_graph, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

  # Cloud ML Engine and early versions of TensorFlow Serving do
  # not currently support graphs without variables. Add a
  # prosthetic variable.
  dummy_var = tf.Variable(0)

  # Define the SavedModel signature (inputs and outputs).
  in_image = graph.get_tensor_by_name('DecodeJpeg/contents:0')
  inputs = {'image_bytes': tf.saved_model.utils.build_tensor_info(in_image)}

  out_classes = graph.get_tensor_by_name('final_result:0')
  outputs = {'prediction': tf.saved_model.utils.build_tensor_info(out_classes)}

  signature = tf.saved_model.signature_def_utils.build_signature_def(
      inputs=inputs,
      outputs=outputs,
      method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)

  # Save out the SavedModel. The prosthetic variable must be initialized
  # in a session before the model can be saved, and ML Engine looks up
  # the default serving signature key ('serving_default').
  with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())
    b = tf.saved_model.builder.SavedModelBuilder(saved_model_dir)
    b.add_meta_graph_and_variables(
        sess,
        [tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                signature})
    b.save()

(Untested code based on this codelab and this SO post).
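Before uploading, a quick local sanity check can save a deployment round-trip. Here's a sketch (TensorFlow 1.x; test.jpg is a hypothetical sample image) that loads the SavedModel back and runs one prediction:

import tensorflow as tf  # TensorFlow 1.x

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel exported by the converter script above.
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], 'my_model')
    with open('test.jpg', 'rb') as f:
        scores = sess.run('final_result:0',
                          {'DecodeJpeg/contents:0': f.read()})
    print(scores)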

If you want the output to be a string label instead of the raw class scores, make the following change:

  # Load the label file, stripping trailing newlines.
  label_lines = [line.rstrip() for line
                 in tf.gfile.GFile("retrained_labels.txt")]
  out_classes = graph.get_tensor_by_name('final_result:0')
  # 'final_result:0' holds per-class probabilities, so take the argmax
  # to get an integer class index before looking up the label string.
  out_labels = tf.gather(label_lines, tf.argmax(out_classes, axis=1))
  outputs = {'prediction': tf.saved_model.utils.build_tensor_info(out_labels)}
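Once deployed, here's a sketch of calling the model for online prediction with the google-api-python-client library (the project, model, and file names are hypothetical). Because the signature's input name ends in _bytes, ML Engine expects the JPEG data base64-encoded under a "b64" key:

import base64
from googleapiclient import discovery

service = discovery.build('ml', 'v1')
name = 'projects/my-project/models/my_model'  # hypothetical project/model

with open('test.jpg', 'rb') as f:
    body = {'instances': [
        {'image_bytes': {'b64': base64.b64encode(f.read()).decode()}}
    ]}

response = service.projects().predict(name=name, body=body).execute()
print(response['predictions'])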