Constructing input for TensorFlow's high-level estimator in C++

Posted 2019-06-03 22:46

A high-level estimator (e.g., tf.contrib.learn.DNNRegressor) is trained and saved in Python (using export_savedmodel with a serving_input_fn). It is then loaded from C++ (using LoadSavedModel) for predictions. According to saved_model_cli, the expected input tensor has shape (-1) and dtype DT_STRING.
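For reference, this is roughly the C++ loading code in question; a minimal sketch, assuming the model was exported to a placeholder directory "/path/to/export_dir" and uses the default "serving_default" signature. The printed tensor names should match what saved_model_cli reports:

```cpp
#include <iostream>

#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"

int main() {
  tensorflow::SavedModelBundle bundle;
  tensorflow::SessionOptions session_options;
  tensorflow::RunOptions run_options;

  // "/path/to/export_dir" is a placeholder for the directory written by
  // export_savedmodel in Python.
  TF_CHECK_OK(tensorflow::LoadSavedModel(session_options, run_options,
                                         "/path/to/export_dir",
                                         {tensorflow::kSavedModelTagServe},
                                         &bundle));

  // List the inputs of the default serving signature; the reported tensor
  // names are what Session::Run() must feed. dtype and shape are also
  // available via input.second.dtype() and input.second.tensor_shape().
  const auto& sig =
      bundle.meta_graph_def.signature_def().at("serving_default");
  for (const auto& input : sig.inputs()) {
    std::cout << input.first << " -> " << input.second.name() << std::endl;
  }
  return 0;
}
```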

I can build such an input tensor by constructing a tensorflow::Example object and then serializing it to a string. However, I wonder whether there is a more efficient way to do this. (That is, when the inputs are just a bunch of floats, building a proto, filling a feature map, serializing it to a string, and then having the graph parse it again seems wasteful when this is done millions of times.)
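To make that baseline concrete, here is a sketch of the Example-serialization path; the feature name "x" and the input/output tensor names are assumptions for illustration and must be replaced with whatever the serving_input_fn and saved_model_cli actually report:

```cpp
#include <string>
#include <vector>

#include "tensorflow/core/example/example.pb.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/public/session.h"

// Runs one prediction through a session obtained from LoadSavedModel.
// Feature name and tensor names below are placeholders for this sketch.
tensorflow::Status Predict(tensorflow::Session* session, float x_value,
                           std::vector<tensorflow::Tensor>* outputs) {
  // Build the tf.Example proto with a single float feature "x"
  // (the name must match what serving_input_fn parses).
  tensorflow::Example example;
  auto& feature_map = *example.mutable_features()->mutable_feature();
  feature_map["x"].mutable_float_list()->add_value(x_value);

  // Serialize it into one element of the DT_STRING tensor of shape (-1)
  // that the serving signature expects.
  std::string serialized;
  example.SerializeToString(&serialized);

  tensorflow::Tensor input(tensorflow::DT_STRING,
                           tensorflow::TensorShape({1}));
  input.vec<std::string>()(0) = serialized;  // tensorflow::tstring on newer TF

  // Feed under the signature's input name and fetch the prediction tensor;
  // both names here are assumed from a typical DNNRegressor export.
  return session->Run({{"input_example_tensor:0", input}},
                      {"dnn/regression_head/predictions/scores:0"},
                      {}, outputs);
}
```

Since the signature's shape is (-1), several serialized Examples can also be packed into a single tensor of shape ({N}) to batch predictions and amortize the per-call overhead.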

0 answers