How to serve multiple versions of a model via standard TensorFlow Serving docker image?

Published 2019-05-25 02:44

Question:

I'm new to TensorFlow Serving.

I just tried TensorFlow Serving via Docker with this tutorial and succeeded.

However, when I tried it with multiple versions, it serves only the latest version.

Is it possible to serve multiple versions at once? Or do I need to try something different?

Answer 1:

This requires a ModelServerConfig, which is supported by the tensorflow/serving docker image starting with release 1.11.0 (available since Oct 5, 2018). Until then, you can build your own docker image, or use tensorflow/serving:nightly or tensorflow/serving:1.11.0-rc0 as stated here. See that thread for how to serve multiple models.

If, on the other hand, you want to serve multiple versions of a single model, you can use the following config file, called "models.config":

model_config_list: {
    config: {
        name: "my_model",
        base_path: "/models/my_model",
        model_platform: "tensorflow",
        model_version_policy: {
            all: {}
        }
    }
}

Here, "model_version_policy: { all: {} }" makes every version of the model available. Then run the docker image:

docker run -p 8500:8500 -p 8501:8501 \
    --mount type=bind,source=/path/to/my_model/,target=/models/my_model \
    --mount type=bind,source=/path/to/my/models.config,target=/models/models.config \
    -t tensorflow/serving:nightly --model_config_file=/models/models.config
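If you only want a subset of versions rather than all of them, the model_version_policy also accepts a "specific" block instead of "all". A sketch (version numbers 1 and 2 are placeholders for whichever versions you want to keep serving):

```
model_config_list: {
    config: {
        name: "my_model",
        base_path: "/models/my_model",
        model_platform: "tensorflow",
        model_version_policy: {
            specific: {
                versions: 1
                versions: 2
            }
        }
    }
}
```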

Edit:
Now that version 1.11.0 is available, you can start by pulling the new image:

docker pull tensorflow/serving

Then run the docker image as above, using tensorflow/serving instead of tensorflow/serving:nightly.
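Once several versions are live, a request can be pinned to one of them by adding a "versions/<N>" segment to the REST path. A minimal sketch of how such a URL is built; the model name "my_model" and the version number are illustrative assumptions matching the config above:

```python
from typing import Optional

# Sketch: build the REST :predict URL for a TensorFlow Serving model.
# "my_model" and the version numbers below are assumptions for illustration.

def predict_url(host: str, port: int, model: str,
                version: Optional[int] = None) -> str:
    """Return the :predict endpoint. Omitting `version` lets the
    server pick the latest available version of the model."""
    url = "http://{}:{}/v1/models/{}".format(host, port, model)
    if version is not None:
        url += "/versions/{}".format(version)
    return url + ":predict"

# Latest version (server's choice):
print(predict_url("localhost", 8501, "my_model"))
# A specific version, reachable because the policy serves all versions:
print(predict_url("localhost", 8501, "my_model", version=1))
```

You would then POST your JSON payload (e.g. with curl or the requests library) to the returned URL.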



Answer 2:

I found a way to achieve this by building my own docker image that uses the --model_config_file option instead of --model_name and --model_base_path.

So I'm running TensorFlow Serving with the command below.

docker run -p 8501:8501 -v {local_path_of_models.conf}:/models -t {docker_image_name}

Of course, I also wrote 'models.conf' to list multiple models.
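For reference, a models.conf listing multiple models might look like this (the model names and paths are placeholders, not the ones from the original setup):

```
model_config_list: {
    config: {
        name: "model_a",
        base_path: "/models/model_a",
        model_platform: "tensorflow"
    },
    config: {
        name: "model_b",
        base_path: "/models/model_b",
        model_platform: "tensorflow"
    }
}
```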

Edit:

Below is what I modified from the original Dockerfile.

original version:

tensorflow_model_server --port=8500 --rest_api_port=8501 \
    --model_name=${MODEL_NAME} --model_base_path=${MODEL_BASE_PATH}/${MODEL_NAME}

modified version:

tensorflow_model_server --port=8500 --rest_api_port=8501 \
    --model_config_file=${MODEL_BASE_PATH}/models.conf
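In either setup, TensorFlow Serving expects each model's base path to contain one numeric subdirectory per version, each holding an exported SavedModel. A sketch of the expected layout (model name and version numbers are placeholders):

```
/models/my_model/
    1/
        saved_model.pb
        variables/
    2/
        saved_model.pb
        variables/
```

If only one numbered subdirectory exists, or the version policy is left at its default, the server loads only the latest version, which is the behavior described in the question.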