
TensorFlow Serving on pretrained Keras ResNet50 model

Posted 2020-07-25 11:03

Question:

I'm using the following code to export a pretrained ResNet50 Keras model to TensorFlow, for TensorFlow Serving:

import tensorflow as tf
sess = tf.Session()
from keras import backend as K
K.set_session(sess)
K.set_learning_phase(0)

# ResNet model with weights pretrained on ImageNet
from keras.applications.resnet50 import ResNet50
model = ResNet50(weights='imagenet')

# export as a TensorFlow SavedModel
import os
version_number = max([ int(x) for x in os.listdir('./resnet-classifier') ]) + 1
export_path = './resnet-classifier/{}'.format(version_number)
with tf.keras.backend.get_session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    tf.saved_model.simple_save(sess, export_path,
            inputs=dict(input_image=model.input),
            outputs={t.name:t for t in model.outputs}
    )

I tried some variations of the above, all with the same result (the same prediction when served by TensorFlow Serving).

Then I run TensorFlow Serving like this:

docker run -p 8501:8501 \
  -v "$(pwd)/resnet-classifier:/models/resnet-classifier" \
  -e MODEL_NAME=resnet-classifier -e MODEL_BASE_PATH=/models \
  -t tensorflow/serving
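
Before sending predictions, it can help to confirm that the container actually loaded a model version. A minimal sketch, assuming the container above is running and the default REST port 8501 is mapped:

import requests

# Model status endpoint of TensorFlow Serving's REST API: it should report
# the loaded version with state "AVAILABLE".
print(requests.get('http://localhost:8501/v1/models/resnet-classifier').json())

# The metadata endpoint exposes the serving signature, which is handy for
# checking the expected input/output names of the exported model.
print(requests.get('http://localhost:8501/v1/models/resnet-classifier/metadata').json())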

Finally, I'm using the following functions to make predictions against TensorFlow Serving:

def imagepath_to_tfserving_payload(img_path):
    import json
    import numpy as np
    from keras.preprocessing import image
    from keras.applications.resnet50 import preprocess_input
    img = image.img_to_array(image.load_img(img_path, target_size=(224, 224)))
    X = np.expand_dims(img, axis=0).astype('float32')
    X = preprocess_input(X)
    payload = dict(instances=X.tolist())
    payload = json.dumps(payload)
    return payload

def tfserving_predict(image_payload, url=None):
    import json
    import numpy as np
    import requests
    if url is None:
        url = 'http://localhost:8501/v1/models/resnet-classifier:predict'
    r = requests.post(url, data=image_payload)
    pred_json = json.loads(r.content.decode('utf-8'))
    from keras.applications.resnet50 import decode_predictions
    predictions = decode_predictions(np.asarray(pred_json['predictions']), top=3)[0]
    return predictions

Then I use both functions above from an ipython shell to pick random images from ImageNet's validation set, which I've stored locally, roughly as in the sketch below. The problem is that TensorFlow Serving always returns the same prediction for every image I send.
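
A minimal sketch of how the two helpers are wired together ('./imagenet-val' is just a placeholder for wherever the validation images are stored):

import os
import random

val_dir = './imagenet-val'  # placeholder for the local ImageNet validation images

img_path = os.path.join(val_dir, random.choice(os.listdir(val_dir)))
payload = imagepath_to_tfserving_payload(img_path)
print(img_path, tfserving_predict(payload))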

Each time I export the model with the first script above, I get slightly different classes, always with confidence '1' for the first class and '0' for the others, for example:

# Serialization 1, in ./resnet-classifier/1 always returning:
[
  [
    "n07745940",
    "strawberry",
    1.0
  ],
  [
    "n02104029",
    "kuvasz",
    1.4013e-36
  ],
  [
    "n15075141",
    "toilet_tissue",
    0.0
  ]
]

# Serialization 2, in ./resnet-classifier/2 always returning:
[
  [
    "n01530575",
    "brambling",
    1.0
  ],
  [
    "n15075141",
    "toilet_tissue",
    0.0
  ],
  [
    "n02319095",
    "sea_urchin",
    0.0
  ]
]

This might be related to "Tensorflow : serving model return always the same prediction", but I don't see how the answers there (none of them accepted) would help.

Does anybody know what's wrong above, and how to fix it?

Answer 1:

I've found that calling sess.run(tf.global_variables_initializer()) overwrites the pretrained weights; the clue came from http://zachmoshe.com/2017/11/11/use-keras-models-with-tf.html.
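
The effect is easy to reproduce by comparing one weight tensor before and after running the initializer; a minimal sketch, assuming TF 1.x and the same ResNet50 as in the question:

import numpy as np
import tensorflow as tf
from keras import backend as K
from keras.applications.resnet50 import ResNet50

sess = tf.Session()
K.set_session(sess)
model = ResNet50(weights='imagenet')

# Snapshot of the first weight tensor while the pretrained values are loaded.
before = sess.run(model.weights[0])

# Running the initializer re-initializes every variable in the graph,
# so the pretrained ImageNet weights are replaced by random values.
sess.run(tf.global_variables_initializer())
after = sess.run(model.weights[0])

print(np.allclose(before, after))  # expected to print False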

The solution for me was really simple: just replace the first block of code in the original question with the following, which calls tf.global_variables_initializer() before the model is instantiated and the weights are loaded:

import tensorflow as tf
sess = tf.Session()
sess.run(tf.global_variables_initializer())

from keras import backend as K
K.set_session(sess)
K.set_learning_phase(0)

# ResNet model with weights pretrained on ImageNet
from keras.applications.resnet50 import ResNet50
model = ResNet50(weights='imagenet')

# export as a TensorFlow SavedModel
import os
versions = [ int(x) for x in os.listdir('./resnet-classifier') ]
version_number = max(versions) + 1 if versions else 1
export_path = './resnet-classifier/{}'.format(version_number)

tf.saved_model.simple_save(sess, export_path,
        inputs=dict(input_image=model.input),
        outputs={t.name:t for t in model.outputs}
)
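
To double-check an export before involving the serving container, the SavedModel can be reloaded in a fresh graph and queried directly. A minimal sketch, assuming TF 1.x; the version path and 'cat.jpg' are placeholders:

import numpy as np
import tensorflow as tf
from keras.applications.resnet50 import decode_predictions, preprocess_input
from keras.preprocessing import image

export_path = './resnet-classifier/1'  # placeholder: the version just exported

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel and read tensor names from its serving signature.
    meta_graph = tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_path)
    sig = meta_graph.signature_def['serving_default']
    input_name = sig.inputs['input_image'].name
    output_name = list(sig.outputs.values())[0].name

    # Preprocess one test image exactly as the client code does.
    img = image.img_to_array(image.load_img('cat.jpg', target_size=(224, 224)))
    x = preprocess_input(np.expand_dims(img, axis=0).astype('float32'))

    preds = sess.run(output_name, feed_dict={input_name: x})
    print(decode_predictions(preds, top=3)[0])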


Answer 2:

I have sometimes had this kind of problem when I forgot to normalize the image. I think ResNet accepts images as floats between 0. and 1. (or maybe -1. to 1.). I don't know what the preprocess_input function does, but you can check whether it returns an array in the expected range, for example with the quick check below.
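
A small sketch of that check, printing the value range before and after preprocessing (using a placeholder image path):

import numpy as np
from keras.applications.resnet50 import preprocess_input
from keras.preprocessing import image

img = image.img_to_array(image.load_img('cat.jpg', target_size=(224, 224)))  # placeholder image
x = np.expand_dims(img, axis=0).astype('float32')

print('raw:         ', x.min(), x.max())   # raw pixel values, typically 0.0 .. 255.0
x = preprocess_input(x)
print('preprocessed:', x.min(), x.max())   # ResNet50 preprocessing subtracts channel means,
                                           # so values are not scaled to [0, 1]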