Keras VGG16 preprocess_input modes

Posted 2020-03-01 10:24

I'm using the Keras VGG16 model.

I've seen that there is a preprocess_input method to use in conjunction with the VGG16 model. This method appears to call the preprocess_input method in imagenet_utils.py, which (depending on the case) calls the _preprocess_numpy_input method in imagenet_utils.py.

preprocess_input has a mode argument that expects "caffe", "tf", or "torch". If I'm using the model in Keras with the TensorFlow backend, should I absolutely use mode="tf"?

If yes, is this because the VGG16 model loaded by Keras was trained with images that underwent the same preprocessing (i.e., the input images' range was changed from [0, 255] to [-1, 1])?

Also, should the input images at test time undergo this preprocessing as well? I'm confident the answer to the last question is yes, but I would like some reassurance.

I would expect Francois Chollet to have done it correctly, but looking at https://github.com/fchollet/deep-learning-models/blob/master/vgg16.py, either he is wrong about using mode="tf" or I am.

Updated info

@FalconUA directed me to the VGG group at Oxford, which has a Models section with links for the 16-layer model. The information about the preprocess_input mode argument (tf scales to the range -1 to 1, while caffe subtracts some mean values) is found by following the link from the Models section to the 16-layer model's information page. In the Description section it says:

"In the paper, the model is denoted as the configuration D trained with scale jittering. The input images should be zero-centered by mean pixel (rather than mean image) subtraction. Namely, the following BGR values should be subtracted: [103.939, 116.779, 123.68]."

3 Answers
够拽才男人
Answer 2 · 2020-03-01 10:54

In my experience training VGG16 in Keras, the inputs should be in the range 0 to 255 with the mean [103.939, 116.779, 123.68] subtracted. I've tried transfer learning (freezing the bottom layers and stacking a classifier on top, roughly as sketched below) with inputs centered in -1 to 1, and the results are much worse than with 0..255 minus [103.939, 116.779, 123.68].
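
A rough sketch of that transfer-learning setup (the dense layer size and the 10-class output are placeholders, not values from this answer):

from keras.applications.vgg16 import VGG16, preprocess_input
from keras.models import Model
from keras.layers import Flatten, Dense

base = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False                      # freeze the convolutional base

x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)             # placeholder classifier head
out = Dense(10, activation='softmax')(x)         # placeholder class count
model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Inputs stay in [0, 255]; the default 'caffe'-style preprocess_input
# subtracts the ImageNet BGR means before training:
# x_train = preprocess_input(x_train)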

别忘想泡老子
Answer 3 · 2020-03-01 10:59

The mode here is not about the backend, but rather about the framework the model was originally trained in and ported from. In the Keras documentation for VGG16, it is stated that:

These weights are ported from the ones released by VGG at Oxford

So the VGG16 and VGG19 models were trained in Caffe and ported to TensorFlow, hence mode == 'caffe' here (keep the range from 0 to 255 and then subtract the mean [103.939, 116.779, 123.68]).

Newer networks, like MobileNet and ShuffleNet, were trained on TensorFlow, so mode is 'tf' for them, and the inputs are zero-centered in the range from -1 to 1.
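
To see the difference between the two modes numerically, here is a small sketch using the generic imagenet_utils.preprocess_input, which accepts a mode argument in the Keras versions this thread refers to (the example pixel values are arbitrary):

import numpy as np
from keras.applications.imagenet_utils import preprocess_input

x = np.array([[[[0.0, 127.5, 255.0]]]])          # one RGB pixel, shape (1, 1, 1, 3)

print(preprocess_input(x.copy(), mode='caffe'))  # RGB -> BGR, minus [103.939, 116.779, 123.68]
print(preprocess_input(x.copy(), mode='tf'))     # scaled to [-1, 1]: x / 127.5 - 1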

不美不萌又怎样
Answer 4 · 2020-03-01 11:02

Trying to use VGG16 myself again lately, I had trouble getting decent results by just importing preprocess_input from vgg16 like this:

from keras.applications.vgg16 import VGG16, preprocess_input

Doing so, preprocess_input is by default set to 'caffe' mode, but taking a closer look at the Keras vgg16 code, I noticed that the weights file name

'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels.h5'

refers to tensorflow twice. I think that the preprocess mode should be 'tf'.

processed_img = preprocess_input(img, mode='tf')
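
A short end-to-end sketch of how that call would be used (the file name and target size are placeholders; whether the vgg16-specific preprocess_input accepts a mode keyword depends on the Keras version in use, as this answer assumes it does):

import numpy as np
from keras.preprocessing.image import load_img, img_to_array
from keras.applications.vgg16 import VGG16, preprocess_input

img = img_to_array(load_img('cat.jpg', target_size=(224, 224)))
batch = np.expand_dims(img, axis=0)              # shape (1, 224, 224, 3)

processed = preprocess_input(batch, mode='tf')   # as suggested in this answer
preds = VGG16(weights='imagenet').predict(processed)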