Error with 8-bit Quantization in TensorFlow

Posted 2020-04-17 13:54

I have been experimenting with the new 8-bit quantization feature available in TensorFlow. I was able to run the example given in the blog post (quantization of GoogLeNet) without any issue, and it works fine for me.

Now I would like to apply the same to a simpler network. So I used a pre-trained network for CIFAR-10 (which was trained with Caffe), extracted its parameters, created the corresponding graph in TensorFlow, initialized the weights with these pre-trained values, and finally saved the result as a GraphDef object. See this IPython Notebook for the full procedure.
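For reference, the final export step looks roughly like the sketch below. This is a minimal example, not the exact notebook code: the graph construction is elided, the output node name "ArgMax" is taken from the quantize command further down, and the graph_util import path may differ slightly between TensorFlow versions of that era.

import tensorflow as tf
from tensorflow.python.framework import graph_util

# ... build the CIFAR-10 graph and assign the Caffe-trained weights here ...

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    # Fold the trained variables into constants so the GraphDef is
    # self-contained; "ArgMax" is the classification output node.
    frozen_graph = graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ["ArgMax"])
    with open("cifar.pb", "wb") as f:
        f.write(frozen_graph.SerializeToString())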

Then I applied the 8-bit quantization with the TensorFlow script mentioned in Pete Warden's blog post:

bazel-bin/tensorflow/contrib/quantization/tools/quantize_graph --input=cifar.pb  --output=qcifar.pb --mode=eightbit --bitdepth=8 --output_node_names="ArgMax"

Now I wanted to run classification on this quantized network. So I loaded the new qcifar.pb into a TensorFlow session and fed it the image (the same way I passed it to the original version). The full code can be found in this IPython Notebook.
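The loading and inference part is roughly the following sketch. The tensor names ("input:0" for the image placeholder, "ArgMax:0" for the prediction) and the image shape are assumptions here for illustration, not necessarily the names used in my notebook.

import numpy as np
import tensorflow as tf

# Read the serialized quantized GraphDef and import it into the default graph.
graph_def = tf.GraphDef()
with open("qcifar.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name="")

with tf.Session() as sess:
    # Feed a batch of one image and fetch the predicted class index.
    image = np.zeros((1, 24, 24, 3), dtype=np.float32)  # stand-in image
    prediction = sess.run("ArgMax:0", feed_dict={"input:0": image})
    print(prediction)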

But as you can see at the end, I get the following error:

NotFoundError: Op type not registered 'QuantizeV2'

Can anybody suggest what I am missing here?

1 Answer
甜甜的少女心
2020-04-17 14:42

Because the quantized ops and kernels are in contrib, you'll need to explicitly load them in your Python script. There's an example of that in the quantize_graph.py script itself:

from tensorflow.contrib.quantization import load_quantized_ops_so
from tensorflow.contrib.quantization.kernels import load_quantized_kernels_so

This is something that we should update the documentation to mention!
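Putting it together, loading the quantized graph would look roughly like the sketch below. This assumes the imports alone are enough to register the ops; depending on your TensorFlow build you may also need to call the loader functions those modules expose.

import tensorflow as tf

# Importing these contrib modules registers the quantized ops and kernels
# (QuantizeV2, quantized conv/matmul, ...) with the TensorFlow runtime.
from tensorflow.contrib.quantization import load_quantized_ops_so
from tensorflow.contrib.quantization.kernels import load_quantized_kernels_so

# With the ops registered, the quantized GraphDef imports without the
# 'QuantizeV2' NotFoundError.
graph_def = tf.GraphDef()
with open("qcifar.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name="")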
