How to handle different queue batch size and feed


Question:

My code used to work on TensorFlow 0.6, but it no longer works on the latest TensorFlow.

I would like to perform inference every few training iterations. My training data is pulled from a queue, while my inference data comes from feed_dict. The training batch size is 128 and the inference batch size is 1. What should I do to make the network accept the two different batch sizes?

batch_size = 128
x_batch = tf.placeholder("float", [None, 100])
q = tf.FIFOQueue(10, [tf.float32], shapes=[[batch_size, 100]])
enqueue_op = q.enqueue([x_batch])

# during training
x = q.dequeue() # dequeue operation

# network definition, takes x as input, and output y
......

# during inference
x_array_of_batch_size_1 = .. # a 1x100 numpy array
sess.run([y], feed_dict={x: x_array_of_batch_size_1})

I got the following error:

ValueError: Cannot feed value of shape (1, 100) for Tensor u'fifo_queue_Dequeue:0', which has shape '(128, 100)'

Answer 1:

We added this check recently to prevent errors (and to enable a few optimizations). You can make your program work again by changing the declaration of x to use the new tf.placeholder_with_default() op:

x = tf.placeholder_with_default(q.dequeue(), shape=[None, 100])
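
For context, here is a minimal end-to-end sketch of how the two input paths can coexist, assuming the TF 1.x graph/session API; the single linear layer and the random data stand in for the real network and training set:

import numpy as np
import tensorflow as tf

batch_size = 128
x_batch = tf.placeholder(tf.float32, [batch_size, 100])
q = tf.FIFOQueue(10, [tf.float32], shapes=[[batch_size, 100]])
enqueue_op = q.enqueue([x_batch])

# x defaults to a dequeued training batch, but can be overridden via feed_dict
x = tf.placeholder_with_default(q.dequeue(), shape=[None, 100])

# toy stand-in for the real network: a single linear layer
w = tf.Variable(tf.random_normal([100, 10]))
y = tf.matmul(x, w)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # training: enqueue a batch, then run y with input taken from the queue
    data = np.random.rand(batch_size, 100).astype(np.float32)
    sess.run(enqueue_op, feed_dict={x_batch: data})
    y_train = sess.run(y)  # input dequeued from the queue; shape (128, 10)

    # inference: feed a single example directly, bypassing the queue
    one_example = np.random.rand(1, 100).astype(np.float32)
    y_infer = sess.run(y, feed_dict={x: one_example})  # shape (1, 10)

Because feeding x short-circuits its default input, the dequeue op is not executed on inference runs, so the queue is only consumed during training steps.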