Tensorflow - Convolutional neural network with custom data set

Posted 2019-08-14 02:55

I am trying to make a convolutional neural network for a custom data set. The classifier has only two classes. I am able to read the input images properly and have also assigned them the batch labels for the two corresponding classes. The code executes without error, but the output is anomalous: for some reason, the accuracy is always 50%.

image=inputs()

image_batch=tf.train.batch([image],batch_size=150)
label_batch_pos=tf.train.batch([tf.constant([0,1])],batch_size=75) # label_batch for first class
label_batch_neg=tf.train.batch([tf.constant([1,0])],batch_size=75) # label_batch for second class
label_batch=tf.concat(0,[label_batch_pos,label_batch_neg])

W_conv1 = weight_variable([5, 5, 3, 32])
b_conv1 = bias_variable([32])

image_4d = tf.reshape(image, [-1,32,32,3])

h_conv1 = tf.nn.relu(conv2d(image_4d, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])

h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

W_fc1 = weight_variable([8 * 8 * 64, 1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2, [-1, 8*8*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, 0.5)

W_fc2 = weight_variable([1024, 2])
b_fc2 = bias_variable([2])

y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
cross_entropy = -tf.reduce_sum(tf.cast(label_batch,tf.float32)*tf.log(y_conv+1e-9))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

init = tf.initialize_all_variables()
sess = tf.Session()
sess.run(init)


tf.train.start_queue_runners(sess=sess)
correct_prediction=tf.equal(tf.argmax(y_conv,1), tf.argmax(label_batch,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

for i in range(100):
 train_step.run(session=sess)
 print(sess.run(accuracy))

print(sess.run(correct_prediction))

When I print the correct_prediction tensor, I get the following output no matter what.

[ True  True  True  True  True  True  True  True  True  True  True  True
  True  True  True  True  True  True  True  True  True  True  True  True
  True  True  True  True  True  True  True  True  True  True  True  True
  True  True  True  True  True  True  True  True  True  True  True  True
  True  True  True  True  True  True  True  True  True  True  True  True
  True  True  True  True  True  True  True  True  True  True  True  True
  True  True  True False False False False False False False False False
 False False False False False False False False False False False False
 False False False False False False False False False False False False
 False False False False False False False False False False False False
 False False False False False False False False False False False False
 False False False False False False False False False False False False
 False False False False False False]

The accuracy is always 0.5, as if the weights are not being updated at all. When I print the weights after each training step, they remain unchanged. I think I have made some coding error. Could it be that the network is training on the same image again and again? But even so, the weights should still update. I have 150 training examples, with 75 belonging to each class. Could someone please point me in the right direction?

EDIT: This is how I initialize weights

def weight_variable(shape,name):
  initial = tf.truncated_normal(shape, stddev=0.5)
  return tf.Variable(initial,name=name)

def bias_variable(shape,name):
  initial = tf.constant(1.0, shape=shape)
  return tf.Variable(initial,name=name)

1 Answer
Fickle 薄情
#2 · 2019-08-14 03:13

Your network has some design flaws. For numerical-stability reasons it is not a good idea to apply a softmax on the output layer and then compute the cross entropy yourself. If you are interested in the mathematics I can add it; if not, stick to the explanation and the method TensorFlow provides: tf.nn.softmax_cross_entropy_with_logits.
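
A minimal sketch of what this could look like with the variable names from your snippet (untested, just to illustrate the idea):

logits = tf.matmul(h_fc1_drop, W_fc2) + b_fc2   # raw logits, no softmax here
labels = tf.cast(label_batch, tf.float32)

# the op applies a numerically stable softmax internally before the cross entropy
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

# argmax over the logits gives the same predictions as argmax over the softmax
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(label_batch, 1))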

Have you already tried many different configurations? Depending on the complexity of your images, a larger or smaller kernel size and number of feature maps might be a good idea. Generally, if your images are relatively homogeneous, a lot of rather similar information is added up and the network has a harder time converging when you have many feature maps. Since you have only two output neurons, I assume your images are not very complex?

The next thing is your dropout. You are always using a keep probability of 0.5, but normally you do not apply dropout for testing/validation (like your accuracy evaluation); in most cases you only use it during training. You can create a placeholder for the keep probability and feed it via sess.run.

Here is an example from my own code:

h_fc_drop = tf.nn.dropout(h_fc, keep_prob)
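# keep_prob would be a placeholder created when building the graph, e.g. keep_prob = tf.placeholder(tf.float32)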

(...)

accu, top1, top3, top5 = sess.run([accuracy, te_top1, te_top3, te_top5],
                            feed_dict={
                                x: teX[i: i + batch_size],
                                y: teY[i: i + batch_size],
                                keep_prob: 1.0
                            }
                         )

This lets TensorFlow evaluate my accuracy and top-X error-rate ops while I feed in the test inputs teX and the true labels teY, with the dropout keep probability keep_prob set to 1.0.

Apart from that, the initialization of your weights is really important in deep neural networks. Even if your design is sufficient for your kind of problem (this also has to be investigated), your network could refuse to learn, diverge, or converge to 0 if your weights are not initialized properly. You did not add details on your initialization, so you may want to look up Xavier initialization; it is an easy place to start.
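
A rough sketch of what that could look like in your weight helpers (this assumes TF 1.x with tf.contrib available; treat it as an illustration rather than the one correct setup):

def weight_variable(shape, name):
  # Xavier/Glorot initialization scales the variance to the fan-in/fan-out of the layer
  return tf.get_variable(name, shape,
                         initializer=tf.contrib.layers.xavier_initializer())

def bias_variable(shape, name):
  # a small constant (or zero) bias is usually enough for ReLU units
  return tf.get_variable(name, shape,
                         initializer=tf.constant_initializer(0.1))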

Finally, I can only encourage you to plot some weights, feature maps, the output over time, and so on, to get an idea of what your network is doing. Normally this helps a lot.
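
One way to do that (just a sketch; the tensor names and the log directory are placeholders) is to attach TensorBoard summaries to a few tensors and write them out during training:

tf.summary.histogram('W_conv1', W_conv1)     # see whether the filters actually change
tf.summary.scalar('cross_entropy', cross_entropy)
tf.summary.scalar('accuracy', accuracy)

merged = tf.summary.merge_all()
writer = tf.summary.FileWriter('/tmp/cnn_logs', sess.graph)

for i in range(100):
  _, summary = sess.run([train_step, merged])
  writer.add_summary(summary, i)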
