Why are my TensorFlow network weights and costs NaN?

Published 2019-03-09 13:57

Question:

I can't get TensorFlow RELU activations (neither tf.nn.relu nor tf.nn.relu6) working without NaN values for activations and weights killing my training runs.

I believe I'm following all the right general advice. For example I initialize my weights with

weights = tf.Variable(tf.truncated_normal(w_dims, stddev=0.1))
biases = tf.Variable(tf.constant(0.1 if neuron_fn in [tf.nn.relu, tf.nn.relu6] else 0.0, shape=b_dims))

and use a slow training rate, e.g.,

tf.train.MomentumOptimizer(0.02, momentum=0.5).minimize(cross_entropy_loss)

But any network of appreciable depth results in NaN for the cost and at least some weights (at least in the summary histograms for them). In fact, the cost is often NaN right from the start (before training).

I seem to have these issues even when I use L2 (about 0.001) regularization, and dropout (about 50%).

Is there some parameter or setting that I should adjust to avoid these issues? I'm at a loss as to where to even begin looking, so any suggestions would be appreciated!

Answer 1:

Following He et al. (as suggested in lejlot's comment), initializing the weights of the l-th layer to a zero-mean Gaussian distribution with standard deviation

sqrt(2 / n_l)

where n_l is the flattened length of the layer's input vector, i.e.

stddev=np.sqrt(2 / np.prod(input_tensor.get_shape().as_list()[1:]))

results in weights that generally do not diverge.
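As a quick sanity check of that formula, here is a small NumPy sketch (the helper name `he_stddev` is hypothetical, not from the answer) that computes the He standard deviation for a given static input shape:

```python
import numpy as np

# Hypothetical helper: He et al. standard deviation for a layer whose
# input tensor has the given static shape (batch dimension first).
def he_stddev(input_shape):
    fan_in = int(np.prod(input_shape[1:]))  # flattened input length n_l
    return np.sqrt(2.0 / fan_in)

# e.g. a batch of 28x28x1 images feeding a dense layer: n_l = 784
print(he_stddev((64, 28, 28, 1)))  # ~0.0505
```

The point is that the standard deviation shrinks as fan-in grows, which keeps pre-activations from blowing up in deep ReLU networks.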



Answer 2:

If you use a softmax classifier at the top of your network, try to make the initial weights of the layer just below the softmax very small (e.g. std=1e-4). This makes the initial distribution of outputs of the network very soft (high temperature), and helps ensure that the first few steps of your optimization are not too large and numerically unstable.
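The "high temperature" effect can be seen directly in NumPy. This sketch (my own illustration, not from the answer) compares the softmax output of a final layer initialized with std=1.0 versus std=1e-4:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=100)  # a stand-in for the penultimate activations

# Typical init: logits are large, so the output distribution is peaked.
w_large = rng.normal(scale=1.0, size=(100, 10))
# Tiny init (std=1e-4): logits are near zero, so the output is near-uniform
# (high temperature), and the first gradient steps stay small.
w_small = rng.normal(scale=1e-4, size=(100, 10))

p_large = softmax(x @ w_large)
p_small = softmax(x @ w_small)
print(p_small.max())  # close to 0.1 (uniform over 10 classes)
```

With near-uniform initial outputs, the cross-entropy loss starts close to log(10) rather than at an extreme value, which avoids huge early gradients.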



Answer 3:

Have you tried gradient clipping and/or a smaller learning rate?

Basically, you will need to process your gradients before applying them, as follows (adapted mostly from the TensorFlow docs):

# Replace this with what follows
# opt = tf.train.MomentumOptimizer(0.02, momentum=0.5).minimize(cross_entropy_loss)

# Create an optimizer.
opt = tf.train.MomentumOptimizer(learning_rate=0.001, momentum=0.5)

# Compute the gradients for a list of variables.
grads_and_vars = opt.compute_gradients(cross_entropy_loss, tf.trainable_variables())

# grads_and_vars is a list of (gradient, variable) tuples.  Process the
# 'gradient' part as needed, for example by capping the values.
capped_grads_and_vars = [(tf.clip_by_value(grad, -5., 5.), var)
                         for grad, var in grads_and_vars]

# Ask the optimizer to apply the capped gradients.
train_op = opt.apply_gradients(capped_grads_and_vars)

Also, the discussion in this question might help.
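A variant worth knowing about is global-norm clipping (what `tf.clip_by_global_norm` does), which rescales all gradients by a common factor so their direction is preserved. Its behavior can be sketched in plain NumPy (my own approximation of the semantics, not TensorFlow itself):

```python
import numpy as np

# NumPy sketch of global-norm clipping: scale every gradient by the same
# factor so the combined L2 norm does not exceed clip_norm.
def clip_by_global_norm(grads, clip_norm=5.0):
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, clip_norm / max(global_norm, 1e-12))  # guard zero norm
    return [g * scale for g in grads], global_norm

grads = [np.array([3.0, 4.0]), np.array([12.0])]  # global norm = 13
clipped, norm = clip_by_global_norm(grads, clip_norm=5.0)
print(norm)  # 13.0
```

Unlike per-element `clip_by_value`, this leaves the relative proportions of the gradients intact, so the update direction is unchanged while its magnitude is bounded.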