Multilayer perceptron with sigmoid activation produces a straight line

Posted 2019-08-27 17:16

Question:

I'm trying to approximate noisy data from the sin(2x) function using a multilayer perceptron:

import tensorflow as tf
import matplotlib.pyplot as plt

# Get data
datasets = gen_datasets()
# Add noise
datasets["ysin_train"] = add_noise(datasets["ysin_train"])
datasets["ysin_test"] = add_noise(datasets["ysin_test"])
# Extract wanted data
patterns_train = datasets["x_train"]
targets_train = datasets["ysin_train"]
patterns_test = datasets["x_test"]
targets_test = datasets["ysin_test"]
# Reshape to fit model
patterns_train = patterns_train.reshape(62, 1)
targets_train = targets_train.reshape(62, 1)
patterns_test = patterns_test.reshape(62, 1)
targets_test = targets_test.reshape(62, 1)

# Parameters
learning_rate = 0.001
training_epochs = 10000
batch_size = patterns_train.shape[0]
display_step = 1

# Network Parameters
n_hidden_1 = 2
n_hidden_2 = 2
n_input = 1
n_classes = 1

# tf Graph input
X = tf.placeholder("float", [None, n_input])
Y = tf.placeholder("float", [None, n_classes])

# Store layers weight & bias
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}

# Create model
def multilayer_perceptron(x):
    # Hidden fully connected layer with 2 neurons
    layer_1 = tf.sigmoid(tf.add(tf.matmul(x, weights['h1']), biases['b1']))
    # Hidden fully connected layer with 2 neurons
    layer_2 = tf.sigmoid(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']))
    # Output fully connected layer
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer

# Construct model
logits = multilayer_perceptron(X)

# Define loss and optimizer
loss_op = tf.reduce_mean(tf.losses.absolute_difference(labels = Y, predictions = logits, reduction=tf.losses.Reduction.NONE))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)

# Initializing the variables
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    # Training Cycle
    for epoch in range(training_epochs):

        _ = sess.run(train_op, feed_dict={X: patterns_train,
                                          Y: targets_train})
        c = sess.run(loss_op, feed_dict={X: patterns_test,
                                         Y: targets_test})
        if epoch % display_step == 0:
            print("Epoch: {0: 4} cost={1:9}".format(epoch+1, c))
    print("Optimization finished!")
    outputs = sess.run(logits, feed_dict={X: patterns_test})
    print("outputs: {0}".format(outputs.T))
    plt.plot(patterns_test, outputs, "r.", label="outputs")
    plt.plot(patterns_test, targets_test, "b.", label="targets")
    plt.legend()
    plt.show()

When I plot this at the end, I get a straight line, as if I have a linear network. Take a look at the plot:

This is a correct minimization of the error for a linear network. But I shouldn't have a linear network, because I'm using the sigmoid function in my multilayer_perceptron() function! Why is my network behaving like this?

Answer 1:

The default value of stddev=1.0 in tf.random_normal, which you use for weight & bias initialization, is huge. Try an explicit value of stddev=0.01 for the weights; as for the biases, common practice is to initialize them to zero.
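The reason this matters here: with the default stddev=1.0 the sigmoid units tend to start out saturated and pass back almost no gradient, so the output layer ends up fitting little more than a straight line. Concretely, swapping in something like this for your weights & biases dictionaries should be enough (a sketch that only changes the initialization and keeps your layer sizes):

# Small random weights (explicit stddev=0.01 instead of the default 1.0)
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1], stddev=0.01)),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2], stddev=0.01)),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes], stddev=0.01))
}
# Biases initialized to zero
biases = {
    'b1': tf.Variable(tf.zeros([n_hidden_1])),
    'b2': tf.Variable(tf.zeros([n_hidden_2])),
    'out': tf.Variable(tf.zeros([n_classes]))
}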

As a first step, I would also try a higher learning_rate of 0.01 (or maybe not; see the answer to a related question here).
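If you do experiment with that, it is a one-line change on top of the code you already have (you may still need to tune the value):

# Larger step size for plain gradient descent; reduce it again if the loss oscillates
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)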