Neural networks: avoid bias in any direction for output

Posted 2019-07-27 12:05

I'm having difficulties with the CartPole problem.

The cart takes either 0 or 1 as its input: move left or move right.

Let's say we have a net with 4 inputs plus a bias, 3 hidden layers with 1 neuron each, and 1 output, where all weights are random floats between 0 and 1, and the inputs are random floats between -10 and 10.

Because I chose everything at random, I expect the output to be approximately 0.5 on average, so the cart should go right about as often as it goes left.

This is not the case; I get approximately 0.63 on average. This causes big problems, because the cart never decides to go left. The effect also seems to depend on the number of neurons per hidden layer.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class NeuralNetwork(object):
    def __init__(self):
        self.inputLayerSize = 4
        self.hiddenLayerCount = 3
        self.hiddenLayerSize = 1
        self.outputLayerSize = 1

        # Initialize weights as uniform random floats in [0, 1);
        # the first matrix gets an extra row for the bias input.
        self.W = []
        self.W.append(np.random.rand(self.inputLayerSize + 1, self.hiddenLayerSize))
        for _ in range(self.hiddenLayerCount - 1):
            self.W.append(np.random.rand(self.hiddenLayerSize, self.hiddenLayerSize))
        self.W.append(np.random.rand(self.hiddenLayerSize, self.outputLayerSize))

    def forward(self, data):
        layers = []
        data = np.append(data, [1])   # append the bias term
        layers.append(data)
        for h in range(self.hiddenLayerCount + 1):
            z = np.dot(layers[h], self.W[h])
            a = sigmoid(z)
            layers.append(a)

        return sigmoid(layers[self.hiddenLayerCount + 1])
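Here is a quick sanity check that reproduces the skew (a minimal sketch using the class above; the trial count and seed are arbitrary). It averages the output over many freshly randomized networks and inputs:

np.random.seed(0)                                # arbitrary seed, for reproducibility
outputs = []
for _ in range(10000):
    net = NeuralNetwork()                        # fresh random weights each trial
    data = np.random.uniform(-10, 10, 4)         # random inputs in [-10, 10]
    outputs.append(float(net.forward(data)[0]))
print(np.mean(outputs))                          # comes out well above 0.5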

I can fix the problem by subtracting 0.1 from the output, but this is obviously cheating; I see no mathematical reason for 0.1 as a magic number.

I believe I'm approaching the problem wrong, or have messed up some of my code. Any help would be appreciated!

1 Answer
叛逆 · answered 2019-07-27 12:40

There's at least one problem with your neural network that skews your output probabilities: the model output is the sigmoid of the last layer, which is itself already a sigmoid activation.

This means that your logit (i.e., the raw score fed into the final sigmoid) lies in [0, 1], so the final probability is computed over the range [0, 1] instead of (-inf, inf).

[plot of the sigmoid function]

As you can see from the graph above, this forces the resulting probability above 0.5: since sigmoid(0) = 0.5 and the sigmoid is increasing, any logit in [0, 1] maps to an output in [0.5, ~0.731].
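You can verify the bound numerically (a minimal, self-contained sketch):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))   # 0.5      -> lower bound of the final output
print(sigmoid(1.0))   # ~0.7311  -> upper bound of the final output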

Solution: try the same network without the last sigmoid.
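In terms of the forward method from the question, one way to apply this (a sketch; it simply drops the extra outer sigmoid, since the last layer is already a sigmoid activation):

def forward(self, data):
    layers = []
    data = np.append(data, [1])   # append the bias term
    layers.append(data)
    for h in range(self.hiddenLayerCount + 1):
        z = np.dot(layers[h], self.W[h])
        a = sigmoid(z)
        layers.append(a)

    # The last layer already went through sigmoid(); return it directly
    # instead of squashing it into (0.5, 0.731) a second time.
    return layers[self.hiddenLayerCount + 1]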
