Is a Neural network with 2 input nodes, 2 hidden nodes and an output supposed to be able to solve the XOR problem provided there is no bias? Or can it get stuck?
Yes, you can, if you use an activation function like ReLU (f(x) = max(0, x)).
Example weights for such a network:
For the first (hidden) layer: for instance [[1, -1], [-1, 1]], so the two hidden units compute ReLU(x1 - x2) and ReLU(x2 - x1).
For the second (output) layer: [[1], [1]]. Since the weights are [[1], [1]] (and there can be no negative activations from the previous layer due to ReLU), the layer simply acts as a summation of the activations in layer 1.
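A minimal NumPy sketch of this forward pass; the hidden-layer weights [[1, -1], [-1, 1]] are one choice consistent with the description above, not necessarily the only one:

```python
import numpy as np

# XOR inputs and targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Hand-set weights, no bias terms anywhere
W1 = np.array([[1, -1],
               [-1, 1]])   # hidden layer: h = ReLU(X @ W1)
W2 = np.array([[1],
               [1]])       # output layer: just sums the hidden activations

h = np.maximum(0, X @ W1)  # ReLU
out = (h @ W2).ravel()

print(out)                 # [0 1 1 0] -- matches XOR without any bias
```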
While this method works in the example above, it is limited to using zero (0) as the label for the False examples of the XOR problem: with no bias, the all-zero input (0, 0) always produces an all-zero output, whatever the weights are. If, for example, we used ones for False examples and twos for True examples, this approach would no longer work.
If I remember correctly, it's not possible to have XOR without a bias.
I have built a neural network without bias, and a 2x2x1 architecture solves XOR in 280 epochs. I'm new to this, so I didn't know either way, but it works, so it is possible.
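A rough training sketch along those lines, assuming ReLU hidden units and a linear output (which the answer above showed admits a bias-free solution); the activation, learning rate, and epoch count here are illustrative choices, not necessarily the poster's, and some random initialisations can still get stuck:

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# 2x2x1 network, no bias terms anywhere
W1 = rng.normal(size=(2, 2))
W2 = rng.normal(size=(2, 1))
lr = 0.1

for epoch in range(2000):
    # forward pass
    z = X @ W1
    h = np.maximum(0, z)              # ReLU hidden layer
    y_hat = h @ W2                    # linear output
    loss = np.mean((y_hat - y) ** 2)

    # backward pass (plain MSE gradients)
    d_out = 2 * (y_hat - y) / len(X)
    dW2 = h.T @ d_out
    d_h = d_out @ W2.T
    d_z = d_h * (z > 0)
    dW1 = X.T @ d_z

    W1 -= lr * dW1
    W2 -= lr * dW2

print(loss, y_hat.ravel().round(2))   # often ends near [0, 1, 1, 0], but not on every run
```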
Leave the bias in. It doesn't see the values of your inputs.
In terms of a one-to-one analogy, I like to think of the bias as the offsetting `c`-value in the straight-line equation `y = mx + c`; it adds an independent degree of freedom to your system that is not influenced by the inputs to your network.
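A tiny illustration of that missing degree of freedom (the weight values are arbitrary, chosen only for the demo): without a bias, every layer maps the zero vector to the zero vector, so the (0, 0) input can only ever produce an output of 0, which is exactly the limitation noted for the ReLU solution above.

```python
import numpy as np

x0 = np.array([0.0, 0.0])            # the (0, 0) XOR input

# any weights at all, but no biases
W1 = np.array([[ 3.0, -2.0],
               [-1.5,  4.0]])
W2 = np.array([[ 0.7],
               [-5.0]])

h = np.maximum(0, x0 @ W1)           # ReLU of (0 @ W1) is 0 for any W1
out = h @ W2                         # 0 @ W2 is 0 for any W2
print(out)                           # [0.] -- the origin is pinned when there is no bias term
```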