I am trying to understand backpropagation in a simple 3-layer neural network trained on MNIST.

There is the input layer, with weights and a bias. The labels are MNIST digits, so the target is a 10-class vector. The second layer is a linear transform. The third layer is the softmax activation, which gives the output as probabilities.
Backpropagation calculates the derivative at each step and calls this the gradient. Each earlier layer combines the global (upstream) gradient with its own local gradient, following the chain rule. I am having trouble calculating the local gradient of the softmax.
Several resources online go through the explanation of the softmax and its derivative, and even give code samples of the softmax itself:

import numpy as np

def softmax(x):
    """Compute the softmax of vector x."""
    exps = np.exp(x)
    return exps / np.sum(exps)
The derivative is explained for the two cases: i = j, where dSM[i]/dx[j] = SM[i] * (1 - SM[i]), and i != j, where dSM[i]/dx[j] = -SM[i] * SM[j]. This is a simple code snippet I've come up with, and I was hoping to verify my understanding:
def softmax(self, x):
    """Compute the softmax of vector x."""
    exps = np.exp(x)
    return exps / np.sum(exps)

def forward(self):
    # self.input is a vector of length 10
    # and is the output of (w * x) + b
    self.value = self.softmax(self.input)

def backward(self):
    # Attempt at the local gradient of the softmax, element by element.
    for i in range(len(self.value)):
        for j in range(len(self.input)):
            if i == j:
                self.gradient[i] = self.value[i] * (1 - self.input[i])
            else:
                self.gradient[i] = -self.value[i] * self.input[j]
Then self.gradient is the local gradient, which is a vector. Is this correct? Is there a better way to write this?
As I said, you have n^2 partial derivatives. If you do the math, you find that dSM[i]/dx[k] is SM[i] * (dx[i]/dx[k] - SM[k]), where dx[i]/dx[k] is 1 when i = k and 0 otherwise. So both branches should use the softmax output self.value rather than self.input, and self.gradient should be an n x n matrix indexed as self.gradient[i, j] rather than a vector; see the sketch below.
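A minimal sketch of the corrected backward pass, assuming self.gradient is (re)created here as an n x n NumPy array:

def backward(self):
    # Full softmax Jacobian: n^2 partial derivatives dSM[i]/dx[j].
    n = len(self.value)
    self.gradient = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                # dSM[i]/dx[i] = SM[i] * (1 - SM[i])
                self.gradient[i, j] = self.value[i] * (1 - self.value[i])
            else:
                # dSM[i]/dx[j] = -SM[i] * SM[j]
                self.gradient[i, j] = -self.value[i] * self.value[j]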
By the way, this may be computed more concisely in vectorized form.
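A sketch, assuming self.value holds the softmax output as a 1-D NumPy array (jac is the full n x n Jacobian):

# Jacobian of the softmax: diag(SM) - SM * SM^T
SM = self.value.reshape((-1, 1))
jac = np.diagflat(self.value) - np.dot(SM, SM.T)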
I am assuming you have a 3-layer NN, where W1 and b1 are associated with the linear transformation from the input layer to the hidden layer, and W2 and b2 are associated with the linear transformation from the hidden layer to the output layer. Z1 and Z2 are the input vectors to the hidden layer and output layer, a1 and a2 are the outputs of the hidden layer and output layer, and a2 is your predicted output. delta3 and delta2 are the backpropagated errors, and from them you get the gradients of the loss function with respect to the model parameters.

This is the general scenario for a 3-layer NN (an input layer, one hidden layer, and one output layer), and the gradients should be easy to compute; a sketch follows below. Since another answer to this post has already pointed out the problem in your code, I am not repeating it here.
np.exp is not numerically stable because it can overflow to inf for large inputs, so you should subtract the maximum of x first. If x is a matrix, please check the softmax function in this notebook: https://github.com/rickiepark/ml-learn/blob/master/notebooks/5.%20multi-layer%20perceptron.ipynb
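A sketch of that stabilization for the vector case; subtracting the maximum does not change the result, since exp(x - c) / sum(exp(x - c)) = exp(x) / sum(exp(x)):

import numpy as np

def softmax(x):
    """Compute a numerically stable softmax of vector x."""
    # Shift so the largest exponent is exp(0) = 1, avoiding overflow.
    exps = np.exp(x - np.max(x))
    return exps / np.sum(exps)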