Shalmanese
Platinum Member
I'm coding a 2-hidden-layer NN at the moment and there's something I'm just not understanding about the whole backpropagation thing.
My neurons are using a sigmoid function (1 / (1 + e^-weightSum)) and the backpropagation rule I got from the book is sigma = p (1 - p) (tk - p). But in that case, when p = 0 or p = 1, sigma will always be 0 regardless of the desired outcome, so none of the weights will be updated.
Also, since the outputs of the first hidden layer are between 0 and 1, the weighted inputs to the second hidden layer are going to be fairly small, so its outputs will always be ~0.5. Is this correct behaviour?
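To make the first issue concrete, here's a minimal sketch (the function names and test values are mine, not from the book) showing that the book's delta rule p(1 - p)(tk - p) does collapse toward zero when the sigmoid saturates, even when the target is completely wrong:

```python
import math

def sigmoid(x):
    # 1 / (1 + e^-x), the activation described in the post
    return 1.0 / (1.0 + math.exp(-x))

def output_delta(p, t):
    # the book's rule: sigma = p (1 - p) (t - p)
    return p * (1.0 - p) * (t - p)

# A saturated unit (weightSum = 10 gives p ~ 0.99995) vs. a midrange one.
p_saturated = sigmoid(10.0)
p_midrange = sigmoid(0.0)   # exactly 0.5

# Both have target 0, so the saturated unit's error (t - p) is near -1,
# yet the derivative factor p(1 - p) crushes its delta almost to zero.
print(output_delta(p_saturated, 0.0))  # tiny magnitude despite large error
print(output_delta(p_midrange, 0.0))   # -0.125, a much stronger signal
```

So the observation is right: the p(1 - p) derivative term vanishes at the extremes. In practice p only reaches exactly 0 or 1 in the limit, but a saturated unit still learns extremely slowly, which is one reason weights are usually initialized small so units start near the 0.5 midrange.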