
Neural Network Backpropagation

Shalmanese

Platinum Member
I'm coding a 2-hidden-layer NN at the moment and there's something I'm just not understanding about the whole backpropagation thing.

My neurons use a sigmoid activation function (1 / (1 + e^-weightSum)), and the backpropagation rule I got from the book gives the output-layer error term as sigma = p (1 - p) (tk - p). But in that case, when p = 0 or p = 1, sigma will always be 0 regardless of the desired outcome, so none of the weights will be updated.
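The formulas above can be sketched in a few lines of Python. This is only an illustration of the question's math, not the book's code; `p` is the unit's sigmoid output and `t_k` is the target value:

```python
import math

def sigmoid(weight_sum):
    """Logistic activation from the question: 1 / (1 + e^-weightSum)."""
    return 1.0 / (1.0 + math.exp(-weight_sum))

def output_delta(p, t_k):
    """Output-layer error term from the book: sigma = p (1 - p) (tk - p)."""
    return p * (1.0 - p) * (t_k - p)

# The saturation problem described above: if p reaches exactly 0 or 1,
# the p * (1 - p) factor zeroes the delta no matter how wrong the output is.
print(output_delta(1.0, 0.0))  # 0.0 -- no weight update despite maximum error
print(output_delta(0.5, 1.0))  # 0.125 -- a mid-range output still produces a gradient
```

In practice a sigmoid only *approaches* 0 or 1 asymptotically, so the delta is tiny rather than exactly zero, but the learning slowdown near saturation is real.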

Also, since the output of the first hidden layer is between 0 and 1, the inputs to the second hidden layer are going to be fairly small, so its output will always be ~0.5. Is this correct behaviour?
 
Also, since the output of the first hidden layer is between 0 and 1, the inputs to the second hidden layer are going to be fairly small, so its output will always be ~0.5. Is this correct behaviour?
This is incorrect. Even though the inputs lie in (0, 1), the weights are not similarly bounded, so the weighted sum into a unit can be large in either direction. The output can deviate strongly from the mean: it can sit near 0 all the time, near 1 all the time, or anywhere in between. It does not have to be ~0.5.
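A quick sketch of the reply's point: what decides whether a sigmoid unit hovers near 0.5 is the magnitude of its weighted input, not the fact that the previous layer's outputs are in (0, 1). The weight values below are made up purely for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Outputs of a previous sigmoid layer: all confined to (0, 1).
hidden_outputs = [0.9, 0.8, 0.95]

# The weights, however, can grow during training, so the weighted sum
# into the next unit -- and hence its output -- can land anywhere in (0, 1).
small_weights = [0.1, 0.1, 0.1]   # small sum -> output stays near 0.5
large_weights = [6.0, -8.0, 5.0]  # large sum -> output saturates toward 1

for weights in (small_weights, large_weights):
    weight_sum = sum(w * h for w, h in zip(weights, hidden_outputs))
    print(round(sigmoid(weight_sum), 3))
```

With the small weights the weighted sum is ~0.27 and the output sits close to 0.5; with the larger weights the sum is ~3.75 and the output is close to 1, matching the reply.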

 