Author 
Topic: Back propagation and Sigmoid function. 

yanncore 
Posted: 03May11 11:26 



Hi everyone! I understand the basics. I even made software in Python with which you can create multilayer perceptrons, but I have a problem. Apparently, when I want to do something more complex, like the XOR function, I have to use backpropagation and a different activation function. Before, I would just check whether the sum of all the weighted inputs is bigger than the threshold and set the neuron's state accordingly. Now I have to use a more complex activation function: the sigmoid function. But I don't get it, what am I supposed to do with it? I run the sum through it and I get a weird number. I thought the output was supposed to be 1 or 0, not 0.489324... and so on! What do I do with that number when I get it? Also, can someone explain how to do backpropagation? Should I use the last neuron's error signal as an input? Basically, I have three neurons and I want to teach them XOR using BP and sigmoid (actually I would like to use a normal activation function, but I can't). 
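For reference, the "weird number" comes from the sigmoid squashing any real-valued sum into the open interval (0, 1) rather than snapping it to 0 or 1. A minimal sketch (function name is just illustrative):

```python
import math

def sigmoid(x):
    # squashes any real-valued weighted sum into the open interval (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(0.0))    # exactly 0.5
print(sigmoid(-0.04))  # slightly below 0.5, the kind of in-between value described
```

A weighted sum near zero lands near 0.5, which is why values like 0.489324 show up instead of a clean 0 or 1.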


pejman 
Posted: 03May11 15:59 



You need to use a step activation function (threshold, or hardlim as they call it) or cut off the outputs:
e.g. hardlim(x) = 1 if x >= 0 else 0
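In Python, the hardlim rule above is a one-liner (a minimal sketch, since the original post mentions a Python implementation):

```python
def hardlim(x):
    # step (threshold) activation: fires 1 at or above zero, otherwise 0
    return 1 if x >= 0 else 0

print(hardlim(0.3))   # 1
print(hardlim(-0.3))  # 0
```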



yanncore 
Posted: 04May11 07:55 



So let me get this straight: after I get a strange number from the sigmoid function, I should run that number through a normal activation function? 


pejman 
Posted: 05May11 19:33 



No, you can either replace the sigmoid with a step activation function, or keep the sigmoid and round its output to suit your use. 
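The second option, keeping the sigmoid and rounding only the final output, might look like this (a minimal sketch; the 0.5 cutoff and function names are illustrative assumptions):

```python
import math

def sigmoid(x):
    # smooth activation used during training so gradients exist for backprop
    return 1.0 / (1.0 + math.exp(-x))

def binary_output(x, threshold=0.5):
    # keep the sigmoid internally, round only when a 0/1 answer is needed
    return 1 if sigmoid(x) >= threshold else 0

print(sigmoid(0.2))        # a value between 0 and 1
print(binary_output(0.2))  # 1
print(binary_output(-0.2)) # 0
```

The point is that the raw sigmoid value stays in the network while learning (backpropagation needs a differentiable activation), and the rounding happens only when reading out a final yes/no answer.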

