Why does the NN toolbox suggest purelin as the output activation function of a neural network?


When I study neural networks, the mathematical derivations always use a sigmoid function in both the hidden layer and the output layer. But the NN toolbox from MathWorks suggests using a sigmoid in the hidden layer and purelin (a linear activation) in the output layer. Can anyone tell me why the output layer can be purelin? I just can't see the reason for this choice of activation function.

https://i.stack.imgur.com/c91K0.jpg (image: the traditional backpropagation formula)

Following that formula, if I use the purelin function the result will be quite different. But I have never seen a derivation of backpropagation in which the output activation function is purelin. I just wonder whether there is any reason for using purelin when it does not match the traditional backpropagation derivation.
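For reference, here is a sketch of how the output-layer delta changes when the output activation is linear (purelin) instead of a sigmoid, assuming the usual squared-error backpropagation update; the notation (delta, net, t, y) is mine rather than taken from the linked image:

```latex
% Sigmoid output unit: y_k = \sigma(\mathrm{net}_k), so f'(\mathrm{net}_k) = y_k (1 - y_k)
\delta_k = (t_k - y_k)\, f'(\mathrm{net}_k) = (t_k - y_k)\, y_k (1 - y_k)

% Linear (purelin) output unit: y_k = \mathrm{net}_k, so f'(\mathrm{net}_k) = 1
\delta_k = (t_k - y_k)\cdot 1 = t_k - y_k
```

In other words, only the derivative factor at the output layer changes; the hidden-layer deltas and the weight-update rule keep their usual form.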


There is 1 answer below.


The aim of using the sigmoid as an activation function in an artificial neural network is to bound the range of a node's output. If a sigmoid is already used in the hidden layer, the number of hidden-layer nodes is finite, and the gains (weights) of the output-layer nodes are bounded, then the output-layer nodes produce bounded output even with a linear (purelin) activation.
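A small numeric sketch of that boundedness argument (plain NumPy, not the toolbox itself; the weights and input here are made up for illustration):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

n_hidden = 10                          # finite number of hidden nodes
W1 = rng.normal(size=(n_hidden, 3))    # hidden-layer weights (3 inputs), arbitrary
b1 = rng.normal(size=n_hidden)
W2 = rng.normal(size=(1, n_hidden))    # bounded output-layer gains
b2 = rng.normal(size=1)

# Each sigmoid activation lies in (0, 1), so a purelin output satisfies
# |output| <= sum(|W2|) + |b2| no matter what the inputs are.
bound = np.abs(W2).sum() + np.abs(b2).item()

x = rng.normal(size=3) * 1e6           # even an extreme input
h = sigmoid(W1 @ x + b1)               # hidden activations stay in (0, 1)
y = (W2 @ h + b2).item()               # linear (purelin) output

print(f"output = {y:.3f}, bound = {bound:.3f}")  # |output| stays within the bound
```

So the squashing at the hidden layer is what keeps the network's output under control; the output layer does not need to squash again.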

But it is just a suggestion; you can still use a sigmoid in the output layer.