FF Neural network and binary classification


Whenever I train a feed-forward neural network on a binary classification problem, the net returns float values. What's the theory behind this? Can these be interpreted as probabilities? For instance, if the net returns 0.7, is that equivalent to saying there's a 70% probability that the value is 1 and not 0? So should I just apply a threshold to the float values to map them to either 0 or 1?


BEST ANSWER

I'm assuming you're using a sigmoid function as your activation function?

A sigmoid will return values in the range (0, 1). When I was playing around with mine, I treated the output as a percentage over some arbitrary range. It can serve as a binary result, though, if you can tolerate a little bit of error. When I was training logic gates, after a fairly successful training session, 1 AND 1 resulted in something like 0.9999999, which is pretty much 1. You can just round it at that point.
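As a minimal sketch of the rounding idea above: the weights below are hypothetical values for a single sigmoid neuron acting as an AND gate (not the ones from the answer's actual training run), chosen so the unit only fires when both inputs are 1.

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical trained weights: large positive weights plus a bias
# that keeps the pre-activation negative unless both inputs are 1.
w1, w2, bias = 10.0, 10.0, -15.0

def and_gate(a, b):
    return sigmoid(w1 * a + w2 * b + bias)

print(and_gate(1, 1))         # close to 1 (sigmoid of +5)
print(and_gate(0, 1))         # close to 0 (sigmoid of -5)
print(round(and_gate(1, 1)))  # rounding gives the hard label 1
```

Rounding here is the same as thresholding at 0.5.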

I made a post about this a month or two ago. I'll link to it if I can find it.



When you train a NN on a binary problem with a sigmoid output (rather than a binary activation function), the output can be read as a probability: the estimated probability that the instance belongs to the positive class.
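If you do want hard labels, the probabilistic reading makes the decision rule trivial. A minimal sketch, with a hypothetical `predict` helper (the 0.5 threshold is the usual default, not something prescribed by the answer):

```python
def predict(prob, threshold=0.5):
    """Map a sigmoid output (probability of class 1) to a hard 0/1 label."""
    return 1 if prob >= threshold else 0

print(predict(0.7))  # -> 1: the net estimates ~70% probability of class 1
print(predict(0.3))  # -> 0
```

The threshold can be moved away from 0.5 if one kind of mistake is more costly than the other.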

I never use a threshold or a binary activation function, because it is always interesting to study the probabilities themselves. For example, you can have a misclassified instance whose probability is around 0.5: the NN is simply not sure which class to pin on it. Conversely, if an instance is misclassified with a probability close to 0 or 1, that is a confident error, and you should seriously investigate why it is misclassified.
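The distinction between an unsure error and a confident error can be sketched as a small hypothetical helper (the name `error_severity` and the midpoint-distance measure are illustrative, not from the answer):

```python
def error_severity(prob, true_label, threshold=0.5):
    """For a misclassified instance, how confidently wrong was the net?

    Returns None if the instance is classified correctly; otherwise the
    distance of the predicted probability from the uncertain midpoint 0.5
    (0.0 = completely unsure, 0.5 = maximally confident and wrong).
    """
    predicted = 1 if prob >= threshold else 0
    if predicted == true_label:
        return None  # correctly classified, nothing to diagnose
    return round(abs(prob - 0.5), 2)

print(error_severity(0.52, 0))  # mild error: the net was nearly unsure (0.02)
print(error_severity(0.98, 0))  # confident error worth investigating (0.48)
print(error_severity(0.98, 1))  # None: correct prediction
```

Sorting misclassified instances by this value surfaces the confident errors that deserve a closer look.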