I am implementing a simple perceptron to classify the OR function in Python. However, the error doesn't converge. Any suggestions would be highly appreciated.
from pylab import *   # provides random.rand, ylim and plot

def activation_function(x):
    # step function: 0 for negative input, 1 otherwise
    if x < 0:
        return 0
    else:
        return 1

training_set = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = random.rand(2)   # two random initial weights
errors = []
eta = .2
n = 100

for i in range(n):
    for x, y in training_set:
        u = sum(x * w)                       # weighted sum of the inputs
        error = y - activation_function(u)   # target minus prediction
        errors.append(error)
        for index, value in enumerate(x):
            w[index] += eta * error * value  # perceptron learning rule

ylim([-1, 1])
plot(errors)
Error plot: [image showing the error oscillating between -1 and 1]
I would say you are missing the bias b... If you add it, the error converges beautifully.
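A minimal sketch of the training loop with the bias added (assuming numpy as np and matplotlib.pyplot as plt; the seed value is arbitrary):

import numpy as np
import matplotlib.pyplot as plt

def activation_function(x):
    # step activation: 1 for x >= 0, else 0
    return 0 if x < 0 else 1

training_set = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

np.random.seed(42)      # any seed works; different seeds give different lines
w = np.random.rand(2)
b = np.random.rand()    # the bias that was missing
errors = []
eta = .2

for i in range(100):
    for x, y in training_set:
        u = np.sum(np.array(x) * w) + b     # include the bias in the weighted sum
        error = y - activation_function(u)
        errors.append(error)
        b += eta * error                    # the bias sees a constant input of 1
        for index, value in enumerate(x):
            w[index] += eta * error * value

plt.ylim([-1, 1])
plt.plot(errors)
plt.show()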
Note that I imported the libraries differently than you, under more reasonable names, so that I know where each function comes from... Let me know if that helps you...
And by the way, this is the result of the classification: [image: the points plotted in red and blue with the separating line]. I hope the colors make sense... Red and blue are kind of flashy, but you get the idea. Note that there are infinitely many solutions to this problem, so if you change the random seed you will get a different line that linearly separates your points.
Additionally, your algorithm cannot converge as written because of the point (0, 0): when your line passes through it, the prediction for that point is wrong, and yet the weights are never updated, since value = 0 for both components of that input.
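To make that concrete, here is a minimal trace of one update for the input (0, 0) (assuming numpy as np, with eta = .2 as in your code):

import numpy as np

x, y = (0, 0), 0
w = np.random.rand(2)            # any weights at all
u = np.sum(np.array(x) * w)      # u = 0 no matter what w is
prediction = 0 if u < 0 else 1   # activation_function(0) = 1, so the prediction is wrong
error = y - prediction           # error = -1
updates = [.2 * error * value for value in x]
print(updates)                   # [-0.0, -0.0]: the weights never move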
So the update does nothing for that point, and that is the reason for the oscillations in your error.

EDIT: As requested, I wrote a small tutorial (a Jupyter notebook) with some examples of how to draw the decision boundary of a classifier. You can find it on GitHub:
github repository: https://github.com/michelucci/python-Utils
Hope it is useful.
EDIT 2: And if you want the quick and very dirty version (the one I used for the red and blue plot), here is the code:
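Something along these lines (a minimal sketch; the seed, figure details, and which class gets which color are arbitrary choices here):

import numpy as np
import matplotlib.pyplot as plt

# Train the perceptron with the bias, as above, in compact form.
training_set = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
np.random.seed(0)   # a different seed gives a different separating line
w, b, eta = np.random.rand(2), np.random.rand(), .2
for _ in range(100):
    for x, y in training_set:
        error = y - (1 if np.dot(x, w) + b >= 0 else 0)
        b += eta * error
        w += eta * error * np.array(x)

# Color the points by class (red for 0, blue for 1) and draw the learned
# boundary w[0]*x1 + w[1]*x2 + b = 0, i.e. x2 = -(w[0]*x1 + b) / w[1].
for (x1, x2), label in training_set:
    plt.scatter(x1, x2, c='red' if label == 0 else 'blue', s=80)
xs = np.linspace(-0.5, 1.5, 50)
plt.plot(xs, -(w[0] * xs + b) / w[1], 'k-')
plt.show()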