Relation between perceptron accuracy and epoch

Is it possible that the accuracy of a perceptron decreases as I train for more epochs? In this case, I use the same training set several times.
Neither the accuracy on the training set nor on the test set is stable as the number of epochs increases. In fact, experimental results show that neither the in-sample error nor the out-of-sample error decreases monotonically. For this reason a "pocket" strategy is often applied: unlike early stopping, the pocket algorithm keeps the best solution seen so far "in its pocket" instead of the last solution.
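A minimal sketch of the pocket idea, assuming NumPy, labels in {-1, +1}, and a bias term folded into the feature vector as a column of ones; the function name and parameters are illustrative, not from the original answer:

```python
import numpy as np

def pocket_perceptron(X, y, epochs=50, seed=None):
    """Perceptron with the pocket strategy: run ordinary perceptron
    updates, but keep ("pocket") the weight vector with the best
    training accuracy seen so far instead of the final one.
    X: (n_samples, n_features) array; y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    pocket_w = w.copy()
    best_acc = np.mean(np.sign(X @ w) == y)  # sign(0) != +/-1, so this starts at 0

    for _ in range(epochs):
        for i in rng.permutation(n):
            if np.sign(X[i] @ w) != y[i]:          # misclassified point
                w = w + y[i] * X[i]                # standard perceptron update
                # Recomputing full training accuracy after each update is
                # O(n); done here for clarity, not efficiency.
                acc = np.mean(np.sign(X @ w) == y)
                if acc > best_acc:                 # pocket the better weights
                    best_acc = acc
                    pocket_w = w.copy()
    return pocket_w, best_acc

# Toy usage on noisy, non-separable data (illustrative only).
rng = np.random.default_rng(0)
X = np.c_[np.ones(200), rng.normal(size=(200, 2))]  # bias column + 2 features
true_w = np.array([0.5, 2.0, -1.0])
y = np.where(X @ true_w + rng.normal(scale=1.0, size=200) > 0, 1, -1)
w, acc = pocket_perceptron(X, y, epochs=20, seed=0)
```

On data that is not linearly separable, the pocketed weights are typically better than whatever weights the perceptron happens to end on, which is exactly why the answer recommends this over keeping the last solution.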
Yes.

This is a commonly studied phenomenon: the accuracy on never-before-seen data (the test set) starts to decrease after a certain point, that is, after a certain number of passes through the training data (what you call epochs). This phenomenon is called overfitting and is well understood. You want to stop training early, as early as possible, or use regularization.
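A sketch of what "stop early" can look like in practice, assuming NumPy, labels in {-1, +1}, and a held-out validation set; the patience parameter and function name are illustrative assumptions, not something stated in the answer:

```python
import numpy as np

def train_with_early_stopping(X_train, y_train, X_val, y_val,
                              max_epochs=100, patience=5):
    """Plain perceptron with early stopping: track validation accuracy
    after every epoch and stop once it has not improved for `patience`
    consecutive epochs, returning the weights from the best epoch."""
    n, d = X_train.shape
    w = np.zeros(d)
    best_w, best_val_acc, epochs_since_best = w.copy(), 0.0, 0

    for epoch in range(max_epochs):
        for i in range(n):
            if np.sign(X_train[i] @ w) != y_train[i]:   # misclassified
                w = w + y_train[i] * X_train[i]         # perceptron update
        val_acc = np.mean(np.sign(X_val @ w) == y_val)  # held-out accuracy
        if val_acc > best_val_acc:
            best_val_acc, best_w, epochs_since_best = val_acc, w.copy(), 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:  # validation accuracy plateaued
                break
    return best_w, best_val_acc
```

Returning the weights from the best validation epoch, rather than the weights at the moment of stopping, is what prevents the overfitting described above from degrading the model you actually deploy.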