If I run OpenCV MLP training and classification consecutively on the same data, I get different results. That is, if I put training a new MLP on the same training data and classifying the same test data inside a for loop, each iteration gives me different results.
This happens even though I am creating a new MLP object on each iteration. However, if instead of using a for loop I just run the program several times, restarting it after each train-and-classify, the results are exactly the same.
So the question is: does OpenCV reuse previous weights, variables, or something of the sort from earlier MLP training runs, even though it is not the same MLP object? Why does it do this?
I've only done a little bit of poking around so far, but what I've seen confirms my first suspicion...
It looks as though each time you start the program, the random number generator is seeded to a fixed value:
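The relevant bit (paraphrased from memory, so treat the exact constant and location as approximate) is that OpenCV's default RNG always starts from the same fixed state, and the MLP appears to draw its initial weights from that shared generator rather than from a per-object one:

    // Paraphrased from OpenCV's core headers: the default RNG is
    // constructed with a fixed state, so every fresh process starts
    // the same random sequence.
    inline RNG::RNG() { state = 0xffffffff; }

    // cv::theRNG() hands out a shared (per-thread) RNG built from
    // that default state; the MLP's random weight initialization
    // seems to consume numbers from it.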
So each time you start the program you generate the same random sequence from the beginning, and the initial weights come out identical. When you train in a loop within a single run, each iteration keeps consuming the next numbers in that sequence, so the initial weights (and therefore the trained network and its predictions) generally differ from one iteration to the next.
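If you want each loop iteration to reproduce the single-run behaviour, one option is to reset the global RNG to the same state before training each network. Below is a minimal sketch against the C++ cv::ml API; the layer sizes, training parameters, and toy data are placeholders of my own, not taken from your code:

    #include <iostream>
    #include <opencv2/core.hpp>
    #include <opencv2/ml.hpp>

    int main()
    {
        // Toy data: 4 samples, 2 features, one target value each.
        float samplesData[]   = { 0,0,  0,1,  1,0,  1,1 };
        float responsesData[] = { 0, 1, 1, 0 };
        cv::Mat samples(4, 2, CV_32F, samplesData);
        cv::Mat responses(4, 1, CV_32F, responsesData);

        for (int i = 0; i < 5; ++i)
        {
            // Reset the shared RNG to its default state so weight
            // initialization starts from the same point every iteration
            // (on OpenCV 3.x+ cv::setRNGSeed does the same thing).
            cv::theRNG().state = 0xffffffff;

            cv::Ptr<cv::ml::ANN_MLP> mlp = cv::ml::ANN_MLP::create();
            cv::Mat layerSizes = (cv::Mat_<int>(3, 1) << 2, 4, 1);
            mlp->setLayerSizes(layerSizes);
            mlp->setActivationFunction(cv::ml::ANN_MLP::SIGMOID_SYM, 1, 1);
            mlp->setTrainMethod(cv::ml::ANN_MLP::BACKPROP, 0.1, 0.1);

            mlp->train(cv::ml::TrainData::create(samples, cv::ml::ROW_SAMPLE, responses));

            cv::Mat output;
            mlp->predict(samples, output);
            std::cout << "iteration " << i << ":\n" << output << std::endl;
        }
        return 0;
    }

With the reseed in place, every iteration should print the same predictions; remove it and you should see the drifting results you describe, since each network is initialized from a different point in the shared random sequence.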