Differential evolution algorithm different results for different runs

As the title says, I am using the Differential Evolution algorithm, as implemented in the Python mystic package, on a global optimisation problem with O(10) parameters, with bounds and constraints.

I am using the simple diffev interface:

    import mystic.solvers as my

    # func, x0, bnds, gtol, constraint_eq, penalty, mon, and scale are defined elsewhere
    result = my.diffev(func, x0, npop=10*len(list(bnds)), bounds=bnds,
                       ftol=1e-11, gtol=gtol, maxiter=1024**3, maxfun=1024**3,
                       constraints=constraint_eq, penalty=penalty,
                       full_output=True, itermon=mon, scale=scale)

I experimented with running the SAME optimisation several times: for a given scale factor of the differential evolution algorithm, I run the optimisation problem 10 times.
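
Schematically, the repetition looks like this (a sketch; scale = 0.7 is one of the values I tried, and itermon is omitted for brevity):

    import mystic.solvers as my

    # same problem, same settings, repeated 10 times
    solutions = []
    for run in range(10):
        result = my.diffev(func, x0, npop=10*len(list(bnds)), bounds=bnds,
                           ftol=1e-11, gtol=3500, maxiter=1024**3, maxfun=1024**3,
                           constraints=constraint_eq, penalty=penalty,
                           full_output=True, scale=0.7)
        solutions.append(result[0])  # result[0] is the solution vector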

The result? I get a different answer on almost every run!

I experimented with scale values of 0.7, 0.75, 0.8, and 0.85 (the range suggested in the mystic documentation), all with roughly the same bad behaviour.

Here is an example: the x-axis shows the parameters, the y-axis their values, and the labels indicate the iteration. Ideally you would see only one line.

[Image: parameter values at different iterations]

I run with gtol = 3500, so the runs should be quite long. I am using npop = 10 * (number of parameters) and ftol = 1e-11, and the other diffev arguments are left at their defaults.

Does anyone have suggestions for tuning differential evolution with mystic? Is there a way to avoid this variance in the results? I know it is a stochastic algorithm, but I did not expect it to give different results when running with gtol = 3500. My understanding was also that this algorithm does not get stuck in local minima, but I might be wrong.

p.s.

This is not directly relevant to the question, but it gives some context for why this matters to me.

What I need to do for my work is to minimise a function, under the conditions above, for several input data sets: I optimise each data configuration over the O(10) parameters, and the configuration whose optimised parameters give the overall minimum is the 'chosen' one.

Now, if the optimiser is not stable, it might by chance report the wrong data configuration as the optimal one, since I run over hundreds of them.

1 Answer

I'm the mystic author. As you state, differential evolution (DE) is a stochastic algorithm. Essentially, DE applies random mutations to the current solution vectors to come up with new candidate solutions. So you can expect to get different results across runs in many cases, especially when the function is nonlinear.
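
For intuition, the classic DE/rand/1 mutation builds each trial point from three randomly chosen population members; a generic sketch of the idea (not mystic's internal code):

    import numpy as np

    def de_rand_1(pop, scale=0.8, rng=None):
        # classic DE/rand/1 mutation: v = x_r1 + scale * (x_r2 - x_r3),
        # where pop is a 2D array of candidate solution vectors
        rng = rng or np.random.default_rng()
        r1, r2, r3 = rng.choice(len(pop), size=3, replace=False)
        return pop[r1] + scale * (pop[r2] - pop[r3])

The random draw of r1, r2, r3 (plus the crossover step) is exactly where the run-to-run variation comes from.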

Theoretically, if you let it run forever, it will find the global minimum. However, most of us don't want to wait that long, so there are termination conditions like gtol (change over generations), which sets the cutoff on the number of iterations without improvement. There are also solver parameters that affect how the mutation is generated, like cross, scale, and strategy. Essentially, if you get different results for different runs, all that means is that you haven't tuned the optimizer for the particular cost function yet, and you should play with the settings.
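
All three knobs can be passed directly to diffev; a sketch (Rand1Bin is one of the strategies provided in mystic.strategy, and the values here are just starting points, not a recommendation):

    import mystic.solvers as my
    from mystic.strategy import Rand1Bin

    result = my.diffev(func, x0, npop=100, bounds=bnds,
                       cross=0.9,          # crossover probability
                       scale=0.8,          # mutation scale factor
                       strategy=Rand1Bin,  # which mutation/crossover rule to use
                       gtol=500, ftol=1e-11, full_output=True)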

Of primary importance is the balance between npop and gtol, and that's where I often go first. Generally, you want to increase the population of candidates until it saturates (i.e. has no further effect) or becomes too slow.
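
One way to find the saturation point is to sweep npop and watch when the best minimum stops improving; a rough sketch, reusing the problem objects from the question:

    import mystic.solvers as my

    # hypothetical sweep: double npop until fopt stops improving
    for npop in (50, 100, 200, 400):
        xopt, fopt = my.diffev(func, x0, npop=npop, bounds=bnds,
                               gtol=500, ftol=1e-11, full_output=True)[:2]
        print(npop, fopt)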

If you have other information you can constrain the problem with, that often helps (e.g. use constraints or penalty to restrict your search space).
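
In mystic, hard constraints can be built symbolically and soft penalties via decorators; a minimal sketch, where the inequality x0 + x1 <= 1.0 is purely hypothetical:

    import mystic.symbolic as ms
    from mystic.penalty import quadratic_inequality

    # hard constraint: candidate solutions get mapped onto the feasible set
    eqn = "x0 + x1 <= 1.0"  # hypothetical restriction of the search space
    cons = ms.generate_constraint(ms.generate_solvers(ms.simplify(eqn)))

    # soft penalty: infeasible points are charged a quadratic cost
    def g(x):  # g(x) <= 0 encodes the same restriction
        return x[0] + x[1] - 1.0

    @quadratic_inequality(g, k=1e4)
    def penalty(x):
        return 0.0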

I also use mystic's visualization tools to try to get an understanding of what the response surface looks like (i.e. visualization and interpolation of logged data).
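
Concretely, you can record the solver trajectory with a LoggingMonitor and inspect it afterwards; a sketch (log_reader is one of mystic's log-plotting utilities):

    import mystic
    import mystic.solvers as my
    from mystic.monitors import LoggingMonitor

    mon = LoggingMonitor(1, 'log.txt')   # write parameters and cost every iteration
    result = my.diffev(func, x0, npop=100, bounds=bnds, gtol=500,
                       ftol=1e-11, itermon=mon, full_output=True)

    mystic.log_reader('log.txt')         # plot parameter and cost trajectories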

The short answer is: any solver that includes randomness in the algorithm will often need to be tuned before you get consistent results.