SciPy minimize returns a higher value than the minimum


As part of a multi-start optimization, I run differential evolution (DE) and feed its output as the initial value to a scipy minimization with SLSQP (I need SLSQP because of constraints).

I am testing the procedure on the Ackley function. Even in situations in which DE returns the optimum (all zeros), the scipy minimization deviates from that optimal initial value and returns a value higher than at the optimum.

Do you know how to make scipy minimize return the optimum? I noticed that specifying a tolerance for scipy minimize helps, but it does not solve the issue completely. Scaling the objective function makes things worse. The problem is not present with the COBYLA solver (a sketch of that call is shown after the results below).

Here are the optimization steps:

# Set up
import numpy as np
from scipy.optimize import differential_evolution, minimize

x0min = -20
x0max = 20
xdim = 4
fun = ackley
bounds = [(x0min, x0max)] * xdim
tol = 1e-12


# Get a DE solution
result = differential_evolution(fun, bounds, 
                                maxiter=10000,
                                tol=tol,
                                workers = 1, 
                                init='latinhypercube')

# Initialize at DE output
x0 = result.x

# Estimate the model
r = minimize(fun, x0, method='SLSQP', tol=1e-18)

which in my case yields

result.fun = -4.440892098500626e-16
r.fun = 1.0008238682246429e-09

result.x = array([0., 0., 0., 0.])
r.x = array([-1.77227927e-10, -1.77062108e-10,  4.33179228e-10, -2.73031830e-12])
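
For reference, the COBYLA comparison mentioned above looks roughly like this (a sketch; r_cobyla is just an illustrative name, and COBYLA is derivative-free, so no numerical Jacobian approximation is involved):

# Same DE starting point, but polished with the derivative-free COBYLA solver
r_cobyla = minimize(fun, x0, method='COBYLA', tol=1e-12)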

Here is the implementation of the Ackley function:

def ackley(x):
    """Compute the value of the Ackley benchmark function.

    Parameters
    ----------
    x : 1-D array of size (dim,) or 2-D array of size (dim, N)
        Each row of the matrix represents one dimension.
        Columns therefore correspond to different points at which
        the function is evaluated; N is the number of points.

    Returns
    -------
    fvals : a scalar if x is a 1-D array, or
            a 1-D array of size (N,) if x is a 2-D array of size (dim, N),
            containing the function value for each column of x.
    """
    n = x.shape[0]
    ninverse = 1 / n
    sum1 = np.sum(x**2, axis=0)
    sum2 = np.sum(np.cos(2 * np.pi * x), axis=0)

    fvals = (20 + np.exp(1) - (20 * np.exp(-0.2 * np.sqrt(ninverse * sum1)))
             - np.exp(ninverse * sum2))
    return fvals
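
As a quick sanity check (assuming numpy is imported as np, as above), evaluating the function at the origin reproduces the tiny negative value that DE reports, i.e. pure floating-point rounding around the true minimum of 0:

# Value at the global minimizer x = 0
print(ackley(np.zeros(4)))   # ~ -4.44e-16, matching result.fun above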

Best answer:

Decreasing the "step size used for numerical approximation of the Jacobian" in the SLSQP options solved the issue for me.
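
For SLSQP, that step size is the eps option of scipy.optimize.minimize. A minimal sketch of what the adjusted call might look like (the specific eps and ftol values are illustrative, not the asker's exact settings):

# Re-run the local polish with a smaller finite-difference step for the Jacobian.
# 'eps' is the step size used for the numerical approximation of the Jacobian;
# the SLSQP default (~1.5e-8) can swamp the gradient this close to the optimum.
r = minimize(fun, x0, method='SLSQP',
             options={'eps': 1e-11,    # smaller Jacobian step (illustrative value)
                      'ftol': 1e-12})  # tighter convergence tolerance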