Trivial constrained optimization problem with mystic violates constraint and returns infinite function value


I am trying to use mystic for some constrained optimization problems, but it will not respect my constraint, even after using mystic's simplify function. I reckon I must be missing something, but I can't figure out what it is.

Here is a simple example of code that should illustrate my issue:

# %pip install mystic
import mystic as my
import numpy as np

def objective(X):
  x0, x1, x2 = X
  return x0*x1*x2

bounds = [(0., 1e5)]*3

eqn = "x0 + x1 + x2 - 100 == 0.0"

constraint = my.symbolic.generate_constraint(
    my.symbolic.generate_solvers(my.symbolic.simplify(eqn))
)

mon = my.monitors.VerboseMonitor(10)

result = my.solvers.diffev2(
  objective, 
  x0=bounds, 
  bounds=bounds, 
  constraints=constraint, 
  disp=True, 
  full_output=True, 
  itermon=mon)#, map=p.map)
print (result)

This is a very simple objective function (my actual work involves something more complicated), which I expect should give x0 = x1 = x2 = 100/3, i.e. f(x) = (100/3)^3, but mystic is ignoring the constraint and reporting an infinite function value. I have also tried negating the objective function, negating the constraint, and both at once; each time mystic ignored the constraint.

What is missing from this code to make mystic follow the simple summation constraint?

Edit: I found that removing the "s" from "constraints" in the diffev2 call stopped mystic from reporting infinite function values. I also changed the objective to -x0*x1*x2, since mystic minimizes, so this should actually give the x0 = x1 = x2 result I was expecting. Unfortunately, mystic now always converges to [100000, 100000, 100000], even when I change the constant in the constraint, square one of the terms in the objective, or increase npop and gtol.

1 Answer

For the bounds you are using, the objective function's minimum is not at [100/3] * 3; the minimum is 0, attained at multiple solutions including [100, 0, 0], [0, 100, 0], and [0, 0, 100].
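This is quick to verify numerically with plain Python, using only the question's objective function:

```python
# On the plane x0 + x1 + x2 = 100 with non-negative bounds, the product
# x0*x1*x2 is minimized at the vertices (where it is 0) and maximized
# at the centroid -- the point the question expected as the answer.
def objective(X):
    x0, x1, x2 = X
    return x0 * x1 * x2

vertices = [[100, 0, 0], [0, 100, 0], [0, 0, 100]]
centroid = [100 / 3] * 3

print([objective(v) for v in vertices])  # [0, 0, 0] -> the minima
print(objective(centroid))               # ~37037.04 -> the maximum, not the minimum
```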

The issue you are running into is that npop is too small. npop is the size of the trial population. The default is only 4, which is fine for 1-D problems, but for a multidimensional problem with such a large set of bounds you will need to start with more points.

Also, the optimizer will need additional iterations to explore the space in search of better candidates. You can control this with gtol, the number of generations with no improvement allowed before termination.
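For intuition, what the constraint generated from "x0 + x1 + x2 - 100 == 0.0" does is solve the equality for one variable and project each candidate onto the feasible plane. Here is a minimal hand-rolled sketch of that projection; it illustrates the idea, not mystic's actual implementation:

```python
# Hand-rolled analogue of the generated constraint: solve the equation
# for x2 and overwrite it, so every candidate lands exactly on the
# plane x0 + x1 + x2 == 100. (A sketch only -- mystic's generated
# solver is produced symbolically and also interacts with bounds.)
def constraint(X):
    x0, x1, x2 = X
    x2 = 100.0 - x0 - x1  # enforce the equality by construction
    return [x0, x1, x2]

print(constraint([10.0, 20.0, 12345.0]))  # -> [10.0, 20.0, 70.0]
```

Note that a naive projection like this can push the solved-for variable outside its bounds; mystic enforces the bounds separately during the solve.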

Using the following, it converged on the third attempt.

mon = my.monitors.VerboseMonitor(50)

result = my.solvers.diffev2(
  objective,
  x0=bounds,
  bounds=bounds,
  npop=2000,
  gtol=300,
  constraints=constraint,
  disp=True,
  full_output=True,
  itermon=mon,
  )

print(result)

Here is the output:

Generation 0 has ChiSquare: inf
Generation 50 has ChiSquare: 234.327267
Generation 100 has ChiSquare: 234.327267
Generation 150 has ChiSquare: 234.327267
Generation 200 has ChiSquare: 2.427155
Generation 250 has ChiSquare: 2.427155
Generation 300 has ChiSquare: 2.427155
Generation 350 has ChiSquare: 0.015526
Generation 400 has ChiSquare: 0.000000
Generation 450 has ChiSquare: 0.000000
Generation 500 has ChiSquare: 0.000000
Generation 550 has ChiSquare: 0.000000
Generation 600 has ChiSquare: 0.000000
Generation 650 has ChiSquare: 0.000000
STOP("ChangeOverGeneration with {'tolerance': 0.005, 'generations': 300}")
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 655
         Function evaluations: 1312000
(array([0.00000000e+00, 6.62999242e-06, 9.99999934e+01]), 0.0, 655, 1312000, 0)

And it found one of the correct minima, at (approximately) [0, 0, 100].
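As a final check, the reported solution can be verified directly against the constraint and objective, and the reported evaluation count is consistent with npop times the number of generations:

```python
# Verify the solver's reported optimum (plain Python, no mystic needed).
best = [0.0, 6.62999242e-06, 9.99999934e+01]

def objective(X):
    x0, x1, x2 = X
    return x0 * x1 * x2

assert abs(sum(best) - 100.0) < 1e-4  # constraint x0 + x1 + x2 == 100 holds
assert objective(best) < 1e-2         # function value is essentially 0

# Function evaluations: 2000 candidates per generation,
# generations 0..655 inclusive (656 generations total).
assert 2000 * 656 == 1312000
```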