How to plot the progress of an optimization?


Is there a way to plot the progressive value of a function while it is being optimized with scipy.optimize's differential evolution? In the following, the plotting part doesn't work:

from scipy.optimize import rosen, differential_evolution
bounds = [(0, 5), (0, 5), (0, 5), (0, 5), (0, 5)]
result = differential_evolution(rosen, bounds, disp=False)
print(result.x, result.fun)
import matplotlib.pyplot as plt
x, f = zip(*result)
plt.plot(x, f)


On BEST ANSWER

Note: I originally answered this question thinking you wanted the path taken by the optimizer, not the value during optimization. I've updated the answer so that it shows both, but you're probably only interested in the second plot.

The object returned by differential_evolution does not contain the path towards the result, nor the values along the way. However, you can use the callback argument to provide a callback function that gets called on each iteration. That callback can then record the progress.

For example:

import numpy as np
from scipy.optimize import rosen, differential_evolution

progress = []      # best point found after each generation
progress_val = []  # objective value at that point

def cb(x, convergence):
    progress.append(x)
    progress_val.append(rosen(x))

bounds = [(0, 5), (0, 5), (0, 5), (0, 5), (0, 5)]
result = differential_evolution(rosen, bounds, disp=False, callback=cb)

progress = np.array(progress)
progress_val = np.array(progress_val)

Since you seem to want to optimize the 5D Rosenbrock function, visualizing the entire path becomes tricky. I'll visualize just the first two coordinates (plus the value, which is what you're actually asking about):

import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(2, 1, 1)
ax.plot(progress[:, 0], progress[:, 1])  # path in the first two coordinates
ax = fig.add_subplot(2, 1, 2)
ax.plot(progress_val)                    # objective value per generation
plt.show()

I get:

[Figure: path in the first two coordinates (top) and objective value (bottom) during optimization]

The value, which is what you're actually asking about, is the bottom plot. If you don't need the path itself, disregard everything in the code involving the `progress` array.

Of course, your result may look different because our random seeds, and therefore our paths towards the optimum, are different.
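If you want runs to be comparable, `differential_evolution` accepts a `seed` argument that makes the optimization deterministic. A minimal sketch (the seed value 42 is an arbitrary choice):

```python
from scipy.optimize import rosen, differential_evolution

progress_val = []

def cb(x, convergence):
    # record the objective value at the current best point
    progress_val.append(rosen(x))

bounds = [(0, 5)] * 5
# With the same seed, repeated runs follow the same path
result = differential_evolution(rosen, bounds, seed=42, callback=cb)
print(result.fun)
```

With a fixed seed, the recorded `progress_val` curve (and the final `result.fun`) will be identical across runs, which makes the convergence plots reproducible.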