I am using `scipy.optimize.minimize` with the 'CG' method, and I want to record the gradient norm at each iteration via the callback. So far, I have been able to record the function value at each iteration using this:
```python
def min_method(fn, grad, x0):
    all_fn = [fn(x0).item()]

    def store(x):  # callback function
        all_fn.append(fn(x).item())

    ans = minimize(fn, x0, method='CG', jac=grad, callback=store,
                   options={'disp': True, 'gtol': 1e-06})
    return ans, all_fn
```
How can I extend the `store()` function so that it also records the gradient norm at each iteration?
You could use scipy's `approx_fprime` function to estimate the gradient inside the callback.

## Update
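A minimal sketch of that idea, assuming `fn` returns a plain float (so `.item()` is dropped) and using the Rosenbrock problem and the `eps` step size as my own test additions:

```python
import numpy as np
from scipy.optimize import minimize, approx_fprime, rosen, rosen_der

def min_method(fn, grad, x0, eps=1e-8):
    all_fn = [fn(x0)]
    all_gnorm = []  # gradient norm at each iteration

    def store(x):  # callback, called once per CG iteration
        all_fn.append(fn(x))
        # finite-difference estimate of the gradient at the current iterate
        g = approx_fprime(x, fn, eps)
        all_gnorm.append(np.linalg.norm(g))

    ans = minimize(fn, x0, method='CG', jac=grad, callback=store,
                   options={'gtol': 1e-06})
    return ans, all_fn, all_gnorm

res, fvals, gnorms = min_method(rosen, rosen_der, np.array([1.3, 0.7]))
```

Note that this re-estimates the gradient by finite differences, so it costs extra function evaluations and is only an approximation of the gradient the solver actually uses.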
If you want the jacobian computed by the 'CG' solver itself, I think you need to modify the function that `scipy.optimize.minimize` calls internally:

- in `_minimize.py`, go to the function `minimize` and find the call to the 'CG' method (line 673), which dispatches to `_minimize_cg`;
- in that function, in `optimize.py` around line 1681, replace the call to the callback so that the current gradient is passed along as well.

Then you can have access to the jacobian in the callback function.
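If you would rather not patch scipy, note that `min_method` already receives `grad`, so the callback can simply evaluate the user-supplied gradient directly instead. A runnable sketch using the Rosenbrock function (my own test problem):

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

all_gnorm = []

def store(x):
    # evaluate the user-supplied (exact) gradient at the current iterate
    all_gnorm.append(np.linalg.norm(rosen_der(x)))

res = minimize(rosen, np.array([1.3, 0.7]), method='CG',
               jac=rosen_der, callback=store,
               options={'gtol': 1e-06})
```

This evaluates `grad` one extra time per iteration, but keeps scipy untouched and gives the exact gradient norm rather than a finite-difference estimate.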