I want to do gradient-based minimization using a package like 'optim', where both my objective function and its gradient are quite complex. As the simplified example below shows, f computes delta, which g also needs, and within the same iteration the two values are identical. Can I set up the optimization so that delta is not computed twice per iteration? It is the solution of a fixed-point problem with millions of parameters.
Any ideas, or a reference to a package? Thanks.
f <- function(par, dt, ...) {
  delta <- complexfunction(par, dt)              # expensive fixed-point solve
  obj_value <- complexfunction2(par, dt, delta)
  return(obj_value)
}
g <- function(par, dt) {
  delta <- complexfunction(par, dt)  # same value as computed in f this iteration -- can I reuse it?
  gradient <- complexfunction4(par, delta)
  return(gradient)
}
I don't have an error message yet; I just don't know how to do this. Possibly by creating a global variable and updating it each iteration?
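Concretely, I imagine something like the sketch below, where the complexfunction* bodies are trivial stand-ins for my real (expensive) functions:

```r
# Trivial stand-ins for the real, expensive functions (illustration only)
complexfunction  <- function(par, dt) dt * par                     # the fixed-point solve
complexfunction2 <- function(par, dt, delta) sum((par - delta)^2)  # objective
complexfunction4 <- function(par, delta) 2 * (par - delta)         # gradient (stand-in)

last_par   <- NULL  # globals caching the last parameter vector and its delta
last_delta <- NULL

get_delta <- function(par, dt) {
  # Recompute delta only when par differs from the cached parameter vector
  if (is.null(last_par) || !identical(par, last_par)) {
    last_par   <<- par
    last_delta <<- complexfunction(par, dt)
  }
  last_delta
}

f <- function(par, dt, ...) complexfunction2(par, dt, get_delta(par, dt))
g <- function(par, dt)      complexfunction4(par, get_delta(par, dt))
```

Calling f and then g with the same par then runs complexfunction only once, but relying on globals and `<<-` feels fragile, which is why I am asking.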
If you have control over the optimization algorithm, you could attach the computed values as an attribute to par and so pass them between the gradient and the objective function. For an example of this, see the section "Updating" in the vignette of package neighbours. (I am the maintainer of that package.) If you cannot do this, you could use an environment, as shown e.g. in https://stat.ethz.ch/pipermail/r-devel/2023-August/082759.html .
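For illustration, here is a minimal, runnable sketch of the environment approach wired into stats::optim; the complexfunction* bodies are toy stand-ins, not your real model:

```r
dt <- 0.5  # toy data

# Toy stand-ins for the expensive functions (assumptions for this sketch)
complexfunction  <- function(par, dt) dt * par                        # fixed-point solve
complexfunction2 <- function(par, dt, delta) sum((par - delta)^2)     # objective
complexfunction4 <- function(par, delta) 2 * (1 - dt) * (par - delta) # its exact gradient

make_fg <- function(dt) {
  cache <- new.env(parent = emptyenv())
  delta_of <- function(par) {
    # Solve the fixed-point problem only when par has changed
    if (is.null(cache$par) || !identical(cache$par, par)) {
      cache$par   <- par
      cache$delta <- complexfunction(par, dt)
    }
    cache$delta
  }
  list(fn = function(par) complexfunction2(par, dt, delta_of(par)),
       gr = function(par) complexfunction4(par, delta_of(par)))
}

fg  <- make_fg(dt)
res <- optim(c(3, -2, 1), fn = fg$fn, gr = fg$gr, method = "BFGS")
```

optim typically evaluates fn and gr at the same par in succession, so the cache saves one fixed-point solve per such pair; and because the cache lives in an environment local to make_fg, nothing leaks into the global workspace.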