There are three components to this problem:
- A three dimensional vector A.
- A "smooth" function F.
- A desired vector B (also three dimensional).
We want to find a vector A that, when put through F, will produce the vector B.
F(A) = B
F can be anything that transforms or distorts A in some manner. The point is that we want to iteratively call F(A) until B is produced.
The question is:
How can we do this with the fewest calls to F before finding a vector A such that F(A) equals B (within a reasonable threshold)?
I am assuming that what you call "smooth" is tantamount to differentiable. Since the concept of smoothness only makes sense over the rational/real numbers, I will also assume that you are solving a floating-point problem.
In this case, I would formulate the problem as a nonlinear programming problem, i.e. minimizing the squared norm of the difference between f(A) and B:

    g(A) = ||f(A) - B||^2

It should be clear that this expression is zero if and only if f(A) = B and positive otherwise. Therefore you would want to minimize it.
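As a sketch, here is that objective in code. The `f` below is a made-up smooth function standing in for your `F`, and the target `B` is arbitrary, purely for illustration:

```python
import numpy as np

# Made-up smooth stand-in for F; substitute your own function.
def f(a):
    return np.array([a[0] ** 2 + a[1], np.sin(a[2]), a[0] * a[2]])

B = np.array([1.0, 0.0, 0.0])

def objective(a):
    r = f(a) - B
    return r @ r  # squared norm ||f(a) - B||^2

# The objective is zero exactly when f(a) == B:
print(objective(np.array([0.0, 1.0, 0.0])))  # f = [1, 0, 0] = B, so 0.0
print(objective(np.zeros(3)))                # f = [0, 0, 0] != B, so positive
```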
As an example, you could use the solvers built into the `scipy` optimization suite (available for Python). A binary search (as pointed out above) only works if the function is 1-d, which is not the case here. You can try out different optimization methods by adding `method="name"` to the call to `minimize`; see the API. It is not always clear which method works best for your problem without knowing more about the nature of your function. As a rule of thumb, the more information you give to the solver, the better. If you can compute the derivative of `F` explicitly, passing it to the solver will help reduce the number of required evaluations. If `F` has a Hessian (i.e., if it is twice differentiable), providing the Hessian will help as well.

As an alternative, you can use the `least_squares` function on `F` directly via `res = least_squares(f, x0)`. This could be faster, since the solver can take advantage of the fact that you are solving a least squares problem rather than a generic optimization problem.

From a more general standpoint, the problem of restoring the function arguments that produce a given value is called an Inverse Problem. These problems have been studied extensively.