I need a stable nonlinear root finder in Python for a function that takes two inputs and returns two nonlinear outputs, without using "black box" libraries.
I found that the Newton-Raphson algorithm is helpful, but it doesn't always converge. Then I read about BFGS, but I couldn't work out how to write that algorithm for a two-input, two-output function the way I did for Newton-Raphson.
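On the convergence complaint: plain Newton-Raphson can overshoot, and a common stabilization is a backtracking line search that halves the Newton step until the residual norm actually decreases. A minimal sketch with a vector interface (the function name `newton_damped` and the constants are my own choices, not from the post):

```python
import numpy as np

def newton_damped(f, x0, eps=1e-8, max_iter=100):
    """Newton's method for F: R^n -> R^n with simple step halving."""
    x = np.asarray(x0, dtype=float)
    h = 1e-7  # finite-difference step for the Jacobian
    for _ in range(max_iter):
        fx = np.asarray(f(x), dtype=float)
        if np.linalg.norm(fx) < eps:
            return x
        # Forward-difference Jacobian, one column per input variable
        n = x.size
        J = np.empty((n, n))
        for j in range(n):
            xh = x.copy()
            xh[j] += h
            J[:, j] = (np.asarray(f(xh), dtype=float) - fx) / h
        step = np.linalg.solve(J, -fx)
        # Backtracking: halve the step until the residual norm decreases
        t = 1.0
        while t > 1e-10 and np.linalg.norm(f(x + t * step)) >= np.linalg.norm(fx):
            t /= 2.0
        x = x + t * step
    raise RuntimeError("Failed to converge after {} iterations".format(max_iter))
```

For example, `newton_damped(lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0]*v[1] - 1]), [2.0, 0.5])` solves a circle/hyperbola intersection; the damping only kicks in when a full Newton step would make the residual worse.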
import numpy as np

def root_finding(f, x0, y0, eps=1e-6, max_iter=100):
    """
    Finds a root of a two-input, two-output function using the Newton-Raphson method.

    Parameters:
        f (callable): the function to find the root of; takes two floats and
            returns a two-element sequence
        x0 (float): initial guess for the first input
        y0 (float): initial guess for the second input
        eps (float, optional): convergence tolerance; iteration stops when the
            norm of the Newton step is less than eps
        max_iter (int, optional): maximum number of iterations

    Returns:
        tuple: the root (x, y), or (None, None) on failure
    """
    x, y = x0, y0
    h = 1e-7  # finite-difference step, kept separate from the tolerance
    for i in range(max_iter):
        # Function value and forward-difference Jacobian at the current guess
        fxy = np.array(f(x, y))
        J = np.array([
            [f(x + h, y)[0] - fxy[0], f(x, y + h)[0] - fxy[0]],
            [f(x + h, y)[1] - fxy[1], f(x, y + h)[1] - fxy[1]]
        ]) / h
        # Solve J * delta = -F for the Newton step and update the guess
        try:
            delta = np.linalg.solve(J, -fxy)
        except np.linalg.LinAlgError:
            print('The Jacobian is singular')
            return None, None
        x += delta[0]
        y += delta[1]
        # Check for convergence
        if np.linalg.norm(delta) < eps:
            return x, y
    print("Failed to converge after {} iterations".format(max_iter))
    return None, None
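On the BFGS part of the question: BFGS approximates a Hessian for minimization, so for root finding of a system the usual quasi-Newton analogue is Broyden's method. It keeps the same two-input, two-output shape but builds the Jacobian once by finite differences and then updates it with a cheap rank-one correction instead of recomputing it every iteration. A minimal sketch (the vector-based interface and the name `broyden` are my own, not from the code above):

```python
import numpy as np

def broyden(f, x0, eps=1e-8, max_iter=100):
    """Broyden's ("good") method for F: R^n -> R^n."""
    x = np.asarray(x0, dtype=float)
    fx = np.asarray(f(x), dtype=float)
    # Initial Jacobian approximation by forward differences
    h = 1e-7
    n = x.size
    J = np.empty((n, n))
    for j in range(n):
        xh = x.copy()
        xh[j] += h
        J[:, j] = (np.asarray(f(xh), dtype=float) - fx) / h
    for _ in range(max_iter):
        dx = np.linalg.solve(J, -fx)
        x = x + dx
        f_new = np.asarray(f(x), dtype=float)
        if np.linalg.norm(dx) < eps:
            return x
        # Rank-one update: J += (df - J dx) dx^T / (dx . dx),
        # no extra function evaluations needed
        df = f_new - fx
        J += np.outer(df - J @ dx, dx) / (dx @ dx)
        fx = f_new
    raise RuntimeError("Failed to converge after {} iterations".format(max_iter))
```

Usage mirrors the Newton version, e.g. `broyden(lambda v: np.array([v[0]**2 + v[1]**2 - 4, v[0]*v[1] - 1]), [2.0, 0.5])`. Broyden gives up Newton's quadratic convergence for superlinear convergence, but each iteration costs only one function evaluation instead of three.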