I'm trying to express the following optimization problem in SciPy. Assume `r` is a known array with the values `[0.96366965, 0.93341242, 0.90676733, 0.88186071, 0.8582291, 0.83472442, 0.80977363, 0.783683, 0.75743122, 0.73259614]`.
```python
import numpy as np
from scipy.optimize import Bounds, minimize

def objective_function(x):
    e_t = np.zeros(10)
    e_t[0] = 1 - 1 / (1 + r[0])
    for t in range(1, 9):
        sub = 0
        for ti in range(0, t):
            sub += np.sum(x[: (ti + 1)]) / (1 + r[ti]) ** (ti + 1)
        e_t[t] = 1 - 1 / (1 + r[t]) ** (t + 1) - sub
    e_t[9] = 0
    return np.sum(e_t**2)
```
```python
def constraint_function(t):
    def calculate_et(x):
        sub = 0
        for ti in range(0, t):
            sub += np.sum(x[: (ti + 1)]) / (1 + r[ti]) ** (ti + 1)
        e_t = 1 - 1 / (1 + r[t]) ** (t + 1) - sub
        return e_t
    return calculate_et
```
```python
x0 = np.zeros(10)
cons = (
    {"type": "ineq", "fun": constraint_function(1)},
    {"type": "ineq", "fun": constraint_function(2)},
    {"type": "ineq", "fun": constraint_function(3)},
    {"type": "ineq", "fun": constraint_function(4)},
    {"type": "ineq", "fun": constraint_function(5)},
    {"type": "ineq", "fun": constraint_function(6)},
    {"type": "ineq", "fun": constraint_function(7)},
    {"type": "ineq", "fun": constraint_function(8)},
    {"type": "ineq", "fun": constraint_function(9)},
)
```
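(As a side note, the nine constraint dicts could be generated with a comprehension instead of being listed by hand. A minimal self-contained sketch, using a stand-in factory since the real `constraint_function` needs `r`:)

```python
# Build the nine "ineq" constraints with a comprehension instead of
# writing each dict out by hand. A placeholder factory stands in for
# the real constraint_function so this snippet runs on its own.
def constraint_function(t):
    def calculate_et(x):
        return sum(x[: t + 1])  # placeholder body
    return calculate_et

cons = tuple({"type": "ineq", "fun": constraint_function(t)} for t in range(1, 10))
```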
```python
result = minimize(
    objective_function,
    x0,
    bounds=Bounds(lb=0),
    constraints=cons,
)
alpha = result.x
c = np.zeros(10)
c[0] = alpha[0]
for i in range(1, 10):
    c[i] = alpha[i - 1] + alpha[i]
```
However, the solution I got was quite far from the expected values, so I'm assuming this is not the right way to solve the problem. What is wrong with my current approach, and are there other ways to approach it?
The solution I got for `c` is:

`[9.02212140e-01 9.02212140e-01 2.39808173e-13 1.06359366e-13 4.57411886e-14 1.75415238e-14 6.16173779e-15 2.17534324e-15 4.67095103e-16 1.25975522e-17]`

whereas the expected solution is closer to:

`[0.0318, 0.0318, 0.0318, 0.0318, 0.0318, 0.0318, 0.0320, 0.0325, 0.0336, 0.0353]`
Check your math. With your current objective, the "expected solution" has a much higher objective cost than what `minimize` is able to come up with:
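Here is a sketch that checks this directly, using only NumPy. It reuses the question's `r` and `objective_function`, inverts the `c[i] = alpha[i-1] + alpha[i]` recurrence to recover `alpha` from each `c` vector (the helper `alpha_from_c` is mine, not from the question), and evaluates the objective at both:

```python
import numpy as np

r = np.array([0.96366965, 0.93341242, 0.90676733, 0.88186071, 0.8582291,
              0.83472442, 0.80977363, 0.783683, 0.75743122, 0.73259614])

def objective_function(x):
    # identical to the objective in the question
    e_t = np.zeros(10)
    e_t[0] = 1 - 1 / (1 + r[0])
    for t in range(1, 9):
        sub = 0
        for ti in range(0, t):
            sub += np.sum(x[: (ti + 1)]) / (1 + r[ti]) ** (ti + 1)
        e_t[t] = 1 - 1 / (1 + r[t]) ** (t + 1) - sub
    e_t[9] = 0
    return np.sum(e_t**2)

def alpha_from_c(c):
    # invert c[0] = alpha[0]; c[i] = alpha[i-1] + alpha[i]
    alpha = np.zeros(len(c))
    alpha[0] = c[0]
    for i in range(1, len(c)):
        alpha[i] = c[i] - alpha[i - 1]
    return alpha

c_reported = np.array([9.02212140e-01, 9.02212140e-01, 2.39808173e-13,
                       1.06359366e-13, 4.57411886e-14, 1.75415238e-14,
                       6.16173779e-15, 2.17534324e-15, 4.67095103e-16,
                       1.25975522e-17])
c_expected = np.array([0.0318, 0.0318, 0.0318, 0.0318, 0.0318,
                       0.0318, 0.0320, 0.0325, 0.0336, 0.0353])

print(objective_function(alpha_from_c(c_reported)))  # optimizer's solution
print(objective_function(alpha_from_c(c_expected)))  # "expected" solution
```

The second value comes out substantially larger, which is exactly the point: as stated, the objective prefers the optimizer's answer, so the model itself needs revisiting rather than the solver.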