How to convert minimizing a weighted sum of absolute values into a linear program


For example, we have an optimization problem

$$\min_{w} \sum_{i=1}^{n} b_i \left| w_i - a_i \right| \quad \text{s.t.} \quad \sum_{i=1}^{n} c_i w_i = 0$$

where $a_i$, $b_i$, $c_i$ are given. How can we convert it into a linear program?

I know that if we want to minimize $|x - a|$, we can instead minimize an auxiliary variable $t$ subject to $t \geq x - a$ and $t \geq -(x - a)$. However, in this case we want to minimize a weighted sum of absolute values and do not know the sign of $b_i$. Does anyone know how to do this?
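
Written out, the reformulation I have in mind for a single term is

$$\min_{x} \; |x - a| \quad \Longleftrightarrow \quad \min_{x,\, t} \; t \quad \text{s.t.} \quad t \geq x - a, \; t \geq -(x - a).$$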

1 Answer

The problem is convex only if $b_i \geq 0$; otherwise it is not convex.
To see this, assume the problem is one-dimensional, so all quantities are scalars.
Setting $a = 0$ and $b = -1$, the objective becomes $-\left| w \right|$, which is not convex (it is concave).

Assuming non-negative values for $b_i$, the problem can be reformulated and solved as follows.
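
Introducing auxiliary variables $t_i$, the equivalent linear program (the one the second code cell below solves) is

$$\begin{aligned} \min_{w,\, t} \quad & \sum_{i=1}^{n} b_i t_i \\ \text{s.t.} \quad & \sum_{i=1}^{n} c_i w_i = 0, \\ & -t_i \leq w_i - a_i \leq t_i, \quad i = 1, \dots, n. \end{aligned}$$

Below, both the direct formulation and the LP form are solved with CVXPY: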

import numpy as np
import cvxpy as cp

# %%
# Problem data: a_i, b_i, c_i drawn at random
numElements = 10
vA = np.random.randn(numElements)
vB = np.random.randn(numElements)
vC = np.random.randn(numElements)

# Reference objective, with negative b_i clipped to zero (matches `cp.pos(vB)` below)
hObjFun = lambda vW: np.dot(np.abs(vW - vA), np.maximum(vB, 0))

# %%
# Solution in the given formulation

vW1 = cp.Variable(numElements)

oObjFun = cp.Minimize( cp.abs(vW1 - vA).T @ cp.pos(vB) )
oCvxPrb = cp.Problem(oObjFun, [(vW1.T @ vC) == 0])
oCvxPrb.solve(solver = cp.CLARABEL)

# %% 
# Solution in Linear Programming form

vW2 = cp.Variable(numElements)
vT = cp.Variable(numElements)

oObjFun = cp.Minimize( vT.T @ cp.pos(vB) )
oCvxPrb = cp.Problem(oObjFun, [(vW2.T @ vC) == 0, vW2 - vA <= vT, -vT <= vW2 - vA])
oCvxPrb.solve(solver = cp.CLARABEL)

# %%
# Verify solution

np.isclose(hObjFun(vW1.value), hObjFun(vW2.value)) #<! Should print `True`
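
After solving, the optimal weights and objective value can be read back from the CVXPY objects, e.g. (a minimal usage sketch; `vW2` and `oCvxPrb` here refer to the LP formulation above):

print(vW2.value)     #<! Optimal weights w from the LP formulation
print(oCvxPrb.value) #<! Optimal objective value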