I'm working on a quadratic programming problem and have implemented solutions in both R and Python. However, I've noticed a significant difference in execution times between these two implementations, and I'm curious to understand why.
In R: 0.0006551743 secs; in Python: 0.04484891891479492 secs.
Here's the R code, using solve.QP from the quadprog package:
library(quadprog)
Dmat <- matrix(0, 3, 3)
diag(Dmat) <- 1                                        # quadratic term: 3x3 identity
dvec <- c(0, 5, 0)                                     # linear term
Amat <- matrix(c(-4, -3, 0, 2, 1, 0, 0, -2, 1), 3, 3)  # one constraint per column
bvec <- c(-8, 2, 0)                                    # constraint right-hand sides
result <- solve.QP(Dmat, dvec, Amat, bvec = bvec)
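(For reference, my understanding from the quadprog documentation is that solve.QP here solves min 0.5 * x' D x - d' x subject to t(Amat) %*% x >= bvec, and since meq defaults to 0, all three constraints are treated as inequalities.)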
And here's the Python code, using scipy.optimize.minimize:
import numpy as np
from scipy.optimize import minimize

def objective_function(x, Dmat, dvec):
    # 0.5 * x'Dx + d'x
    return 0.5 * np.dot(x.T, np.dot(Dmat, x)) + np.dot(dvec, x)

def constraint_eq(x, Amat, bvec):
    # Amat @ x - bvec
    return np.dot(Amat, x) - bvec

Dmat = np.diag([1, 1, 1])
dvec = np.array([0, 5, 0])
Amat = np.array([[-4, -3, 0], [2, 1, 0], [0, -2, 1]])
bvec = np.array([-8, 2, 0])
x0 = np.zeros(len(dvec))
constraints = {'type': 'eq', 'fun': constraint_eq, 'args': (Amat, bvec)}
result = minimize(objective_function, x0, args=(Dmat, dvec), constraints=constraints)
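In case the measurement itself matters, this is a minimal way to time just the minimize call, reusing the objects defined above (time.perf_counter is used here purely for illustration; any wall-clock timer gives a similar picture):

import time

start = time.perf_counter()
result = minimize(objective_function, x0, args=(Dmat, dvec), constraints=constraints)
print(time.perf_counter() - start)  # elapsed seconds for the solver call only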
The R implementation completes roughly 70 times faster than the Python one (about 0.00066 s versus 0.045 s). I am aware that different libraries may use different algorithms and optimizations, but I was not expecting such a stark difference.
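In case the choice of solver matters here: as far as I can tell, minimize falls back to SLSQP whenever constraints are supplied, so selecting it explicitly should be equivalent to the call above:

# explicitly request SLSQP, which I believe is the default when constraints are passed
result_slsqp = minimize(objective_function, x0, args=(Dmat, dvec),
                        method='SLSQP', constraints=constraints)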
Can anyone explain why there is such a discrepancy in execution time? Are there underlying differences in how solve.QP and scipy.optimize.minimize handle quadratic programming problems?