I have a (fairly) simple Pyomo model with 5 parameters and a set of size 48 (representing time intervals). GLPK works absolutely fine with one particular data file:
# Data file
param n := 48;
param : E_demand :=
1 231.674545
2 223.328638
3 218.047274
4 212.285910
5 214.539544
6 213.940455
7 216.871637
8 205.824183
9 208.905001
(This continues in a similar vein up to index 48; the 4 other parameters are defined the same way.)
But if I use another, only slightly different, data file, the problem takes much longer to solve: from under a second to more than 20 minutes (I didn't wait to find out how much longer). All I changed was two of the parameters, reducing them to about a third of their original values, like below:
param : E_demand :=
1 76.464996
2 69.815002
3 71.355003
4 75.004997
5 72.360001
6 71.065002
7 70.669998
8 71.809998
9 72.309998
I think the problem must be related to scaling, since if I gradually replace the smaller values from one data file with those from the other, the solve time grows until the problem becomes unusably slow. Is there a way of changing the GLPK scaling settings through Pyomo? Would using a different solver potentially avoid this problem?
For a ConcreteModel, you could always implement some form of scaling during model construction, by checking the parameter values and applying the relevant scaling factors in the model formulation. A similar discussion on variable/constraint scaling can be found on the Pyomo issues page: https://github.com/Pyomo/pyomo/issues/219