I am using SCIP 3.0.2 with CPLEX 12.6 as LP solver. My model requires column generation (CG). I already implemented it in CPLEX, but since CPLEX can only do CG in the root node, I am using SCIP to do branch-and-price.
In CPLEX it turned out to be beneficial to turn off heuristics, cuts, and preprocessing/probing. I set the following in SCIP:
SCIP_CALL( SCIPsetBoolParam(scip, "lp/presolving", FALSE) );
SCIP_CALL( SCIPsetSeparating(scip, SCIP_PARAMSETTING_OFF, TRUE) ); // disable cuts
SCIP_CALL( SCIPsetHeuristics(scip, SCIP_PARAMSETTING_OFF, TRUE) ); // disable heuristics
SCIP_CALL( SCIPsetPresolving(scip, SCIP_PARAMSETTING_OFF, TRUE) ); // disable presolving
My parameter file looks as follows:
display/primalbound/active = 1
presolving/maxrounds = 0
separating/maxrounds = 0
separating/maxroundsroot = 0
separating/maxcuts = 0
separating/maxcutsroot = 0
lp/initalgorithm = d
lp/resolvealgorithm = d
lp/fastmip = 1
lp/threads = 1
limits/time = 7200
limits/memory = 2900
limits/absgap = 0
#display/verblevel = 5
#display/freq = 10
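For reference, a settings file like this can also be loaded from C code; a minimal sketch (the file name here is just a placeholder):
SCIP_CALL( SCIPreadParams(scip, "mysettings.set") );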
To check that the models are the same, I solved the CPLEX-generated model in SCIP (without CG) and obtained the same LP bound as for the model generated with SCIP, but a different LP bound than when solving with CPLEX.
It seems that SCIP is still using some 'magic' I have not deactivated yet. So my question is: what do I have to deactivate to obtain an LP bound that relies only on my model?
I already took a look at the statistics output and there are indeed some things that might help to solve the problem:
The Constraints section lists #EnfoLP = 1 for the integral constraint handler (which seems strange, since cuts are disabled?)
The transformed problem seems to be ok. The statistics output prints:
Presolved Problem :
Problem name : t_ARLP
Variables : 969 (806 binary, 0 integer, 0 implicit integer, 163 continuous)
Constraints : 9311 initial, 9311 maximal
and before the iterations start I get the following:
LP Solver : row representation of the basis not available -- SCIP parameter lp/rowrepswitch has no effect
transformed problem has 897 variables (806 bin, 0 int, 0 impl, 91 cont) and 9311 constraints
9311 constraints of type < linear >
presolving:
presolving (0 rounds):
0 deleted vars, 0 deleted constraints, 0 added constraints, 0 tightened bounds, 0 added holes, 0 changed sides, 0 changed coefficients
0 implications, 0 cliques
presolved problem has 897 variables (806 bin, 0 int, 0 impl, 91 cont) and 9311 constraints
9311 constraints of type < linear >
Presolving Time: 0.00
I added 72 columns: 91 original + 72 added = 163 total. This seems to be ok.
I added the suggested parameters. It seems that domain propagation was not in use before, but strong branching was. Unfortunately, nothing changed with the new parameters.
In addition to adding the parameters I also tried SCIP 3.0.1 instead. This improved my bound from 670.194 to 699.203, but that is still quite different from the CPLEX bound of 754.348. I know that the solvers differ in a lot of numerical parameters, but I guess the difference is too large to be caused by these?
There are two further things that might affect the LP bound at the root node: domain propagation and strong branching.
Domain propagation is a sort of node preprocessing and tries to reduce variable domains based on the current local domains and constraints. Strong branching precomputes the LP bounds of potential child nodes to select a good variable to branch on. If one of the child nodes is detected to be infeasible, its domain is reduced.
You can disable domain propagation by setting
propagating/maxrounds = 0
propagating/maxroundsroot = 0
Strong branching can be disabled by setting a high priority to a branching rule which does not apply strong branching. For example, set
branching/pscost/priority = 100000000
in order to enable pure pseudo cost branching.
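If you set your parameters from C rather than from the settings file, the equivalent calls would be the following sketch (using the generic SCIPsetIntParam interface; scip is your SCIP pointer):
SCIP_CALL( SCIPsetIntParam(scip, "propagating/maxrounds", 0) );      /* no domain propagation */
SCIP_CALL( SCIPsetIntParam(scip, "propagating/maxroundsroot", 0) );  /* ... and none in the root node */
SCIP_CALL( SCIPsetIntParam(scip, "branching/pscost/priority", 100000000) );  /* prefer pseudo cost branching, so no strong branching */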
In general, you should check the statistics for non-zero values in the DomReds columns.
You can just write the internal problem to a file and then compare it to the original:
SCIP> write transproblem
You should also read SCIP's statistics thoroughly to find out what kind of 'magic' SCIP performed:
SCIP> display statistics
I had almost forgotten about this thread, then stumbled upon it again and thought it might be good to add the answer after finding it myself:
Within the cut callback (unfortunately I did not mention that I was using one) I used the method:
SCIPisCutEfficacious
which discarded some of the cuts that are needed to obtain the true LP bound. Not calling this method slows down the solution process, but at least it preserves the correct bound.
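For context, the pattern inside the separation callback looked roughly like the following sketch (SCIP 3.x API; row is a placeholder for a cut the callback has already created):
/* original version: only cuts deemed efficacious were added, which can weaken the LP bound */
if( SCIPisCutEfficacious(scip, NULL, row) )
{
   SCIP_CALL( SCIPaddCut(scip, NULL, row, FALSE) );
}
/* fixed version: add the row unconditionally so the LP bound is not weakened */
SCIP_CALL( SCIPaddCut(scip, NULL, row, FALSE) );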
I am currently implementing an optimization problem with Pyomo, and for some hours now I have been getting the message that my problem is unbounded. After searching for the issue, I came across one term which seems to be unbounded. I excluded this term from the objective function, and it turns out that it takes a very large negative value, which supports the assumption that it is unbounded towards -Inf.
But I have checked the problem further, and it should be impossible for the term to be unbounded, as the following code and results show:
model.nominal_cap_storage = Var(model.STORAGE, bounds=(0,None)) #lower bound is 0
#I assumed very high CAPEX for each storage (see print)
dict_capex_storage = {'battery': capex_battery_storage,
                      'co2': capex_co2_storage,
                      'hydrogen': capex_hydrogen_storage,
                      'heat': capex_heat_storage,
                      'syncrude': capex_syncrude_storage}
print(dict_capex_storage)
>>> {'battery': 100000000000000000, 'co2': 100000000000000000,
'hydrogen': 1000000000000000000, 'heat': 1000000000000000, 'syncrude': 10000000000000000000}
Given these assumptions, I conclude that it is impossible for this term to be unbounded towards -Inf, since the capacity has a lower bound of 0 and the CAPEX is a fixed positive value. But now it gets strange. The following term has the issue of being unbounded:
model.total_investment_storage = Var()
def total_investment_storage_rule(model):
    return model.total_investment_storage == sum(model.nominal_cap_storage[storage] * dict_capex_storage[storage]
                                                 for storage in model.STORAGE)
model.total_investment_storage_con = Constraint(rule=total_investment_storage_rule)
If I exclude the term from the objective function, I get the following value after the optimization. It seems that it can take large negative values.
>>>>
Variable total_investment_storage
-1004724108.3426505
So I checked the term regarding the component model.nominal_cap_storage to see the value of the capacity:
model.total_cap_storage = Var()
def total_cap_storage_rule(model):
    return model.total_cap_storage == sum(model.nominal_cap_storage[storage] for storage in model.STORAGE)
model.total_cap_storage_con = Constraint(rule=total_cap_storage_rule)
>>>>
Variable total_cap_storage
0.0
I wanted to do the same for the dictionary, but made a mistake: I forgot to delete model.nominal_cap_storage from the sum. Still, the result is confusing:
model.total_capex_storage = Var()
def total_capex_storage_rule(model):
    return model.total_capex_storage == sum(model.nominal_cap_storage[storage] * dict_capex_storage[storage]
                                            for storage in model.STORAGE)
model.total_capex_storage_con = Constraint(rule=total_capex_storage_rule)
>>>>
Variable total_capex_storage
0.0
So my question is: why is the term unbounded, and how is it possible that model.total_investment_storage and model.total_capex_storage have different solutions even though both are computed identically? Any help is highly appreciated.
I think you are misinterpreting "unbounded." When the solver says the problem is unbounded, it means the objective function value is unbounded based on the variables and constraints in the problem. It has nothing to do with bounds on variables, unless one of those variable bounds prevents the objective from being unbounded.
If you want help with the above problem, you need to edit the question and post the full problem, with the objective function and (if possible) the error. What you have now is a collection of snippets from different variations of a problem, which isn't really informative about the overall issue.
I solved the problem by setting a lower bound on the variable that was taking negative values:
model.total_investment_storage = Var(bounds=(0, None))
I am still not sure why this term can take negative values, but at least this solved my problem.
I have a simple linear programming problem written in OSiL format, carved out from a complicated non-linear problem that SCIP reports as infeasible. This simple problem contains the minimal lines needed to reproduce the infeasibility, but it confuses me. Below is the content of the OSiL:
<instanceData>
  <variables numberOfVariables="1">
    <var name="F"/>
  </variables>
  <objectives numberOfObjectives="1">
    <obj maxOrMin="min" numberOfObjCoef="1">
      <coef idx="0">1</coef>
    </obj>
  </objectives>
  <constraints numberOfConstraints="1">
    <con lb="10"/>
  </constraints>
</instanceData>
Isn't the OSiL saying:
Minimize: F
Subject to: F >= 0
? Why should this problem be infeasible? It looks to me as if the <con lb="10"/> is useless, because nothing references it. But in fact this constraint does influence the original problem in a way I failed to notice: the problem becomes solvable if the lower bound is changed to 0 or smaller, or if it is turned into an upper bound.
Can someone explain this to me? I'm a newbie in numerical optimization and the OSiL format, so thanks in advance for your time.
There is no F in your constraint, you only added the variable to the objective.
The constraint that is formulated there is 10 <= 0, which is infeasible.
If you look at the problem in SCIP, this may become more apparent:
original problem has 1 variables (0 bin, 0 int, 0 impl, 1 cont) and 1 constraints
SCIP> disp prob
STATISTICS
Problem name : a.osil
Variables : 1 (0 binary, 0 integer, 0 implicit integer, 1 continuous)
Constraints : 0 initial, 1 maximal
OBJECTIVE
Sense : minimize
VARIABLES
[continuous] <F>: obj=1, original bounds=[0,+inf]
CONSTRAINTS
[linear] <cons0>: 0 >= 10;
END
I am completely new to CPLEX and far from an expert in MIP, but I am trying to solve a problem with this technology (CPLEX 12.4).
I have decided to create the MIP models in an .lp file and give it to CPLEX, so I can have plenty of inputs, test different solvers, etc. But I am finding one thing about indicator constraints a bit problematic.
I want something like:
c1: a AND NOT(b) -> i1 - 100 v1 = 0
c2: b AND NOT(a) -> i1 - 120 v1 = 0
c3: a AND b -> i1 - 80 v1 = 0
But there is no such thing as AND or NOT in the LP format (I am not even sure whether I could do that via the CPX interface, but I am trying to avoid it).
The only workaround I have found is doing:
ca: a_not_b = 1 <-> a - b = 1
cb: b_not_a = 1 <-> a - b = -1
cab: a_and_b = 1 <-> a + b = 2
c1: a_not_b = 1 -> i1 - 100 v1 = 0
c2: b_not_a = 1 -> i1 - 120 v1 = 0
c3: a_and_b = 1 -> i1 - 80 v1 = 0
I would be ok with having this, because I am going to be generating this LP with another program, but does this slow down CPLEX? Is there a better way of doing this?
Thanks
You are pretty much correct. The LP and MILP approach does not directly allow for these sorts of logical constraints. Rather you typically need to create auxiliary variables in your model that can be used to capture those conditions. In many cases those will be boolean or 0/1 integer variables. A large part of the art & craft of writing MILP models is finding good ways to re-write those conditions in the rather restrictive 'language' of LP and MILP models. This can lead to some rather interesting mental gymnastics! Note that this is a limitation of the MILP approach, not just of CPLEX. Modelling languages that support those logical relationships mostly just provide syntactic sugar around these same modelling techniques with auxiliary binary/integer variables.
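To make that concrete, here is one standard way to express a_and_b = (a AND b) with purely linear constraints directly in LP format, assuming a, b and a_and_b are all declared binary (the constraint names are made up for illustration):
\ a_and_b is forced to 1 exactly when both a and b are 1
and1: a_and_b - a <= 0
and2: a_and_b - b <= 0
and3: a_and_b - a - b >= -1
a_not_b and b_not_a can be linearized analogously (a_not_b <= a, a_not_b <= 1 - b, a_not_b >= a - b). Whether this performs better or worse than the indicator-based version depends on the instance, so it is worth trying both.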
MILP solvers can handle very large problems with millions of variables and constraints; but that is because of the underlying mathematical structures and assumptions. Techniques like constraint programming do allow such direct modelling of logical and other relationships, but are usually limited to much smaller problem instances.
As to whether this will slow down CPLEX (or any other solver), the answer is probably yes it will. However there may be no alternative if you are using a MILP solver - it is better to solve the correct problem than the wrong one.
I'm new to SCIP, so I'm not sure if this is a bug or if I'm just doing something wrong.
I have a MIP instance that solves perfectly using SCIP, however when I try to solve a copy of the model SCIP says that it is infeasible. It seems to be more noticeable when presolve is turned off.
I'm using Windows with the pre-built SCIP v3.2.0. The model only has binary and integer variables.
The following code outlines my attempt:
SCIP* _scip; SCIP* subscip;
SCIPcreate(&_scip);
SCIPincludeDefaultPlugins(_scip);
SCIPcreateProbBasic(_scip, "interval_solver"); // create an empty problem
SCIPsetPresolving(_scip, SCIP_PARAMSETTING_OFF, true); //disable presolving
// build model (snipped)
SCIPsolve(_scip); // succeeds and gives feasible solution
SCIP_Bool valid = FALSE;
SCIPcreate(&subscip);
SCIPcopy(_scip, subscip, NULL, NULL, "1", TRUE, FALSE, TRUE, &valid);
SCIPsolve(subscip); // infeasible
Something that might be related (and seems weird to me) is that after solving the original problem (and getting a feasible solution), checking the solution reports an infeasible result, i.e.
SCIP_SOL* sol = SCIPgetBestSol(_scip);
SCIPcheckSol(_scip, sol, TRUE, TRUE, TRUE, TRUE, &valid);
gives:
solution value 1 violates bounds of <t_x71_(6,1275,6805)_(9,1275,6805)>[-0,0] by 1
Any ideas why this could be happening? Thanks!
Propagation in SCIP may take into account the best solution known so far and do reductions which are only valid for the problem of finding a solution better than this.
For example, if you have a minimization problem with n variables x_1,...,x_n with objective coefficients c_1,...,c_n >= 0 and already found a solution with x_1 = 1, x_2 = ... = x_n = 0, then propagation will globally fix x_1 to 0, because the objective of any solution with x_1 = 1 will be at least as large as the objective of the solution you already found.
This means that the solutions found so far may not be feasible anymore for the remaining problem (which looks for a strictly better solution).
In order to check the solution, you should check it in the original problem space, which you can do with SCIPcheckSolOrig().
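In code this looks roughly like the following sketch (using the _scip pointer and best solution from your snippet):
SCIP_Bool feasible;
SCIP_SOL* sol = SCIPgetBestSol(_scip);
/* check in the original problem space, print the reason for any violation, check all constraints */
SCIP_CALL( SCIPcheckSolOrig(_scip, sol, &feasible, TRUE, TRUE) );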
Disabling presolving and propagation might help, but does not guarantee that the globally presolved problem is unchanged. Presolving in the LP solver should not be the problem, but it might have changed the reported optimal LP solution (if there are multiple optima) and therefore caused a change in the solving process. This might have avoided your issue in this case, but probably by pure luck, and the issue may still appear on other instances. Moreover, the more features you disable, the larger the negative impact on your performance.
However, there is an easy solution to your problem: You can copy the original unchanged problem by using SCIPcopyOrig().
Some of the variable bounds were still being presolved. To fix the issue I needed to add:
SCIPsetBoolParam(_scip, "lp/presolving", FALSE);
This fixed most things, but the following also helped fix some 'check solution' issues:
SCIPsetIntParam(_scip, "propagating/maxrounds", 0);
SCIPsetIntParam(_scip, "propagating/maxroundsroot", 0);
I am using SCIP (with SoPlex as LP solver) to solve a MIP (mixed-integer program) provided as an .mps file. I use SCIP via the command line as follows:
SCIP> read file.mps
original problem has 1049 variables (471 bin, 0 int, 0 impl, 578 cont) and 638 constraints
SCIP> optimize # so I am using default settings
... some solving information ...
SCIP Status : problem is solved [optimal solution found]
Solving Time (sec) : 0.46
Solving Nodes : 1
Primal Bound : -6.58117502066443e+05 (2 solutions)
Dual Bound : -6.58117502066443e+05
Gap : 0.00 %
[linear] c_2_141>: x_2_73_141[C] - 1000000000 y_2_141[B] <= 0;
violation: right hand side is violated by 236.775818639799
best solution is not feasible in original problem
I do not want an infeasible solution; I want the best feasible one. For your information: I used CPLEX with the same file and it confirmed that there is an optimal feasible solution with a slightly worse objective value (about 0.05 % worse).
I already tried to put emphasis on feasibility with SCIP> set emphasis feasibility, but that did not help; see for yourself:
SCIP Status : problem is solved [optimal solution found]
Solving Time (sec) : 0.42
Solving Nodes : 3 (total of 5 nodes in 3 runs)
Primal Bound : -6.58117502066443e+05 (4 solutions)
Dual Bound : -6.58117502066443e+05
Gap : 0.00 %
[linear] c_2_141>: x_2_73_141[C] - 1000000000 y_2_141[B] <= 0;
violation: right hand side is violated by 236.775818639799
best solution is not feasible in original problem
Kind regards.
EDIT:
In response to the answer of user mattmilten, I have to share that using set numerics feastol 1e-9 alone did not produce a feasible solution, but with a tighter tolerance like 1e-10 in combination with set emphasis feasibility, SCIP is able to provide a good feasible solution that is just 0.005 % worse than CPLEX's.
Thanks for your help, mattmilten!
You could try to tighten the tolerances, especially the feasibility tolerance:
set numerics feastol 1e-9
The violated constraint contains a very large coefficient. This is likely the cause of the high absolute error. In CPLEX you should also try
display solution quality
to check whether the solution found by CPLEX is also violating the bounds.
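For a sense of scale (assuming the usual default feasibility tolerance of 1e-6): if the binary y_2_141 ends up at roughly 2.4e-7 instead of exactly 0, which is well within that tolerance, the constraint still allows
x_2_73_141 <= 1000000000 * 2.4e-7 ≈ 237,
and once y_2_141 is treated as exactly 0 again, the right hand side is violated by about 237, which matches the reported 236.78. Tightening feastol shrinks exactly this slack.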