Cannot understand why the problem is reported infeasible - SCIP

I have a simple linear programming problem written in OSiL format, carved out from a complicated non-linear problem that SCIP reported as infeasible. This simple problem is a minimal reproduction of the infeasibility, but it confuses me. Below is the content of the OSiL file:
<instanceData>
<variables numberOfVariables="1">
<var name="F"/>
</variables>
<objectives numberOfObjectives="1">
<obj maxOrMin="min" numberOfObjCoef="1" >
<coef idx="0">1</coef>
</obj>
</objectives>
<constraints numberOfConstraints="1">
<con lb="10"/>
</constraints>
</instanceData>
Isn't the OSiL saying:
Minimize: F
Subject to: F >= 0
? Why should this problem be infeasible? To me, the <con lb="10"/> looks useless, because nothing references it. But in fact this constraint does influence the original problem in a way I failed to notice: the problem can be solved if the lower bound is changed to 0 or smaller, or if it is turned into an upper bound.
Can someone explain this to me? I'm a newbie in numerical optimization and the OSiL format, so thanks in advance for your time.

There is no F in your constraint, you only added the variable to the objective.
The constraint that is formulated there is 10 <= 0, which is infeasible.
If you look at the problem in SCIP, this may become more apparent:
original problem has 1 variables (0 bin, 0 int, 0 impl, 1 cont) and 1 constraints
SCIP> disp prob
STATISTICS
Problem name : a.osil
Variables : 1 (0 binary, 0 integer, 0 implicit integer, 1 continuous)
Constraints : 0 initial, 1 maximal
OBJECTIVE
Sense : minimize
VARIABLES
[continuous] <F>: obj=1, original bounds=[0,+inf]
CONSTRAINTS
[linear] <cons0>: 0 >= 10;
END
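For reference, the model the asker apparently intended (F >= 10) needs a <linearConstraintCoefficients> section inside <instanceData> that ties variable 0 to constraint 0; without it, the constraint row is empty. A sketch, assuming the sparse column-major layout of the OSiL schema (please verify the element names against the spec):

```xml
<linearConstraintCoefficients numberOfValues="1">
  <start>
    <el>0</el>
    <el>1</el>
  </start>
  <rowIdx>
    <el>0</el>
  </rowIdx>
  <value>
    <el>1</el>
  </value>
</linearConstraintCoefficients>
```

With this coefficient in place, constraint 0 reads 1*F >= 10 and the problem is feasible with optimum F = 10.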

Related

pyomo: minimal production time / BIG M

I am looking for a way to model a minimum required duty cycle in an optimization model.
After several attempts, however, I have reached the end of my knowledge and hope for some inspiration here.
The idea is that a binary variable mdl.ontime is set so that the sum of successive ontime values is greater than or equal to the minimum duty cycle:
def ontime(mdl, t):
    min_on_time = 3  # minimum on time in h
    if t < min_on_time:
        return mdl.ontime[t] == 0
    return sum(mdl.ontime[t-i] for i in range(min_on_time)) >= min_on_time
That works so far, as long as the variable mdl.ontime is not linked to anything else in the model.
Then I tried three different constraints to make that link; unfortunately they all gave the same result: CPLEX only finds infeasible results.
The first variant was:
def flag(mdl, t):
    return mdl.ontime[t] + (mdl.production[t] >= 0.1) >= 2
So if mdl.ontime is 1 and mdl.production is greater than or equal to 0.1 (the assumption is just exact enough), the sum should be greater than or equal to 2: a logical addition term.
The second attempt was quite similar to the first:
def flag(mdl, t):
    return mdl.ontime[t] >= (mdl.production[t] >= 0.1)
If mdl.ontime is 1, it should be greater than or equal to the result of comparing mdl.production with 0.1.
And the third with a big M variable:
def flag(mdl, t):
    bigM = 10**6
    return mdl.ontime[t] * bigM >= mdl.production[t]
bigM should be large enough in my case...
None of them works at all, and I have no idea why CPLEX reports that the problem is infeasible.
Basically, the model runs if I don't include the ontime integration.
Do you guys have any more ideas how I could implement this?
Many greetings,
Mathias
It isn't really clear what the desired relationship between your variables/constraints is. That said, I don't think this is legal. I'm surprised that it isn't popping an error... and if it isn't, I'm pretty sure it isn't doing what you think:
def flag(mdl, t):
    return mdl.ontime[t] + (mdl.production[t] >= 0.1) >= 2
You are essentially burying an inferred binary variable in there with the test on mdl.production, which isn't going to work, I believe. You probably need to introduce another variable or such.
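The usual way to couple the binary to production is a big-M linking constraint, in Pyomo something like mdl.production[t] <= max_production * mdl.ontime[t], with M set to the unit's real capacity rather than 10**6 (a tight M gives a much stronger LP relaxation). A stdlib-only sketch of the logic this constraint enforces, with hypothetical numbers:

```python
# Hypothetical capacity; using the real maximum production as big-M
# keeps the LP relaxation tight (10**6 is far looser than needed).
MAX_PRODUCTION = 100.0

def linking_holds(production, ontime, big_m=MAX_PRODUCTION):
    """Check the big-M linking constraint: production <= big_m * ontime."""
    return production <= big_m * ontime

# ontime = 0 forces production down to 0 ...
assert linking_holds(0.0, 0)
assert not linking_holds(37.5, 0)
# ... while ontime = 1 leaves production free up to the capacity:
assert linking_holds(37.5, 1)
```

Note this only forces ontime to 1 when production is positive; if ontime should also be 0 whenever production is 0, a second constraint in the other direction (e.g. production >= eps * ontime for a small positive eps) is needed.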

Pyomo: Unbounded objective function though bounded

I am currently implementing an optimization problem with Pyomo, and for some hours now I have been getting the message that my problem is unbounded. After searching for the issue, I came across one term which seems to be unbounded. I excluded this term from the objective function, and it takes a very high negative value, which supports the assumption that it is unbounded towards -Inf.
But I have checked the problem further, and it is impossible that the term is unbounded, as the following code and results show:
model.nominal_cap_storage = Var(model.STORAGE, bounds=(0,None)) #lower bound is 0
#I assumed very high CAPEX for each storage (see print)
dict_capex_storage = {'battery': capex_battery_storage,
'co2': capex_co2_storage,
'hydrogen': capex_hydrogen_storage,
'heat': capex_heat_storage,
'syncrude': capex_syncrude_storage}
print(dict_capex_storage)
>>> {'battery': 100000000000000000, 'co2': 100000000000000000,
'hydrogen': 1000000000000000000, 'heat': 1000000000000000, 'syncrude': 10000000000000000000}
From these assumptions I already conclude that it is impossible for the term to be unbounded towards -Inf, as the capacity has a lower bound of 0 and the CAPEX is a positive fixed value. But now it gets crazy. The following term has the issue of being unbounded:
model.total_investment_storage = Var()
def total_investment_storage_rule(model):
    return model.total_investment_storage == sum(model.nominal_cap_storage[storage] * dict_capex_storage[storage]
                                                 for storage in model.STORAGE)
model.total_investment_storage_con = Constraint(rule=total_investment_storage_rule)
If I exclude the term from the objective function, I get the following value after the optimization. It seems that it can take high negative values.
>>>>
Variable total_investment_storage
-1004724108.3426505
So I checked the term regarding the component model.nominal_cap_storage to see the value of the capacity:
model.total_cap_storage = Var()
def total_cap_storage_rule(model):
    return model.total_cap_storage == sum(model.nominal_cap_storage[storage] for storage in model.STORAGE)
model.total_cap_storage_con = Constraint(rule=total_cap_storage_rule)
>>>>
Variable total_cap_storage
0.0
I did the same for the dictionary, but made a mistake: I forgot to delete model.nominal_cap_storage. Still, the result is confusing:
model.total_capex_storage = Var()
def total_capex_storage_rule(model):
    return model.total_capex_storage == sum(model.nominal_cap_storage[storage] * dict_capex_storage[storage]
                                            for storage in model.STORAGE)
model.total_capex_storage_con = Constraint(rule=total_capex_storage_rule)
>>>>
Variable total_capex_storage
0.0
So my question is: why is the term unbounded, and how is it possible that model.total_investment_storage and model.total_capex_storage have different solutions even though both are calculated identically? Any help is highly appreciated.
I think you are misinterpreting "unbounded." When the solver says the problem is unbounded, that means the objective function value is unbounded based on the variables and constraints in the problem. It has nothing to do with bounds on variables, unless one of those variable bounds prevents the objective from being unbounded.
If you want help on above problem, you need to edit and post the full problem, with the objective function, and (if possible) the error. What you have now is a collection of different snippets of different variations of a problem, which isn't really informative on the overall issue.
I solved the problem by setting a lower bound on the term that takes a negative value:
model.total_investment_storage = Var(bounds=(0, None))
I am still not sure why this term can take negative values, but this at least solved my problem.

Using Subtraction in a Conditional Statement in Verilog

I'm relatively new to Verilog and I've been working on a project in which I would, in an ideal world, like to have an assignment statement like:
assign isinbufferzone = a > (packetlength-16384) ? 1:0;
The file with this type of line in it will compile, but isinbufferzone doesn't go high when it should. I'm assuming it's not happy with having subtraction in the conditional. I'm able to make the module work by moving stuff around, but the result is more complicated than I think it should need to be and the latency really starts to add up. Does anyone have any thoughts on what the most concise way to do this is? Thank you in advance for your help.
You probably expect isinbufferzone to go high if packetlength is 16384 or less regardless of a, however this is not what happens.
If packetlength is less than 16384, the value packetlength - 16384 is not a negative number −X, but some very large positive number (maybe 2^32 − X, or 2^17 − X, I'm not quite sure which, but it doesn't matter), because Verilog does unsigned arithmetic by default. This is called integer overflow.
You could maybe try to solve this by declaring some signals as signed, but in my opinion the safest way is to explicitly handle the overflow case and making sure the subtraction result is only evaluated for packetlength values of 16384 or greater:
assign isinbufferzone = (packetlength < 16384) ? 1 : (a > packetlength - 16384);
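The wraparound is easy to see outside Verilog as well. A small Python sketch of the unsigned arithmetic (the 17-bit width is an assumption; the actual width depends on how the signals are declared):

```python
def unsigned_sub(a, b, width=17):
    """Model Verilog's default unsigned subtraction: a negative
    difference wraps around modulo 2**width."""
    return (a - b) % (1 << width)

# packetlength = 1000 < 16384: the 'negative' result wraps around to
# a huge positive number, so a > (packetlength - 16384) rarely holds:
assert unsigned_sub(1000, 16384) == 115688   # 2**17 - 15384

# packetlength = 20000 >= 16384: the subtraction behaves as expected:
assert unsigned_sub(20000, 16384) == 3616
```

This is why guarding the comparison with packetlength < 16384, as in the suggested fix, avoids ever evaluating the wrapped result.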

SCIP unmodified LP-bound

I am using SCIP 3.0.2 with cplex 12.6 as LP-solver. My model requires Column generation. I already implemented it in CPLEX but since CPLEX can only do CG in the root node I am using SCIP to do Branch-and-Price.
In CPLEX it turned out to be beneficial to turn off heuristics, cuts, and preprocessing/probing. I set the following in SCIP:
SCIP_CALL( SCIPsetBoolParam(scip, "lp/presolving", FALSE) );
SCIPsetSeparating(scip, SCIP_PARAMSETTING_OFF, true); //disable cuts
SCIPsetHeuristics(scip, SCIP_PARAMSETTING_OFF, true); //disable heuristics
SCIPsetPresolving(scip, SCIP_PARAMSETTING_OFF, true); //disable presolving
My parameter-file looks as follows:
display/primalbound/active = 1
presolving/maxrounds = 0
separating/maxrounds = 0
separating/maxroundsroot = 0
separating/maxcuts = 0
separating/maxcutsroot = 0
lp/initalgorithm = d
lp/resolvealgorithm = d
lp/fastmip = 1
lp/threads = 1
limits/time = 7200
limits/memory = 2900
limits/absgap = 0
#display/verblevel = 5
#display/freq = 10
To check that the models are the same, I solved the CPLEX model in SCIP (without CG) and obtained the same LP-bound as for the model generated with SCIP, but different from the LP-bound when solving with CPLEX.
It seems that SCIP is still using some 'magic' I have not deactivated yet. So my question is: what do I have to deactivate to obtain an LP-bound that relies only on my model?
I already took a look at the statistics output, and there are indeed some things that might help to solve the problem:
Constraints #EnfoLP lists 1 for integral (seems strange since cuts are disabled?)
The transformed problem seems to be ok. The statistics-output prints:
Presolved Problem :
Problem name : t_ARLP
Variables : 969 (806 binary, 0 integer, 0 implicit integer, 163 continuous)
Constraints : 9311 initial, 9311 maximal
and before the iterations start I get the following:
LP Solver : row representation of the basis not available -- SCIP parameter lp/rowrepswitch has no effect
transformed problem has 897 variables (806 bin, 0 int, 0 impl, 91 cont) and 9311 constraints
9311 constraints of type < linear >
presolving:
presolving (0 rounds):
0 deleted vars, 0 deleted constraints, 0 added constraints, 0 tightened bounds, 0 added holes, 0 changed sides, 0 changed coefficients
0 implications, 0 cliques
presolved problem has 897 variables (806 bin, 0 int, 0 impl, 91 cont) and 9311 constraints
9311 constraints of type < linear >
Presolving Time: 0.00
I added 72 columns: 91 original + 72 added = 163 total. This seems to be OK.
I added the suggested parameters. It seems that domain propagation has not been in use before but there has been strong branching. Unfortunately nothing changed with the parameters.
In addition to adding the parameters, I also tried to use SCIP 3.0.1 instead. This improved my bound from 670.194 to 699.203, but this is still quite different from the CPLEX bound of 754.348. I know that the solvers differ in a lot of numerical parameters, but I guess the difference is too large to be caused by these?
There are two further things that might affect the LP bound at the root node: domain propagation and strong branching.
Domain propagation is a sort of node preprocessing and tries to reduce variable domains based on the current local domains and constraints. Strong branching precomputes the LP bounds of potential child nodes to select a good variable to branch on. If one of the child nodes is detected to be infeasible, its domain is reduced.
You can disable domain propagation by setting
propagating/maxrounds = 0
propagating/maxroundsroot = 0
Strong branching can be disabled by setting a high priority to a branching rule which does not apply strong branching. For example, set
branching/pscost/priority = 100000000
in order to enable pure pseudo cost branching.
In general, you should check the statistics for non-zero values in the DomReds columns.
You can just write the internal problem to a file and then compare it to the original:
SCIP> write transproblem
You should also read SCIP's statistics thoroughly to find out what kind of 'magic' SCIP performed:
SCIP> display statistics
I almost forgot about the thread, then stumbled upon it again and thought it might be good to add the answer after finding it myself:
Within the cut callback (unfortunately I did not mention that I used one) I used the method
SCIPisCutEfficacious
which discarded some of the cuts that are needed to obtain the true LP bound. Not calling this method slows down the solution process, but at least it preserves the result.

Obtain best feasible solution with SCIP

I am using SCIP (with SoPlex) to solve a MIP (mixed integer program) provided as an .mps file. I use SCIP via the command line as follows:
SCIP> read file.mps
original problem has 1049 variables (471 bin, 0 int, 0 impl, 578 cont) and 638 constraints
SCIP> optimize # so I am using default settings
... some solving information ...
SCIP Status : problem is solved [optimal solution found]
Solving Time (sec) : 0.46
Solving Nodes : 1
Primal Bound : -6.58117502066443e+05 (2 solutions)
Dual Bound : -6.58117502066443e+05
Gap : 0.00 %
[linear] c_2_141>: x_2_73_141[C] - 1000000000 y_2_141[B] <= 0;
violation: right hand side is violated by 236.775818639799
best solution is not feasible in original problem
I do not want an infeasible solution – I want the best feasible one. For your information: I used CPLEX with the same file, and it confirmed that there is an optimal feasible solution with a slightly worse objective value (about 0.05 % worse).
I already tried to put emphasis on feasibility with SCIP> set emphasis feasibility but that did not help me – see for yourself:
SCIP Status : problem is solved [optimal solution found]
Solving Time (sec) : 0.42
Solving Nodes : 3 (total of 5 nodes in 3 runs)
Primal Bound : -6.58117502066443e+05 (4 solutions)
Dual Bound : -6.58117502066443e+05
Gap : 0.00 %
[linear] c_2_141>: x_2_73_141[C] - 1000000000 y_2_141[B] <= 0;
violation: right hand side is violated by 236.775818639799
best solution is not feasible in original problem
Kind regards.
EDIT:
In response to the answer of user mattmilten, I have to share that using set numerics feastol 1e-9 alone did not yield a feasible solution, but with a tighter tolerance like 1e-10 in combination with set emphasis feasibility, SCIP is able to provide a good feasible solution that is just 0.005 % worse than CPLEX's.
Thanks for your help mattmilten!
You could try to tighten the tolerances, especially the feasibility tolerance:
set numerics feastol 1e-9
The violated constraint contains a very large coefficient. This is likely the cause of the high absolute error. In CPLEX you should also try
display solution quality
to check whether the solution found by CPLEX is also violating the bounds.
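The arithmetic behind the reported violation can be sketched in Python. The LP value of the binary below is a hypothetical, chosen to reproduce the ~236.78 violation from the log:

```python
# A binary may deviate from its integer value by the feasibility
# tolerance (1e-6 is SCIP's default). Multiplied by the large
# coefficient 1e9 from constraint c_2_141, a within-tolerance value
# permits a large violation once the binary is fixed to 0:
feastol = 1e-6
M = 1e9              # coefficient of y_2_141 in x - M*y <= 0
y_lp = 2.368e-7      # hypothetical LP value of y; within feastol of 0
x = M * y_lp         # largest x the constraint allows in the LP

# y counts as integral (it rounds to 0), but with y fixed to 0 the
# original constraint requires x <= 0, violated by x itself:
violation = x - M * 0
assert abs(y_lp - round(y_lp)) <= feastol
assert 200 < violation < 300   # on the order of the logged 236.78
```

This is why tightening numerics feastol, as suggested above, shrinks the residual violation roughly in proportion to the tolerance.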