SCIP: setting absolute tolerance

I'd like to ask SCIP to solve a problem to within a specified absolute tolerance, i.e., it should quit as soon as the difference between the upper and lower bound is small enough. What's the parameter that controls this tolerance?
Oddly enough, I've been unable to find it by perusing the list of all SCIP parameters.

Here are the two parameters you are looking for:
# solving stops, if the relative gap = |(primalbound - dualbound)/dualbound| is below the given value
# [type: real, range: [0,1.79769313486232e+308], default: 0]
limits/gap = 0
# solving stops, if the absolute gap = |primalbound - dualbound| is below the given value
# [type: real, range: [0,1.79769313486232e+308], default: 0]
limits/absgap = 0
"tolerance" usually refers to the allowed violation of the computed solution, i.e. the amount of allowed infeasibility. Apparently, you were looking for the "gap limit".

Related

Pyomo: Unbounded objective function though bounded

I am currently implementing an optimization problem with Pyomo, and for some hours now I have been getting the message that my problem is unbounded. After searching for the issue, I came across one term which seems to be unbounded. I excluded this term from the objective function, and it then takes a very large negative value, which supports the assumption that it is unbounded towards -Inf.
But I have checked the problem further and it is impossible that the term is unbounded, as following code and results show:
model.nominal_cap_storage = Var(model.STORAGE, bounds=(0, None))  # lower bound is 0

# I assumed very high CAPEX for each storage (see print)
dict_capex_storage = {'battery': capex_battery_storage,
                      'co2': capex_co2_storage,
                      'hydrogen': capex_hydrogen_storage,
                      'heat': capex_heat_storage,
                      'syncrude': capex_syncrude_storage}
print(dict_capex_storage)
>>> {'battery': 100000000000000000, 'co2': 100000000000000000,
'hydrogen': 1000000000000000000, 'heat': 1000000000000000, 'syncrude': 10000000000000000000}
Given these values, it should be impossible for the term to be unbounded towards -Inf, since the capacity has a lower bound of 0 and the CAPEX is a fixed positive value. But now it gets strange. The following term has the issue of being unbounded:
model.total_investment_storage = Var()
def total_investment_storage_rule(model):
    return model.total_investment_storage == sum(model.nominal_cap_storage[storage] * dict_capex_storage[storage]
                                                 for storage in model.STORAGE)
model.total_investment_storage_con = Constraint(rule=total_investment_storage_rule)
If I exclude the term from the objective function, I get the following value after the optimization. It seems that it can take large negative values:
>>>>
Variable total_investment_storage
-1004724108.3426505
So I checked the term regarding the component model.nominal_cap_storage to see the value of the capacity:
model.total_cap_storage = Var()
def total_cap_storage_rule(model):
    return model.total_cap_storage == sum(model.nominal_cap_storage[storage] for storage in model.STORAGE)
model.total_cap_storage_con = Constraint(rule=total_cap_storage_rule)
>>>>
Variable total_cap_storage
0.0
I did the same for the dictionary, but made a mistake: I forgot to remove model.nominal_cap_storage from the sum. Still, the result is confusing:
model.total_capex_storage = Var()
def total_capex_storage_rule(model):
    return model.total_capex_storage == sum(model.nominal_cap_storage[storage] * dict_capex_storage[storage]
                                            for storage in model.STORAGE)
model.total_capex_storage_con = Constraint(rule=total_capex_storage_rule)
>>>>
Variable total_capex_storage
0.0
So my question is: why is the term unbounded, and how is it possible that model.total_investment_storage and model.total_capex_storage have different solutions even though both are calculated identically? Any help is highly appreciated.
I think you are misinterpreting "unbounded." When the solver says the problem is unbounded, it means the objective function value is unbounded given the variables and constraints in the problem. It has nothing to do with bounds on variables, unless one of those variable bounds is what prevents the objective from being unbounded.
If you want help with the above problem, you need to edit your post and include the full problem, with the objective function and (if possible) the error. What you have now is a collection of snippets of different variations of a problem, which isn't really informative about the overall issue.
I solved the problem by setting a lower bound on the term, which was taking a negative value:
model.total_investment_storage = Var(bounds=(0, None))
I am still not sure why this term can take negative values, but at least this solved my problem.

Gap tolerance control in Z3 optimization

I would like to use z3 optimize class to get sub-optimal results, while still being able to control how far am I from the optimum result. I am using the C++ API.
As an example, CPLEX has the parameters epgap and epagap for relative and absolute tolerance respectively. It uses the current lower or upper bounds (depending if it is a minimization or maximization) to assess how far (at most) the current solution is from the optimal one.
This leads to shorter run-times for when an approximate solution is already good enough.
Is this possible using the optimize class, or is this something I would need to implement using a solver instance and control the bounds myself?
I am not absolutely certain about this, but I doubt that z3 has such parameters.
For sure, nothing like that appears to be exposed in the command-line interface:
~$ z3 -p
...
[module] opt, description: optimization parameters
dump_benchmarks (bool) (default: false)
dump_models (bool) (default: false)
elim_01 (bool) (default: true)
enable_sat (bool) (default: true)
enable_sls (bool) (default: false)
maxlex.enable (bool) (default: true)
maxres.add_upper_bound_block (bool) (default: false)
maxres.hill_climb (bool) (default: true)
maxres.max_core_size (unsigned int) (default: 3)
maxres.max_correction_set_size (unsigned int) (default: 3)
maxres.max_num_cores (unsigned int) (default: 4294967295)
maxres.maximize_assignment (bool) (default: false)
maxres.pivot_on_correction_set (bool) (default: true)
maxres.wmax (bool) (default: false)
maxsat_engine (symbol) (default: maxres)
optsmt_engine (symbol) (default: basic)
pb.compile_equality (bool) (default: false)
pp.neat (bool) (default: true)
priority (symbol) (default: lex)
rlimit (unsigned int) (default: 0)
solution_prefix (symbol) (default: )
timeout (unsigned int) (default: 4294967295)
...
Alternative #01:
An option is to implement this yourself on top of z3.
I would suggest using the binary search schema (see Optimization in SMT with LA(Q) Cost Functions), otherwise the OMT solver is going to refine only one end of the optimization search interval and this may defeat the intended purpose of your search-termination criteria.
Notice that in order for this approach to be effective, it is important that the internal T-optimizer is invoked over the Boolean assignment of each intermediate model encountered along the search. (I am not sure whether this functionality is exposed at the API level with z3).
You may also want to take a look at the approach based on linear regression used in Puli - A Problem-Specific OMT Solver. If applicable, it may speed-up the optimization search and improve the estimate of the relative distance from the optimal solution.
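The binary-search schema mentioned above can be sketched in plain Python. Here the SMT solver is replaced by a toy feasibility oracle `is_feasible(bound)` (a hypothetical stand-in for the query "is there a model with objective <= bound"), and the loop halves the search interval until an absolute-gap criterion, analogous to CPLEX's epagap, is met:

```python
def minimize_by_binary_search(is_feasible, lo, hi, abs_gap=1e-3):
    """Shrink [lo, hi] around the optimum of a minimization problem.

    is_feasible(b) plays the role of an SMT query "is there a model
    with objective <= b". lo must be infeasible, hi must be feasible.
    """
    while hi - lo > abs_gap:
        mid = (lo + hi) / 2.0
        if is_feasible(mid):
            hi = mid  # a model with objective <= mid exists: tighten the upper bound
        else:
            lo = mid  # no such model: tighten the lower bound
    return lo, hi

# Toy stand-in: minimizing x^2 subject to x >= 3 has optimum 9,
# so "objective <= b" is satisfiable exactly when b >= 9.
lo, hi = minimize_by_binary_search(lambda b: b >= 9.0, 0.0, 100.0)
print(lo, hi)  # an interval of width <= 1e-3 that brackets 9
```

Because each iteration halves the interval, both ends of the search interval are refined, which is what makes an interval-size termination criterion meaningful here.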
Alternative #02:
OptiMathSAT may be exposing the functionality you are looking for, both at the API and the command-line level:
~$ optimathsat -help
Optimization search options:
-opt.abort_interval=FLOAT
If greater than zero, an objective is no longer actively optimized as
soon as the current search interval size is smaller than the given
value. Applies to all objective functions. (default: 0)
-opt.abort_tolerance=FLOAT
If greater than zero, an objective is no longer actively optimized as
soon as the ratio among the current search interval size wrt. its
initial size is smaller than the given value. Applies to all
objective functions. (default: 0)
The abort interval is a termination criterion based on the absolute size of the current optimization search interval, while the abort tolerance is a termination criterion based on the relative size of the current optimization search interval with respect to the initial search interval.
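In plain terms, given the current lower and upper bounds on the objective, the two criteria can be sketched as follows (a pure-Python illustration of the logic, not OptiMathSAT code):

```python
def should_abort(lower, upper, initial_lower, initial_upper,
                 abort_interval=0.0, abort_tolerance=0.0):
    """Mimic the two OptiMathSAT search-termination criteria.

    abort_interval: stop when the absolute size of the current
    search interval drops below the given threshold.
    abort_tolerance: stop when the size of the current interval
    relative to the initial interval drops below the threshold.
    """
    size = upper - lower
    if abort_interval > 0 and size < abort_interval:
        return True
    initial_size = initial_upper - initial_lower
    if abort_tolerance > 0 and initial_size > 0 and size / initial_size < abort_tolerance:
        return True
    return False

print(should_abort(4.0, 4.5, 0.0, 100.0, abort_interval=1.0))    # True: interval 0.5 < 1.0
print(should_abort(4.0, 40.0, 0.0, 100.0, abort_tolerance=0.1))  # False: 36/100 >= 0.1
```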
Notice that in order to use these termination criteria, the user is expected to:
provide (at least) an initial lower-bound for any minimization objective:
(minimize ... :lower ...)
provide (at least) an initial upper-bound for any maximization objective:
(maximize ... :upper ...)
Furthermore, the tool must be configured to use either Binary or Adaptive search:
-opt.strategy=STR
Sets the optimization search strategy:
- lin : linear search (default)
- bin : binary search
- ada : adaptive search
A lower bound is required to minimize an objective with bin/ada
search strategy. Dual for maximization.
In case neither of these termination criteria is satisfactory to you, you can also implement your own algorithm on top of OptiMathSAT. This is relatively easy to do, thanks to the following option, which can be set both via the API and the command line:
-opt.no_optimization=BOOL
If true, the optimization search stops at the first (not optimal)
satisfiable solution. (default: false)
When enabled, it makes OptiMathSAT behave like a regular SMT solver, except that when it finds a complete Boolean assignment for which there exists a Model of the input formula, it ensures that the Model is optimal wrt. the objective function and the given Boolean assignment (in other words, it invokes the internal T-optimizer procedure for you).
Some Thoughts.
OMT solvers work differently from most CP solvers. They use infinite-precision arithmetic and the optimization search is guided by the SAT engine. Improving the value of the objective function becomes increasingly difficult because the OMT solver is forced to enumerate a progressively larger number of (possibly total) Boolean assignments while resolving conflicts and back-jumping along the way.
In my opinion, the size of the current search interval is not always a good indicator of the relative difficulty of making progress with the optimization search. There are far too many factors to take into consideration, e.g. the pruning power of conflict clauses involving the objective function, the encoding of the input formula, and so on. This is also one of the reasons why, as far as I have seen, most people in the OMT community simply use a fixed timeout rather than bothering to use any other termination criteria. The only situation in which I have found it to be particularly useful, is when dealing with non-linear optimization (which, however, is not yet publicly available with OptiMathSAT).

Summation iterated over a variable length

I have written an optimization problem in pyomo and need a constraint, which contains a summation that has a variable length:
u_i_t[i, t]*T_min_run - sum (tnewnew in (t-T_min_run+1)..t-1) u_i_t[i,tnewnew] <= sum (tnew in t..(t+T_min_run-1)) u_i_t[i,tnew]
T is my actual timeline and N my machines. Usually I iterate over t, but here I need to guarantee that the machines stay turned on for a certain amount of time.
def HP_on_rule(model, i, t):
    return model.u_i_t[i, t]*T_min_run - sum(model.u_i_t[i, tnewnew] for tnewnew in range((t-T_min_run+1), (t-1))) \
        <= sum(model.u_i_t[i, tnew] for tnew in range(t, (t+T_min_run-1)))
model.HP_on_rule = Constraint(N, rule=HP_on_rule)
I hope you can provide me with the correct formulation in pyomo/python.
The problem is that t is a running index and I do not know how to implement this in Python; tnew is only a helper index. E.g. with t=6 (variable), T_min_run=3 (constant) and u_i_t binary [00001111100000...], I get:
1*3 - 1 <= 3
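The arithmetic behind 1*3 - 1 <= 3 can be checked in plain Python, reading the mathematical notation with 1-based indexing and inclusive ranges (this is just a sanity check of the example values, not the Pyomo formulation):

```python
T_min_run = 3
t = 6
u = [0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0]  # the binary schedule; u_1 is the first entry

def val(k):
    """1-based lookup into the binary schedule."""
    return u[k - 1]

# sum over tnewnew in (t-T_min_run+1)..(t-1), inclusive
lhs = val(t) * T_min_run - sum(val(k) for k in range(t - T_min_run + 1, t))
# sum over tnew in t..(t+T_min_run-1), inclusive
rhs = sum(val(k) for k in range(t, t + T_min_run))
print(lhs, "<=", rhs)  # prints: 2 <= 3
```

Note that Python's `range` excludes its upper end, so an inclusive bound of t-1 corresponds to `range(..., t)`, not `range(..., t-1)`.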
As I said, I do not know how to implement this in my code and the current version is not running.
TypeError: HP_on_rule() missing 1 required positional argument: 't'
It seems like you didn't provide all the arguments to your rule function.
Since t is a parameter of your function, I assume that it corresponds to an element of set T (your timeline).
Then, the last line of your code example should include not only the set N but also the set T. Try this:
model.HP_on_rule = Constraint(N, T, rule=HP_on_rule)
Please note: when building a Constraint with a "for each" part, you must provide the Pyomo Sets that you want to iterate over at the beginning of the Constraint construction call. As a rule of thumb, your constraint rule function should have one more argument than the number of Pyomo Sets specified in the Constraint initialization line.
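The underlying TypeError is plain Python call arity, independent of Pyomo: the Constraint machinery calls the rule once per index, with one value per indexing Set (plus the model), so a rule expecting (model, i, t) fails when only (model, i) is supplied. A minimal stand-in, with no Pyomo involved:

```python
def HP_on_rule(model, i, t):  # expects three arguments
    return (model, i, t)

# Indexing over N only supplies (model, i): same error as in the question.
try:
    HP_on_rule("model", 1)
except TypeError as exc:
    print(exc)  # HP_on_rule() missing 1 required positional argument: 't'

# Indexing over N and T supplies (model, i, t): the call succeeds.
print(HP_on_rule("model", 1, 6))
```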

NLopt with univariate optimization

Does anyone know if NLopt works with univariate optimization? I tried to run the following code:
using NLopt
function myfunc(x, grad)
    x.^2
end
opt = Opt(:LD_MMA, 1)
min_objective!(opt, myfunc)
(minf,minx,ret) = optimize(opt, [1.234])
println("got $minf at $minx (returned $ret)")
But I get the following error message:
> Error evaluating untitled
LoadError: BoundsError: attempt to access 1-element Array{Float64,1}:
1.234
at index [2]
in myfunc at untitled:8
in nlopt_callback_wrapper at /Users/davidzentlermunro/.julia/v0.4/NLopt/src/NLopt.jl:415
in optimize! at /Users/davidzentlermunro/.julia/v0.4/NLopt/src/NLopt.jl:514
in optimize at /Users/davidzentlermunro/.julia/v0.4/NLopt/src/NLopt.jl:520
in include_string at loading.jl:282
in include_string at /Users/davidzentlermunro/.julia/v0.4/CodeTools/src/eval.jl:32
in anonymous at /Users/davidzentlermunro/.julia/v0.4/Atom/src/eval.jl:84
in withpath at /Users/davidzentlermunro/.julia/v0.4/Requires/src/require.jl:37
in withpath at /Users/davidzentlermunro/.julia/v0.4/Atom/src/eval.jl:53
[inlined code] from /Users/davidzentlermunro/.julia/v0.4/Atom/src/eval.jl:83
in anonymous at task.jl:58
while loading untitled, in expression starting on line 13
If this isn't possible, does anyone know of a univariate optimizer where I can specify bounds and an initial condition?
There are a couple of things that you're missing here.
You need to specify the gradient (i.e. first derivative) of your function within the function. See the tutorial and examples on the GitHub page for NLopt. Not all optimization algorithms require this, but the one that you are using, LD_MMA, looks like it does. See here for a listing of the various algorithms and which ones require a gradient.
You should specify the tolerance for the conditions you need before you "declare victory" ¹ (i.e. decide that the function is sufficiently optimized). This is the xtol_rel!(opt,1e-4) in the example below. See also ftol_rel! for another way to specify a different tolerance condition. According to the documentation, for example, xtol_rel will "stop when an optimization step (or an estimate of the optimum) changes every parameter by less than tol multiplied by the absolute value of the parameter," and ftol_rel will "stop when an optimization step (or an estimate of the optimum) changes the objective function value by less than tol multiplied by the absolute value of the function value." See here under the "Stopping Criteria" section for more information on the various options.
The function that you are optimizing should have a unidimensional output. In your example, your output is a vector (albeit of length 1): x.^2 denotes a vectorized operation with a vector output. If your objective function doesn't ultimately output a single number, then it isn't clear what your optimization objective is (what would it mean to minimize a whole vector? You could minimize the norm of a vector, for instance, but minimizing a vector itself is not well defined).
Below is a working example, based on your code. Note that I included the printing output from the example on the github page, which can be helpful for you in diagnosing problems.
using NLopt
count = 0 # keep track of # function evaluations
function myfunc(x::Vector, grad::Vector)
    if length(grad) > 0
        grad[1] = 2*x[1]
    end
    global count
    count::Int += 1
    println("f_$count($x)")
    x[1]^2
end
opt = Opt(:LD_MMA, 1)
xtol_rel!(opt,1e-4)
min_objective!(opt, myfunc)
(minf,minx,ret) = optimize(opt, [1.234])
println("got $minf at $minx (returned $ret)")
¹ (In the words of optimization great Yinyu Ye.)

SCIP unmodified LP-bound

I am using SCIP 3.0.2 with CPLEX 12.6 as the LP solver. My model requires column generation. I already implemented it in CPLEX, but since CPLEX can only do CG in the root node, I am using SCIP to do branch-and-price.
In CPLEX it turned out to be beneficial to turn off heuristics, cuts, and preprocessing/probing. I set the following in SCIP:
SCIP_CALL( SCIPsetBoolParam(scip, "lp/presolving", FALSE) );
SCIPsetSeparating(scip, SCIP_PARAMSETTING_OFF, true); //disable cuts
SCIPsetHeuristics(scip, SCIP_PARAMSETTING_OFF, true); //disable heuristics
SCIPsetPresolving(scip, SCIP_PARAMSETTING_OFF, true); //disable presolving
My parameter-file looks as follows:
display/primalbound/active = 1
presolving/maxrounds = 0
separating/maxrounds = 0
separating/maxroundsroot = 0
separating/maxcuts = 0
separating/maxcutsroot = 0
lp/initalgorithm = d
lp/resolvealgorithm = d
lp/fastmip = 1
lp/threads = 1
limits/time = 7200
limits/memory = 2900
limits/absgap = 0
#display/verblevel = 5
#display/freq = 10
To check that the models are the same, I solved the CPLEX model in SCIP (without CG) and obtained the same LP bound as for the model generated with SCIP, but a different one from the LP bound when solving with CPLEX.
It seems that SCIP is still using some 'magic' I have not deactivated yet. So my question is: what do I have to deactivate to obtain an LP bound relying just on my model?
I already took a look at the statistics output and there are indeed some things that might help to solve the problem:
Constraints #EnfoLP lists 1 for integral (which seems strange, since cuts are disabled?)
The transformed problem seems to be OK. The statistics output prints:
Presolved Problem :
Problem name : t_ARLP
Variables : 969 (806 binary, 0 integer, 0 implicit integer, 163 continuous)
Constraints : 9311 initial, 9311 maximal
and before the iterations start I get the following:
LP Solver : row representation of the basis not available -- SCIP parameter lp/rowrepswitch has no effect
transformed problem has 897 variables (806 bin, 0 int, 0 impl, 91 cont) and 9311 constraints
9311 constraints of type < linear >
presolving:
presolving (0 rounds):
0 deleted vars, 0 deleted constraints, 0 added constraints, 0 tightened bounds, 0 added holes, 0 changed sides, 0 changed coefficients
0 implications, 0 cliques
presolved problem has 897 variables (806 bin, 0 int, 0 impl, 91 cont) and 9311 constraints
9311 constraints of type < linear >
Presolving Time: 0.00
I added 72 columns: 91 original + 72 added = 163 total. This seems to be OK.
I added the suggested parameters. It seems that domain propagation had not been in use before, but there had been strong branching. Unfortunately, nothing changed with the parameters.
In addition to adding the parameters, I also tried SCIP 3.0.1 instead. This improved my bound from 670.194 to 699.203, but that is still quite different from the CPLEX bound of 754.348. I know the solvers differ in a lot of numerical parameters, but I guess the difference is too large to be caused by these?
There are two further things that might affect the LP bound at the root node: domain propagation and strong branching.
Domain propagation is a sort of node preprocessing and tries to reduce variable domains based on the current local domains and constraints. Strong branching precomputes the LP bounds of potential child nodes to select a good variable to branch on. If one of the child nodes is detected to be infeasible, its domain is reduced.
You can disable domain propagation by setting
propagating/maxrounds = 0
propagating/maxroundsroot = 0
Strong branching can be disabled by setting a high priority to a branching rule which does not apply strong branching. For example, set
branching/pscost/priority = 100000000
in order to enable pure pseudo cost branching.
In general, you should check the statistics for non-zero values in the DomReds columns.
You can just write the internal problem to a file and then compare it to the original:
SCIP> write transproblem
You should also read SCIP's statistics thoroughly to find out what kind of 'magic' SCIP performed:
SCIP> display statistics
I had almost forgotten about this thread, then stumbled upon it again and thought it might be good to add the answer after finding it myself:
Within the cut callback (unfortunately I did not mention that I used one) I called the method
SCIPisCutEfficacious
which discarded some of the cuts that are needed to obtain the true LP bound. Not calling this method slows down the solution process, but at least it preserves the result.