Constraint handler for lazy constraints and presolving - SCIP

I'm using PySCIPOpt (SCIP version 6.0.2, PySCIPOpt version 2.2.3) to solve a mixed integer problem. A constraint handler checks and enforces some requirements that are not modeled directly in the problem (lazy constraints).
Problem: presolving simplifies the original problem so much (deleting variables) that constraints inserted by the constraint handler (after presolving) lead to infeasibility.
The example below contains only two binary variables and a set partitioning constraint. Presolving is able to remove all variables and constraints. The constraint handler then restricts the x_0 variable to 0 and the problem becomes infeasible. If presolving is turned off, the "correct" solution x_1 = 1 is found.
from pyscipopt import Model, quicksum, Conshdlr, SCIP_RESULT, SCIP_PRESOLTIMING, SCIP_PROPTIMING, SCIP_PARAMSETTING

class ConshdlrNotZero(Conshdlr):
    def __init__(self):
        pass

    def conscheck(self, constraints, solution, checkintegrality, checklprows, printreason, completely):
        x = self.data
        if self.model.getSolVal(solution, x[0]) > 0.5:
            return {"result": SCIP_RESULT.INFEASIBLE}
        return {"result": SCIP_RESULT.FEASIBLE}

    def consenfolp(self, constraints, nusefulconss, solinfeasible):
        x = self.data
        if self.model.getSolVal(None, x[0]) > 0.5:
            self.model.addCons(x[0] <= 0, name='fix_x0')
            return {"result": SCIP_RESULT.CONSADDED}
        return {"result": SCIP_RESULT.FEASIBLE}

    def conslock(self, constraint, locktype, nlockspos, nlocksneg):
        pass

m = Model('test_presolve')

x = dict()
x[0] = m.addVar(vtype='BINARY', obj=0, name="x_%d" % 0)
x[1] = m.addVar(vtype='BINARY', obj=1, name="x_%d" % 1)
m.addCons(quicksum(x_var for x_var in x.values()) == 1, name="set_partitioning")

conshdlr = ConshdlrNotZero()
conshdlr.data = x
m.includeConshdlr(conshdlr, "n0", "please not the x_0 variable",
                  sepapriority=-1, enfopriority=-1, chckpriority=-1, sepafreq=-1, propfreq=-1,
                  eagerfreq=-1, maxprerounds=0, delaysepa=False, delayprop=False, needscons=False,
                  presoltiming=SCIP_PRESOLTIMING.FAST, proptiming=SCIP_PROPTIMING.BEFORELP)

# m.setPresolve(SCIP_PARAMSETTING.OFF)
m.setMinimize()
m.optimize()
m.printSol()
Is there a way to make presolving take the constraint handler into account (other than disabling presolving entirely)?
Edit: Disabling dual reductions within presolving (setting misc/allowdualreds to 0) works. I'm still wondering why this solves the problem and whether there is a better solution.

Dual reductions in presolving may cut off feasible or even optimal solutions, but they are guaranteed to leave at least one optimal solution in the problem.
Your constraint handler's reasoning exploits knowledge of the original feasible region that is not visible to the presolver, so it makes sense that you have to disable dual presolving reductions.
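If you prefer not to switch off presolving entirely, the workaround from the edit can be set directly on the model before optimizing. A minimal sketch, using the SCIP 6.x parameter name from the question (later SCIP versions split this into misc/allowstrongdualreds and misc/allowweakdualreds):

# keep presolving enabled, but forbid dual reductions, which may remove
# solutions that the constraint handler would have accepted
m.setParam("misc/allowdualreds", 0)
m.optimize()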

Related

Terminate Scipy solve_ivp on custom predicate

I have an ODE dy/dt = f(y,t), where y is an N-dimensional vector, which I would like to solve using the scipy.integrate.solve_ivp function.
However, I would like to stop the integration if a certain predicate g(y,t) evaluates to True. The use case I have here is that I expect the value of y to converge towards some constant value y0 before the end of the integration duration t_end. I am interested in this constant value y0 and would like to save time by terminating the integration once convergence has happened.
I was hoping that I could create an array to store the values of y in the last 5 integration steps, and if they are very close, convergence is believed to have happened.
The event function of solve_ivp does not really help in my case: there is no root that I hope to find, and I am not interested in the t at which convergence happens. I am surprised that this seemingly common use case of looking for convergence cannot be done easily, and I can't find similar problems already on Stack Overflow.
If someone has some idea, I would love to hear it.
This is a good candidate for accessing the integrator classes that solve_ivp uses under the hood. Take the simple function dy/dt = -y with initial condition y(0) = 100, and suppose we want to terminate when the solution has changed by less than 0.1 over 1 second of simulated time, i.e. |y(t) - y(t-1)| < 0.1. For this ODE, that occurs at t = -ln(0.1 / (100(e-1))) or t ~ 7.45. We can solve this using the RK45 integrator (RK45 docs) as follows:
import numpy as np
from scipy.integrate import RK45

def fun(t, y):
    return -y

y0 = [100]
t0 = 0

# max_step used to ensure that we take small enough time steps
rk45 = RK45(fun, t0, y0, t_bound=1000, max_step=0.1)

t = []
y = []
while rk45.status == "running":
    t.append(rk45.t)
    y.append(rk45.y[0])
    if rk45.t > 1.0 and np.abs(np.interp(rk45.t - 1, t, y) - rk45.y[0]) < 0.1:
        break
    rk45.step()

print(f"Final t: {t[-1]:.1f}")
# Because max_step=0.1, t[-11] will be 1 second behind t[-1]
print(f"Time period checked: {t[-1]-t[-11]:.1f}, delta_y: {y[-11]-y[-1]:.1f}")
yields
Final t: 7.5
Time period checked: 1.0, delta_y: 0.1
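As an aside, a terminal event can still express this kind of stopping rule if convergence is detected through the derivative instead: |dy/dt| falling to a threshold is a root-findable condition. A sketch under that assumption, reusing fun and y0 from above (note this is a different criterion from the 1-second window, so it stops slightly earlier):

from scipy.integrate import solve_ivp

def slow_enough(t, y):
    # root where the largest derivative component falls to 0.1
    return np.max(np.abs(fun(t, y))) - 0.1

slow_enough.terminal = True  # stop the integration at the first root

sol = solve_ivp(fun, (0, 1000), y0, events=slow_enough)
print(f"Terminated at t = {sol.t[-1]:.1f}")  # near t = ln(1000) ~ 6.9 here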

How to use PySCIPOpt for feasibility-only problem

I have used CVXPY and some of its LP solvers to determine whether a solution to an A*x <= b problem is feasible, and now I would like to try PySCIPOpt. I could not find an example of this in the docs, and I'm having trouble figuring out the right syntax. With CVXPY the code is simply:
def do_cvxpy(A, b, solver):
    x = cvxpy.Variable(A.shape[1])
    constraints = [A @ x <= b]  # the @ operator denotes matrix multiplication in CVXPY
    obj = cvxpy.Minimize(0)
    prob = cvxpy.Problem(obj, constraints)
    prob.solve(solver=solver)
    return prob.status
I think with PySCIPOpt one cannot use matrix notation as above, but must treat vectors and matrices as collections of scalar variables, each of which has to be added individually, so I tried this:
def do_scip(A, b):
    model = Model("XYZ")
    x = {}
    for i in range(A.shape[1]):
        x[i] = model.addVar(vtype="C", name="x(%s)" % i)
    model.setObjective(0)  # Is this right for a feasibility-only problem?
    model.addCons(A*x <= b)  # This is certainly the wrong syntax
    model.optimize()
    return model.getStatus()
Could anyone please help me out with the correct form for the constraint in addCons() for this kind of problem, and confirm that an acceptable way to ask whether a solution is feasible is to simply pass 0 as the objective?
I'm still not positive about the setObjective(0), but at least I can get the code to run without errors by "unpacking" the A matrix and the b vector, adding one constraint per row:
for j in range(nrows):
    model.addCons(quicksum(A[j, i] * x[i] for i in range(ncols)) <= b[j])
I also discovered that CVXPY actually has an interface to SCIP, but it gives me an error when I try to use it:
getSolObjVal cannot only be called in stage SOLVING without a valid solution
which seems to suggest that the interface cannot be used for feasibility-only problems.
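Putting it together, a minimal runnable sketch of the row-by-row formulation (my own assumptions: NumPy inputs, and free variables via lb=None, since addVar defaults to a lower bound of 0; a constant objective makes this a pure feasibility check):

import numpy as np
from pyscipopt import Model, quicksum

def do_scip(A, b):
    model = Model("feasibility")
    nrows, ncols = A.shape
    # lb=None makes the variables free (the default lower bound is 0)
    x = [model.addVar(vtype="C", lb=None, name="x(%d)" % i) for i in range(ncols)]
    # one linear constraint per row of A @ x <= b
    for j in range(nrows):
        model.addCons(quicksum(A[j, i] * x[i] for i in range(ncols)) <= b[j])
    model.setObjective(0)  # constant objective: feasibility only
    model.optimize()
    return model.getStatus()  # "optimal" if feasible, "infeasible" otherwise

print(do_scip(np.array([[1.0, 1.0], [-1.0, 0.0]]), np.array([4.0, 0.0])))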

Treatment of constraints in SLSQP optimization with openMDAO

With OpenMDAO, I am using FD derivatives and trying to solve a non-linearly constrained optimization problem with the SLSQP method. Any time the optimizer arrives at a point that violates one of the constraints, it just crashes with the message:
Optimization FAILED. Positive directional derivative for linesearch
For instance, if I intentionally set the initial point to an infeasible design point, the optimizer performs one iteration and exits with the above error (the same happens when I start from a feasible point, but the optimizer arrives at an infeasible point after a few iterations).
Based on the answer to In OpenMDAO, is there a way to ensure that the constraints are respected before proceeding with a computation?, I'm assuming that raising the AnalysisError exception will not work in my case; is that correct? Is there any other way to prevent the optimizer from entering infeasible regions, or at least to backtrack on the linesearch and try a different direction/distance? Or should the SLSQP method only be used when analytic derivatives are available?
Reproducible test case:
import numpy as np
import openmdao.api as om

class d1(om.ExplicitComponent):
    def setup(self):
        # Global design variables
        self.add_input('r', val=[3, 3, 3])
        self.add_input('T', val=20)
        # Coupling output
        self.add_output('M', val=0)
        self.add_output('cost', val=0)

    def setup_partials(self):
        # Finite difference all partials.
        self.declare_partials('*', '*', method='fd')

    def compute(self, inputs, outputs):
        # define inputs
        r = inputs['r']
        T = inputs['T'][0]
        cost = 174.42 * T * (r[0]**2 + 2*r[1]**2 + r[2]**2 + r[0]*r[1] + r[1]*r[2])
        M = 456.19 * T * (r[0]**2 + 2*r[1]**2 + r[2]**2 + r[0]*r[1] + r[1]*r[2]) - 599718
        outputs['M'] = M
        outputs['cost'] = cost

class MDA(om.Group):
    class ObjCmp(om.ExplicitComponent):
        def setup(self):
            # Global Design Variable
            self.add_input('cost', val=0)
            # Output
            self.add_output('obj', val=0.0)

        def setup_partials(self):
            # Finite difference all partials.
            self.declare_partials('*', '*', method='fd')

        def compute(self, inputs, outputs):
            outputs['obj'] = inputs['cost']

    class ConCmp(om.ExplicitComponent):
        def setup(self):
            # Global Design Variable
            self.add_input('M', val=0)
            # Output
            self.add_output('con', val=0.0)

        def setup_partials(self):
            # Finite difference all partials.
            self.declare_partials('*', '*', method='fd')

        def compute(self, inputs, outputs):
            # assemble outputs
            outputs['con'] = inputs['M']

    def setup(self):
        self.add_subsystem('d1', d1(), promotes_inputs=['r', 'T'],
                           promotes_outputs=['M', 'cost'])
        self.add_subsystem('con_cmp', self.ConCmp(), promotes_inputs=['M'],
                           promotes_outputs=['con'])
        self.add_subsystem('obj_cmp', self.ObjCmp(), promotes_inputs=['cost'],
                           promotes_outputs=['obj'])

# Build the model
prob = om.Problem(model=MDA())
model = prob.model

model.add_design_var('r', lower=[3, 3, 3], upper=[10, 10, 10])
model.add_design_var('T', lower=20, upper=220)
model.add_objective('obj', scaler=1)
model.add_constraint('con', lower=0)

# Setup the optimization
prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP', tol=1e-3, disp=True)
prob.setup()
prob.set_solver_print(level=0)

prob.run_driver()

# Printout
print('minimum found at')
print(prob.get_val('T')[0])
print(prob.get_val('r'))
print('constraint')
print(prob.get_val('con')[0])
print('minimum objective')
print(prob.get_val('obj')[0])
Based on your provided test case, the problem here is that you have a really poorly scaled objective and constraint (you also have some very strange coding choices ... which I modified).
Running the OpenMDAO scaling report shows that your objective and constraint values are both around 1e6 in magnitude.
This is quite large, and it is the source of your problems. A (very rough) rule of thumb is that your objectives and constraints should be around order 1. That's not a hard and fast rule, but it is generally a good starting point. Sometimes other scaling will work better, for example if you have very large or very small derivatives, but parts of SQP methods are sensitive to the scaling of the objective and constraint values directly, so trying to keep them roughly in the range of 1 is a good idea.
Adding ref=1e6 to both the objective and the constraint gave enough resolution for the numerical methods to converge the problem:
Current function value: [0.229372]
Iterations: 8
Function evaluations: 8
Gradient evaluations: 8
Optimization Complete
-----------------------------------
minimum found at
20.00006826587515
[3.61138704 3. 3.61138704]
constraint
197.20821903413162
minimum objective
229371.99547899762
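For clarity on what ref does here: ref is the model value of the objective or constraint that the driver sees as 1.0 (with ref0 left at its default of 0, the driver value is simply the model value divided by ref). A short illustration using the same names as the modified code below:

# With ref=1e6 the driver sees model values divided by 1e6, so the
# converged cost of ~229372 appears as the 0.229372 reported above.
model.add_objective('cost', ref=1e6)
model.add_constraint('M', lower=0, ref=1e6)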
Here is the code I modified (including removing the extra class definitions inside your group that didn't seem to be doing anything):
import numpy as np
import openmdao.api as om

class d1(om.ExplicitComponent):
    def setup(self):
        # Global design variables
        self.add_input('r', val=[3, 3, 3])
        self.add_input('T', val=20)
        # Coupling output
        self.add_output('M', val=0)
        self.add_output('cost', val=0)

    def setup_partials(self):
        # Complex step all partials.
        self.declare_partials('*', '*', method='cs')

    def compute(self, inputs, outputs):
        # define inputs
        r = inputs['r']
        T = inputs['T'][0]
        cost = 174.42 * T * (r[0]**2 + 2*r[1]**2 + r[2]**2 + r[0]*r[1] + r[1]*r[2])
        M = 456.19 * T * (r[0]**2 + 2*r[1]**2 + r[2]**2 + r[0]*r[1] + r[1]*r[2]) - 599718
        outputs['M'] = M
        outputs['cost'] = cost

class MDA(om.Group):
    def setup(self):
        self.add_subsystem('d1', d1(), promotes_inputs=['r', 'T'],
                           promotes_outputs=['M', 'cost'])
        # self.add_subsystem('con_cmp', self.ConCmp(), promotes_inputs=['M'],
        #                    promotes_outputs=['con'])
        # self.add_subsystem('obj_cmp', self.ObjCmp(), promotes_inputs=['cost'],
        #                    promotes_outputs=['obj'])

# Build the model
prob = om.Problem(model=MDA())
model = prob.model

model.add_design_var('r', lower=[3, 3, 3], upper=[10, 10, 10])
model.add_design_var('T', lower=20, upper=220)
model.add_objective('cost', ref=1e6)
model.add_constraint('M', lower=0, ref=1e6)

# Setup the optimization
prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP', tol=1e-3, disp=True)
prob.setup()
prob.set_solver_print(level=0)

prob.set_val('r', 7.65)

prob.run_driver()

# Printout
print('minimum found at')
print(prob.get_val('T')[0])
print(prob.get_val('r'))
print('constraint')
print(prob.get_val('M')[0])
print('minimum objective')
print(prob.get_val('cost')[0])
Which SLSQP method are you using? There is one implementation in pyOptSparse and one in ScipyOptimizeDriver. The one in pyOptSparse is older and doesn't respect bounds constraints. The one in SciPy is newer and does. (Yes, it's very confusing that they have the same name and share some lineage... but they are not the same optimizer any more.)
You shouldn't raise an AnalysisError when you go outside the bounds. If you need strict bounds enforcement, I suggest using IPOPT from within pyOptSparse (if you can get it to compile) or switching to ScipyOptimizeDriver and its SLSQP implementation.
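A quick sketch of how each one is selected (the pyOptSparse form assumes pyoptsparse is installed; the names are the standard OpenMDAO driver classes):

import openmdao.api as om

# SciPy's SLSQP: the newer lineage, respects design-variable bounds
prob.driver = om.ScipyOptimizeDriver(optimizer='SLSQP')

# pyOptSparse's SLSQP: the older lineage, does not respect bounds
# prob.driver = om.pyOptSparseDriver()
# prob.driver.options['optimizer'] = 'SLSQP'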
Based on your question, it's not totally clear to me if you're talking about bounds constraints or inequality/equality constraints. If it's the latter, then there isn't any optimizer that can guarantee you remain in a feasible region all the time. Interior point methods like IPOPT will stay inside the region much better, but not 100% of the time.
In general, with gradient-based optimization it's pretty critical that you make your problem smooth and continuous even outside the constraint boundaries. If there are parts of the space that you absolutely cannot go into, then you need to make those quantities design variables and use bound constraints. This sometimes requires reformulating your problem a bit, possibly by adding a kind of compatibility constraint that says "design variable = computed value". Then you can make sure that the design variable is passed into anything that requires the value to be strictly within a bound, and (hopefully) a converged answer will also satisfy your compatibility constraint, as sketched below.
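A minimal sketch of that compatibility-constraint pattern with hypothetical names (y_dv is the bounded design-variable copy of the computed output y; assumes a recent OpenMDAO where unconnected promoted inputs are fed by an automatic IndepVarComp):

import openmdao.api as om

prob = om.Problem()
model = prob.model

# computes y from x; downstream code must only ever see a bounded y
model.add_subsystem('comp', om.ExecComp('y = x**2'), promotes=['x', 'y'])
# residual of the compatibility constraint: design-variable copy minus value
model.add_subsystem('compat', om.ExecComp('resid = y_dv - y'),
                    promotes=['y_dv', 'y', 'resid'])

model.add_design_var('x', lower=0.0, upper=5.0)
model.add_design_var('y_dv', lower=0.0, upper=10.0)  # strict bounds live here
model.add_constraint('resid', equals=0.0)            # y_dv == y at convergence
model.add_objective('y')

prob.setup()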
If you provide some kind of a test case or example, I can amend my answer with a more specific suggestion.

Difference of building constraints with BuildAction() vs normal way

As noted in the Pyomo documentation, BuildAction() is an advanced topic; the documentation additionally says that it is somewhat more efficient to build (or to solve?) constraints with BuildAction().
An example constraint generated with BuildAction():
m.const1 = Constraint([(t, a) for t in m.TIME for a in m.AREA],
                      noruleinit=True)

def const1_rule(m):
    for t in m.TIME:
        for a in m.AREA:
            lhs = some_vars_1[t, a]
            rhs = some_vars_2[t, a]
            m.const1.add((t, a), (lhs == rhs))

m.const1_build = BuildAction(rule=const1_rule)
So m.const1 = Constraint() builds the pyomo constraint without any rule via noruleinit=True.
Then m.const1_build = BuildAction() runs the function const1_rule, and in this function the constraints are added to m.const1 via .add().
An example constraint generated the normal way:
def const1_rule(m, t, a):
    lhs = some_vars_1[t, a]
    rhs = some_vars_2[t, a]
    return lhs == rhs

m.const1 = Constraint(m.TIME, m.AREA, rule=const1_rule)
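To make the comparison concrete, here is a self-contained sketch of both patterns on a toy model (hypothetical sets and variables standing in for some_vars_1/some_vars_2; assumes a Pyomo version that still accepts noruleinit, as used above):

import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.TIME = pyo.Set(initialize=[1, 2])
m.AREA = pyo.Set(initialize=['a', 'b'])
m.u = pyo.Var(m.TIME, m.AREA)  # stands in for some_vars_1
m.v = pyo.Var(m.TIME, m.AREA)  # stands in for some_vars_2

# BuildAction pattern: an empty constraint container, filled by a build rule
m.const1 = pyo.Constraint([(t, a) for t in m.TIME for a in m.AREA],
                          noruleinit=True)

def const1_build(m):
    for t in m.TIME:
        for a in m.AREA:
            m.const1.add((t, a), m.u[t, a] == m.v[t, a])

m.const1_action = pyo.BuildAction(rule=const1_build)

# Rule pattern: Pyomo calls the rule once per index
def const2_rule(m, t, a):
    return m.u[t, a] == m.v[t, a]

m.const2 = pyo.Constraint(m.TIME, m.AREA, rule=const2_rule)

m.pprint()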
Another application of BuildAction is the initialization of Pyomo model data from Python data structures, or the efficient initialization of Pyomo model data from other Pyomo model data.
QUESTIONS:
1) In what way is it more efficient?
2) What is the difference when building the constraints with BuildAction()?
3) Should I use it? Yes or no, and why?
4) If BuildAction() is better, how would I take advantage of it? (For example, assume BuildAction() works differently under the hood, and maybe because of that I don't need to create some Pyomo Sets or Params.)

Finding out reason of Pyomo model infeasibility

I have a Pyomo concrete model with lots of variables and constraints.
Somehow one of the variables in my model violates a constraint, which makes the model infeasible:
WARNING: Loading a SolverResults object with a warning status into model=xxxx;
message from solver=Model was proven to be infeasible.
Is there a way to ask the solver for the reason of the infeasibility?
For example, let's assume I have a variable called x; if I define the following two constraints, the model will of course be infeasible.
const1: x >= 10
const2: x <= 5
What I want is to point out the constraints and variables that cause this infeasibility, so that I can fix them; with my big model it is otherwise quite hard to see what is causing it.
IN: write_some_comment
OUT: variable "x" cannot fulfill "const1" and "const2" at the same time.
Many solvers (including IPOPT) will hand you back the value of the variables at solver termination, even if the problem was found infeasible. At that point, you do have some options.
There is contributed code in pyomo.util.infeasible that might help you out. https://github.com/Pyomo/pyomo/blob/master/pyomo/util/infeasible.py
Usage:
from pyomo.util.infeasible import log_infeasible_constraints
...
SolverFactory('your_solver').solve(model)
...
log_infeasible_constraints(model)
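One usage note (assuming current Pyomo behavior): the helper reports through Python's logging machinery at INFO level, so if nothing is printed, configure logging before calling it:

import logging
from pyomo.environ import SolverFactory
from pyomo.util.infeasible import log_infeasible_constraints

logging.basicConfig(level=logging.INFO)  # make the INFO-level report visible

SolverFactory('your_solver').solve(model)
log_infeasible_constraints(model)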
I would not trust any numbers that the solver loads into the model after reporting "infeasible." I don't think any solvers come with guarantees on the validity of those numbers. Further, unless a package can divine the modeler's intent, it isn't clear how it would list the infeasible constraints. Consider 2 constraints:
C1: x <= 5
C2: x >= 10
X ∈ Reals, or Integers, ...
Which is the invalid constraint? Well, it depends! The point being: it seems an impossible task to unwind the mystery based on the values the solver tried.
A possible alternative strategy: load the model with what you believe to be a valid solution, and test the slack on the constraints. This "loaded solution" could even be a null case where everything is zeroed out (if that makes sense in the context of the model). It could also be a set of known feasible solutions tried via unit-test code.
If you can construct what you believe to be a valid solution (forget about optimal, just something valid), you can (1) load those values, (2) iterate through the constraints in the model, (3) evaluate the constraints and look for negative slack, and (4) report the culprits with values and expressions.
An example:
import pyomo.environ as pe

test_null_case = True

m = pe.ConcreteModel('sour constraints')

# SETS
m.T = pe.Set(initialize=['foo', 'bar'])

# VARS
m.X = pe.Var(m.T)
m.Y = pe.Var()

# OBJ
m.obj = pe.Objective(expr=sum(m.X[t] for t in m.T) + m.Y)

# Constraints
m.C1 = pe.Constraint(expr=sum(m.X[t] for t in m.T) <= 5)
m.C2 = pe.Constraint(expr=sum(m.X[t] for t in m.T) >= 10)
m.C3 = pe.Constraint(expr=m.Y >= 7)
m.C4 = pe.Constraint(expr=m.Y <= sum(m.X[t] for t in m.T))

if test_null_case:
    # set values of all variables to a "known good" solution...
    m.X.set_values({'foo': 1, 'bar': 3})  # index: value
    m.Y.set_value(2)                      # scalar

    for c in m.component_objects(ctype=pe.Constraint):
        if c.slack() < 0:  # constraint is not met
            print(f'Constraint {c.name} is not satisfied')
            c.display()  # show the evaluation of c
            c.pprint()   # show the construction of c
            print()
else:
    pass
    # instantiate solver & solve, etc...
Reports:
Constraint C2 is not satisfied
C2 : Size=1
Key : Lower : Body : Upper
None : 10.0 : 4 : None
C2 : Size=1, Index=None, Active=True
Key : Lower : Body : Upper : Active
None : 10.0 : X[foo] + X[bar] : +Inf : True
Constraint C3 is not satisfied
C3 : Size=1
Key : Lower : Body : Upper
None : 7.0 : 2 : None
C3 : Size=1, Index=None, Active=True
Key : Lower : Body : Upper : Active
None : 7.0 : Y : +Inf : True