Using conditional statements with the CPLEX solver - optimization

Is it possible to use conditional statements such as "if (...) then ..." in AMPL? Below is what I tried.
subject to c1a {k in K, o in O, n in N: n!=t[k,o]}:
sum{e in E}
(a[n,e]*x[e,k,o]) -
sum{e in E}
(b[n,e]*x[e,k,o]) =
(if (r[n,k]==1 and f[n,o]==1)
then d[k,o]*(1-f[k,o])
else 0);
My AMPL returns the following error:
CPLEX 11.2.0: Constraint _scon[1] is not convex quadratic since it is an equality constraint.
Do you have any idea how to resolve this problem?

It is possible to use an if-then-else expression with CPLEX if the condition (the expression between if and then) doesn't contain variables. CPLEX also supports so-called "indicator constraints" (see here for more details), which use the implication operator (==>) and are somewhat similar to if-then-else but allow variables in the condition.
Regarding your example, it is not clear which names correspond to variables and which to parameters, but the error suggests that the problem is not due to if-then-else; rather, you have a quadratic constraint in a form not supported by CPLEX (see the section Quadratic Constraints on page 33 of the ILOG AMPL CPLEX System User's Guide for information about the accepted form).

You may want to change your solver: CPLEX only deals with convex quadratic constraints, and it is used by default in AMPL. So you can try reloading your .mod and .dat files and then choosing another solver as follows:
ampl: option solver ipopt;
ampl: solve;
or
ampl: option solver couenne;
ampl: solve;

Related

Model is infeasible in Gurobi although it has a feasible solution

I am attempting to solve a non-convex quadratic optimization problem using Gurobi, but I have encountered an issue. I have a particular objective function; however, I am only interested in finding a feasible solution. To do this, I tried two ways:
1- set my specific objective function as the model objective and set the parameter "SolutionLimit" to 1. This works fine, and Gurobi gives me a feasible solution.
2- give Gurobi no objective function (or set the objective to some arbitrary number like 0). In this case, Gurobi returns no feasible solution. The log it prints says:
Optimal solution found (tolerance 1.00e-04)
Warning: max constraint violation (1.5757e+01) exceeds tolerance
(model may be infeasible or unbounded - try turning presolve off)
Best objective -0.000000000000e+00, best bound -0.000000000000e+00, gap 0.0000%
I checked the solution it returned, and it is infeasible. I want the second method to work too. I have attempted to modify the solver parameters (such as "m.ModelSense = GRB.MAXIMIZE," "m.params.MIPFocus = 3," "m.params.NoRelHeurTime = 200," "m.params.DualReductions = 0," "m.params.Presolve = 2," and "m.params.Crossover = 0") in an effort to resolve this issue but have been unsuccessful. Are there any other parameters that I can adjust in order to successfully solve this problem?
This model has numerical issues; to understand more, please see Guidelines for Numerical Issues in the Gurobi Reference Manual.

Defining OR-constraint in mixed-integer problem with SCIP

I'm trying to use the Python interface of the SCIP tool (https://github.com/scipopt/PySCIPOpt) to solve a mixed-integer optimization problem.
I want to define an OR-constraint with three constraints, but only one of them must be satisfied.
For example, I want to minimize a variable x with three constraints x>=1, x>=2, x>=3, but only one of them must be valid, and then minimize the value of x. Of course the result should be x=1.
However, the OR-constraint API addConsOr requires both the constraint list and a result variable (resvar, the resultant variable of the operation). While I can provide the list of constraints, I don't know the meaning of the result variable in the second function parameter. When I set the second parameter to a new variable, the following code results in a segmentation fault.
from pyscipopt import Model
model = Model()
x = model.addVar(vtype="I")
b = model.addVar(vtype="B")
model.addConsOr([x>=1, x>=2, x>=3], b)
model.setObjective(x, "minimize")
model.optimize()
print("Optimal value:", model.getObjVal())
Setting the second parameter to True also causes a segmentation fault.
model.addConsOr([x>=1, x>=2, x>=3], True)
What you are describing is not an OR-constraint. An OR-constraint takes a set of binary variables and constrains the resultant variable to be the OR of their values, as explained in the SCIP documentation.
What you want is a general disjunctive constraint. Those exist in SCIP as SCIPcreateConsDisjunction but are not wrapped in the Python API yet. Fortunately, you can extend the API yourself quite easily: add the correct function to scip.pxd and define the wrapper in scip.pyx. Just look at how it is done for the existing constraint types and do it the same way. The people over at the PySCIPOpt GitHub will be happy if you create a pull request with your changes.
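In the meantime, a disjunction like the one in the question can also be modeled with binary indicator variables and a big-M reformulation. Here is a minimal pure-Python brute-force sketch (no SCIP required) that just illustrates the reformulation, not the SCIP API:

```python
# Big-M reformulation of the disjunction x>=1 OR x>=2 OR x>=3:
#   x >= c_i - M*(1 - y_i)  for each alternative c_i (inactive if y_i = 0)
#   y_1 + y_2 + y_3 >= 1    (at least one alternative must be active)
# Brute-force over small integer x and binary y to find the minimum x.
from itertools import product

M = 100          # big-M constant, larger than any relevant bound
cs = [1, 2, 3]   # right-hand sides of the three alternatives

best = None
for x in range(0, 10):
    for ys in product((0, 1), repeat=3):
        if sum(ys) < 1:          # at least one alternative active
            continue
        if all(x >= c - M * (1 - y) for c, y in zip(cs, ys)):
            best = x if best is None else min(best, x)

print(best)  # minimum feasible x under the disjunction -> 1
```

In a real model you would add the y variables as binaries and the big-M rows as ordinary linear constraints, which any MIP solver handles.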

Redefining a variable

I am using AMPL for the optimization of my model and have just started with the project.
I have two variables, say A and B that I utilize in my objective function:
A[d,t]*costA-B[d,t]*costB
Later on I have the following constraint:
G[d,t]-U[d,t]-R[d,t]=A[d,t]
Here I realized that I can use just A, but the problem is that, depending on whether this variable is positive or negative, I should use costA or costB.
My question is: can I redefine A[d,t] as B[d,t] when A[d,t] is less than 0? And if I can, how can I do it? Or is there another way?
I think what you are after is something like (in some math-like notation):
min sum((d,t), APlus[d,t]*CostA + AMin[d,t]*CostB)
s.t. A[d,t] = APlus[d,t]-AMin[d,t]
positive variables APlus, AMin
This is called "variable splitting".
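To see what variable splitting does: any free value A can be written as APlus - AMin with both parts nonnegative, so each direction can carry its own cost. A small pure-Python sketch with hypothetical per-unit costs:

```python
def split(a):
    """Split a free value into nonnegative parts with a = a_plus - a_min."""
    return (a, 0.0) if a >= 0 else (0.0, -a)

cost_a, cost_b = 2.0, 5.0  # hypothetical costs for the two directions

for a in (3.0, -4.0):
    a_plus, a_min = split(a)
    assert a_plus >= 0 and a_min >= 0 and a_plus - a_min == a
    cost = a_plus * cost_a + a_min * cost_b
    print(a, "->", a_plus, a_min, "cost:", cost)
```

In the optimization model you don't call a function like this, of course; you just declare APlus and AMin as nonnegative variables. At an optimum of a minimization with positive costs, at most one of the two parts is nonzero, which is exactly what split mimics here.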

Maximum Likelihood Estimation of a log function with several parameters

I am trying to find out the parameters for the function below:
$$
\log L(\alpha,\beta,v) = \frac{v}{\beta}\left(e^{-\beta T} - 1\right) + \frac{\alpha}{\beta} \sum_{i=1}^{n}\left(e^{-\beta(T-t_i)} - 1\right) + \sum_{i=1}^{N}\log\Big(v e^{-\beta t_i} + \alpha \sum_{j=1}^{j_{\max}(t_i)} e^{-\beta(t_i - t_j)}\Big).
$$
However, the conventional methods like fmin, fminsearch are not converging properly. Any suggestions on any other methods or open libraries which I can use?
I was trying CVXPY, but it doesn't support division by a variable in the expression.
The problem may not be convex (I have not verified this but it could be why CVXPY refused it). We don't have the data so we cannot try things out, but I can give some general advice:
Provide exact gradients (and second derivatives if needed), or use a modeling system with automatic differentiation. First derivatives especially should be quite precise; with finite differences you may lose half the precision.
Provide a good starting point, perhaps obtained with an alternative estimation method.
Some solvers can use bounds on the variables to restrict the feasible region where functions will be evaluated. This can be used to restrict the search to interesting areas only and also to protect operations like division and log functions.
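To illustrate the first and third points on a much simpler stand-in problem (MLE of an exponential rate, not the likelihood above): the exact gradient of the negative log-likelihood is available in closed form, and a lower bound on the parameter protects the log. A stdlib-only sketch:

```python
import math

data = [0.8, 1.3, 0.4, 2.1, 0.9]   # toy sample
n, S = len(data), sum(data)

def nll(lam):
    # negative log-likelihood of an Exp(lam) sample:
    # -sum(log lam - lam*x_i) = -n*log(lam) + lam*S
    return -n * math.log(lam) + lam * S

def grad(lam):
    # exact derivative, no finite differences needed
    return -n / lam + S

lam, step = 1.0, 0.05          # reasonable starting point and step size
for _ in range(2000):
    lam -= step * grad(lam)    # gradient descent with the exact gradient
    lam = max(lam, 1e-9)       # bound keeps log(lam) well defined

print(lam, n / S)  # descent result vs. closed-form MLE n/S
```

Here the answer is known in closed form (the MLE is n/S, the reciprocal of the sample mean), which makes it easy to check that exact gradients plus a bound converge cleanly; for the likelihood in the question you would derive the gradient terms the same way or let an AD-based modeling system do it.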

SAT Solving for Optimization

Suppose you have a CNF formula with some variables marked special.
Is there a way to make a SAT Solver (say, minisat) find a solution maximizing the number of special variables assigned to true?
What you (I) want is called Partial MaxSAT. There is a solver called QMaxSAT, which seems to work well enough.
Not sure if all of these can handle the indication of special variables, but at least wikipedia gives some direction for the search:
There are several solvers submitted to the last Max-SAT Evaluations:
Branch and Bound based: Clone, MaxSatz (based on Satz), IncMaxSatz, IUT_MaxSatz, WBO.
Satisfiability based: SAT4J, QMaxSat.
Unsatisfiability based: msuncore, WPM1, PM2.
Checking the description for all of them should be manageable.
You can use a PBC solver such as MiniSat+ (http://minisat.se/MiniSat+.html).
These solvers take regular CNF files with additional constraints called pseudo-Boolean constraints, and MiniSat+ also supports optimization over such constraints, which, as far as I understand, solves your problem.
Let x1, .... xn be the variables whose number of true assignments you want to maximize. Then you can define the objective
maximize +1 x1 ..... +1 xn
and MiniSat+ solves such optimization problems.
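For intuition about what Partial MaxSAT computes, a tiny instance can be brute-forced in pure Python: every clause must hold (the hard part), and among satisfying assignments we maximize how many special variables are true (the soft part). The clauses below are hypothetical, chosen just for illustration:

```python
from itertools import product

# Clauses in DIMACS style: positive int = variable, negative = its negation.
# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
clauses = [(1, -2), (2, 3), (-1, -3)]
special = {1, 2, 3}          # maximize how many of these are true
n_vars = 3

def satisfies(assign, clause):
    # assign maps var -> bool; a clause holds if any literal is true
    return any(assign[abs(lit)] == (lit > 0) for lit in clause)

best, best_assign = -1, None
for bits in product((False, True), repeat=n_vars):
    assign = {i + 1: b for i, b in enumerate(bits)}
    if all(satisfies(assign, c) for c in clauses):       # hard clauses
        score = sum(assign[v] for v in special)          # soft objective
        if score > best:
            best, best_assign = score, assign

print(best, best_assign)  # at most 2 of the 3 specials can be true here
```

A Partial MaxSAT or pseudo-Boolean solver does the same optimization without enumeration, scaling to formulas far beyond brute force.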