Gurobi's optimal solution violates constraint by 0.29%

We are solving a large-scale MIQCQP problem:
Variables: 7.3K (= 3.7K continuous + 3.6K integer)
Objective: Linear
Constraints: 14.8K Linear Constraints + 1 Quadratic Constraint (Q matrix size = 3.7K * 3.7K)
Framework: Cvxpy (version: 1.1.13)
Solver: Gurobi (version: 9.0.3)
Gurobi finishes the optimization and returns an "optimal" solution, but those optimal variable values violate the quadratic constraint by 0.014 (in percentage terms, 0.29%).
Note: the violation is higher than Gurobi's default feasibility tolerance (= 1e-6).
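One way to investigate (a minimal sketch on a toy stand-in; FeasibilityTol and NumericFocus are real Gurobi parameters, but whether they remove the violation on the actual model is untested) is to tighten Gurobi's tolerance through CVXPY's solve() kwargs and re-check the residual that CVXPY computes at the returned point:
import cvxpy as cp
import numpy as np

# Toy stand-in for the real MIQCQP: one convex quadratic constraint.
n = 5
Q = np.eye(n)
x = cp.Variable(n)
quad_con = cp.quad_form(x, Q) <= 1
prob = cp.Problem(cp.Maximize(cp.sum(x)), [quad_con, x >= 0])

# CVXPY forwards extra solve() kwargs to Gurobi as solver parameters.
prob.solve(solver=cp.GUROBI,
           FeasibilityTol=1e-9,  # default is 1e-6
           NumericFocus=3)       # maximum numerical care

# Residual of the quadratic constraint at the returned point.
print("violation:", quad_con.violation())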

Related

Mosek runs indefinitely on large MIQCQP

We have a large-scale MIQCQP problem. Problem size:
Decision vars: ~9K (with 3K continuous and 6K integer vars)
Objective: 1 Linear expression
Constraints (linear): 35K linear constraints (9K lower bound + 9K upper bound + remaining inequality constraints)
Constraints (Quadratic): 1 quad constraint (with Q matrix size as 3K*3K, which is PSD)
When we use Mosek (via Cvxpy), it runs indefinitely (in the branch & bound logic). Moreover, in the Mosek logs, BEST_INT_OBJ and REL_GAP(%) are displayed as NA throughout.
Since this problem contains proprietary data, it's difficult to share it.
Are there any general tips or tricks to speed up the problem?
(Weirdly, Gurobi can solve the same problem in less than a minute.)
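Not an answer to the root cause, but one general device (a sketch on a tiny stand-in model; MSK_DPAR_MIO_MAX_TIME and MSK_DPAR_MIO_TOL_REL_GAP are real MOSEK parameters) is to cap the branch & bound through CVXPY's mosek_params, so the run at least terminates and reports whatever incumbent and gap it has:
import cvxpy as cp

# Tiny stand-in MIQCQP (the real model is proprietary, per the question).
x = cp.Variable(3, integer=True)
y = cp.Variable(3)
prob = cp.Problem(cp.Minimize(cp.sum(x) + cp.sum(y)),
                  [cp.sum_squares(y) <= 4, x >= y, x <= 10, y >= 0])

# mosek_params forwards native MOSEK parameters through CVXPY.
prob.solve(solver=cp.MOSEK, verbose=True,
           mosek_params={
               "MSK_DPAR_MIO_MAX_TIME": 600.0,    # hard time limit (seconds)
               "MSK_DPAR_MIO_TOL_REL_GAP": 1e-2,  # stop at a 1% relative gap
           })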

CVXPY with MOSEK solver: how do I find the constraints corresponding to the Mosek "index"?

I am solving an SDP in cvxpy with MOSEK as a solver.
My problem is infeasible, and MOSEK has the nice feature that it provides an "Infeasibility Report". In my case, the report looks like this:
MOSEK PRIMAL INFEASIBILITY REPORT.
Problem status: The problem is primal infeasible
The following constraints are involved in the primal infeasibility.
Index Name Lower bound Upper bound Dual lower Dual upper
37 none -0.000000e+00 0.000000e+00 2.647059e-03
406 none 3.000000e+02 0.000000e+00 6.250000e-04
2364 none -0.000000e+00 0.000000e+00 6.183824e-03
2980 none -8.100000e-01 0.000000e+00 1.000000e+00
3049 -0.000000e+00 -0.000000e+00 0.000000e+00 4.235294e+00
3052 -0.000000e+00 -0.000000e+00 0.000000e+00 1.000000e+00
I would like to find out which constraints this report is referring to. My constraint list in cvxpy only contains 105 constraints, but many of those are matrix or vector constraints. This explains why the indices reported by MOSEK go up to 3052. However, it makes it hard to find out which of my constraints are listed in the report.
Is there a way to find out which of my cvxpy constraints are reported by MOSEK?
I was using Mosek via its Cvxpy interface and faced the same issue.
My hypothesis is that the constraint ordering in Mosek's infeasibility report is exactly the same as in Cvxpy, because:
I tested it on 2 sample infeasible problems (where I knew a priori which constraints cause the infeasibility) and found that the hypothesis holds.
I took a quick look at the Cvxpy-to-Mosek conversion code in the cvxpy codebase and found that cvxpy doesn't change the constraint ordering.
So my conclusion is that the hypothesis holds.
Please note: this conclusion is based on a quite small test set plus a naive understanding of the cvxpy codebase, so there is a small chance it is wrong.
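If the hypothesis holds, the index-to-constraint mapping can be automated (a sketch under exactly that assumption; locate_constraint is a made-up helper name): walk the problem's constraints in order, accumulating their scalar sizes, until the flat MOSEK row index falls inside one of them.
import cvxpy as cp

def locate_constraint(problem, mosek_index):
    # Assumes CVXPY keeps constraint order and expands each matrix/vector
    # constraint into `constraint.size` consecutive scalar rows.
    offset = 0
    for con in problem.constraints:
        if offset <= mosek_index < offset + con.size:
            return con, mosek_index - offset  # the constraint + row within it
        offset += con.size
    raise IndexError("index %d exceeds total scalar constraints" % mosek_index)

# e.g. locate_constraint(prob, 2980) for the fourth row of the report above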

Which solver can solve Fractional Linear (Costfunction) with quadratical non-convex equality constraints?

First of all, I am a noob in optimization. I have the following problem:
I have the optimization vector x=(x1, x2, x3, x4, x5, x6). The cost function is:
min. (x3+x4)/x6
The constraints are:
- quadratic equality constraints, e.g.:
k1*x5^2 + k2*x6 = k3*x5 + k4*x5 + k5*x1^2
- xmin < x < xmax
- some other linear constraints...
My biggest problem is to find a suitable solver for this problem. I already found the concept of Fractional Linear Programming by Boyd: https://web.stanford.edu/~boyd/cvxbook/bv_cvxslides.pdf (4-20)
However, it requires linear constraints. I also found heuristic methods to solve quadratic equality constrained problems: https://pdfs.semanticscholar.org/6008/57c54df025e732238425cf55f55997b4a67c.pdf
https://web.stanford.edu/~boyd/papers/pdf/qcqp.pdf
However, I think they cannot be combined with linear-fractional programming.
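For concreteness, the linear-fractional trick from the Boyd slides (the Charnes-Cooper transform) does the following to the objective above, assuming x6 > 0 and only linear constraints Ax <= b:
\min_x \; \frac{x_3 + x_4}{x_6} \quad \text{s.t.} \quad Ax \le b,\; x_6 > 0
\qquad \xrightarrow{\; t = 1/x_6,\; y = t\,x \;} \qquad
\min_{y,\,t} \; y_3 + y_4 \quad \text{s.t.} \quad Ay \le b\,t,\; y_6 = 1,\; t > 0
The result is an ordinary LP. Under the same substitution, however, a quadratic equality like the one above picks up bilinear terms in (y, t) and stays non-convex, which is exactly why the slides require linear constraints.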
I would be very glad if someone could mention any solution to this problem.
best regards
Leo
I tried to linearize the constraints around different random points and took the result with the lowest cost. However, the solution does not fulfill the quadratic equality constraints.

AMPL IPOPT gives wrong optimal solution while solve result is "solved"

I am trying to solve a very simple optimization problem in AMPL with IPOPT as follow:
var x1 >= 0 ;
minimize obj: -(x1^2)+x1;
Obviously the problem is unbounded, but IPOPT gives me:
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************
This is Ipopt version 3.12.4, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 1
Total number of variables............................: 1
variables with only lower bounds: 1
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 9.8999902e-03 0.00e+00 2.00e-02 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.5346023e-04 0.00e+00 1.50e-09 -3.8 9.85e-03 - 1.00e+00 1.00e+00f 1
2 1.7888952e-06 0.00e+00 1.84e-11 -5.7 1.52e-04 - 1.00e+00 1.00e+00f 1
3 -7.5005506e-09 0.00e+00 2.51e-14 -8.6 1.80e-06 - 1.00e+00 1.00e+00f 1
Number of Iterations....: 3
(scaled) (unscaled)
Objective...............: -7.5005505996934397e-09 -7.5005505996934397e-09
Dual infeasibility......: 2.5091040356528538e-14 2.5091040356528538e-14
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 2.4994494940593761e-09 2.4994494940593761e-09
Overall NLP error.......: 2.4994494940593761e-09 2.4994494940593761e-09
Number of objective function evaluations = 4
Number of objective gradient evaluations = 4
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 3
Total CPU secs in IPOPT (w/o function evaluations) = 0.001
Total CPU secs in NLP function evaluations = 0.000
EXIT: Optimal Solution Found.
Ipopt 3.12.4: Optimal Solution Found
suffix ipopt_zU_out OUT;
suffix ipopt_zL_out OUT;
ampl: display x1;
x1 = 0
when I change the solver to Gurobi, it gives this message:
Gurobi 6.5.0: unbounded; variable.unbdd returned.
which is what I expected.
I cannot understand why this happens, and now I don't know whether I need to check every problem I solve to make sure it has not converged to a wrong "optimal" solution. As this is a super simple example, it is a little bit strange.
I would appreciate if anybody can help me with this.
Thanks
You've already identified the basic problem, but elaborating a little on why these two solvers give different results:
IPOPT is designed to cover a wide range of optimisation problems, so it uses some fairly general numeric optimisation methods. I'm not familiar with the details of IPOPT, but usually this sort of approach relies on picking a starting point, looking at the curvature of the objective function in the neighbourhood of that starting point, and following the curvature "downhill" until it finds a local optimum. Different starting points can lead to different results. In this case IPOPT is probably defaulting to zero for the starting point, so it's right on top of that local minimum. As Erwin suggested, if you specify a different starting point it might find the unboundedness.
Gurobi is designed specifically for quadratic/linear problems, so it uses very different methods which aren't susceptible to local-minimum issues, and it will probably be much more efficient for quadratics. But it doesn't support more general objective functions.
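To illustrate the starting-point suggestion in AMPL (a sketch; x1 = 1 is an arbitrary point to the right of the stationary point at x1 = 0.5):
# restart from a point where "downhill" leads to -infinity
let x1 := 1;
solve;   # IPOPT should now diverge instead of stopping at x1 = 0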
I think I understand why it happened. The objective function
-(x1^2)+x1;
is not convex, therefore the given solution is only a local optimum.
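A first-order check makes this precise:
f(x_1) = -x_1^2 + x_1, \qquad f'(x_1) = 1 - 2x_1, \qquad f'(0) = 1 > 0
Since f'(0) = 1 > 0, the objective initially increases as x1 moves away from the active bound x1 >= 0, so x1 = 0 is a genuine local minimum; yet f(x1) goes to -infinity as x1 grows, so the problem is unbounded globally. A local solver such as IPOPT is entitled to stop at the former.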

How to frame a large scale optimization in python calling a SCIP solver

I'm trying to use SCIP through Python and I have installed SCIP Optimization Suite 3.2.1. I have a problem framing my optimization question through PYSCIPOPT. As I have 2000+ variables, I am wondering whether I can use matrix notation to frame the question in Python?
No, this is not possible, because SCIP is constraint based and does not rely on a central matrix structure. A problem with 2000 variables is not at all large, by the way, and should be processed within a second.
This is how you would transform a quadratic constraint matrix Q of size 2:
Q = [a b;c d], x = [x1; x2]
x'Qx = ax1^2 + dx2^2 + (b+c)x1x2
This can then be passed to SCIP with the addCons() method.
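A sketch of that expansion with PySCIPOPT (the helper name and the 2x2 data are illustrative):
from pyscipopt import Model, quicksum

def add_quad_constraint(model, x, Q, rhs):
    # Expand x'Qx term by term, since SCIP has no matrix interface.
    n = len(x)
    expr = quicksum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    model.addCons(expr <= rhs)

model = Model()
x = [model.addVar(name="x%d" % i) for i in range(2)]
Q = [[1.0, 0.5], [0.5, 2.0]]   # i.e. a=1, b=c=0.5, d=2 in the notation above
add_quad_constraint(model, x, Q, 10.0)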