CVXPY with MOSEK solver: how do I find the constraints corresponding to the Mosek "index"? - optimization

I am solving an SDP in cvxpy with MOSEK as a solver.
My problem is infeasible, and MOSEK has the nice feature that it provides an "Infeasibility Report". In my case, the report looks like this:
MOSEK PRIMAL INFEASIBILITY REPORT.
Problem status: The problem is primal infeasible
The following constraints are involved in the primal infeasibility.
Index Name Lower bound Upper bound Dual lower Dual upper
37 none -0.000000e+00 0.000000e+00 2.647059e-03
406 none 3.000000e+02 0.000000e+00 6.250000e-04
2364 none -0.000000e+00 0.000000e+00 6.183824e-03
2980 none -8.100000e-01 0.000000e+00 1.000000e+00
3049 -0.000000e+00 -0.000000e+00 0.000000e+00 4.235294e+00
3052 -0.000000e+00 -0.000000e+00 0.000000e+00 1.000000e+00
I would like to find out which constraints this report is referring to. My constraint list in cvxpy only contains 105 constraints, but many of those are matrix or vector constraints. This explains why the indices reported by MOSEK go up to 3052. However, it makes it hard to find out which of my constraints are listed in the report.
Is there a way to find out which of my cvxpy constraints are reported by MOSEK?

I was using MOSEK via its CVXPY interface and ran into the same issue.
My hypothesis is that the ordering of constraints in MOSEK's infeasibility report is exactly the same as in CVXPY, because:
I tested it on 2 sample infeasible problems (where I knew a priori which constraints were causing the infeasibility) and found that the hypothesis holds.
I took a quick look at the CVXPY-to-MOSEK conversion code in the cvxpy codebase and found that cvxpy does not change the constraint ordering.
So my conclusion is that the hypothesis holds.
Please note: this conclusion is based on a quite small test set plus a naive understanding of the cvxpy codebase, so there is a small chance that it is wrong.
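If the ordering hypothesis above holds, a flat MOSEK row index can be mapped back to a CVXPY constraint by accumulating the scalar sizes of the constraints. A minimal sketch (the `sizes` list is a stand-in for `[c.size for c in problem.constraints]`, and the helper name `locate` is made up here):

```python
import bisect
from itertools import accumulate

def locate(flat_index, sizes):
    """Map a flat scalar-row index to (constraint position, offset inside it)."""
    ends = list(accumulate(sizes))              # cumulative end row of each constraint block
    k = bisect.bisect_right(ends, flat_index)   # first block that ends after flat_index
    start = ends[k - 1] if k > 0 else 0
    return k, flat_index - start

# Example: a size-4 vector constraint, a 3x3 matrix constraint (9 rows), a size-2 vector.
sizes = [4, 9, 2]
print(locate(0, sizes))    # -> (0, 0): first scalar row of constraint 0
print(locate(5, sizes))    # -> (1, 1): second scalar row of the matrix constraint
print(locate(13, sizes))   # -> (2, 0): first scalar row of the last constraint
```

With this, MOSEK's index 2980 would fall into whichever constraint's cumulative row range contains row 2980.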

Related

Getting "DUAL_INFEASIBLE" when solving a very simple linear programming problem

I am solving a simple LP problem using Gurobi with dual simplex and presolve. Gurobi reports that the model is unbounded, but I cannot see why such a model would be unbounded. Can anyone help me find where it goes wrong?
I attached the log and also the content in the .mps file.
Thanks very much in advance.
Kind regards,
Hongyu.
The output log and .mps file:
Link to the .mps file: https://studntnu-my.sharepoint.com/:u:/g/personal/hongyuzh_ntnu_no/EV5CBhH2VshForCL-EtPvBUBiFT8uZZkv-DrPtjSFi8PGA?e=VHktwf
Gurobi Optimizer version 9.5.2 build v9.5.2rc0 (mac64[arm])
Thread count: 8 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 1 rows, 579 columns and 575 nonzeros
Coefficient statistics:
Matrix range [3e-02, 5e+01]
Objective range [7e-01, 5e+01]
Bounds range [0e+00, 0e+00]
RHS range [7e+03, 7e+03]
Iteration Objective Primal Inf. Dual Inf. Time
0 handle free variables 0s
Solved in 0 iterations and 0.00 seconds (0.00 work units)
Unbounded model
The easiest way to debug this is to put a bound on the objective, so the model is no longer unbounded. Then inspect the solution. This is a super easy trick that somehow few people know about.
When we do this with a bound of 100000, we see:
phi = 100000.0000
gamma[11] = -1887.4290
(the rest are zero). Indeed, we can make gamma[11] as negative as we want while still obeying R0. Note that gamma[11] is not in the objective.
More advice: it is also useful to write out the LP file of the model and study it carefully. You probably would have caught the error yourself, which would have prevented this post.
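The bounding trick is solver-agnostic. A minimal sketch with SciPy's `linprog` (an illustrative toy LP, not the poster's .mps model): the unbounded run is reproduced first, then an artificial cap is added and the variable sitting at the cap exposes the unbounded direction.

```python
from scipy.optimize import linprog

c = [-1.0, 1.0]          # minimize -x1 + x2, i.e. push x1 up; x2 is a bystander

# With x1 unbounded above, the LP is unbounded (HiGHS status code 3).
res_unbounded = linprog(c, bounds=[(0, None), (0, None)])
print(res_unbounded.status)

# Debugging trick: cap x1, re-solve, and inspect which variable hits the cap.
res_capped = linprog(c, bounds=[(0, 100000), (0, None)])
print(res_capped.x)      # x1 sits at 100000 -> x1 is the unbounded direction
```

The same pattern works in Gurobi by adding a single constraint bounding the objective expression.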

Gurobi's optimal solution violates constraint by 0.29%

We are solving a large-scale MIQCQP problem:
Variables: 7.3K (= 3.7K continuous + 3.6K integer)
Objective: Linear
Constraints: 14.8K Linear Constraints + 1 Quadratic Constraint (Q matrix size = 3.7K * 3.7K)
Framework: Cvxpy (version: 1.1.13)
Solver: Gurobi (version: 9.0.3)
Gurobi finishes the optimization and returns an optimal solution, but the optimal variable values violate the quadratic constraint by 0.014 (in relative terms, 0.29%).
Note: the violation is higher than Gurobi's default feasibility tolerance (1e-6).
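Whatever the solver reports, a violation like this can be measured directly from the returned solution; note that Gurobi applies its feasibility tolerance to the presolved, scaled model it actually solves, so violations measured on the original data can come out larger. A sketch with NumPy, where `Q`, `a`, `x` and `b` are toy placeholders for the actual quadratic constraint x'Qx + a'x <= b:

```python
import numpy as np

def qc_violation(x, Q, a, b):
    """Absolute and relative violation of the constraint x'Qx + a'x <= b at x."""
    lhs = x @ Q @ x + a @ x
    abs_viol = max(0.0, lhs - b)
    rel_viol = abs_viol / max(1.0, abs(b))
    return abs_viol, rel_viol

# Toy check: x'Qx = 6 and a'x = -1, so the left-hand side is 5 against b = 4.
Q = np.array([[2.0, 0.0], [0.0, 1.0]])
a = np.array([1.0, -1.0])
x = np.array([1.0, 2.0])
print(qc_violation(x, Q, a, 4.0))   # -> (1.0, 0.25)
```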

AMPL IPOPT gives wrong optimal solution while solve result is "solved"

I am trying to solve a very simple optimization problem in AMPL with IPOPT, as follows:
var x1 >= 0 ;
minimize obj: -(x1^2)+x1;
Obviously the problem is unbounded, but IPOPT gives me:
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************
This is Ipopt version 3.12.4, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 1
Total number of variables............................: 1
variables with only lower bounds: 1
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 9.8999902e-03 0.00e+00 2.00e-02 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.5346023e-04 0.00e+00 1.50e-09 -3.8 9.85e-03 - 1.00e+00 1.00e+00f 1
2 1.7888952e-06 0.00e+00 1.84e-11 -5.7 1.52e-04 - 1.00e+00 1.00e+00f 1
3 -7.5005506e-09 0.00e+00 2.51e-14 -8.6 1.80e-06 - 1.00e+00 1.00e+00f 1
Number of Iterations....: 3
(scaled) (unscaled)
Objective...............: -7.5005505996934397e-09 -7.5005505996934397e-09
Dual infeasibility......: 2.5091040356528538e-14 2.5091040356528538e-14
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 2.4994494940593761e-09 2.4994494940593761e-09
Overall NLP error.......: 2.4994494940593761e-09 2.4994494940593761e-09
Number of objective function evaluations = 4
Number of objective gradient evaluations = 4
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 3
Total CPU secs in IPOPT (w/o function evaluations) = 0.001
Total CPU secs in NLP function evaluations = 0.000
EXIT: Optimal Solution Found.
Ipopt 3.12.4: Optimal Solution Found
suffix ipopt_zU_out OUT;
suffix ipopt_zL_out OUT;
ampl: display x1;
x1 = 0
when I change the solver to Gurobi, it gives this message:
Gurobi 6.5.0: unbounded; variable.unbdd returned.
which is what I expected.
I cannot understand why this happens, and now I do not know whether I need to check every problem I solve to make sure it has not converged to a wrong "optimal" solution. As this is a super simple example, it is a little bit strange.
I would appreciate it if anybody could help me with this.
Thanks
You've already identified the basic problem, but elaborating a little on why these two solvers give different results:
IPOPT is designed to cover a wide range of optimisation problems, so it uses some fairly general numeric optimisation methods. I'm not familiar with the details of IPOPT, but usually this sort of approach relies on picking a starting point, looking at the curvature of the objective function in the neighbourhood of that starting point, and following the curvature "downhill" until it finds a local optimum. Different starting points can lead to different results. In this case IPOPT is probably defaulting to zero for the starting point, so it's right on top of that local minimum. As Erwin suggested, if you specify a different starting point it might find the unboundedness.
Gurobi is designed specifically for quadratic/linear problems, so it uses very different methods which aren't susceptible to local-minimum issues, and it will probably be much more efficient for quadratics. But it doesn't support more general objective functions.
I think I understand why it happened. The objective function
-(x1^2)+x1
is not convex; therefore the given solution is only a local optimum.
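This can be checked by hand: with f(x1) = -x1^2 + x1 and the bound x1 >= 0, the derivative at zero is f'(0) = 1 > 0, so x1 = 0 satisfies the KKT conditions (the active bound blocks the only descent direction) and is a genuine local minimum, even though f is unbounded below. A quick numeric confirmation:

```python
def f(x):           # the AMPL objective: -(x1^2) + x1
    return -(x ** 2) + x

def fprime(x):      # its derivative: -2*x1 + 1
    return -2 * x + 1

# KKT at the active bound x1 = 0: the gradient must point into the feasible set.
print(fprime(0.0) > 0)             # True, so x1 = 0 is a local minimum

# ...yet the problem is unbounded below: f keeps decreasing as x1 grows.
print(f(0.0), f(10.0), f(1000.0))  # 0.0 -90.0 -999000.0
```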

presolved in MILP

Presolve: All rows and columns removed
Iteration Objective Primal Inf. Dual Inf. Time
0 9.9086144e-01 0.000000e+00 0.000000e+00 0s
16 9.9086144e-01 0.000000e+00 0.000000e+00 0s
Solved in 16 iterations and 0.00 seconds
Optimal objective 9.908614362e-01
Gurobi 5.5.0: optimal solution; objective 0.9908614362
16 simplex iterations
What is the problem? I cannot find it, and I have been looking for two weeks. The model tests fine on a small network data file, but when I test it on a larger network with more nodes, the problem shown above occurs.

Practical solver for convex QCQP?

I am working with a convex QCQP of the following form:
Min e'Ie
z'Iz=n
[some linear equalities and inequalities that contain variables w,z, and e]
w>=0, z in [0,1]^n
So the problem has only one quadratic constraint besides the objective, and some variables are nonnegative. The matrices of both quadratic forms are identity matrices and thus positive definite.
I can move the quadratic constraint into the objective, but it enters with a negative sign, so the problem becomes nonconvex:
min e'Ie-z'Iz
The size of the problem can be up to 10000 linear constraints, with 100 nonnegative variables and almost the same number of other variables.
The problem can be rewritten as an MIQP as well, as z_i can be binary, and z'Iz=n can be removed.
So far, I have been working with CPLEX via AIMMS for the MIQP and it is very slow for this problem. Using the QCQP version of the problem with CPLEX, MINOS, SNOPT and CONOPT is hopeless, as they either cannot find a solution at all, or the solution is not even close to an approximation that I know a priori.
Now I have three questions:
Do you know any method/technique to get rid of the quadratic constraint as this form without going to MIQP?
Is there any "good" solver for this QCQP? By "good", I mean a solver that efficiently finds the global optimum in a reasonable time.
Do you think an SDP relaxation could be a solution to this problem? I have never solved an SDP problem in reality, so I do not know how efficient the SDP version can be. Any advice?
Thanks.