Presolve in MILP - AMPL

Presolve: All rows and columns removed
Iteration Objective Primal Inf. Dual Inf. Time
0 9.9086144e-01 0.000000e+00 0.000000e+00 0s
16 9.9086144e-01 0.000000e+00 0.000000e+00 0s
Solved in 16 iterations and 0.00 seconds
Optimal objective 9.908614362e-01
Gurobi 5.5.0: optimal solution; objective 0.9908614362
16 simplex iterations
What is the problem? I can't find it, and I have been stuck for two weeks. The model solves correctly on a small network data file, but when I test it on a larger network with more nodes, I get the kind of output shown above.

Related

Training with Roboflow-Train-YOLOv5 stops with a '^C'

Running Roboflow's notebook, 'Roboflow-Train-YOLOv5', stops after completing the epochs loop.
Instead of reporting the results, I get the following lines, with a ^C at the end of the third line from the end.
I would like to know the reason for the failure, and whether there is a way to fix it.
10 epochs completed in 0.191 hours.
Optimizer stripped from runs/train/yolov5s_results2/weights/last.pt, 14.9MB
Optimizer stripped from runs/train/yolov5s_results2/weights/best.pt, 14.9MB
Validating runs/train/yolov5s_results2/weights/best.pt...
Fusing layers...
my_YOLOv5s summary: 213 layers, 7015519 parameters, 0 gradients, 15.8 GFLOPs
Class Images Labels P R mAP@.5 mAP@.5:.95: 20% 1/5 [00:01<00:04, 1.03s/it]^C
CPU times: user 7.01 s, sys: 830 ms, total: 7.84 s
Wall time: 12min 31s
My Colab plan is Colab Pro, so I guess it is not a problem of resources.

Getting "DUAL_INFEASIBLE" when solving a very simple linear programming problem

I am solving a simple LP problem using Gurobi with dual simplex and presolve. Gurobi reports that the model is unbounded, but I cannot see why such a model would be unbounded. Can anyone help me find where it goes wrong?
I attached the log and also the content in the .mps file.
Thanks very much in advance.
Kind regards,
Hongyu.
The output log and .mps file:
Link to the .mps file: https://studntnu-my.sharepoint.com/:u:/g/personal/hongyuzh_ntnu_no/EV5CBhH2VshForCL-EtPvBUBiFT8uZZkv-DrPtjSFi8PGA?e=VHktwf
Gurobi Optimizer version 9.5.2 build v9.5.2rc0 (mac64[arm])
Thread count: 8 physical cores, 8 logical processors, using up to 8 threads
Optimize a model with 1 rows, 579 columns and 575 nonzeros
Coefficient statistics:
Matrix range [3e-02, 5e+01]
Objective range [7e-01, 5e+01]
Bounds range [0e+00, 0e+00]
RHS range [7e+03, 7e+03]
Iteration Objective Primal Inf. Dual Inf. Time
0 handle free variables 0s
Solved in 0 iterations and 0.00 seconds (0.00 work units)
Unbounded model
The easiest way to debug this is to put a bound on the objective, so the model is no longer unbounded. Then inspect the solution. This is a super easy trick that somehow few people know about.
When we do this with a bound of 100000, we see:
phi = 100000.0000
gamma[11] = -1887.4290
(the rest are zero). Indeed, we can make gamma[11] as negative as we want and still obey R0. Note that gamma[11] is not in the objective.
More advice: It is also useful to write out the LP file of the model and study that carefully. You probably would have caught the error and that would have prevented this post.
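For reference, here is a minimal sketch of the bound-the-objective trick in gurobipy, reading the model from the .mps file. The file name, the auxiliary variable/constraint names, and the tolerance are illustrative assumptions (not taken from the post), and it assumes the objective is linear:
import gurobipy as gp

m = gp.read("model.mps")                          # hypothetical local copy of the .mps file
m.setParam("DualReductions", 0)                   # report UNBOUNDED vs INFEASIBLE unambiguously
cap = m.addVar(lb=-1e5, ub=1e5, name="obj_cap")   # cap of 100000 in both directions
m.addConstr(cap == m.getObjective(), name="tie_cap_to_objective")
m.optimize()
for v in m.getVars():
    if abs(v.X) > 1e-6:                           # show only variables pushed to extreme values
        print(v.VarName, v.X)
m.write("debug.lp")                               # human-readable model, per the advice above
With the objective capped, the variables that shoot off (here phi and gamma[11]) stand out immediately.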

CVXPY with MOSEK solver: how do I find the constraints corresponding to the Mosek "index"?

I am solving an SDP in cvxpy with MOSEK as a solver.
My problem is infeasible, and MOSEK has the nice feature that it provides an "Infeasibility Report". In my case, the report looks like this:
MOSEK PRIMAL INFEASIBILITY REPORT.
Problem status: The problem is primal infeasible
The following constraints are involved in the primal infeasibility.
Index Name Lower bound Upper bound Dual lower Dual upper
37 none -0.000000e+00 0.000000e+00 2.647059e-03
406 none 3.000000e+02 0.000000e+00 6.250000e-04
2364 none -0.000000e+00 0.000000e+00 6.183824e-03
2980 none -8.100000e-01 0.000000e+00 1.000000e+00
3049 -0.000000e+00 -0.000000e+00 0.000000e+00 4.235294e+00
3052 -0.000000e+00 -0.000000e+00 0.000000e+00 1.000000e+00
I would like to find out which constraints this report is referring to. My constraint list in cvxpy only contains 105 constraints, but many of those are matrix or vector constraints. This explains why the indices reported by MOSEK go up to 3052. However, it makes it hard to find out which of my constraints are listed in the report.
Is there a way to find out which of my cvxpy constraints are reported by MOSEK?
I was using MOSEK via its CVXPY interface and faced the same issue.
My hypothesis is that the ordering of constraints in MOSEK's infeasibility report is exactly the same as in CVXPY, because:
I tested it on two sample infeasible problems (where I knew a priori which constraints cause the infeasibility) and found that the hypothesis holds.
I took a quick look at the CVXPY-to-MOSEK conversion code in the CVXPY codebase and found that CVXPY does not change the constraint ordering.
So my conclusion is that the hypothesis holds.
Please note: this conclusion is based on a quite small test set and a naive understanding of the CVXPY codebase, so there is a small chance it is wrong.
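If the hypothesis holds, a rough sketch like the following can help narrow down which CVXPY constraint a MOSEK row index falls into. It simply accumulates each constraint's scalar size in order, so treat the ranges as approximate (PSD blocks and CVXPY's reformulations can change the row count); the function name is illustrative:
def print_constraint_index_ranges(problem):
    offset = 0
    for k, con in enumerate(problem.constraints):
        n = con.size  # number of scalar entries in this (possibly matrix/vector) constraint
        print(f"cvxpy constraint {k}: rows ~{offset}..{offset + n - 1}")
        offset += n

# usage: print_constraint_index_ranges(prob), then look up the indices from the report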

How to display optimal variable values of a class-type Pyomo model?

I am a new Pyomo/Python user, and I am just wondering how to display the optimal variable values in a class-type Pyomo model.
I have just tried the standard example from the Pyomo example library, the "min-cost flow" model. The code is available at https://github.com/Pyomo/PyomoGallery/wiki/Min-cost-flow
At the bottom of the code, it says:
sp = MinCostFlow('nodes.csv', 'arcs.csv')
sp.solve()
print('\n\n---------------------------')
print('Cost: ', sp.m.OBJ())
and the output is
Academic license - for non-commercial use only
Read LP format model from file.
Reading time = 0.00 seconds
x8: 7 rows, 8 columns, 16 nonzeros
No parameters matching 'mip_tolerances_integrality' found
No parameters matching 'mip_tolerances_mipgap' found
Optimize a model with 7 rows, 8 columns and 16 nonzeros
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [1e+00, 5e+00]
Bounds range [0e+00, 0e+00]
RHS range [1e+00, 1e+00]
Presolve removed 7 rows and 8 columns
Presolve time: 0.00s
Presolve: All rows and columns removed
Iteration Objective Primal Inf. Dual Inf. Time
0 5.0000000e+00 0.000000e+00 0.000000e+00 0s
Solved in 0 iterations and 0.01 seconds
Optimal objective 5.000000000e+00
I can only get the optimal objective value; how do I get the optimal variable values? I have also searched the documentation, which suggests using something like:
print("x[2]=",pyo.value(model.x[2])).
But it did not work for class-type models like the min-cost-flow model.
I also tried to modify the function definition in the class:
def solve(self):
    """Solve the model."""
    solver = pyomo.opt.SolverFactory('gurobi')
    results = solver.solve(self.m, tee=True, keepfiles=False, options_string="mip_tolerances_integrality=1e-9, mip_tolerances_mipgap=0")
    print('\n\n---------------------------')
    print('First Variable: ', self.m.Y[0])
But that did not work either. The output is:
KeyError: "Index '0' is not valid for indexed component 'Y'"
Can you help me with this? Thanks!
Gabriel
The most straightforward way to display the model results after solution is to use the model.display() function. In your case, self.m.display().
The display() function works on Var objects as well, so if you have a variable self.m.x, you could do self.m.x.display().
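For example, a short sketch for the linked min-cost-flow class, assuming (as that gallery code and the KeyError suggest) that self.m.Y is indexed by the two-dimensional arc set self.m.arc_set rather than by integers:
import pyomo.environ as pyo

sp = MinCostFlow('nodes.csv', 'arcs.csv')
sp.solve()

sp.m.display()      # everything: objective, variables, constraints
sp.m.Y.display()    # just the flow variable

# Or pull individual values out programmatically:
for (i, j) in sp.m.arc_set:
    print(f"Y[{i},{j}] = {pyo.value(sp.m.Y[i, j])}")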

AMPL IPOPT gives wrong optimal solution while solve result is "solved"

I am trying to solve a very simple optimization problem in AMPL with IPOPT, as follows:
var x1 >= 0 ;
minimize obj: -(x1^2)+x1;
Obviously the problem is unbounded, but IPOPT gives me:
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************
This is Ipopt version 3.12.4, running with linear solver mumps.
NOTE: Other linear solvers might be more efficient (see Ipopt documentation).
Number of nonzeros in equality constraint Jacobian...: 0
Number of nonzeros in inequality constraint Jacobian.: 0
Number of nonzeros in Lagrangian Hessian.............: 1
Total number of variables............................: 1
variables with only lower bounds: 1
variables with lower and upper bounds: 0
variables with only upper bounds: 0
Total number of equality constraints.................: 0
Total number of inequality constraints...............: 0
inequality constraints with only lower bounds: 0
inequality constraints with lower and upper bounds: 0
inequality constraints with only upper bounds: 0
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 9.8999902e-03 0.00e+00 2.00e-02 -1.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.5346023e-04 0.00e+00 1.50e-09 -3.8 9.85e-03 - 1.00e+00 1.00e+00f 1
2 1.7888952e-06 0.00e+00 1.84e-11 -5.7 1.52e-04 - 1.00e+00 1.00e+00f 1
3 -7.5005506e-09 0.00e+00 2.51e-14 -8.6 1.80e-06 - 1.00e+00 1.00e+00f 1
Number of Iterations....: 3
(scaled) (unscaled)
Objective...............: -7.5005505996934397e-09 -7.5005505996934397e-09
Dual infeasibility......: 2.5091040356528538e-14 2.5091040356528538e-14
Constraint violation....: 0.0000000000000000e+00 0.0000000000000000e+00
Complementarity.........: 2.4994494940593761e-09 2.4994494940593761e-09
Overall NLP error.......: 2.4994494940593761e-09 2.4994494940593761e-09
Number of objective function evaluations = 4
Number of objective gradient evaluations = 4
Number of equality constraint evaluations = 0
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 0
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 3
Total CPU secs in IPOPT (w/o function evaluations) = 0.001
Total CPU secs in NLP function evaluations = 0.000
EXIT: Optimal Solution Found.
Ipopt 3.12.4: Optimal Solution Found
suffix ipopt_zU_out OUT;
suffix ipopt_zL_out OUT;
ampl: display x1;
x1 = 0
When I change the solver to Gurobi, it gives this message:
Gurobi 6.5.0: unbounded; variable.unbdd returned.
which is what I expected.
I cannot understand why this happens, and now I don't know whether I need to check every problem I solve to make sure it has not converged to a wrong "optimal" solution. As this is a super simple example, it is a little bit strange.
I would appreciate if anybody can help me with this.
Thanks
You've already identified the basic problem, but elaborating a little on why these two solvers give different results:
IPOPT is designed to cover a wide range of optimisation problems, so it uses some fairly general numeric optimisation methods. I'm not familiar with the details of IPOPT, but usually this sort of approach relies on picking a starting point, looking at the curvature of the objective function in the neighbourhood of that starting point, and following the curvature "downhill" until it finds a local optimum. Different starting points can lead to different results. In this case IPOPT is probably defaulting to zero for the starting point, so it's right on top of that local minimum. As Erwin suggested, if you specify a different starting point it might find the unboundedness.
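To illustrate the starting-point suggestion: the post uses AMPL, but here is the same model sketched in Pyomo with Ipopt, purely for illustration. Starting x1 away from 0 typically makes Ipopt diverge and flag the unboundedness instead of stopping at the local minimum:
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.x1 = pyo.Var(within=pyo.NonNegativeReals, initialize=10.0)  # start away from the local minimum at 0
m.obj = pyo.Objective(expr=-(m.x1**2) + m.x1, sense=pyo.minimize)
pyo.SolverFactory('ipopt').solve(m, tee=True)
# From x1 = 10 the objective decreases without bound as x1 grows, so Ipopt
# typically reports diverging iterates ("problem might be unbounded")
# rather than returning x1 = 0.
In AMPL itself, the same effect comes from giving x1 an initial value in its declaration, e.g. var x1 >= 0, := 10;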
Gurobi is designed specifically for quadratic/linear problems, so it uses very different methods which aren't susceptible to local-minimum issues, and it will probably be much more efficient for quadratics. But it doesn't support more general objective functions.
I think I understand why it happened: the objective function
-(x1^2)+x1;
is not convex, so the returned solution is only a local optimum. On x1 >= 0, the point x1 = 0 is a local minimum (the derivative of the objective there is +1, so moving away from 0 initially increases it), even though the objective tends to minus infinity as x1 grows.