What is a primal infeasible solution in GAMS?

I am using GAMS to solve a MILP problem that includes binary variables, but there is a problem with the solution. Surprisingly, one of the binary variables in the reported solution has the value "-1" and another one "2". That is not acceptable, and I do not know what happened. GAMS gives me the message "primal infeasible".

Related

MIP status (119): Integer infeasible or unbounded in GAMS CPLEX

I am trying to solve a MIP problem in GAMS using the CPLEX solver.
The problem is large, involving a large number of constraints, variables, and equations.
The model compiles and runs successfully; however, it does not return any result because of infeasibilities in some equations.
This is what I ask my model to do:
MODEL Stochastic /all/;
OPTION optcr=0;
OPTION mip=CPLEX;
SOLVE Stochastic using mip maximizing z1;
And this is what I got.
MIP status(119): integer infeasible or unbounded
Cplex Time: 0.00sec (det. 2.73 ticks)
Problem is integer infeasible.
No solution returned
I checked the .lst file to see which equations cause the infeasibility and found many of them, including an infeasibility in my objective function.
I am not sure how I can remove the infeasibility from my problem.
I would like to ask for suggestions and recommendations.
I have looked at some online posts about the problem, such as (https://www.researchgate.net/post/I-am-using-GAMS-in-MINLP-and-it-results-in-an-infeasible-solution-any-help), but they did not help.
Note: I am using GAMS IDE version 24.
There is no general, easy solution for this problem. But since you use Cplex as your solver, you could try its IIS option (see https://www.gams.com/latest/docs/S_CPLEX.html#CPLEXiis), which can help identify groups of conflicting constraints. In GAMS you enable it through a solver option file, e.g. a file cplex.opt containing the line "iis 1", activated by setting Stochastic.optfile = 1; before the solve. That could at least give you a handle for a more detailed analysis of your problem.

On the iterative implementation of mosekopt for large linear programs

I have to solve a linear program with a very large number of constraints. I use MOSEK (mosekopt, with MSK_IPAR_INTPNT_BASIS set equal to MSK_BI_NEVER to save time).
Because of the large dimension, the solver takes a long time to solve the program.
I thought about manually coding the following iterative procedure:
1. Take a random subset of constraints and solve the restricted linear program.
2. If a solution of the restricted linear program does not exist, stop.
3. If a solution of the restricted linear program exists, check whether it is a solution of the original linear program. If yes, stop. If not, repeat from 1. with a larger set of constraints that includes the constraints of this iteration.
The procedure does not seem to produce a notable time saving. I wonder whether this is because steps 1.-3. are essentially what the solver already does internally, without needing my input. Could you advise?
Could I improve things if, when moving from 3. back to 1., I supplied mosekopt with the old solution of the restricted linear program? A sketch of the loop I have in mind is given below.
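For concreteness, here is a minimal sketch of the row-generation loop described above. It uses scipy.optimize.linprog as a stand-in LP solver purely for illustration (the real implementation would call mosekopt on each restricted problem); the data c, A, b, the box bounds, the batch of 100 added constraints, and the tolerance are all hypothetical placeholders.

import numpy as np
from scipy.optimize import linprog  # stand-in LP solver; the real code would call mosekopt

rng = np.random.default_rng(0)

# Hypothetical data: minimize c@x subject to A@x <= b and box bounds on x.
n, m = 50, 5000
c = rng.normal(size=n)
A = rng.normal(size=(m, n))
b = rng.uniform(1.0, 2.0, size=m)

# Step 1: start from a random subset of the constraints.
active = set(rng.choice(m, size=5 * n, replace=False).tolist())

while True:
    idx = sorted(active)
    res = linprog(c, A_ub=A[idx], b_ub=b[idx], bounds=(-10, 10), method="highs")
    if not res.success:
        # Step 2: the restricted LP could not be solved (infeasible/unbounded) -> stop.
        print("restricted LP not solved:", res.message)
        break
    # Step 3: check whether the restricted solution satisfies all original constraints.
    violated = np.flatnonzero(A @ res.x > b + 1e-8)
    if violated.size == 0:
        print("optimal for the full LP, objective =", res.fun)
        break
    # Otherwise add (some of) the most violated constraints and repeat from step 1.
    worst = violated[np.argsort(A[violated] @ res.x - b[violated])[-100:]]
    active.update(worst.tolist())

As the answer below points out, how many and which violated constraints to add per round (here, arbitrarily, the 100 most violated) is the design choice that largely determines whether the scheme pays off.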
This may or may not be faster than using Mosek on the complete problem. At least in theory, your approach is inferior.
You say nothing about the dimensions of the problem, which would be interesting to know.
Nor about how long it takes to solve the complete problem.
One tricky issue is how many and which constraints you add in step 3. That will be very important.

How does SCIP calculate vanilla full strong branching scores?

I want to do some experiments on branching rules in SCIP (using the Python interface).
To make sure the basics of my code work, I tried to mirror SCIP's vanillafullstrong branching rule using the dive functionality.
This works mostly as expected, but I get weird results for nodes that have at least one infeasible child. I dug through PySCIPOpt and the C code of SCIP and would have expected a big number as the strong-branching score when (at least) one child is infeasible, because the LP solver returns a big number for infeasible problems and SCIP uses the product score by default. Instead of big numbers, I get numbers that are just big enough to be bigger than the other scores.
My question is: what am I missing in the code? Where and how does SCIP treat the score of infeasible children differently?
As quite a bit of code is required to reproduce this, I created a gist that makes use of a set-cover instance I also uploaded to GitHub.
I'm pretty sure the strong-branching score never exceeds the cutoff bound of the LP, so infeasible children effectively get the cutoff bound as their objective value rather than the huge value returned by the LP solver.
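To illustrate the point above, here is a small, self-contained sketch of how a product score behaves when the child objective values are capped at the cutoff bound before the gains are computed. The capping rule and the epsilon are assumptions made for illustration, not a verbatim copy of SCIP's internals.

def product_score(lp_obj, down_obj, up_obj, cutoff_bound, eps=1e-6):
    # Product-style strong-branching score with child objectives capped at the cutoff bound.
    # lp_obj: LP objective of the current node (minimization assumed)
    # down_obj / up_obj: child LP objectives; use float('inf') for an infeasible child
    # cutoff_bound: current cutoff (primal) bound of the search
    down_gain = min(down_obj, cutoff_bound) - lp_obj
    up_gain = min(up_obj, cutoff_bound) - lp_obj
    return max(down_gain, eps) * max(up_gain, eps)

# Example: an infeasible down child does not blow the score up,
# it only contributes the gap between the cutoff bound and the node's LP value.
print(product_score(lp_obj=10.0, down_obj=float("inf"), up_obj=10.5, cutoff_bound=12.0))
# (12 - 10) * (10.5 - 10) = 2.0 * 0.5 = 1.0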

Is it normal to get different answers in different versions of GAMS?

I solved a model with two different versions of GAMS (version 24.2 and version 27.3).
The answers I get from version 24.2 are different from the answers from version 27.3!
Is this normal?
Which answer can be trusted?
Thanks!
I have experienced this with a MIP model. I obtained different optimal objective values (each reported as optimal) when running it with different solvers, or when declaring the variables in a different order without changing anything else in the model. This erratic behavior can happen when you have numerical issues, which can be a consequence of a large range of matrix coefficients, for example.
If this is the case, you should first try to reformulate your model (see a useful guideline here: Guidelines for Numerical Issues).
If reformulating is not possible or not enough to resolve the issue, my suggestion is to change the solver options. Some solvers such as Gurobi and CPLEX have options that help with numerical issues (e.g. numericfocus, scaleflag, etc. for Gurobi). Look for the corresponding options of the solver you are using.
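As an illustration of such options, here is a minimal sketch assuming the model is solved directly through Gurobi's Python API; the file name model.lp is a placeholder. When solving through GAMS, the same settings would instead go into a gurobi.opt solver option file, activated with mymodel.optfile = 1;.

import gurobipy as gp

# Hypothetical model file; in practice this would be your own model.
m = gp.read("model.lp")

# Ask Gurobi to be more careful numerically (0 = automatic, 3 = maximum care).
m.Params.NumericFocus = 3
# Use geometric-mean scaling of the constraint matrix, which often helps with wide coefficient ranges.
m.Params.ScaleFlag = 2

m.optimize()
print("status:", m.Status, "objective:", m.ObjVal if m.SolCount > 0 else None)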

Using Bonmin, Couenne and Ipopt for NLP

I just want to be sure that it makes sense to use Bonmin and Couenne for solving a pure NLP problem (I do not have integer variables yet), and I am eager to obtain a global optimum, not a local one. I also read that Ipopt first searches for the global answer and, if it does not find it, provides a local answer. How can I tell whether my answer is a global optimum when I use Ipopt? Also, I would like to know the best open-source NLP and MINLP solvers for these problems that can be used with Pyomo.
The main reason for my question is the following output using Bonmin:
NOTE: You are using Ipopt by default with the MUMPS linear solver.
Other linear solvers might be more efficient (see Ipopt documentation).
Regards
Some notes:
(1) "Ipopt first search for the global answer and if it does not find that it will provide a local answer" This is probably not how I would phrase it. IPOPT finds local solutions. For some problems these will be the global solution. For convex problems, this is always the case (except for numerical issues).
(2) Bonmin is a local MINLP solver, Couenne is a global NLP/MINLP solver. Typically Bonmin can solve larger problems than Couenne, but you get local solutions.
(3) "NOTE: You are using Ipopt by default with the MUMPS linear solver. Other linear solvers might be more efficient (see Ipopt documentation)." This is just a notification that you are using IPOPT with linear algebra routines from MUMPS. There are other linear sub-solvers that IPOPT can use and that may perform better on large problems. Often the HARWELL routines (typically called MAnn) give better performance. MUMPS is free while the Harwell routines require a license.
In a follow-up answer (well, it is not an answer at all) it is stated:
Regarding Ipopt, how can I tell whether it is finding the global solution or a local optimum? Will the code notify me of that? Regarding Bonmin, according to the AMPL solver page it provides the global solution for convex problems: "Finds globally optimal solutions to convex nonlinear problems in continuous and discrete variables, and may be applied heuristically to nonconvex problems." You were saying that it obtains a local solution, so I am a bit confused about this part. But the general question about all these codes is: how can I find out whether the answer is a global optimum?
(a) Ipopt does not know whether a solution is a local or a global optimum. For convex problems, a local optimum is also a global optimum. You will need to convince yourself that the problem you pass to Ipopt is convex (Ipopt will not do this for you).
(b) Bonmin: the same. If the problem is convex, it will find global solutions; otherwise you will get a local solution. You get no notification of whether a solution is globally optimal: Bonmin does not know.
(c) When looking for guaranteed global solutions, you can use a local solver only when the problem is convex; for other problems you need a global solver. Another approach is to use a multi-start algorithm with a local solver (a minimal sketch is given after this answer). That gives no guarantee, but it gives you confidence that you are not ending up with a bad local optimum.
If possible, I suggest discussing this with your teacher. These concepts are important to understand (and most solver manuals assume you know about them).
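A minimal sketch of the multi-start idea from point (c), using Pyomo with Ipopt (assuming the ipopt executable is available). The toy nonconvex objective and the number of restarts are purely illustrative; Pyomo also ships a multistart wrapper (pyomo.contrib.multistart) that automates essentially this loop.

import random
import pyomo.environ as pyo
from pyomo.opt import TerminationCondition

def build_model():
    # Toy nonconvex NLP with several local minima on [-5, 5], purely for illustration.
    m = pyo.ConcreteModel()
    m.x = pyo.Var(bounds=(-5, 5), initialize=0.0)
    m.obj = pyo.Objective(expr=pyo.sin(3 * m.x) + 0.1 * m.x**2)
    return m

solver = pyo.SolverFactory("ipopt")
random.seed(0)

best_obj, best_x = float("inf"), None
for _ in range(20):  # the number of restarts is an arbitrary choice
    m = build_model()
    m.x.set_value(random.uniform(-5, 5))  # random starting point for the local solver
    result = solver.solve(m)
    if (result.solver.termination_condition == TerminationCondition.optimal
            and pyo.value(m.obj) < best_obj):
        best_obj, best_x = pyo.value(m.obj), pyo.value(m.x)

print("best local optimum found: x =", best_x, "objective =", best_obj)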