Get infeasibilities with IBM CPLEX feasopt Python interface - optimization

I am using the IBM CPLEX Python API to solve a linear program.
The linear program I am solving turned out to be infeasible, so I am using feasopt() from CPLEX to relax the problem.
I could get a feasible solution through my_prob.feasopt(my_prob.feasopt.all_constraints()), where feasopt relaxes all the constraints.
But I am interested in the amount of relaxation for each constraint. In particular, the documentation says: "In addition to that conventional solution vector, FeasOpt also produces a vector of values that provide useful information about infeasible constraints and variables."
I am interested in getting this vector.

I believe you are looking for the methods available under the Cplex.solution.infeasibility interface.
Example usage:
# query the infeasibilities for all linear constraints
rowinfeas = my_prob.solution.infeasibility.linear_constraints(
    my_prob.solution.get_values())
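Putting the pieces together, a minimal sketch (assuming my_prob is an already-populated cplex.Cplex object whose LP is infeasible; the tolerance and the printed message are illustrative only):

# relax all constraints, then ask by how much each one had to be relaxed
my_prob.feasopt(my_prob.feasopt.all_constraints())
x = my_prob.solution.get_values()
rowinfeas = my_prob.solution.infeasibility.linear_constraints(x)
for name, amount in zip(my_prob.linear_constraints.get_names(), rowinfeas):
    if abs(amount) > 1e-9:
        print(name, "relaxed by", amount)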

Related

Transform an optimisation problem for MOSEK

I would like to use Mosek to solve the following problem:
The constraint is convex. In the documentation of problem classes that Mosek can solve, I could not find a close example. Hence, I wonder: (1) Is Mosek suitable for solving the problem above? (2) If yes, how can I rewrite the problem above so that Mosek can solve it? (3) If not, could you suggest an alternative solver I might use?
Yes, the upper bound on the softplus function, or more generally on log-sum-exp, can be modeled with the exponential cone as shown here: https://docs.mosek.com/modeling-cookbook/expo.html#softplus-function
Here is an example where log-sum-exp is used in a bigger problem: https://docs.mosek.com/latest/pythonfusion/case-studies-logistic.html#doc-case-studies-logistic
Many modeling tools that can use Mosek as a solver have a log_sum_exp atom available directly; for instance, see https://www.cvxpy.org/tutorial/functions/index.html
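As a small illustration of that last point (the data A, b and the surrounding model are placeholders of my own, and solving with MOSEK requires a MOSEK license; CVXPY reformulates log_sum_exp into exponential-cone constraints for you):

import numpy as np
import cvxpy as cp

A = np.random.randn(5, 3)
b = np.random.randn(5)
x = cp.Variable(3)
t = cp.Variable()

# upper-bound log-sum-exp (softplus is the two-term special case)
constraints = [cp.log_sum_exp(A @ x + b) <= t]
prob = cp.Problem(cp.Minimize(t + cp.sum_squares(x)), constraints)
prob.solve(solver=cp.MOSEK)
print("optimal value:", prob.value)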

How does the Constrained Nonlinear Optimization VI work? (Theory)

I am trying to understand the theory behind LabVIEW's Constrained Nonlinear Optimization VI. Its description explains how to use it, but not which optimization algorithm works behind it.
Here is an overview of the optimization algorithms, but it simply states:
Solves a general nonlinear optimization problem with nonlinear equality constraint and nonlinear inequality constraint bounds using a sequential quadratic programming method.
I suspect that it is a wrapper for multiple algorithms depending on the inputs... I want to know whether it uses Levenberg-Marquardt, Downhill-Simplex, or some other method. It is not even stated whether it is trust-region or line-search based, or how the bounds are enforced (e.g. by reflection)... In other languages, the documentation often refers to a paper from which I can take the original theory. This is what I am looking for. Can anyone help (or do I have to contact NI support)? Thanks
(using LabVIEW 2017 and 2018 32bit)

using Bonmin, Couenne and Ipopt for NLP

I want to be sure that I can use Bonmin and Couenne for solving a pure NLP problem (I do not have integer variables yet), and I want to obtain the global optimum, not a local one. I also read that Ipopt first searches for the global answer and, if it does not find one, it provides a local answer. How can I tell that my answer is a global optimum when I use Ipopt? Also, I want to know the best open-source NLP and MINLP solvers for these issues that can be used with Pyomo.
The main reason for my question is the following output using Bonmin:
NOTE: You are using Ipopt by default with the MUMPS linear solver.
Other linear solvers might be more efficient (see Ipopt documentation).
Regards
Some notes:
(1) "Ipopt first search for the global answer and if it does not find that it will provide a local answer" This is probably not how I would phrase it. IPOPT finds local solutions. For some problems these will be the global solution. For convex problems, this is always the case (except for numerical issues).
(2) Bonmin is a local MINLP solver, Couenne is a global NLP/MINLP solver. Typically Bonmin can solve larger problems than Couenne, but you get local solutions.
(3) "NOTE: You are using Ipopt by default with the MUMPS linear solver. Other linear solvers might be more efficient (see Ipopt documentation)." This is just a notification that you are using IPOPT with linear algebra routines from MUMPS. There are other linear sub-solvers that IPOPT can use and that may perform better on large problems. Often the HARWELL routines (typically called MAnn) give better performance. MUMPS is free while the Harwell routines require a license.
In a follow-up post (well, it is not really an answer) it is stated:
Regarding Ipopt, how can I tell whether it found the global solution or only a local optimum? Will the code notify me? Regarding Bonmin, the AMPL page says it provides the global solution for convex problems: "Finds globally optimal solutions to convex nonlinear problems in continuous and discrete variables, and may be applied heuristically to nonconvex problems." You were saying that it obtains a local solution, so I am a bit confused on this part. But the general question about all these codes is: how can I find out whether the answer is a global optimum?
(a) Ipopt does not know whether a solution is a local or a global optimum. For convex problems a local optimum is also a global optimum. You will need to convince yourself that the problem you pass to Ipopt is convex (Ipopt will not do this for you).
(b) Bonmin: the same. If the problem is convex it will find global solutions; otherwise you will get a local solution. You will get no notification of whether a solution is a global optimum: Bonmin does not know.
(c) When looking for guaranteed global solutions, you can rely on a local solver only when the problem is convex; for other problems you need a global solver. Another approach is to use a multi-start scheme with a local solver, which gives you some confidence that you are not ending up with a bad local optimum (a rough sketch follows below).
If possible, I suggest discussing this with your teacher. These concepts are important to understand (and most solver manuals assume you know about them).
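As an illustration of the multi-start idea in (c), a rough sketch that reuses the hypothetical Pyomo model and Ipopt solver from the sketch above; the number of restarts and the sampling range are arbitrary:

import random
from pyomo.environ import value

best = None
for trial in range(20):
    # restart Ipopt from a random initial point
    m.x.set_value(random.uniform(-10, 10))
    m.y.set_value(random.uniform(-10, 10))
    opt.solve(m)
    obj = value(m.obj)
    if best is None or obj < best:
        best = obj
print("best objective found over restarts:", best)

This raises confidence in the incumbent but, for a nonconvex problem, still does not prove global optimality.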

irreducible infeasible set (IIS) in gurobi, from minizinc

Is there a way to get the IIS from Gurobi if I use it via the MiniZinc interface (i.e., mzn-gurobi)?
Thanks,
Ofer
Currently no such option exists for mzn-gurobi. All available options can be seen by checking the help output: mzn-gurobi -h. In general, the options for the linear solvers (CBC, CPLEX, Gurobi) are shared. If you are missing this functionality, I would suggest making a feature request on the MiniZinc repository. (Note that this functionality would not be able to point to the constraints in the MiniZinc model, only to the generated FlatZinc constraints.)
What is in development within MiniZinc are Minimal Unsatisfiable Sets (MUS), which to my understanding are the same thing. A special kind of MiniZinc solver is in development that will give a subset of constraints, in MiniZinc, that make the model unsatisfiable. Although development seems to be going strong, it might be a while before this program is released. If you have an immediate need for such a tool, you can try contacting the MiniZinc team.
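As a possible workaround (my own sketch, not an mzn-gurobi feature): if you can obtain the flattened model in a format Gurobi reads directly (LP/MPS; the file name below is hypothetical), gurobipy can compute an IIS on it:

import gurobipy as gp

model = gp.read("flattened_model.lp")     # hypothetical export of the generated model
model.optimize()
if model.Status == gp.GRB.INFEASIBLE:
    model.computeIIS()                    # compute an irreducible infeasible subsystem
    model.write("flattened_model.ilp")    # human-readable IIS file
    for c in model.getConstrs():
        if c.IISConstr:
            print("in IIS:", c.ConstrName)

Note that, as above, any constraint names would refer to the generated FlatZinc-level constraints, not to the original MiniZinc model.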

Support Vector Machine Primal Form Implementation

I am currently working on a support vector machine (SVM) project. The version of SVM that I am working on is the linear SVM in primal form, and I am having a hard time understanding where to start.
In general, I think I understand the theory: basically I need to minimize the norm of w under certain constraints, and the Lagrangian function will be my objective function to be minimized (after the Lagrange multipliers are applied).
What I don't understand is that my professor told me we will be using a quasi-Newton method along with the BFGS update. I have tried the 2D and 3D cases for Newton's method and I think I have a good grasp of the algorithm, but I don't see how a quasi-Newton method is applied to find the coefficients alpha. Also, much of the literature I have read so far says to apply quadratic programming to find the coefficients.
How is the iterative quasi-Newton algorithm related to finding the coefficients of w? And how is quadratic programming related to quasi-Newton? Can anyone please walk me through what is going on?
You are confusing several things here.
The "alpha coefficients" exist only in the dual form, so you do not compute them in your case.
"Apply quadratic programming": quadratic programming is a class of problems, not a solution method. You cannot "apply QP", you can only solve a QP, which in your case will be solved using a quasi-Newton method.
"How is (...) related to finding coefficients of w": exactly the same way this optimization technique is related to finding the optimal coefficients of any function. You are going to minimize a function of w, so applying any optimization technique (in particular quasi-Newton) will lead to a solution expressed as the coefficients of w.