I want to make sure that I can use Bonmin and Couenne for solving just an NLP problem (I do not have integer variables yet), and I am eager to obtain a global optimum, not a local one. I also read that Ipopt first searches for a global answer and, if it does not find one, provides a local answer. How can I tell whether my answer is a global optimum when I am using Ipopt? Also, I want to know the best open-source, Python-friendly NLP and MINLP solvers for these issues that can be combined with Pyomo.
The main reason for my question is the following output using Bonmin:
NOTE: You are using Ipopt by default with the MUMPS linear solver.
Other linear solvers might be more efficient (see Ipopt documentation).
Some notes:
(1) "Ipopt first search for the global answer and if it does not find that it will provide a local answer" This is probably not how I would phrase it. IPOPT finds local solutions. For some problems these will be the global solution. For convex problems, this is always the case (except for numerical issues).
(2) Bonmin is a local MINLP solver, Couenne is a global NLP/MINLP solver. Typically Bonmin can solve larger problems than Couenne, but you get local solutions.
(3) "NOTE: You are using Ipopt by default with the MUMPS linear solver. Other linear solvers might be more efficient (see Ipopt documentation)." This is just a notification that you are using IPOPT with linear algebra routines from MUMPS. There are other linear sub-solvers that IPOPT can use and that may perform better on large problems. Often the HARWELL routines (typically called MAnn) give better performance. MUMPS is free while the Harwell routines require a license.
In a follow-up (posted as an answer, though it is not really one) the asker states:
Regarding Ipopt, how can I tell whether it is finding the global
solution or a local optimum? Will the code notify me? Regarding
Bonmin, according to the AMPL page, it provides the global solution
for convex problems: "Finds globally optimal solutions to convex
nonlinear problems in continuous and discrete variables, and may be
applied heuristically to nonconvex problems." You were saying that it
obtains a local solution, so I am a bit confused about this part. The
general question about all these codes is: how can I find out whether
the answer is a global optimum?
(a) Ipopt does not know whether a solution is a local or a global optimum. For convex problems, a local optimum is also a global optimum. You will need to convince yourself that the problem you pass to Ipopt is convex (Ipopt will not check this for you).
(b) Bonmin: the same. If the problem is convex, it will find global solutions; otherwise you will get a local solution. You will get no notification about whether a solution is a global optimum: Bonmin does not know either.
(c) When looking for guaranteed global solutions, you can use a local solver only when the problem is convex; for other problems you need a global solver. Another approach is to use a multi-start algorithm with a local solver, which gives you some confidence that you are not ending up with a bad local optimum.
If possible, I suggest discussing this with your teacher. These concepts are important to understand (and most solver manuals assume you know about them).
Related
I am using IBM CPLEX python's API to solve a linear program.
The linear program I am solving turned out to be infeasible, so I am using feasopt() from CPLEX to relax the problem.
I could get a feasible solution through my_prob.feasopt(my_prob.feasopt.all_constraints()), where feasopt relaxes all the constraints.
But I am interested in getting the amount of relaxation for each constraint. In particular, the documentation says: "In addition to that conventional solution vector, FeasOpt also produces a vector of values that provide useful information about infeasible constraints and variables."
I am interested in getting this vector.
I believe you are looking for the methods available under the Cplex.solution.infeasibility interface.
Example usage:
# query the infeasibilities for all linear constraints
rowinfeas = my_prob.solution.infeasibility.linear_constraints(
    my_prob.solution.get_values())
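Putting this together with feasopt, a minimal end-to-end sketch (it assumes my_prob is an already-built Cplex model; the model file name is hypothetical):
import cplex

my_prob = cplex.Cplex("model.lp")  # hypothetical infeasible LP
# relax all constraints so the model becomes solvable
my_prob.feasopt(my_prob.feasopt.all_constraints())
# per linear constraint, how far the relaxed solution violates the original
x = my_prob.solution.get_values()
rowinfeas = my_prob.solution.infeasibility.linear_constraints(x)
for name, amount in zip(my_prob.linear_constraints.get_names(), rowinfeas):
    print(name, amount)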
I am trying to solve an optimisation problem consisting in finding the global maximum of a high dimensional (10+) monotonic function (as in monotonic in every direction). The constraints are such that they cut the search space with planes.
I have coded the whole thing in Pyomo and I am using the Ipopt solver. In most cases I am confident it converges successfully to the global optimum, but if I play a bit with the constraints I see that it sometimes converges to a local optimum.
It looks like an exploration-exploitation trade-off.
I have looked into the options that can be passed to Ipopt, and the list is so long that I cannot work out which parameters to play with to help convergence to the global optimum.
edit:
Two hints toward a solution:
my variables used to be defined with infinite bounds, e.g. bounds=(0, None), to move on the infinite half-line. I have now enforced finite lower and upper bounds on them.
I am now using multiple starts with:
opt = SolverFactory('multistart')
results = opt.solve(self.model, solver='ipopt', strategy='midpoint_guess_and_bound')
So far this has made me happy with the convergence.
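For reference, a minimal self-contained sketch combining both hints (the toy objective and bounds are made up for illustration):
from pyomo.environ import ConcreteModel, Var, Objective, SolverFactory, maximize

model = ConcreteModel()
# hint 1: finite bounds instead of bounds=(0, None)
model.x = Var(bounds=(0, 100), initialize=1.0)
model.obj = Objective(expr=-(model.x - 3) ** 2, sense=maximize)

# hint 2: wrap ipopt in Pyomo's multistart meta-solver
opt = SolverFactory('multistart')
results = opt.solve(model, solver='ipopt', strategy='midpoint_guess_and_bound')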
Sorry, IPOPT is a local solver. If you really want to find global solutions, you can use a global solver such as Baron, Couenne or Antigone. There is a trade-off: global solvers are slower and may not work for large problems.
Alternatively, you can help local solvers with a good initial point. Be aware that active set methods are often better in this respect than interior point methods. Sometimes multistart algorithms are used to prevent bad local optima: use a bunch of different starting points. Pyomo has some facilities to do this (see the documentation).
As far as I know, Gurobi resumes optimizing where it left off after calling Model.Terminate() and then calling Model.Optimize() again, so I can terminate, get the best solution so far, and then proceed. Now I want to do the same, but since I want to use parts of the suboptimal solution, I need to set some variables to fixed values before I call Model.Optimize() again and optimize the rest of the model. How can I do this so that Gurobi does not start all over again?
First, it sounds like you're describing a mixed-integer program (MIP); model modification is different for continuous optimization (linear programming, quadratic programming).
When you modify a MIP model, the tree information is no longer helpful. Instead, you must resolve the continuous (LP) relaxation and create a new branch-and-cut tree. However, the prior solution may still be used as a MIP start, which can reduce the solve time for the second model.
Note also that your method may be redundant with the RINS heuristic, which is an automatic feature of Gurobi's MIP solver. You can control the behavior of RINS via the parameters RINS, SubMIPNodes and Heuristics.
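A minimal gurobipy sketch of the fix-and-reoptimize idea (the model file and the variable-selection rule are hypothetical):
import gurobipy as gp

m = gp.read("model.mps")   # hypothetical MIP model
m.Params.TimeLimit = 60    # stop early, instead of calling m.terminate() from a callback
m.optimize()

if m.SolCount > 0:
    for v in m.getVars():
        if v.VarName.startswith("y"):  # hypothetical rule: fix this subset at incumbent values
            v.LB = v.X
            v.UB = v.X
        else:
            v.Start = v.X  # seed the remaining variables as a MIP start
    # the modification invalidates the old tree; this solve rebuilds it,
    # warm-started by the MIP start
    m.optimize()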
I am trying to solve some problems that can be mapped to convex optimisation problems.
In particular, this is for the analysis of quantum state tomography data.
In Matlab there are some tools to help you do this, like SeDuMi or CVX
http://sedumi.ie.lehigh.edu
http://cvxr.com/cvx/
But I could not find anything similar in Mathematica, on the web or in the forums.
Does anybody know if there is an easy way of implementing this kind of algorithm in Mathematica?
I would like to avoid being forced to switch to Matlab to solve this problem. Nothing against it, but I have most of the programming for this state tomography developed in Mathematica.
I also had some trouble with Mathematica in optimization, exactly on convex problems. I suggest you export to CVX, which will require some work because it wants the problem in matrix notation. Otherwise, to keep the algebraic formulation, you could try Maple, which has, as far as I can tell, better optimizers than Mathematica (check the documentation to get an idea).
I have a simple optimization problem and am looking for java software for that.
The Apache Commons Math optimization software looks just like what I want, but I can't find documentation to suit my needs (where those needs are to be useful to a beginner / non-maths professional!).
Does anyone know of a worked, simple, example?
In case it helps, the problem is that I want to find the max r where
r1 = s1 * m1
r2 = s2 * m2
and there are some constraints and formulas defining the relationships between the variables. The Excel Solver works fine for this problem. I got LPSolve working great, but this problem requires a multiplication of s and m, so I understand LPSolve can't help, as this makes the problem non-linear.
I recently ported the derivative-free non-linear constrained optimization code COBYLA2 to Java. Since it does not explicitly rely on derivatives, the algorithm may require quite a few iterations for larger problems. Nonetheless, you are able to formulate your problem with both a non-linear objective function and (potentially) non-linear constraints.
You can read more about it and download the source code from here.
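To give a feel for what a COBYLA formulation of this kind of problem looks like, here is a minimal sketch using SciPy's COBYLA implementation in Python (the constraint values are made up; the idea carries over directly to the Java port):
from scipy.optimize import minimize

# maximize r = s * m by minimizing its negative; x = [s, m]
def neg_r(x):
    return -(x[0] * x[1])

constraints = [
    {"type": "ineq", "fun": lambda x: 10 - x[0] - x[1]},  # hypothetical: s + m <= 10
    {"type": "ineq", "fun": lambda x: x[0]},              # s >= 0
    {"type": "ineq", "fun": lambda x: x[1]},              # m >= 0
]

result = minimize(neg_r, x0=[1.0, 1.0], method="COBYLA", constraints=constraints)
print(result.x, -result.fun)  # expect s = m = 5, r = 25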
I am not aware of a simple Java-based NLP solver. (I did find an example of quadratic programming (QP) in Apache Commons Math, but it doesn't qualify, since you asked for a non-maths-professional example.)
I have two suggestions for you to solve your non-linear program:
1. Excel's Solver does have the ability to tackle non-linear problems. (Don't use LPSOLVE.) In fact, NLP is the default mode in Solver.
Here are two links to using Excel to solve NLPs: Example 1 - Step by step Solver walk-through that covers NLP and
Example 2 - A General Neural network example in Excel
Also for Excel, I like Paul Jensen's (utexas) ORMM add-ins.
He has a module called Teach NLP. Chapter 10 of his book deals with NLP and is available from his site.
2. If you are going to be doing even some amount of data analysis, then I recommend investing a few hours to download and learn the basics of R.
R has numerous packages and libraries for optimization. optim() and nlme are relevant for solving non-linear programs.
Just for completeness, I mention SAS, MATLAB and CPLEX as other options. If you have access to any of these, they all do a very good job with solving non-linear programs.
Hope these pointers help.