Is it normal to get different answers in different versions of GAMS?

I solved a model with two different versions of GAMS (version 24.2 and version 27.3).
The answers I get from version 24.2 are different from the answers from version 27.3!
Is this normal?
Which answer can be trusted?
Thanks!

I have experienced this with a MIP model. I actually obtained different optimal objective values (each reported as optimal) when running it with different solvers, or when declaring the variables in a different order without changing anything else in the model. This erratic behavior can happen when you have numerical issues, which can be a consequence of a large range of matrix coefficients, for example.
If this is the case, you should first try to reformulate your model (see a useful guideline here: Guidelines for Numerical Issues).
If reformulating is not possible or not enough to resolve the issue, my suggestion is to change the solver options. Some solvers such as Gurobi and CPLEX have options that help deal with numerical issues (e.g. NumericFocus, ScaleFlag, etc. for Gurobi; see the sketch below). Look up the relevant options for whichever solver you are using.
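As an illustration only (a minimal sketch; the thread is about GAMS, but these parameters are easiest to show through Gurobi's Python API, and the toy model is a made-up placeholder for your own):

```python
import gurobipy as gp

# Placeholder model; in practice this is your own MIP.
m = gp.Model("numerics-demo")
x = m.addVar(vtype=gp.GRB.INTEGER, name="x")
m.setObjective(x, gp.GRB.MAXIMIZE)
m.addConstr(x <= 10)

# Favor numerical stability over speed (0 = automatic .. 3 = maximum care).
m.Params.NumericFocus = 3
# More aggressive scaling of the constraint matrix can also help.
m.Params.ScaleFlag = 2

m.optimize()
```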

Related

Always enabling numeric instability correction with Gurobi

We run large-scale optimization problems on a regular basis using CVXPY + Gurobi.
Our optimization problem sometimes (~10% of the time) becomes numerically unstable.
Fortunately, the issue resolves itself after re-running the optimization problem with Gurobi's numeric-instability correction parameter set, i.e. NumericFocus=3.
We were curious about two things:
To avoid re-running a second time, can we always enable the numeric-instability correction parameter NumericFocus=3 by default?
Other than a slightly higher runtime, are there any other downsides?
If you haven't done so already, please read the Guidelines for Numerical Issues in the Gurobi Optimizer Reference Manual. In short, there is far more to numerical issues than setting a single magic parameter. If you are a commercial customer, you can contact Gurobi Support for specific guidance on your models.
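For reference, a minimal sketch of how such a parameter can be set permanently through CVXPY (the tiny LP is purely illustrative); CVXPY forwards extra keyword arguments of solve to the underlying Gurobi model:

```python
import cvxpy as cp

# Illustrative toy problem; substitute your own.
x = cp.Variable(2, nonneg=True)
prob = cp.Problem(cp.Minimize(cp.sum(x)), [x[0] + 2 * x[1] >= 1])

# Extra keyword arguments are forwarded to Gurobi as solver parameters,
# so NumericFocus can simply always be set here.
prob.solve(solver=cp.GUROBI, NumericFocus=3)
print(prob.value)
```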

Transform an optimisation problem for MOSEK

I would like to use Mosek to solve the following problem:
The constraint is convex. In the documentation of the problem classes that Mosek can solve I could not find a close example. Hence, I wonder: (1) Is Mosek suitable for solving the problem above? (2) If yes, how can I adapt the problem above so that Mosek can solve it? (3) If not, could you suggest an alternative solver I might use?
Yes, an upper bound on the softplus function, or more generally on log-sum-exp, can be modeled with the exponential cone, as shown here: https://docs.mosek.com/modeling-cookbook/expo.html#softplus-function
Here is an example where log-sum-exp is used in a bigger problem: https://docs.mosek.com/latest/pythonfusion/case-studies-logistic.html#doc-case-studies-logistic
Many modeling tools that can use Mosek as a solver have a log_sum_exp atom available directly; for instance, see https://www.cvxpy.org/tutorial/functions/index.html
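To make that last suggestion concrete, here is a minimal sketch (the objective and data are made up for illustration) of using CVXPY's log_sum_exp atom with Mosek as the solver:

```python
import cvxpy as cp
import numpy as np

np.random.seed(0)
A = np.random.randn(5, 3)  # illustrative data
x = cp.Variable(3)

# log_sum_exp(A @ x) is convex, so bounding it above is a convex constraint;
# under the hood it is represented with exponential cones.
prob = cp.Problem(cp.Minimize(cp.sum_squares(x - 1)),
                  [cp.log_sum_exp(A @ x) <= 1.0])
prob.solve(solver=cp.MOSEK)
print(x.value)
```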

Using Bonmin, Couenne, and Ipopt for NLP

I want to make sure that I can use Bonmin and Couenne for solving a pure NLP problem (I do not have integer variables yet), and I want to obtain a global optimum, not a local one. I also read that Ipopt first searches for the global answer and, if it does not find it, provides a local answer. How can I tell that my answer is a global optimum when using Ipopt? Also, what are the best open-source NLP and MINLP solvers that can be used from Pyomo?
The main reason for my question is the following output using Bonmin:
NOTE: You are using Ipopt by default with the MUMPS linear solver.
Other linear solvers might be more efficient (see Ipopt documentation).
Regards
Some notes:
(1) "Ipopt first search for the global answer and if it does not find that it will provide a local answer" This is probably not how I would phrase it. IPOPT finds local solutions. For some problems these will be the global solution. For convex problems, this is always the case (except for numerical issues).
(2) Bonmin is a local MINLP solver; Couenne is a global NLP/MINLP solver. Typically Bonmin can solve larger problems than Couenne, but it only guarantees local solutions.
(3) "NOTE: You are using Ipopt by default with the MUMPS linear solver. Other linear solvers might be more efficient (see Ipopt documentation)." This is just a notification that you are using IPOPT with linear algebra routines from MUMPS. There are other linear sub-solvers that IPOPT can use and that may perform better on large problems. Often the HARWELL routines (typically called MAnn) give better performance. MUMPS is free while the Harwell routines require a license.
In a follow-up answer (well, it is not an answer at all) it is stated:
Regarding Ipopt, how can I tell whether it found the global solution or a local optimum? Will the code notify me of that? Regarding Bonmin, according to the AMPL page it provides the global solution for convex problems: "Finds globally optimal solutions to convex nonlinear problems in continuous and discrete variables, and may be applied heuristically to nonconvex problems." You were saying that it obtains a local solution, so I am a bit confused about this part. But the general question about all these codes is: how can I find out whether the answer is a global optimum?
(a) Ipopt does not know whether a solution is a local or a global optimum. For convex problems a local optimum is a global optimum. You will need to convince yourself that the problem you pass to Ipopt is convex (Ipopt will not do this for you).
(b) Bonmin: the same. If the problem is convex, it will find global solutions; otherwise you will get a local solution. You get no notification about whether a solution is global: Bonmin does not know whether a solution is a global optimum.
(c) When looking for guaranteed global solutions, you can use a local solver only when the problem is convex; for other problems you need a global solver. Another approach is to use a multi-start algorithm with a local solver (see the sketch below). That gives you confidence that you are not ending up in a bad local optimum.
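A minimal sketch of the multi-start idea, using Pyomo's multistart wrapper around Ipopt (the nonconvex objective below is made up; it has several local minima, so a single Ipopt run may stop at a bad one):

```python
from pyomo.environ import ConcreteModel, Var, Objective, SolverFactory, cos

# Nonconvex toy NLP with several local minima.
model = ConcreteModel()
model.x = Var(bounds=(-10, 10), initialize=5.0)
model.obj = Objective(expr=(model.x - 1) ** 2 + 10 * cos(model.x))

# 'multistart' re-solves from random starting points with a local solver
# (Ipopt here) and keeps the best solution found.
SolverFactory('multistart').solve(model, solver='ipopt', iterations=20)
print(model.x.value)
```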
If possible, I suggest discussing this with your teacher. These concepts are important to understand (and most solver manuals assume you know about them).

Examples of Apache Math optimization

I have a simple optimization problem and am looking for Java software for it.
The Apache Math optimization software looks like just what I want, but I can't find documentation to suit my needs (where those needs are: usable by a beginner / non-maths professional!).
Does anyone know of a worked, simple example?
In case it helps, the problem is that I want to find the max r where
r1 = s1 * m1
r2 = s2 * m2
and there are some constraints and formulas defining the relationships between the variables. The Excel Solver works fine for this problem. I got LPSolve working great, but this problem requires a multiplication of s and m, and I understand LPSolve can't help, as this makes the problem nonlinear.
I recently ported the derivative-free non-linear constrained optimization code COBYLA2 to Java. Since it does not explicitly rely on derivatives, the algorithm may require quite a few iterations for larger problems. Nonetheless, you are able to formulate your problem with both a non-linear objective function and (potentially) non-linear constraints.
You can read more about it and download the source code from here.
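The Java port above is the answerer's own code, but the same COBYLA algorithm is available elsewhere. Purely to illustrate how a problem with products of variables can be fed to a derivative-free solver, here is a sketch using SciPy's COBYLA; taking r = r1 + r2 and bounding every variable by 10 are made-up stand-ins for the questioner's undisclosed constraints:

```python
from scipy.optimize import minimize

# Maximize r = s1*m1 + s2*m2 by minimizing its negative.
def neg_r(v):
    s1, m1, s2, m2 = v
    return -(s1 * m1 + s2 * m2)

# COBYLA takes inequality constraints of the form g(v) >= 0.
# Illustrative constraints: 0 <= each variable <= 10.
cons = [{'type': 'ineq', 'fun': lambda v, i=i: v[i]} for i in range(4)]
cons += [{'type': 'ineq', 'fun': lambda v, i=i: 10 - v[i]} for i in range(4)]

res = minimize(neg_r, x0=[1.0, 1.0, 1.0, 1.0], method='COBYLA',
               constraints=cons)
print(res.x, -res.fun)
```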
I am not aware of a simple Java-based NLP solver. (I did find an example of quadratic programming (QP) in Apache Commons Math, but it doesn't qualify, since you asked for a non-maths-professional example.)
I have two suggestions for you to solve your non-linear program:
1. Excel's Solver does have the ability to tackle non-linear problems (don't use LPSOLVE). In fact, NLP is the default mode in Solver.
Here are two links on using Excel to solve NLPs: Example 1 - a step-by-step Solver walk-through that covers NLP, and
Example 2 - a general neural network example in Excel.
Also for Excel, I like Paul Jensen's (UT Austin) ORMM add-ins.
He has a module called Teach NLP. Chapter 10 of his book deals with NLP and is available from his site.
2. If you are going to be doing even some amount of data analysis, then I recommend investing a few hours to download and learn the basics of R.
R has numerous packages and libraries for optimization; optim() and nlme are relevant for solving non-linear programs.
Just for completeness, I mention SAS, MATLAB and CPLEX as other options. If you have access to any of these, they all do a very good job with solving non-linear programs.
Hope these pointers help.

How do you find the most discriminant terms in binary document classification?

I want to use feature selection to find the terms in a document that are most useful for a binary classification task.
I've been looking around:
This mentions Mutual Information and the chi-squared test metric
http://nlp.stanford.edu/IR-book/html/htmledition/feature-selection-1.html
MATLAB has a number of functions as well:
http://www.mathworks.com/help/toolbox/stats/brj0qbu.html
Feature Selection in MATLAB
Of the above, relieff and rankfeatures look promising.
I do not know if my data follows a normal distribution. Any thoughts on which technique performs the best? Are there any newer methods you would suggest? The focus is to increase classification accuracy.
Thank you!
Since the answer is highly dependent on the nature of your data, I'd suggest playing with several options, possibly using a hold-out set for verification.
The easiest path would probably be to use Weka or RapidMiner for experimenting. While choosing from the plethora of options they provide, you'll probably get acquainted with several other methods.
Having said that, I have found mutual information / information gain to be useful on a large variety of problems; a small sketch follows.
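As a concrete starting point (scikit-learn is my suggestion, not something from the thread), a minimal sketch of ranking terms by mutual information for a binary classification task; the toy corpus and labels are made up:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

# Toy corpus with binary labels; substitute your documents.
docs = ["cheap meds buy now", "meeting agenda attached",
        "buy cheap now", "project status meeting"]
labels = np.array([1, 0, 1, 0])

# Bag-of-words term counts.
vec = CountVectorizer()
X = vec.fit_transform(docs)

# Mutual information between each term and the class label.
mi = mutual_info_classif(X, labels, discrete_features=True)

# Terms sorted from most to least discriminative.
for term, score in sorted(zip(vec.get_feature_names_out(), mi),
                          key=lambda p: -p[1]):
    print(f"{term}: {score:.3f}")
```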