Unconstrained Optimization using Gradient and (sparse) Hessian

I'm looking for a C++ optimization package that can do multivariate unconstrained optimization using gradient and Hessian information. I'm doing it now in Matlab using fminunc with the 'GradObj', 'Hessian', and 'HessPattern' options. My Hessian is very sparse so a package that takes that into account would be preferable.
Are there any alternatives to Matlab for this? Open-source or closed-source are both fine. C++ is preferable but I'm flexible.
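For reference, here is roughly the pattern I'm after, written with SciPy's sparse-aware trust-region Newton method (Python rather than C++, but it shows the interface I need: callbacks for the objective, the gradient, and a scipy.sparse Hessian, playing the role of fminunc's 'GradObj'/'Hessian'/'HessPattern'). A sketch on the extended Rosenbrock function, whose Hessian is tridiagonal:

import numpy as np
from scipy import sparse
from scipy.optimize import minimize

def f(x):
    # extended Rosenbrock objective
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def grad(x):
    g = np.zeros_like(x)
    g[:-1] = -400.0 * x[:-1] * (x[1:] - x[:-1]**2) - 2.0 * (1.0 - x[:-1])
    g[1:] += 200.0 * (x[1:] - x[:-1]**2)
    return g

def hess(x):
    # tridiagonal Hessian, returned as a sparse matrix
    n = x.size
    d = np.zeros(n)
    d[:-1] += 1200.0 * x[:-1]**2 - 400.0 * x[1:] + 2.0
    d[1:] += 200.0
    off = -400.0 * x[:-1]
    return sparse.diags([off, d, off], offsets=[-1, 0, 1], format='csc')

x0 = np.full(1000, -1.0)
res = minimize(f, x0, jac=grad, hess=hess, method='trust-ncg')
print(res.success, res.fun)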

Have you considered compiling your MATLAB code into a .dll or .exe that an external program can call? MATLAB has this capability (via the MATLAB Compiler).

You could just ditch the exact Hessian and use a quasi-Newton approach such as L-BFGS, e.g. via libLBFGS. These methods build up curvature information from gradients alone and are usually pretty good.
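If you want to try the idea before wiring up a C++ library, SciPy exposes L-BFGS-B through the same minimize interface; gradient only, with the curvature approximated internally. A quick sketch on the built-in Rosenbrock test function:

import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.full(1000, -1.0)
res = minimize(rosen, x0, jac=rosen_der, method='L-BFGS-B')
print(res.success, res.nit)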

As I understand it, what you need is an efficient linear algebra library. Consider, for example, uBLAS (part of Boost).

I recently came across the trustOptim package in R. It is designed for the case where the Hessian is sparse. As far as I know, the core of the package is written in C++ and interfaced with R via Rcpp, and it's open source as well.

Related

Get infeasibilities with IBM CPLEX feasopt Python interface

I am using the IBM CPLEX Python API to solve a linear program.
The linear program I am solving turned out to be infeasible, so I am using feasopt() from CPLEX to relax the problem.
I could get a feasible solution through my_prob.feasopt(my_prob.feasopt.all_constraints()), where feasopt relaxes all the constraints.
But I am interested in getting the amount of relaxation for each constraint. In particular, the documentation says: "In addition to that conventional solution vector, FeasOpt also produces a vector of values that provide useful information about infeasible constraints and variables."
I am interested in getting this vector.
I believe you are looking for the methods available under the Cplex.solution.infeasibility interface.
Example usage:
# query the infeasibilities for all linear constraints
rowinfeas = my_prob.solution.infeasibility.linear_constraints(
    my_prob.solution.get_values())
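If you also want to know which constraint each value belongs to, you can pair that vector with the constraint names (a sketch, assuming your constraints are named):

# pair each linear constraint name with its infeasibility amount
names = my_prob.linear_constraints.get_names()
for name, amount in zip(names, rowinfeas):
    if amount != 0.0:
        print(name, amount)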

Robust regression in Scilab

For the purpose of robust linear regression, I want to implement an M-estimator with the Geman-McClure loss function.
The class of M-estimators is presented in this document, and Geman-McClure can be found on page 13.
To solve the minimization problem, iteratively reweighted least squares (IRLS) is recommended. How can I implement this procedure in Scilab? Can I use optim?
From the site of the document you linked, some Matlab demo files are also available in a zip. Two files in this zip strike me as important:
utvisToolbox/tutorials/lineTut/robustDemo.m
utvisToolbox/tutorials/lineTut/robustIteration.m
The robustDemo.m file contains an implementation of the robust M-estimation algorithm.
As to how to implement this in Scilab: you could start by converting these Matlab files to Scilab using mfile2sci. At least in the sampleRobustObj and robustIteration functions, only basic Matlab constructs are used, which mfile2sci should be able to convert.
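In case it helps while porting, the core IRLS loop is only a few lines. A NumPy sketch (the structure carries over to Scilab almost one-to-one); sigma is the Geman-McClure scale parameter, and the weight is written up to a constant factor, which cancels in the weighted least-squares step:

import numpy as np

def irls_geman_mcclure(X, y, sigma=1.0, iters=50):
    # start from the ordinary least-squares solution
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta                  # current residuals
        w = 1.0 / (sigma**2 + r**2)**2    # Geman-McClure weights, up to a constant
        Xw = w[:, None] * X               # reweighted design matrix
        beta_new = np.linalg.solve(X.T @ Xw, X.T @ (w * y))
        if np.allclose(beta_new, beta, atol=1e-10):
            break
        beta = beta_new
    return beta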

FSM framework required

Please recommend a framework for finite state machine creation and simulation. I am aware of the Stateflow package in Matlab, but are there any other good choices? It doesn't have to be Matlab only; frameworks in Java, R or Python are also fine.
What I am basically trying to do is evolve automata for a binary sequence prediction problem, as shown in this article.
Thanks.
Consider Ragel. It has a manual and a good number of examples; I find the documentation superior to that of AT&T Research's FSM Tools (which consisted of a couple of manpages and sparse examples).
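That said, for the evolutionary experiments you describe, a finite state machine for binary sequence prediction is small enough to represent directly as tables, which also makes mutation and crossover trivial. A Python sketch (the representation is illustrative, not from any particular library):

import random

def random_fsm(n_states):
    # Moore-style machine: each state carries a predicted output bit
    # and a next state for each possible input bit (0 or 1)
    return {
        'out':  [random.randint(0, 1) for _ in range(n_states)],
        'next': [[random.randrange(n_states) for _ in (0, 1)]
                 for _ in range(n_states)],
    }

def fitness(fsm, bits):
    # fraction of bits the machine predicts correctly, one step ahead
    state, correct = 0, 0
    for i in range(len(bits) - 1):
        if fsm['out'][state] == bits[i + 1]:
            correct += 1
        state = fsm['next'][state][bits[i]]
    return correct / (len(bits) - 1)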

Examples of Apache Commons Math optimization

I have a simple optimization problem and am looking for Java software for it.
The Apache Commons Math optimization software looks like just what I want, but I can't find documentation that suits my needs (where those needs are to be useful to a beginner / non-maths professional!).
Does anyone know of a worked, simple example?
In case it helps, the problem is that I want to find the max r where
r1 = s1 * m1
r2 = s2 * m2
and there are some constraints and formulas defining the relationships between the variables. The Excel Solver works fine for this problem. I got LPSolve working great, but this problem requires a multiplication of s and m, and as I understand it LPSolve can't help here, as this makes the problem non-linear.
I recently ported the derivative-free non-linear constrained optimization code COBYLA2 to Java. Since it does not explicitly rely on derivatives, the algorithm may require quite a few iterations for larger problems. Nonetheless, you are able to formulate your problem with both a non-linear objective function and (potentially) non-linear constraints.
You can read more about it and download the source code from here.
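If you want to prototype the formulation before committing to Java, SciPy ships the same COBYLA algorithm. A sketch of your bilinear objective; the budget constraint and bounds below are hypothetical, made up purely so the example runs:

import numpy as np
from scipy.optimize import minimize

def neg_r(x):
    s1, m1, s2, m2 = x
    return -(s1 * m1 + s2 * m2)   # maximize r = s1*m1 + s2*m2

cons = [
    {'type': 'ineq', 'fun': lambda x: 10.0 - (x[0] + x[2])},  # hypothetical: s1 + s2 <= 10
    {'type': 'ineq', 'fun': lambda x: x},                     # all variables >= 0
    {'type': 'ineq', 'fun': lambda x: 1.0 - x[1]},            # hypothetical: m1 <= 1
    {'type': 'ineq', 'fun': lambda x: 1.0 - x[3]},            # hypothetical: m2 <= 1
]

res = minimize(neg_r, x0=np.ones(4), method='COBYLA', constraints=cons)
print(res.x, -res.fun)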
I am not aware of a simple Java-based NLP solver. (I did find an example of quadratic programming (QP) in Apache Commons Math, but it doesn't qualify, since you asked for a non-maths-professional example.)
I have two suggestions for solving your non-linear program:
1. Excel's Solver does have the ability to tackle non-linear problems. (Don't use LPSOLVE.) In fact, NLP is the default mode in Solver.
Here are two links on using Excel to solve NLPs: Example 1 - a step-by-step Solver walk-through that covers NLP, and
Example 2 - a general neural network example in Excel.
Also for Excel, I like Paul Jensen's (utexas) ORMM add-ins.
He has a module called Teach NLP. Chapter 10 of his book deals with NLP and is available from his site.
2. If you are going to be doing even some amount of data analysis, then I recommend investing a few hours in downloading and learning the basics of R.
R has numerous packages and libraries for optimization. optim() and nlme are relevant for solving non-linear programs.
Just for completeness, I mention SAS, MATLAB and CPLEX as other options. If you have access to any of these, they all do a very good job with solving non-linear programs.
Hope these pointers help.

Artificial Intelligence Compiler

I was wondering, is it possible to use Artificial Intelligence to make compilers better?
Things I could imagine if it were possible:
More specific error messages
Improving compiler optimizations, so the compiler could actually understand what you're trying to do, and do it better
If it is possible, are there any research projects on this subject?
You should look at MILEPOST GCC:
MILEPOST GCC is the first practical attempt to build a machine-learning-enabled open-source self-tuning production (and research) compiler that can adapt to any architecture using iterative feedback-directed compilation, machine learning and collective optimization.
An optimizing compiler is actually a very complex expert system, and expert systems are one of the oldest branches of artificial intelligence.
Are you referring to something like genetic programming?
http://en.wikipedia.org/wiki/Genetic_programming
This is indeed a field being researched. Look at the MILEPOST branch of GCC, which relies on profile-guided optimization and machine learning. The recent scientific literature on compilers is full of papers using a combination of data mining, machine learning (through genetic algorithms or neural networks), and more "classical" pattern recognition of certain code patterns.