Nelder-Mead algorithm for constrained optimization?

I have read that the Nelder-Mead algorithm is intended for unconstrained optimization.
http://www.scholarpedia.org/article/Nelder-Mead_algorithm
I think Matlab's Nelder-Mead is also used only for unconstrained optimization.
However, I am a little confused, because I found a Java API for optimization,
http://www.ee.ucl.ac.uk/~mflanaga/java/Minimisation.html
(Flanagan's Scientific Library),
that has a class implementing the Nelder-Mead simplex method and allows constraints and bounds to be defined.
So, is the version implemented in Flanagan's API a modified variant of the "classical" Nelder-Mead algorithm?

It looks like the API is implementing a simple "soft" constraint system, where constraints are transformed into penalty functions which severely penalize regions outside the constraints. It's a cheap-and-cheerful way of adding constraints to an unconstrained solver, but there'll be a tradeoff between optimality, convergence, and the degree to which the constraints are satisfied.
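To make the penalty idea concrete, here is a minimal sketch applied to SciPy's (unconstrained) Nelder-Mead solver rather than Flanagan's Java library; the objective, constraint, and penalty weight are all invented for illustration:

```python
from scipy.optimize import minimize

# Hypothetical problem: minimize (x-1)^2 + (y-2)^2 subject to x + y <= 2.
def objective(p):
    x, y = p
    return (x - 1.0) ** 2 + (y - 2.0) ** 2

def penalized(p, weight=1e4):
    x, y = p
    # Amount by which the constraint x + y <= 2 is breached (0 if feasible).
    violation = max(0.0, x + y - 2.0)
    # A severe quadratic penalty turns the constrained problem into an
    # unconstrained one that plain Nelder-Mead can handle.
    return objective(p) + weight * violation ** 2

result = minimize(penalized, x0=[0.0, 0.0], method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 2000})
x, y = result.x  # lands near the constrained optimum (0.5, 1.5)
```

Note the tradeoff mentioned above: the returned point slightly violates the constraint (by an amount shrinking as the weight grows), which is exactly the "soft" behaviour of this approach.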

Related

How does the Constrained Nonlinear Optimization VI work? (Theory)

I am trying to understand the theory behind LabVIEW's Constrained Nonlinear Optimization VI. The description explains how to use it, but not which optimization algorithm works behind it.
Here is an overview of the optimization algorithms, but it simply states:
Solves a general nonlinear optimization problem with nonlinear equality constraint and nonlinear inequality constraint bounds using a sequential quadratic programming method.
I suspect that it is a wrapper for multiple algorithms depending on the inputs... I want to know whether it uses Levenberg-Marquardt, downhill simplex, or some other theory. It is not even stated whether it is trust-region or line search, or how the bounds are ensured (e.g. by reflection)... In other languages, the documentation often refers to a paper from which I can take the original theory. This is what I am looking for. Can anyone help (or do I have to contact NI support)? Thanks.
(using LabVIEW 2017 and 2018 32-bit)

Limitations of optimisation software such as CPLEX

Which of the following optimisation methods can't be done in an optimisation software such as CPLEX? Why not?
Dynamic programming
Integer programming
Combinatorial optimisation
Nonlinear programming
Graph theory
Precedence diagram method
Simulation
Queueing theory
Can anyone point me in the right direction? I didn't find too much information regarding the limitations of CPLEX on the IBM website.
Thank you!
That's kind of a big shopping list, and most of the things on it are not optimisation methods.
CPLEX certainly does integer programming, non-linear programming (quadratic, SOCP, and similar, but not general non-linear) and combinatorial optimisation out of the box.
It is usually possible to re-cast things like DP as MILP models, but that will obviously require a bit of work. Lots of MILP models are also based on graphs, so it is certainly possible to solve a lot of graph problems using a MILP solver such as CPLEX.
Looking wider at topics like simulation, that is quite a different approach. Simulation really is NOT an optimisation method, but it can be used alongside optimisation to get extra insights which may be useful in a business context. It might be used, for example, to discover empirical relationships that could then be used in an optimisation model solved by CPLEX.
The same can probably also be said for things like queueing theory, precedence, etc. Basically, use CPLEX as an optimisation tool to solve part or all of your problem once you have structured and analysed it via one of these other approaches.
Hope that helps.

Get infeasibilities with IBM cplex feasopt python's interface

I am using IBM CPLEX python's API to solve a linear program.
The linear program I am solving turned out to be infeasible, so I am using feasopt() from CPLEX to relax the problem.
I could get a feasible solution through my_prob.feasopt(my_prob.feasopt.all_constraints()), where feasopt relaxes all the constraints.
But I am interested in getting the amount of relaxation for each constraint. In particular, the documentation says: "In addition to that conventional solution vector, FeasOpt also produces a vector of values that provide useful information about infeasible constraints and variables."
I am interested in getting this vector.
I believe you are looking for the methods available under the Cplex.solution.infeasibility interface.
Example usage:
# query the infeasibilities for all linear constraints
rowinfeas = my_prob.solution.infeasibility.linear_constraints(
    my_prob.solution.get_values())

GeneticSharp - Optimization under constraints

Is it possible to implement inequality constraints (linear or non-linear) in GeneticSharp?
You can implement them in the fitness function, by penalizing chromosomes that breach the constraint with an outrageous penalty compared with the normal fitness ranges.
I have done it for a few problems and it worked fine.
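As a rough illustration of that penalty trick (in Python rather than C#, with a made-up knapsack-style fitness and an invented capacity), the idea looks like this:

```python
from collections import namedtuple

# Hypothetical chromosome: a list of selected items in a knapsack-style
# problem (the Item fields and the capacity are invented for illustration).
Item = namedtuple("Item", ["value", "weight"])

def fitness(chromosome, capacity=50.0):
    profit = sum(item.value for item in chromosome)
    weight = sum(item.weight for item in chromosome)
    violation = max(0.0, weight - capacity)  # breach of the weight constraint
    # The penalty weight is chosen to dwarf any achievable profit, so
    # infeasible chromosomes always rank below feasible ones.
    return profit - 1e6 * violation

feasible = [Item(10, 20), Item(15, 25)]      # total weight 45 <= 50
infeasible = [Item(100, 40), Item(100, 40)]  # total weight 80 > 50
```

In GeneticSharp you would put the same logic inside your `IFitness.Evaluate` implementation; the GA then drives infeasible chromosomes out of the population on its own.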

Is mixed integer linear programming used to implement optimization algorithms (e.g., genetic or particle swarm)?

I am learning about optimization algorithms for automatic grouping of users. I am completely new to these algorithms and came across them while reviewing the related literature. In one of the articles, by contrast, the authors implemented their own algorithm (based on their own logic) using Integer Programming (which is how I heard about IP).
I am wondering whether one needs to implement a genetic/particle-swarm (or any other optimization) algorithm using mixed integer linear programming, or whether this is just one of the options. In the end, I will need to build a web-based system that groups users automatically. I appreciate any help.
I think you are confusing the terms a bit. These are all different optimization techniques. You can certainly represent a problem using Mixed Integer Programming (MIP) notation, and then solve it either with a MIP solver or with genetic algorithms (GA) or Particle Swarm Optimization (PSO).
Integer Programming is part of a more traditional paradigm called mathematical programming, in which a problem is modelled based on a set of somewhat rigid equations. There are different types of mathematical programming models: linear programming (where all variables are continuous), integer programming, mixed integer programming (a mix of continuous and discrete variables), nonlinear programming (some of the equations are not linear).
Mathematical programming models are good and robust; depending on the model, you can tell how far you are from an ideal solution, for example. But these models often struggle with problems that have many variables.
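As a small illustration of the MIP side, here is a toy knapsack model solved with SciPy's `milp` (available in SciPy >= 1.9); the item values, weights, and capacity are invented for the example:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Invented data: maximize the total value of the chosen items without
# exceeding the knapsack capacity; x[i] = 1 if item i is chosen.
values = np.array([10.0, 13.0, 7.0, 8.0])
weights = np.array([5.0, 8.0, 4.0, 3.0])
capacity = 10.0

res = milp(
    c=-values,  # milp minimizes, so negate the values to maximize
    constraints=LinearConstraint(weights.reshape(1, -1), ub=capacity),
    integrality=np.ones_like(values),  # every variable is integer...
    bounds=Bounds(0, 1),               # ...and restricted to {0, 1}
)
chosen = res.x.round().astype(int)  # which items to pack
total_value = -res.fun
```

A GA or PSO could attack the same problem by using the (negated) objective plus a constraint penalty as its fitness function, which is exactly the sense in which the model and the solution technique are independent choices.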
On the other hand, genetic algorithms and PSO belong to a younger branch of optimization techniques, often called metaheuristics. These techniques often find good, or at least reasonable, solutions even for large and complex problems, and they have many practical applications.
There are some hybrid algorithms that combine mathematical models and metaheuristics, and in this case, yes, you would use both MIP and GA/PSO. Choosing an approach (MIP, metaheuristics, or hybrid) is very problem-dependent; you have to test what works better for you. I would usually prefer mathematical models if the focus is on the accuracy of the solution, and metaheuristics if my objective function is very complex and I need a quick, although poorer, solution.