GeneticSharp - Optimization under constraints

Is it possible to implement inequality constraints (linear or non-linear) in GeneticSharp?

You can implement them in the fitness function by penalizing chromosomes that violate a constraint with an outrageous penalty compared to the normal fitness range.
I have done it for a few problems and it worked fine.
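GeneticSharp itself is a C# library, so rather than guess at its exact API, here is a language-neutral sketch of the penalty idea in Python; raw_fitness and the two constraints are just placeholders for your own problem:

```python
# Sketch of penalty-based constraint handling for a GA fitness function.
# raw_fitness and the constraint expressions are placeholders for your problem.

def raw_fitness(x):
    # The objective you actually want to maximize (placeholder).
    return sum(x)

def constraint_violations(x):
    # Example inequality constraints g_i(x) <= 0; return the positive amount by
    # which each constraint is breached, 0 if it is satisfied.
    g1 = x[0] + 2 * x[1] - 10        # linear:      x0 + 2*x1 <= 10
    g2 = x[0] ** 2 - 4 * x[1]        # non-linear:  x0^2 <= 4*x1
    return [max(0.0, g1), max(0.0, g2)]

PENALTY = 1e6  # far larger than any normal fitness value

def fitness(chromosome):
    x = chromosome  # with GeneticSharp you would decode the chromosome's genes here
    violation = sum(constraint_violations(x))
    if violation > 0:
        # Penalize proportionally so "less infeasible" chromosomes still rank
        # above "more infeasible" ones, which helps the GA find its way back.
        return raw_fitness(x) - PENALTY * violation
    return raw_fitness(x)

print(fitness([3.0, 3.0]))   # feasible
print(fitness([9.0, 2.0]))   # breaches x0 + 2*x1 <= 10, heavily penalized
```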

Related

How does the Constrained Nonlinear Optimization VI work? (Theory)

I am trying to understand the theory behind LabVIEW's Constrained Nonlinear Optimization VI. The documentation describes how to use it, but not which optimization algorithm works behind it.
Here is an overview of the optimization algorithms, but it simply states:
Solves a general nonlinear optimization problem with nonlinear equality constraint and nonlinear inequality constraint bounds using a sequential quadratic programming method.
I suspect that it is a wrapper for multiple algorithms depending on the inputs... I want to know whether it uses Levenberg-Marquardt, Downhill-Simplex, or some other method. It is not even stated whether it is trust-region or line-search based, nor how the bounds are enforced (e.g. by reflection)... In other languages, the documentation often refers to a paper from which I can take the original theory. This is what I am looking for. Can anyone help (or do I have to contact NI support)? Thanks.
(using LabVIEW 2017 and 2018, 32-bit)
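(For reference, the problem class described in that quote, a nonlinear objective with nonlinear equality and inequality constraints plus bounds solved by sequential quadratic programming, can be written down with, say, SciPy's SLSQP solver. The sketch below only illustrates the problem class, not what the LabVIEW VI does internally.)

```python
# Illustration of an SQP-style constrained solve (SciPy's SLSQP), NOT the
# LabVIEW implementation: nonlinear objective, one equality and one
# inequality constraint, plus simple bounds.
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 3.0},   # x0 + x1 = 3
    {"type": "ineq", "fun": lambda x: x[0] * x[1] - 1.0},   # x0 * x1 >= 1
]
bounds = [(0.0, 5.0), (0.0, 5.0)]

result = minimize(objective, x0=[0.5, 0.5], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print(result.x, result.fun)   # roughly (0.75, 2.25)
```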

Why is the OR-Tools guided local search, which starts with a feasible solution, considered constraint programming?

I'm using the OR-Tools library to solve a VRP. I give it an initial feasible solution that satisfies all the constraints of my problem but is sub-optimal. OR-Tools then runs the GUIDED_LOCAL_SEARCH metaheuristic, continuously perturbing parts of my solution (possibly making it infeasible at times) until it hopefully reaches a better solution than my initial one.
Why does it use a constraint programming solver? My understanding is that classic constraint programming starts from an infeasible (possibly empty) solution, propagates the constraints to narrow the domains of the variables until reaching a fixed point, and then makes a decision. It iterates like this until it solves the problem, backtracking when it reaches a dead end (think Sudoku).
In what way are these capabilities (propagation, backtracking) needed when making the small perturbations?
There are two reasons.
1) The initial-solution heuristics are a combination of fast local search and standard constraint programming search.
2) The whole local search implementation is built on top of a traditional constraint programming solver and uses constraints and propagators to validate solutions and complete them.
See: https://github.com/google/or-tools/issues/920
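For illustration, here is roughly what that "start from a feasible assignment, then let guided local search improve it" workflow looks like with the Python routing API; the distance matrix and the initial route below are made up:

```python
# Sketch: warm-start OR-Tools routing with a feasible solution, then improve
# it with guided local search. Distance matrix and initial route are made up.
from ortools.constraint_solver import pywrapcp, routing_enums_pb2

distance_matrix = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
manager = pywrapcp.RoutingIndexManager(len(distance_matrix), 1, 0)  # 1 vehicle, depot 0
routing = pywrapcp.RoutingModel(manager)

def distance_callback(from_index, to_index):
    return distance_matrix[manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]

transit_index = routing.RegisterTransitCallback(distance_callback)
routing.SetArcCostEvaluatorOfAllVehicles(transit_index)

params = pywrapcp.DefaultRoutingSearchParameters()
params.local_search_metaheuristic = (
    routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH)
params.time_limit.FromSeconds(5)

# A feasible but sub-optimal starting route (node order, depot omitted).
initial = routing.ReadAssignmentFromRoutes([[2, 1, 3]], True)
solution = routing.SolveFromAssignmentWithParameters(initial, params)
print(solution.ObjectiveValue())
```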

Nelder-Mead algorithm for constrained optimization?

I have read that the Nelder-Mead algorithm works for unconstrained optimization.
http://www.scholarpedia.org/article/Nelder-Mead_algorithm
I think the Nelder-Mead implementation in Matlab is also only for unconstrained optimization.
However, I am a little bit confused, since I found a Java API for optimization
http://www.ee.ucl.ac.uk/~mflanaga/java/Minimisation.html
(Flanagan's Scientific Library)
that has a class implementing the Nelder-Mead simplex which allows constraints and bounds to be defined.
So, is the version implemented in Flanagan's API a modified variation of the "classical" Nelder-Mead algorithm?
It looks like the API is implementing a simple "soft" constraint system, where constraints are transformed into penalty functions which severely penalize regions outside the constraints. It's a cheap-and-cheerful way of adding constraints to an unconstrained solver, but there'll be a tradeoff between optimality, convergence, and the degree to which the constraints are satisfied.
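For illustration, the penalty idea looks roughly like this (sketched with SciPy's Nelder-Mead rather than Flanagan's library; the objective and constraints are arbitrary examples):

```python
# Sketch of the "constraints as penalty" trick around an unconstrained
# Nelder-Mead solver (SciPy here, not Flanagan's library).
from scipy.optimize import minimize

def objective(x):
    return (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2

def penalized(x, weight=1e6):
    # Example constraints x0 + x1 <= 4 and x0 >= 0, turned into penalty terms.
    violation = max(0.0, x[0] + x[1] - 4.0) + max(0.0, -x[0])
    return objective(x) + weight * violation ** 2

result = minimize(penalized, x0=[0.0, 0.0], method="Nelder-Mead")
print(result.x)  # pushed toward the boundary x0 + x1 = 4 rather than (3, 2)
```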

Giving a priority sequence to constraints in Gurobi/CPLEX (linear programming)

I am working on a business problem for a factory and developing a linear programming solution. The problem has thousands of constraints and variables. I want to give a priority sequence to the constraints so that lower-priority constraints can be breached if there is no optimal solution.
My question is how to set the constraint priority sequence for the CPLEX/Gurobi solver. I am using Java; is there a specific format/function for this?
This is usually done at the modeling level. Add slack variables to the equations, and add a term to the objective that minimizes the slacks using a penalty or cost coefficient. Sometimes you can even use real dollar figures for the cost (e.g. for a storage-capacity constraint, the cost is something like the price of renting extra storage space). This process is sometimes called making the model elastic, or introducing hard and soft constraints, and it is quite often used in practical models.
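A minimal sketch of that elastic pattern (shown with gurobipy in Python for brevity, even though the question uses Java; the Java APIs express the same model, and all numbers below are made up):

```python
# Sketch of an elastic ("soft") constraint in Gurobi: add a slack variable and
# charge for using it in the objective. Numbers are arbitrary illustrations.
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("elastic_demo")
produce = m.addVar(name="produce", lb=0)
extra_storage = m.addVar(name="extra_storage", lb=0)  # slack for the soft constraint

m.addConstr(produce <= 150, name="demand")                         # hard constraint
m.addConstr(produce <= 100 + extra_storage, name="storage_soft")   # soft: capacity 100

# Profit of 8 per unit produced, minus 5 per unit of rented extra storage.
m.setObjective(8 * produce - 5 * extra_storage, GRB.MAXIMIZE)
m.optimize()
print(produce.X, extra_storage.X)  # expect 150 and 50: renting extra space pays off
```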

Improved genetic algorithm for the multiknapsack problem

Recently I've been improving a traditional genetic algorithm for the multiknapsack problem, and my improved genetic algorithm performs better than the traditional one in my tests (I used the publicly available instances from the OR-Library, http://people.brunel.ac.uk/~mastjjb/jeb/orlib/mknapinfo.html, to test the GAs). Does anybody know of other improved GAs? I want to compare mine with other improved genetic algorithms. I searched the internet but couldn't find a good algorithm to compare against.
There should be any number of decent GA methods against which you can compare. However, you should first try to clearly establish exactly which "traditional" GA method you have already tested.
One good method which I can recommend is the NSGA-II algorithm, which was developed for multi-objective optimization.
Take a look at the following for other ideas:
Genetic Algorithm - Wikipedia
Carlos A. Coello Coello (1999). "A Comprehensive Survey of Evolutionary-Based Multiobjective Optimization Techniques", Knowledge and Information Systems, Vol. 1, pp. 269-308.
Carlos A. Coello Coello et al (2005). "Current and Future Research Trends in Evolutionary Multiobjective Optimization", Information Processing with Evolutionary Algorithms, Springer.
You can only compare your solution to problems with the exact same encoding and fitness function (meaning they are equivalent problems). If the problem is different, any comparison quickly becomes irrelevant, since the fitness function is almost always ad hoc for whatever you're trying to solve. In fact, the fitness function is the only thing you need to code if you use a genetic algorithms toolkit; everything else usually comes out of the box.
On the other hand, if the fitness function is the same, then it makes sense to compare results given different parameters, such as a different mutation rate, different implementations of crossover, or even completely different evolutionary paradigms (coevolution, gene expression programming, and so on) compared to standard GAs.
Are you trying to improve the state-of-the-art in multiknapsack solvers by the use of genetic algorithms? Or are you trying to advance the genetic algorithm technique by using multiknapsack as a test platform? (Can you clarify?)
Depending on which one is your goal, the answer to your question is entirely different. Since others have addressed the latter question, I'll assume the former.
There have been few major leaps and bounds over the basic genetic algorithm. The best improvement in solving multiknapsack with genetic algorithms would be to improve your encoding and your mutation and crossover operators, which can make orders of magnitude of difference in the resulting performance and blow any tweaks to the fundamental genetic algorithm out of the water. There is a lot you can do to tailor your mutation and crossover operators to multiknapsack.
I would first survey the literature on multiknapsack to see what are the different kinds of search spaces and solution techniques people have used on multiknapsack. In their optimal or suboptimal methods (independent of genetic algorithms), what kinds of search operators do they use? What do they encode as variables and what do they encode as values? What heuristic evaluation functions are used? What constraints do they check for? Then you would adapt their encodings to your mutation and crossover operators, and see how well they perform in your genetic algorithms.
It is highly likely that an efficient search space encoding or an accurate heuristic evaluation function of the multiknapsack problem can translate into highly effective mutation and crossover operators. Since multiknapsack is a very well studied problem with a large corpus of research literature, it should be a gold mine for you.
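As a rough illustration of what a multiknapsack-tailored operator can look like (a sketch with made-up data, not a reference implementation): bit-flip mutation followed by a greedy repair step that drops the least value-dense items until all capacity constraints hold again.

```python
# Rough sketch of a multiknapsack-tailored mutation: flip bits, then greedily
# repair the solution so all capacity constraints hold again. Data is made up.
import random

values     = [10, 7, 12, 4, 9]          # profit of each item
weights    = [[3, 5, 4, 2, 6],          # resource use, one row per knapsack constraint
              [4, 2, 5, 3, 3]]
capacities = [10, 9]

def feasible(sol):
    return all(sum(w[i] for i in range(len(sol)) if sol[i]) <= cap
               for w, cap in zip(weights, capacities))

def repair(sol):
    # Drop the items with the worst value-to-total-weight ratio until feasible.
    chosen = [i for i, bit in enumerate(sol) if bit]
    chosen.sort(key=lambda i: values[i] / sum(w[i] for w in weights))
    while not feasible(sol) and chosen:
        sol[chosen.pop(0)] = 0   # remove the least "dense" item first
    return sol

def mutate(sol, rate=0.2):
    child = [bit ^ (random.random() < rate) for bit in sol]
    return repair(child)

print(mutate([1, 1, 1, 0, 1]))
```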