Evolution Strategies [closed]

What is the basic idea behind self adaptive evolution strategies? What are the strategy parameters and how are they manipulated during the run of the algorithm?

There's an excellent article on Scholarpedia about Evolution Strategies. I can also recommend the journal article: Beyer, H.-G. & Schwefel, H.-P. "Evolution Strategies - A Comprehensive Introduction", Natural Computing, 2002, 1, 3-52.
In the history of ES there have been several ways of adapting strategy parameters. The target of the adaptation is generally the shape and size of the sampling region around the current solution. The first was the 1/5th success rule, then came sigma self-adaptation, and finally covariance matrix adaptation (CMA-ES). Why is this important? To put it simply: adaptation of the mutation strength is necessary to maintain evolution progress in all stages of the search. The closer you come to the optimum, the less you want to mutate your vector.
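To make the 1/5th success rule concrete, here is a minimal sketch (the adjustment factor is illustrative, not a recommended constant): if more than a fifth of recent mutations improved the parent, the step size grows, otherwise it shrinks.

```python
def one_fifth_rule(sigma, success_rate, factor=1.5):
    """Rechenberg's 1/5th success rule (sketch; factor is illustrative):
    enlarge the mutation strength if more than 1/5 of recent mutations
    improved the parent, shrink it if fewer did, keep it otherwise."""
    if success_rate > 1 / 5:
        return sigma * factor
    if success_rate < 1 / 5:
        return sigma / factor
    return sigma
```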
The advantage of CMA-ES over sigma self-adaptation is that it also adapts the shape of the region. Sigma self-adaptation is restricted to axis-parallel adaptations only.
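For sigma self-adaptation itself, the strategy parameters (one step size per coordinate) are carried inside each individual: they are mutated log-normally first and then used to perturb the object variables, so good step sizes are inherited along with good solutions. A minimal NumPy sketch, using the learning rates commonly cited in the ES literature:

```python
import numpy as np

def self_adaptive_mutation(x, sigma):
    """Non-isotropic sigma self-adaptation (sketch): mutate the step sizes
    log-normally, then use the new step sizes to mutate the object variables."""
    n = len(x)
    tau_global = 1.0 / np.sqrt(2.0 * n)          # factor shared by all coordinates
    tau_local = 1.0 / np.sqrt(2.0 * np.sqrt(n))  # per-coordinate factor
    new_sigma = sigma * np.exp(tau_global * np.random.randn()
                               + tau_local * np.random.randn(n))
    new_x = x + new_sigma * np.random.randn(n)
    return new_x, new_sigma
```

In a (mu, lambda)-ES, selection on the offspring's fitness then implicitly favours step sizes that produce good offspring, which is the self-adaptation effect.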

To get a larger picture, the book Introduction to Evolutionary Computing has a great chapter (#8) on parameter control, of which self-adaptation is a part.
Here is a quote taken from the introductory section:
Globally, we distinguish two major forms of setting parameter values: parameter tuning and parameter control. By parameter tuning we mean the commonly practised approach that amounts to finding good values for the parameters before the run of the algorithm and then running the algorithm using these values, which remain fixed during the run. Later on in this section we give arguments that any static set of parameters having the values fixed during an EA run seems to be inappropriate. Parameter control forms an alternative, as it amounts to starting a run with initial parameter values that are changed during the run.
Parameter tuning is a typical approach to algorithm design. Such tuning is done by experimenting with different values and selecting the ones that give the best results on the test problems at hand. However, the number of possible parameters and their different values means that this is a very time-consuming activity.
[Parameter control] is based on the observation that finding good parameter values for an evolutionary algorithm is a poorly structured, ill-defined, complex problem. This is exactly the kind of problem on which EAs are often considered to perform better than other methods. It is thus a natural idea to use an EA for tuning an EA to a particular problem. This could be done using two EAs: one for problem solving and another one - the so-called meta-EA - to tune the first one. It could also be done by using only one EA that tunes itself to a given problem, while solving that problem. Self-adaptation, as introduced in evolution strategies for varying the mutation parameters, falls within this category.
It is then followed by concrete examples and further details.

Well, the goal behind self-adaptation in evolutionary computation in general is that algorithms should be general and should require as little problem knowledge, in the form of input parameters you have to specify, as possible.
Self-adaptation makes an algorithm more general without needing problem knowledge to choose the correct parametrisation.

Optimizing Ant Colony System for TSP

I have implemented an Ant Colony System for symmetric TSP in Java, using Dorigo's paper from the following link:
http://people.idsia.ch/~luca/acs-bio97.pdf
I also adapted the following strategy:
1. While not all ants have constructed a solution, each ant moves one step to a new city and updates the pheromone on the edge used, using Dorigo's local pheromone update.
2. The ant producing the shortest path globally updates the pheromone on the edges used, using Dorigo's global update formula.
3. After a number of iterations, the shortest path found across all iterations is returned.
Is there a way I can improve the algorithm in order to give better results ?
For Example for TSP instance ch130 found in TSPLIB the optimal solution is 6110 and my algorithm is returning the answer 6223.
My ACS so far has parameters set as those defined in Dorigo's paper
There are a few things you can do to improve your solution:
Increase the number of iterations. There is a possibility that there is no stagnation yet, and new solutions can be achieved.
Increase the parameter associated with the visibility (heuristic) function to favor exploration of other solutions.
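Regarding the second point: the weight of the visibility (heuristic) term is the beta exponent in the ACS state-transition rule. Here is a minimal Python sketch of that rule (tau and eta stand for assumed pheromone and 1/distance matrices; the beta and q0 defaults are only illustrative) showing how a larger beta biases the choice toward short edges:

```python
import random

def acs_choose_next(current, unvisited, tau, eta, beta=2.0, q0=0.9):
    """ACS pseudo-random proportional rule (sketch).
    Increasing beta gives the visibility eta = 1/distance more influence."""
    score = {j: tau[current][j] * (eta[current][j] ** beta) for j in unvisited}
    if random.random() < q0:
        return max(score, key=score.get)      # exploitation: best-scoring city
    total = sum(score.values())               # biased exploration: roulette wheel
    r = random.uniform(0.0, total)
    acc = 0.0
    for city, s in score.items():
        acc += s
        if acc >= r:
            return city
    return city
```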
Have a look at the following two papers for more details. The first one combines ACO with a genetic algorithm to fine-tune the hyper-parameters used to configure ACO; the authors conclude that this method improves the convergence of ACO. The second paper uses an adaptive procedure to set these parameters at runtime. As the authors claim, these parameters are problem-specific, and depending on the problem currently being solved, tuning needs to be performed to improve the convergence time of the algorithm.
Botee, Hozefa M., and Eric Bonabeau. "Evolving ant colony optimization." Advances in complex systems 1, no. 02n03 (1998): 149-159.
Stützle, Thomas, Manuel López-Ibáñez, Paola Pellegrini, Michael Maur, Marco Montes De Oca, Mauro Birattari, and Marco Dorigo. "Parameter adaptation in ant colony optimization." In Autonomous Search, pp. 191-215. Springer, Berlin, Heidelberg, 2011.
I guess the most straightforward way to improve the performance would be to integrate a local search method, e.g. 2-opt, 3-opt, or the Lin-Kernighan heuristic. In practice, with one of these local search methods integrated, a TSP instance that is not very large, e.g. ch130, can easily be solved to optimality.
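For the local-search suggestion, here is a minimal 2-opt sketch (dist is assumed to be a symmetric distance matrix and tour a list of city indices) that you could run on each ant's tour, or just on the iteration-best tour, before the global pheromone update:

```python
def two_opt(tour, dist):
    """Keep reversing tour segments while doing so shortens the tour (2-opt)."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                a, b = tour[i - 1], tour[i]
                c, d = tour[j], tour[(j + 1) % n]
                # Replace edges (a,b) and (c,d) with (a,c) and (b,d) if shorter
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i:j + 1] = tour[i:j + 1][::-1]
                    improved = True
    return tour
```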

Profiling / Code optimizing tool [closed]

Please let me know which tool (GNU or third-party) is best for profiling and optimizing our code. Is gprof an effective tool? Has dtrace been ported to Linux?
You're not alone in conflating the terms "profiling" and "optimizing", but they're really very different. As different as weighing a book versus reading it.
As far as gprof goes, here are some issues.
Among profilers, the most useful are the ones that:
- Sample the whole call stack (not just the program counter), or at least as much of it as contains your code.
- Sample on wall-clock time (not just CPU time). If you're losing time in I/O or other blocking calls, a CPU-only profiler will simply not see it.
- Tell you, by line of code (not just by function), the percent of stack samples containing that line. That's important because any such line of code that you could eliminate would save you that percent of overall time, and you don't have to eyeball functions to guess where it is.
A good profiler for Linux that does this is Zoom. I'm sure there are others.
(Don't get confused about what matters. Efficiency and/or timing accuracy of a profiler is not important for helping you locate speed bugs. What's important is that it draws your attention to the right things.)
Personally, I use the random-pausing method, because I find it's the most effective.
Not only is it simple and requires no investment, but it finds speedup opportunities that are not localized to particular routines or lines of code, as in this example.
This is reflected in the speedup factors that can be achieved.
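To make the "percent of stack samples containing that line" idea from the list above concrete, here is a small Python sketch (the sample data is invented) that tallies, for each source line, the fraction of collected stack samples it appears in; that fraction approximates the share of wall-clock time you would save by eliminating the line:

```python
from collections import Counter

def line_percentages(stack_samples):
    """For each source line, report the percent of stack samples containing it.
    stack_samples is a list of samples, each a collection of 'file:line' strings."""
    counts = Counter()
    for sample in stack_samples:
        counts.update(set(sample))          # count a line once per sample
    n = len(stack_samples)
    return {line: 100.0 * c / n for line, c in counts.items()}

# Invented example: io.c:55 shows up in 2 of 3 samples -> roughly 67% of the time
samples = [{"main.c:10", "io.c:55"}, {"main.c:10", "math.c:7"}, {"main.c:10", "io.c:55"}]
print(line_percentages(samples))
```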
gprof is better than nothing. But not much. It estimates time spent not only in a function, but also in all of the functions called by the function - but beware it is an estimate, not a direct measurement. It does not make the distinction that some two callers of the same subroutine may have widely differing times spent inside it, per call. To do better than that you need a real call graph profiler, one that looks at several levels of the stack on a timer tick.
dtrace is good for what it does.
If you are doing low-level performance optimization on x86, you should consider Intel's VTune tool. Not only does it provide the best access I am aware of to the low-level performance-measurement hardware on the chip, the so-called EMON (Event Monitoring) system (some of which I designed), but VTune also has some pretty good high-level tools, such as call-graph profiling that, I believe, is better than gprof's. On the low level, I like generating profiles of the leading suspects, like branch mispredictions and cache misses, and looking at the code to see if there is something that can be done. Sometimes simple stuff, such as making an array size 255 rather than 256, helps a lot.
Generic Linux oprofile, http://oprofile.sourceforge.net/about/, is almost as good as VTune, and better in some ways, and it is available for x86 and ARM. I haven't used it much, but I particularly like that you can use it in an almost completely non-intrusive manner, with no need to create the special -pg binary that gprof needs.
There are many tools you can use to optimize your code.
For web applications there are tools that compress and minify the code, e.g. the YUI Compressor.
For desktop applications, an optimizing compiler is a good option.

Converting decision problems to optimization problems? (evolutionary algorithms)

Decision problems are not suited for use in evolutionary algorithms since a simple right/wrong fitness measure cannot be optimized/evolved. So, what are some methods/techniques for converting decision problems to optimization problems?
For instance, I'm currently working on a problem where the fitness of an individual depends very heavily on the output it produces. Depending on the ordering of genes, an individual either produces no output or perfect output - no "in between" (and therefore, no hills to climb). One small change in an individual's gene ordering can have a drastic effect on the fitness of an individual, so using an evolutionary algorithm essentially amounts to a random search.
Some literature references would be nice if you know of any.
Application to multiple inputs and examination of percentage of correct answers.
True, a right/wrong fitness measure cannot evolve towards more rightness, but an algorithm can nonetheless apply a mutable function to whatever input it takes to produce a decision which will be right or wrong. So, you keep mutating the algorithm, and for each mutated version of the algorithm you apply it to, say, 100 different inputs, and you check how many of them it got right. Then, you select those algorithms that gave more correct answers than others. Who knows, eventually you might see one which gets them all right.
There are no literature references, I just came up with it.
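As a quick sketch of the above idea (the function and variable names are invented): the fitness of a candidate decision procedure is simply the fraction of a fixed test set it decides correctly, which gives the evolutionary algorithm a gradient to climb even though each individual answer is only right or wrong.

```python
import random

def graded_fitness(candidate, test_inputs, oracle):
    """Fraction of test inputs the candidate decides correctly (0.0 .. 1.0)."""
    hits = sum(1 for x in test_inputs if candidate(x) == oracle(x))
    return hits / len(test_inputs)

# Toy usage: compare two rules that try to decide whether a number is even
oracle = lambda x: x % 2 == 0
rule_a = lambda x: x % 2 == 0            # perfect rule
rule_b = lambda x: x < 50                # poor rule, but partially correct
inputs = [random.randrange(100) for _ in range(100)]
print(graded_fitness(rule_a, inputs, oracle), graded_fitness(rule_b, inputs, oracle))
```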
Well, I think you must work on your fitness function.
When you say that some individuals are closer to a perfect solution, can you identify these solutions based on their genetic structure?
If you can do that, a program could do that too, and so you shouldn't rate an individual based on its output but on its structure.

Looking for ideas/references/keywords: adaptive-parameter-control of a search algorithm (online-learning)

I'm looking for ideas/experiences/references/keywords regarding an adaptive-parameter-control of search algorithm parameters (online-learning) in combinatorial-optimization.
A bit more detail:
I have a framework, which is responsible for optimizing a hard combinatorial-optimization-problem. This is done with the help of some "small heuristics" which are used in an iterative manner (large-neighborhood-search; ruin-and-recreate-approach). Every algorithm of these "small heuristics" is taking some external parameters, which are controlling the heuristic-logic in some extent (at the moment: just random values; some kind of noise; diversify the search).
Now I want to have a control framework for choosing these parameters in a convergence-improving way, as general as possible, so that later additions of new heuristics are possible without changing the parameter control.
There are at least two general decisions to make:
A: Choose the algorithm-pair (one destroy- and one rebuild-algorithm) which is used in the next iteration.
B: Choose the random parameters of the algorithms.
The only feedback is an evaluation-function of the new-found-solution. That leads me to the topic of reinforcement-learning. Is that the right direction?
Not really learning-like behaviour, but the simplistic ideas at the moment are:
A: A roulette-wheel selection according to some performance value collected during the iterations (the near past is valued more than older iterations); a small sketch of this idea follows below.
So if heuristic 1 found all the new global best solutions -> high probability of choosing this one.
B: No idea yet. Maybe it's possible to use some non-uniform random values in the range (0,1) while I'm collecting some momentum of the changes.
So if heuristic 1 last time used alpha = 0.3 and found no new best solution, then used 0.6 and found a new best solution -> there is a momentum towards 1 -> the next random value is likely to be bigger than 0.3. Possible problem: oscillation!
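A minimal sketch of idea A (class and parameter names are invented): each heuristic keeps an exponentially decaying credit, updated from the evaluation-function improvement it last produced, and the next heuristic is drawn by roulette wheel over these credits.

```python
import random

class AdaptiveSelector:
    """Roulette-wheel choice over heuristics, weighted by an exponentially
    decaying record of how much each one recently improved the solution."""

    def __init__(self, heuristics, decay=0.8, min_weight=1e-3):
        self.decay = decay
        self.min_weight = min_weight
        self.credit = {h: 1.0 for h in heuristics}   # optimistic initial credit

    def pick(self):
        total = sum(self.credit.values())
        r = random.uniform(0.0, total)
        acc = 0.0
        for h, c in self.credit.items():
            acc += c
            if acc >= r:
                return h
        return h

    def reward(self, heuristic, improvement):
        # Decay everything (so recent performance counts more), then credit the
        # heuristic that was just applied with its (non-negative) improvement.
        for h in self.credit:
            self.credit[h] = max(self.min_weight, self.decay * self.credit[h])
        self.credit[heuristic] += max(0.0, improvement)
```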
Things to remark:
- The parameters needed for good convergence of one specific algorithm can change dramatically -> maybe more diversify-operations needed at the beginning, more intensify-operations needed at the end.
- There is a possibility of good synergistic-effects in a specific pair of destroy-/rebuild-algorithm (sometimes called: coupled neighborhoods). How would one recognize something like that? Is that still in the reinforcement-learning-area?
- The different algorithms are controlled by a different number of parameters (some taking 1, some taking 3).
Any ideas, experiences, references (papers), keywords (ml-topics)?
If there are ideas regarding decision (B) in an offline-learning manner, don't hesitate to mention them.
Thanks for all your input.
Sascha
You have a set of parameter variables which you use to control your set of algorithms. Selection of your algorithms is just another variable.
One approach you might like to consider is to evolve your 'parameter space' using a genetic algorithm. In short, GA uses an analogue of the processes of natural selection to successively breed ever better solutions.
You will need to develop an encoding scheme to represent your parameter space as a string, and then create a large population of candidate solutions as your starting generation. The genetic algorithm itself takes the fittest solutions in your set and then applies various genetic operators to them (mutation, reproduction etc.) to breed a better set which then become the next generation.
The most difficult part of this process is developing an appropriate fitness function: something to quantitatively measure the quality of a given parameter space. Your search problem may be too complex to measure for each candidate in the population, so you will need a proxy model function which might be as hard to develop as the ideal solution itself.
Without understanding more of what you've written it's hard to see whether this approach is viable or not. GA is usually well suited to multi-variable optimisation problems like this, but it's not a silver bullet. For a reference start with Wikipedia.
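A minimal sketch of that idea (all names and numbers are invented): the chromosome is simply the parameter vector, normalised to [0,1], and `evaluate` is assumed to run the underlying search, or a cheap proxy model of it, with those parameters and return a score to maximise.

```python
import random

def meta_ga(evaluate, n_params, pop_size=20, generations=50, sigma=0.1):
    """Tiny meta-GA: evolve a parameter vector in [0,1]^n_params.
    `evaluate` runs the underlying search (or a proxy model) and returns a score."""
    pop = [[random.random() for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=evaluate, reverse=True)[: pop_size // 4]
        children = [
            [min(1.0, max(0.0, g + random.gauss(0.0, sigma)))   # Gaussian mutation
             for g in random.choice(parents)]
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children            # elitist truncation selection
    return max(pop, key=evaluate)
```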
It sounds like hyper-heuristics is what you're trying to do. Try looking for that keyword.
In Drools Planner (open source, Java) I have support for tabu search and simulated annealing out of the box.
I haven't implemented the ruin-and-recreate approach (yet), but that should be easy, although I am not expecting better results. Challenge: prove me wrong; fork it, add it, and beat me on the examples.
Hyper-heuristics are on my TODO list.

Improved Genetic algorithm for multiknapsack problem

Recently I've been improving the traditional genetic algorithm for the multiknapsack problem, and in my tests the improved GA works better than the traditional one (I used publicly available instances from the OR-Library, http://people.brunel.ac.uk/~mastjjb/jeb/orlib/mknapinfo.html, to test the GAs). Does anybody know of other improved GAs? I wanted to compare mine with other improved genetic algorithms, but searching the internet I couldn't find a good algorithm to compare against.
There should be any number of decent GA methods against which you can compare. However, you should try to first clearly establish exactly which "traditional" GA method you have already tested.
One good method which I can recommend is the NSGA-II algorithm, which was developed for multi-objective optimization.
Take a look at the following for other ideas:
Genetic Algorithm - Wikipedia
Carlos A. Coello Coello (1999). "A Comprehensive Survey of Evolutionary-Based Multiobjective Optimization Techniques", Knowledge and Information Systems, Vol. 1, pp. 269-308.
Carlos A. Coello Coello et al (2005). "Current and Future Research Trends in Evolutionary Multiobjective Optimization", Information Processing with Evolutionary Algorithms, Springer.
You can compare your solution only to problems with the exact same encoding and fitness function (meaning they are equivalent problems). If the problem is different, any comparison quickly becomes irrelevant, since the fitness function is almost always ad hoc for whatever you're trying to solve. In fact, the fitness function is the only thing you need to code if you use a genetic algorithms toolkit, as everything else usually comes out of the box.
On the other hand, if the fitness function is the same, then it makes sense to compare results given different parameters, such as a different mutation rate, different implementations of crossover, or even completely different evolutionary paradigms, such as coevolution or gene expression, compared to standard GAs, and so on.
Are you trying to improve the state-of-the-art in multiknapsack solvers by the use of genetic algorithms? Or are you trying to advance the genetic algorithm technique by using multiknapsack as a test platform? (Can you clarify?)
Depending on which one is your goal, the answer to your question is entirely different. Since others have addressed the latter question, I'll assume the former.
There have been few major leaps and bounds over the basic genetic algorithm. The best improvement in solving the multiknapsack problem with genetic algorithms would be to improve your encoding and your mutation and crossover operators, which can make orders of magnitude of difference in the resulting performance and blow any tweaks to the fundamental genetic algorithm out of the water. There is a lot you can do to tailor your mutation and crossover operators to multiknapsack.
I would first survey the literature on multiknapsack to see what are the different kinds of search spaces and solution techniques people have used on multiknapsack. In their optimal or suboptimal methods (independent of genetic algorithms), what kinds of search operators do they use? What do they encode as variables and what do they encode as values? What heuristic evaluation functions are used? What constraints do they check for? Then you would adapt their encodings to your mutation and crossover operators, and see how well they perform in your genetic algorithms.
It is highly likely that an efficient search space encoding or an accurate heuristic evaluation function of the multiknapsack problem can translate into highly effective mutation and crossover operators. Since multiknapsack is a very well studied problem with a large corpus of research literature, it should be a gold mine for you.
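As one concrete example of such a tailored operator (a sketch only; the data layout is assumed, not taken from your code): a greedy repair step, in the spirit of Chu and Beasley's GA for the multidimensional knapsack problem, applied to each child after crossover and mutation. It drops the least profitable items until all capacity constraints hold, then greedily re-adds items that still fit, so every individual the GA evaluates is feasible.

```python
def repair(chromosome, values, weights, capacities):
    """Greedy repair operator for the multidimensional knapsack problem (sketch).
    chromosome: list of 0/1 item selections
    weights[i][k]: weight of item i in dimension k; capacities[k]: limit in dim k."""
    n, m = len(values), len(capacities)
    used = [sum(weights[i][k] for i in range(n) if chromosome[i]) for k in range(m)]

    def ratio(i):
        # Simple value-to-total-weight profitability measure
        return values[i] / (sum(weights[i]) or 1e-9)

    # Phase 1: remove the least profitable items while any constraint is violated
    for i in sorted((i for i in range(n) if chromosome[i]), key=ratio):
        if all(used[k] <= capacities[k] for k in range(m)):
            break
        chromosome[i] = 0
        for k in range(m):
            used[k] -= weights[i][k]

    # Phase 2: greedily add the most profitable items that still fit
    for i in sorted((i for i in range(n) if not chromosome[i]), key=ratio, reverse=True):
        if all(used[k] + weights[i][k] <= capacities[k] for k in range(m)):
            chromosome[i] = 1
            for k in range(m):
                used[k] += weights[i][k]
    return chromosome
```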