Optimizing Ant Colony System for TSP - evolutionary-algorithm

I have implemented an Ant Colony System for the symmetric TSP in Java, based on Dorigo's paper at the following link:
http://people.idsia.ch/~luca/acs-bio97.pdf
I also adopted the following strategy:
1. While not all ants have constructed a solution, each ant moves one step to a new city and updates the pheromone on the edge it used with Dorigo's local pheromone update rule.
2. The ant that produced the globally best path then updates the pheromone on the edges of its tour using Dorigo's global update formula (both rules are sketched below).
3. After a number of iterations, the shortest path found across all iterations is returned.
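For reference, this is roughly what the two update rules look like in code (a simplified sketch with a plain double[][] pheromone matrix; the names are illustrative rather than exactly my code):

// Sketch of the ACS pheromone updates (Dorigo & Gambardella 1997).
class AcsPheromone {
    final double[][] pheromone; // pheromone[i][j] = trail on edge (i, j)
    final double tau0;          // initial trail level, typically 1 / (n * Lnn)
    final double rho = 0.1;     // local evaporation rate
    final double alpha = 0.1;   // global evaporation rate

    AcsPheromone(int n, double tau0) {
        this.tau0 = tau0;
        this.pheromone = new double[n][n];
        for (double[] row : pheromone) java.util.Arrays.fill(row, tau0);
    }

    // Local update: applied by every ant right after it crosses edge (i, j).
    void localUpdate(int i, int j) {
        pheromone[i][j] = (1 - rho) * pheromone[i][j] + rho * tau0;
        pheromone[j][i] = pheromone[i][j]; // symmetric TSP
    }

    // Global update: applied once per iteration, only on the edges of the best tour found so far.
    void globalUpdate(int[] bestTour, double bestLength) {
        for (int k = 0; k < bestTour.length; k++) {
            int i = bestTour[k];
            int j = bestTour[(k + 1) % bestTour.length];
            pheromone[i][j] = (1 - alpha) * pheromone[i][j] + alpha / bestLength;
            pheromone[j][i] = pheromone[i][j];
        }
    }
}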
Is there a way I can improve the algorithm so that it gives better results?
For example, for the TSP instance ch130 from TSPLIB the optimal solution is 6110, while my algorithm returns 6223.
So far my ACS uses the parameter values defined in Dorigo's paper.

There are a few things you can do to improve your solution:
Increase the number of iterations. There is a possibility that there is no stagnation yet, and new solutions can be achieved.
Increase the parameter associated with the visibility (heuristic) function to favor exploration of other solutions.
Have a look at the following two papers for more details. The first one combines ACO with a genetic algorithm to fine-tune the hyper-parameters used to configure ACO; the authors conclude that this method improves the convergence of ACO. The second paper uses an adaptive procedure to set these parameters at runtime. As the authors point out, these parameters are problem specific, so tuning has to be performed for the problem currently being solved in order to improve the convergence time of the algorithm.
Botee, Hozefa M., and Eric Bonabeau. "Evolving ant colony optimization." Advances in complex systems 1, no. 02n03 (1998): 149-159.
Stützle, Thomas, Manuel López-Ibáñez, Paola Pellegrini, Michael Maur, Marco Montes de Oca, Mauro Birattari, and Marco Dorigo. "Parameter adaptation in ant colony optimization." In Autonomous Search, pp. 191-215. Springer, Berlin, Heidelberg, 2011.

I guess the most straightforward way to improve the performance would be to integrate a local search method, e.g. the 2-opt, 3-opt, or Lin-Kernighan heuristic. In practice, with these local search methods integrated, a moderately sized TSP instance such as ch130 can easily be solved to optimality.
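As an illustration, here is a minimal 2-opt pass that could be applied to each ant's tour before the global pheromone update. This is only a sketch; it assumes a symmetric distance matrix dist and is not taken from the poster's code.

// 2-opt local search sketch: keep reversing tour segments while doing so shortens the tour.
static void twoOpt(int[] tour, double[][] dist) {
    int n = tour.length;
    boolean improved = true;
    while (improved) {
        improved = false;
        for (int i = 0; i < n - 1; i++) {
            for (int j = i + 2; j < n; j++) {
                int a = tour[i], b = tour[i + 1];
                int c = tour[j], d = tour[(j + 1) % n];
                // Replacing edges (a,b) and (c,d) with (a,c) and (b,d) changes the length by delta.
                double delta = dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d];
                if (delta < -1e-10) {
                    // Apply the move by reversing the segment tour[i+1 .. j].
                    for (int lo = i + 1, hi = j; lo < hi; lo++, hi--) {
                        int tmp = tour[lo]; tour[lo] = tour[hi]; tour[hi] = tmp;
                    }
                    improved = true;
                }
            }
        }
    }
}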

Related

convergence of an ant colony algorithm

I use ant colony optimization to solve a problem. In my case, at each iteration n ants are generated from n nodes (one ant per node per iteration). I obtain solutions that satisfy the constraints of the problem, but I don't achieve convergence (for example, with 30 iterations the best solution is found at iteration 8 or 9). I want to know whether generating only a single ant per node at each iteration is the problem. Also, must an ant colony algorithm converge to a state of equilibrium?
Thank you in advance.
Convergence and divergence of heuristic algorithms is a very broad topic. Your problem type, dimension, and parameters all affect the behaviour of the algorithm. You should study the paper at http://iridia.ulb.ac.be/IridiaTrSeries/rev/IridiaTr2009-013r001.pdf for basic information about ACO algorithms.
After that, you should ask a question based on https://stackoverflow.com/help/mcve.

ACO Pheromone update

I'm working on ACO and am a little confused about the probability of choosing the next city. I have read some papers and books, but the idea of how the choice is made is still unclear to me. I am looking for a simple explanation of how this path building works.
Also, how do the heuristic and pheromone values come into this decision making?
We have the same pheromone value on every edge in the beginning, and the heuristic (closeness) values remain constant, so how will different ants make different decisions based on these values?
Maybe it is too late to answer this question, but lately I have been working with ACO and it was also a bit confusing for me. I didn't find much information on Stack Overflow about ACO when I needed it, so I decided to answer this question since this info may be useful for people working on ACO now or in the future.
Swarm Intelligence Algorithms are a set of techniques based on emerging social and cooperative behavior of organisms grouped in colonies, swarms, etc.
The collective intelligence algorithm Ant Colony Optimization (ACO) is an optimization algorithm inspired by ant colonies. In nature, ants of some species initially wander randomly until they find a food source and return to their colony laying down a pheromone trail. If other ants find this trail, they are less likely to keep travelling at random and more likely to follow the trail, reinforcing it if they eventually find food.
In the Ant Colony Optimization algorithm the agents (ants) are placed on different nodes (usually the number of ants equals the number of nodes). Each agent chooses the next node (city) using the equation known as the transition rule, which gives the probability for ant k to go from city i to city j on the t-th tour.
In that equation, τ_ij represents the pheromone trail and η_ij the visibility between the two cities, while α and β are adjustable parameters that control the relative weight of trail intensity and visibility.
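Written out in the standard Ant System notation (which matches the description above; allowed_k below just denotes the set of cities ant k may still visit), the transition rule is:

p_{ij}^{k}(t) = \frac{[\tau_{ij}(t)]^{\alpha}\,[\eta_{ij}]^{\beta}}{\sum_{l \in \mathrm{allowed}_k} [\tau_{il}(t)]^{\alpha}\,[\eta_{il}]^{\beta}}

with probability 0 for cities that are not in allowed_k.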
At the beginning there is the same amount of pheromone on all the edges. Based on the transition rule, which in turn is based on the pheromone and the visibility (distance between nodes), some paths will be more likely to be chosen than others.
When the algorithm runs, each agent (ant) performs a tour (visits every node). The best tour found so far then receives an extra quantity of pheromone, which makes that tour more likely to be chosen by the ants next time.
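Regarding why different ants make different decisions even though pheromone and visibility start out equal: the transition rule is sampled probabilistically rather than applied greedily, so each ant rolls its own dice. A small illustrative sketch (the names tau, eta, alpha, beta mirror the symbols above and are not from any particular library):

import java.util.List;
import java.util.Random;

// Roulette-wheel sampling of the transition rule: the next city is drawn at random,
// weighted by pheromone^alpha * visibility^beta over the cities still allowed.
static int chooseNextCity(int current, List<Integer> allowed,
                          double[][] tau, double[][] eta,
                          double alpha, double beta, Random rng) {
    double[] weights = new double[allowed.size()];
    double total = 0.0;
    for (int idx = 0; idx < allowed.size(); idx++) {
        int j = allowed.get(idx);
        weights[idx] = Math.pow(tau[current][j], alpha) * Math.pow(eta[current][j], beta);
        total += weights[idx];
    }
    double r = rng.nextDouble() * total;
    for (int idx = 0; idx < weights.length; idx++) {
        r -= weights[idx];
        if (r <= 0) return allowed.get(idx);
    }
    return allowed.get(allowed.size() - 1); // fallback for floating-point rounding
}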
You can find more information about ACO at the following links:
http://dataworldblog.blogspot.com.es/2017/06/ant-colony-optimization-part-1.html
http://dataworldblog.blogspot.com.es/2017/06/graph-optimization-using-ant-colony.html
http://hdl.handle.net/10609/64285
Regards,
Ester

Getting started with Finite Elements methods

There is a cubic block of fractured rock; the question is:
how can one simulate fluid flow from the top side to the bottom side, or from the left side to the right side?
Is FEA (FEM, ...) the only practical solution?
If so, for the question above under its simplest conditions (flow can happen only through the fractures, no interaction between the rock matrix and the fluid, etc.), how can a quick simulation be set up with FEA?
Is this practical, i.e. could someone proficient in FEA do this in a few minutes? Suppose a suitable mesh has already been generated.
If not, what would you recommend for getting started rapidly so as to be able to solve such simple cases?
Does anybody have experience with a similar problem (flow modeling)? If so, what did you use and how did you carry out the job?
Note that we are aware of the availability of many FEM packages, e.g. FEniCS, OpenFOAM, etc.
Your question refers to simulation of fluid flow in a porous medium, i.e. the rock.
I highly recommend using the lattice Boltzmann method (LBM) instead of FEM-based methods. LBM simulates flow in porous media quite naturally. Physical Review E contains publications about that approach. What is even more attractive, LBM can also be easily parallelized on a GPU.
There are a number of numerical techniques that could be used to solve this problem, finite elements being probably the most common. If you have a mesh of the fluid flow domain already (presumably the voids/cracks in the rock) it would be very straightforward to set up and run the flow model with pretty much any CFD package (finite element based or not) and most people with any exposure to FEA should be able to do it. I am assuming that you want to understand the fluid flow within the rock in some detail, rather than just evaluate the effects of the rock on the flow in some larger flow domain. In the latter case, there are other approaches which might be more computationally efficient.
You could use the one-dimensional form of Darcy's Law.
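For reference, in its one-dimensional form (standard notation, not specific to this particular setup) Darcy's Law relates the volumetric flow rate Q through a sample of cross-sectional area A and length L to the pressure drop \Delta p across it:

Q = \frac{k\,A\,\Delta p}{\mu\,L}

where k is the permeability of the medium and \mu is the dynamic viscosity of the fluid.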

Looking for ideas/references/keywords: adaptive-parameter-control of a search algorithm (online-learning)

I'm looking for ideas/experiences/references/keywords regarding an adaptive-parameter-control of search algorithm parameters (online-learning) in combinatorial-optimization.
A bit more detail:
I have a framework which is responsible for optimizing a hard combinatorial optimization problem. This is done with the help of some "small heuristics" which are used in an iterative manner (large neighborhood search; ruin-and-recreate approach). Each of these "small heuristics" takes some external parameters which control the heuristic logic to some extent (at the moment: just random values; some kind of noise; to diversify the search).
Now I want a control framework for choosing these parameters in a convergence-improving way, as general as possible, so that new heuristics can later be added without changing the parameter control.
There are at least two general decisions to make:
A: Choose the algorithm-pair (one destroy- and one rebuild-algorithm) which is used in the next iteration.
B: Choose the random parameters of the algorithms.
The only feedback is an evaluation function applied to the newly found solution. That leads me to the topic of reinforcement learning. Is that the right direction?
Not really learning-like behaviour, but the simplistic ideas at the moment are:
A: A roulette-wheel selection according to some performance value collected during the iterations (the near past is valued more than older iterations); a small sketch of this idea follows after this list.
So if heuristic 1 found all the new global best solutions -> high probability of choosing this one.
B: No idea yet. Maybe it's possible to use some non-uniform random values in the range (0, 1) while I collect some momentum of the changes.
So if heuristic 1 last used alpha = 0.3 and found no new best solution, then used 0.6 and found a new best solution -> there is a momentum towards 1
-> the next random value is likely to be bigger than 0.3. Possible problem: oscillation!
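A minimal sketch of idea (A), assuming the feedback is simply whether a destroy/rebuild pair produced a new best solution (all names and parameter values here are illustrative):

import java.util.Arrays;
import java.util.Random;

// Roulette-wheel selection over heuristic pairs, with scores decayed every iteration
// so that recent successes weigh more than old ones.
class HeuristicSelector {
    private final double[] score;       // one score per destroy/rebuild pair
    private final double decay = 0.9;   // forgetting factor: the near past counts more
    private final double reward = 1.0;  // added when a pair finds a new best solution
    private final Random rng = new Random();

    HeuristicSelector(int numPairs) {
        score = new double[numPairs];
        Arrays.fill(score, 1.0);        // optimistic start so every pair gets tried
    }

    int selectPair() {
        double total = 0;
        for (double s : score) total += s;
        double r = rng.nextDouble() * total;
        for (int i = 0; i < score.length; i++) {
            r -= score[i];
            if (r <= 0) return i;
        }
        return score.length - 1;        // fallback for floating-point rounding
    }

    void feedback(int pair, boolean foundNewBest) {
        for (int i = 0; i < score.length; i++) score[i] *= decay;
        if (foundNewBest) score[pair] += reward;
    }
}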
Things to remark:
- The parameters needed for good convergence of one specific algorithm can change dramatically -> maybe more diversification operations are needed at the beginning and more intensification operations at the end.
- There is a possibility of good synergistic effects in a specific pair of destroy/rebuild algorithms (sometimes called coupled neighborhoods). How would one recognize something like that? Is that still in the reinforcement-learning area?
- The different algorithms are controlled by a different number of parameters (some taking 1, some taking 3).
Any ideas, experiences, references (papers), keywords (ml-topics)?
If there are ideas regarding decision (B) in an offline-learning manner, don't hesitate to mention them.
Thanks for all your input.
Sascha
You have a set of parameter variables which you use to control your set of algorithms. Selection of your algorithms is just another variable.
One approach you might like to consider is to evolve your 'parameter space' using a genetic algorithm. In short, GA uses an analogue of the processes of natural selection to successively breed ever better solutions.
You will need to develop an encoding scheme to represent your parameter space as a string, and then create a large population of candidate solutions as your starting generation. The genetic algorithm itself takes the fittest solutions in your set and applies various genetic operators to them (mutation, reproduction, etc.) to breed a better set, which then becomes the next generation.
The most difficult part of this process is developing an appropriate fitness function: something to quantitatively measure the quality of a given parameter space. Your search problem may be too complex to measure for each candidate in the population, so you will need a proxy model function which might be as hard to develop as the ideal solution itself.
Without understanding more of what you've written it's hard to see whether this approach is viable or not. GA is usually well suited to multi-variable optimisation problems like this, but it's not a silver bullet. For a reference start with Wikipedia.
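As a deliberately simplified illustration of such an encoding (assuming every parameter can be mapped to a real value; the names and operators below are just examples, not a recommendation):

import java.util.Random;

// The parameter space as a flat chromosome of doubles, with single-point crossover
// and Gaussian mutation as the genetic operators.
class ParameterChromosome {
    final double[] genes;              // e.g. one slot per heuristic parameter
    static final Random RNG = new Random();

    ParameterChromosome(double[] genes) { this.genes = genes; }

    ParameterChromosome crossover(ParameterChromosome other) {
        double[] child = genes.clone();
        int cut = RNG.nextInt(genes.length);
        System.arraycopy(other.genes, cut, child, cut, genes.length - cut);
        return new ParameterChromosome(child);
    }

    void mutate(double rate, double sigma) {
        for (int i = 0; i < genes.length; i++) {
            if (RNG.nextDouble() < rate) genes[i] += RNG.nextGaussian() * sigma;
        }
    }
}

The fitness of such a chromosome would then be the solution quality your search framework reaches when run with those parameter values, which is exactly the expensive evaluation warned about above.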
It sounds like hyper-heuristics is what you're trying to do. Try searching for that keyword.
In Drools Planner (open source, Java) I have support for tabu search and simulated annealing out of the box.
I haven't implemented the ruin-and-recreate approach (yet), but that should be easy, although I am not expecting better results. Challenge: prove me wrong by forking it, adding it, and beating me in the examples.
Hyper heuristics are on my TODO list.

What are the typical use cases of Genetic Programming?

Today I read this blog entry by Roger Alsing about how to paint a replica of the Mona Lisa using only 50 semi-transparent polygons.
I'm fascinated with the results for that particular case, so I was wondering (and this is my question): how does genetic programming work and what other problems could be solved by genetic programming?
There is some debate as to whether Roger's Mona Lisa program is Genetic Programming at all. It seems to be closer to a (1 + 1) Evolution Strategy. Both techniques are examples of the broader field of Evolutionary Computation, which also includes Genetic Algorithms.
Genetic Programming (GP) is the process of evolving computer programs (usually in the form of trees - often Lisp programs). If you are asking specifically about GP, John Koza is widely regarded as the leading expert. His website includes lots of links to more information. GP is typically very computationally intensive (for non-trivial problems it often involves a large grid of machines).
If you are asking more generally, evolutionary algorithms (EAs) are typically used to provide good approximate solutions to problems that cannot be solved easily using other techniques (such as NP-hard problems). Many optimisation problems fall into this category. It may be too computationally-intensive to find an exact solution but sometimes a near-optimal solution is sufficient. In these situations evolutionary techniques can be effective. Due to their random nature, evolutionary algorithms are never guaranteed to find an optimal solution for any problem, but they will often find a good solution if one exists.
Evolutionary algorithms can also be used to tackle problems that humans don't really know how to solve. An EA, free of any human preconceptions or biases, can generate surprising solutions that are comparable to, or better than, the best human-generated efforts. It is merely necessary that we can recognise a good solution if it were presented to us, even if we don't know how to create a good solution. In other words, we need to be able to formulate an effective fitness function.
Some Examples
Travelling Salesman
Sudoku
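To make the fitness-function point concrete for the Travelling Salesman example above (a sketch, assuming a symmetric distance matrix; lower is better, so here "fitter" simply means a shorter tour):

// Fitness of a candidate TSP tour: its total length, including the edge back to the start.
static double tourLength(int[] tour, double[][] dist) {
    double length = 0;
    for (int i = 0; i < tour.length; i++) {
        length += dist[tour[i]][tour[(i + 1) % tour.length]];
    }
    return length;
}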
EDIT: The freely-available book, A Field Guide to Genetic Programming, contains examples of where GP has produced human-competitive results.
Interestingly enough, the company behind the dynamic character animation used in games like Grand Theft Auto IV and the latest Star Wars game (The Force Unleashed) used genetic programming to develop movement algorithms. The company's website is here and the videos are very impressive:
http://www.naturalmotion.com/euphoria.htm
I believe they simulated the nervous system of the character, then randomised the connections to some extent. They then combined the 'genes' of the models that walked furthest to create more and more able 'children' in successive generations. Really fascinating simulation work.
I've also seen genetic algorithms used in path finding automata, with food-seeking ants being the classic example.
Genetic algorithms can be used to solve almost any optimization problem. However, in a lot of cases there are better, more direct methods to solve them. GAs belong to the class of metaheuristic algorithms, which means they can adapt to pretty much anything you throw at them, provided you can come up with a way of encoding a potential solution, combining/mutating solutions, and deciding which solutions are better than others. An advantage of GAs is that, much like simulated annealing, they can handle local maxima better than a pure hill-climbing algorithm.
I used genetic programming in my thesis to simulate evolution of species based on terrain, but that is of course the A-life application of genetic algorithms.
The problems GAs are good at are hill-climbing problems. The catch is that it is normally easier to solve most of these problems by hand, unless the factors that define the problem are unknown, in other words you can't obtain that knowledge some other way (say, things related to societies and communities), or in situations where you have a good algorithm but need to fine-tune its parameters; there GAs are very useful.
One fine-tuning situation I've worked on was tuning several Othello AI players based on the same algorithm, giving each a different play style and thus making each opponent unique with its own quirks; I then had them compete to cull out the top 16 AIs that I used in my game. The advantage was that they were all very good players of roughly equal skill, so it was interesting for the human opponent because they couldn't read the AI as easily.
http://en.wikipedia.org/wiki/Genetic_algorithm#Problem_domains
You should ask yourself: "Can I (a priori) define a function to determine how good a particular solution is relative to other solutions?"
In the Mona Lisa example, you can easily determine whether the new painting looks more like the source image than the previous painting, so Genetic Programming can be "easily" applied.
I have some projects using genetic algorithms. GAs are ideal for optimization problems where you cannot develop a fully sequential, exact algorithm to solve the problem. For example: what's the best combination of a car's characteristics to make it faster and at the same time more economical?
At the moment I'm developing a simple GA to put together playlists. My GA has to find the best combinations of albums/songs that are similar (this similarity will be "calculated" with the help of last.fm) and suggest playlists for me.
There's an emerging field in robotics called Evolutionary Robotics (w:Evolutionary Robotics), which uses genetic algorithms (GA) heavily.
See w:Genetic Algorithm:
Simple generational genetic algorithm pseudocode:
Choose initial population
Evaluate the fitness of each individual in the population
Repeat until termination (time limit or sufficient fitness achieved):
    Select best-ranking individuals to reproduce
    Breed new generation through crossover and/or mutation (genetic operations) and give birth to offspring
    Evaluate the individual fitnesses of the offspring
    Replace worst-ranked part of population with offspring
The key is the reproduction part, which can happen sexually or asexually, using the genetic operators crossover and mutation.
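A rough Java sketch of that generational loop, for the toy problem of maximising the number of 1-bits in a bit string (the population size, mutation rate and single-point crossover are illustrative choices, not part of the pseudocode itself):

import java.util.Arrays;
import java.util.Random;

public class SimpleGa {
    static final Random RNG = new Random();
    static final int POP = 50, LEN = 32, GENERATIONS = 200;
    static final double MUTATION_RATE = 0.01;

    // Fitness: number of true bits in the individual (higher is better).
    static int fitness(boolean[] ind) {
        int f = 0;
        for (boolean bit : ind) if (bit) f++;
        return f;
    }

    public static void main(String[] args) {
        // Choose initial population at random.
        boolean[][] pop = new boolean[POP][LEN];
        for (boolean[] ind : pop)
            for (int i = 0; i < LEN; i++) ind[i] = RNG.nextBoolean();

        for (int gen = 0; gen < GENERATIONS; gen++) {
            // Evaluate and rank: best individuals first.
            Arrays.sort(pop, (a, b) -> fitness(b) - fitness(a));
            if (fitness(pop[0]) == LEN) break; // sufficient fitness achieved

            // Breed: keep the best half, refill the worst half with offspring.
            for (int i = POP / 2; i < POP; i++) {
                boolean[] a = pop[RNG.nextInt(POP / 2)];
                boolean[] b = pop[RNG.nextInt(POP / 2)];
                boolean[] child = new boolean[LEN];
                int cut = RNG.nextInt(LEN);                   // single-point crossover
                for (int g = 0; g < LEN; g++) child[g] = g < cut ? a[g] : b[g];
                for (int g = 0; g < LEN; g++)                 // mutation
                    if (RNG.nextDouble() < MUTATION_RATE) child[g] = !child[g];
                pop[i] = child;                               // replace worst-ranked part
            }
        }
        Arrays.sort(pop, (a, b) -> fitness(b) - fitness(a));
        System.out.println("Best fitness found: " + fitness(pop[0]));
    }
}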