I am using the FFD algorithm in OptaPlanner as a construction heuristic for my CVRP problem. I thought I understood the FFD algorithm from bin packing, but I don't understand the logic behind it when applied in OptaPlanner to CVRP. My assumption was that it focuses on the demands (sort cities in decreasing order, starting with the highest demand). To prove my assumption, I fixed the city coordinates to a single location, so the distance to the depot is the same for all cities. Then I varied the demands from big to small. But the result file shows it doesn't take the cities in decreasing order.
The Input is: City 1: Demand 16, City 2: Demand 12, City 3: Demand 8, City 4: Demand 4,
City 5: Demand 2.
3 Vehicles with a capacity of 40 per vehicle.
What I thought: V1<-[C1,C2,C3,C4], V2<-[C5]
What happened: V1<-[C5,C4,C3,C2], V2<-[C1]
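For reference, this is the classic bin-packing FFD I had in mind (a quick Python sketch of my own, nothing OptaPlanner-specific; on my input it produces exactly the assignment I expected):

```python
# Classic First Fit Decreasing from bin packing, applied to my demands.
demands = {"C1": 16, "C2": 12, "C3": 8, "C4": 4, "C5": 2}
capacity = 40
vehicles = [[] for _ in range(3)]
loads = [0, 0, 0]

# Sort cities by decreasing demand, then put each into the first vehicle it fits in.
for city, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
    for v, load in enumerate(loads):
        if load + demand <= capacity:
            vehicles[v].append(city)
            loads[v] += demand
            break

print(vehicles)  # [['C1', 'C2', 'C3', 'C4'], ['C5'], []]
```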
Could anyone please explain the theory behind this to me? Also, I would like to know what happens the other way around: the same demand for every customer, but different locations. I tried this too, but it doesn't sort the cities beginning with the farthest one either.
Thank you!
(braindump)
Unlike with non-VRP problems, the choice of "difficulty comparison" that determines the "Decreasing Difficulty" of "First Fit Decreasing" isn't always clear. I've done experiments with several forms, such as difficulty based on distance to the depot, angle to the depot, latitude, etc. You can find all those difficulty comparators in the examples, usually in TSP.
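For illustration, those difficulty orderings boil down to something like this (a plain Python sketch of the idea, not the actual OptaPlanner comparator classes; the Customer fields here are made up):

```python
import math
from collections import namedtuple

Customer = namedtuple("Customer", "name x y demand")
depot = Customer("depot", 0.0, 0.0, 0)
customers = [Customer("C1", 3, 4, 16), Customer("C2", -6, 8, 12), Customer("C3", 5, -12, 8)]

def distance_to_depot(c):
    return math.hypot(c.x - depot.x, c.y - depot.y)

def angle_to_depot(c):
    return math.atan2(c.y - depot.y, c.x - depot.x)

# "Decreasing difficulty": the construction heuristic assigns the most difficult customers first.
by_distance = sorted(customers, key=distance_to_depot, reverse=True)  # farthest first
by_angle    = sorted(customers, key=angle_to_depot)                   # sweep around the depot
by_latitude = sorted(customers, key=lambda c: c.y, reverse=True)
```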
One common pitfall is to tweak the comparator before enabling Nearby Selection: tweak Nearby Selection first. On large datasets, an "angle to depot" comparator can appear to behave much better simply because Nearby Selection and/or Partitioned Search aren't active yet. In any case, as always: use optaplanner-benchmark for this sort of work.
This being said, on a pure TSP use case, the First Fit Decreasing algorithm has worse results than the Nearest Neighbor algorithm (another construction heuristic, for which we have limited support atm). Both require Local Search to improve further, of course. However, translating the Nearest Neighbor algorithm to VRP is difficult/ambiguous (to say the least): I was/am working on such a translation and I am calling it the Bouy algorithm (look for a class in the VRP example whose name starts with Bouy). It works, but it doesn't combine well with Nearby Selection yet, IIRC.
Oh, and there's also the Clarke and Wright savings algorithm, which is great on small, pure CVRP cases, but it suffers from big-O (= scaling) problems on bigger datasets, and it too becomes difficult/ambiguous when other constraints are added (such as time windows, skill requirements, lunch breaks, ...).
Long story short: the jury's still out on the best construction heuristic for real-world, advanced VRP cases. optaplanner-benchmark will help us there, despite all the academic papers describing their perfect CH for a simple form of VRP on small datasets...
Related
I'm working on ACO and am a little confused about the probability of choosing the next city. I have read some papers and books, but the idea of choosing is still unclear to me. I am looking for a simple explanation of how this path building works.
Also, how do the heuristic and pheromone values come into this decision making?
Since we have the same pheromone value on every edge in the beginning, and the heuristic (closeness) values remain constant, how will different ants make different decisions based on these values?
Maybe it's too late to answer the question, but lately I have been working with ACO and it was also a bit confusing for me. I didn't find much information on Stack Overflow about ACO when I needed it, so I have decided to answer this question since this info may be useful for people working on ACO now or in the future.
Swarm Intelligence Algorithms are a set of techniques based on emerging social and cooperative behavior of organisms grouped in colonies, swarms, etc.
The collective intelligence algorithm Ant Colony Optimization (ACO) is an optimization algorithm inspired by ant colonies. In nature, ants of some species initially wander randomly until they find a food source and return to their colony, laying down a pheromone trail. If other ants find this trail, they are likely not to keep travelling at random, but instead to follow the trail, reinforcing it if they eventually find food.
In the Ant Colony Optimization algorithm the agents (ants) are placed on different nodes (usually the number of ants equals the number of nodes). Each ant chooses the next node (city) using the equation known as the transition rule, which gives the probability for ant k to go from city i to city j on the t-th tour:

p_ij^k(t) = [τ_ij(t)]^α · [η_ij]^β / Σ_{l ∈ allowed_k} [τ_il(t)]^α · [η_il]^β

In the equation, τ_ij represents the pheromone trail and η_ij the visibility between the two cities (typically the inverse of the distance), while α and β are adjustable parameters that control the relative weight of trail intensity and visibility.
At the beginning there is the same amount of pheromone on all the edges. Based on the transition rule, which in turn is based on the pheromone and the visibility (distance between nodes), some paths will be more likely to be chosen than others. Note that the rule is probabilistic, not deterministic: each ant makes a weighted random draw, which is why different ants build different tours even with uniform initial pheromone.
When the algorithm runs, each agent (ant) performs a tour (visiting each node), and the best tour found so far is updated with a new quantity of pheromone, which makes that tour more likely to be chosen by the ants next time.
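A minimal sketch of both steps in Python (my own function and parameter names; this follows the standard Ant System rules rather than any particular library):

```python
import random

def choose_next_city(i, unvisited, tau, dist, alpha=1.0, beta=2.0):
    # Transition rule: P(i -> j) is proportional to tau[i][j]^alpha * (1/dist[i][j])^beta.
    # Even with uniform initial pheromone the choice is a weighted random draw,
    # which is why different ants build different tours from the start.
    weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)) for j in unvisited]
    r = random.uniform(0.0, sum(w for _, w in weights))
    acc = 0.0
    for j, w in weights:
        acc += w
        if acc >= r:
            return j
    return weights[-1][0]  # guard against floating-point rounding

def deposit_pheromone(tau, best_tour, best_length, rho=0.5, q=100.0):
    # Evaporate everywhere, then reinforce the edges of the best tour found so far:
    # shorter tours deposit more pheromone (q / length), biasing future draws.
    n = len(tau)
    for a in range(n):
        for b in range(n):
            tau[a][b] *= (1.0 - rho)
    for a, b in zip(best_tour, best_tour[1:] + best_tour[:1]):
        tau[a][b] += q / best_length
        tau[b][a] += q / best_length
```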
You can find more information about ACO in the following links:
http://dataworldblog.blogspot.com.es/2017/06/ant-colony-optimization-part-1.html
http://dataworldblog.blogspot.com.es/2017/06/graph-optimization-using-ant-colony.html
http://hdl.handle.net/10609/64285
Regards,
Ester
I have been assigned the following problem as a research topic for the summer. However, I have not been able to find a related problem, except that it seems to be a combination of the travelling salesman problem with the knapsack problem, though I'm not sure that is the case. The statement is:
You are a truck driver who earned a big contract and now you must
deliver a lot of packages. There are N packages to be delivered, each
one has to be delivered at a certain address (x,y) on the city.
Additionally each package i has a weight Wi.
For simplicity, suppose the distribution area is rectangular and you
always start at the point (0,90).
You only have one truck, with limited capacity of 1000 (excluding
truck weight). The truck base weight is 10.
The distances to be travelled are far away, so the distance shall be
computed using the Haversine distance.
The company who contracted you will provide you with enough fuel, so
you can make an unlimited amount of trips.
However, you must be very careful while delivering the packages since
you must deliver every single one of them and, if you choose to pick
up a package during a trip, you must deliver it no matter what, you
can't leave them in the middle of your trip.
As you are a bit miserly, you agree on the conditions, but you know
that if you don't take a close-to-optimal strategy, your truck can
wear out too much, so you could end up leaving the contract incomplete,
which will get you sued and leave you without truck and money.
So, due to your experience, you know that to maximize the chances of survival
of your truck, you must minimize the following function:
http://goo.gl/jQxXzN (sorry, I can't post images because I have not enough reputation).
where m is the number of trips, n is the number of packages in each trip j, w_ij is the weight of the i-th gift during the j-th trip, Dist represents the Haversine distance between two points, Loc(i) is the location of the i-th gift, Loc(0) and Loc(n) are the (0,90) point (the starting point), and w_nj (the last weight of the trip) is always the truck's base weight.
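To check my own understanding, here is the cost as I read it, in code (a Python sketch built only from the verbal description above, since I can't post the image; I may have misread the linked formula):

```python
import math

EARTH_RADIUS_KM = 6371.0
BASE_WEIGHT = 10.0
START = (0.0, 90.0)  # (lat, lon) starting point

def haversine(a, b):
    # Great-circle distance between two (lat, lon) points, in km.
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(h))

def trip_cost(trip):
    # trip: list of (location, weight) pairs in delivery order.
    # On each leg you pay (current load incl. the truck's base weight) * leg distance;
    # the return leg to START is travelled with the base weight only.
    load = BASE_WEIGHT + sum(w for _, w in trip)
    cost, here = 0.0, START
    for loc, w in trip:
        cost += load * haversine(here, loc)
        load -= w  # package delivered
        here = loc
    return cost + load * haversine(here, START)

def total_cost(trips):
    return sum(trip_cost(t) for t in trips)
```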
So, basically, those are all the restrictions of the research topic I got.
I have been thinking maybe some metaheuristics such as ant colony optimization or genetic algorithms could help, but I would have to get to know the problem a bit better first. Any idea or information (paper, book, etc.) is welcome.
This sounds like a variation of the Capacitated Vehicle Routing Problem (CVRP), specifically with a single vehicle and a single depot (though with non-uniform packages). For some reference on this problem, see e.g.:
On the Capacitated Vehicle Routing Problem (T.K. Ralphs, L. Kopman, W.R. Pulleyblank, and L.E. Trotter, Jr)
Modeling and Solving the Capacitated Vehicle Routing Problem on Trees (B. Chandran and S. Raghavan)
I think your idea of metaheuristics, ant colony optimisation (ACO) in particular, would be a wise approach. Note that for TSP-related problems, ACO is generally preferred over genetic algorithms (GA).
Possibly the following somewhat well-known article (1) can get you started studying the possible benefits of the ACO approach; (2) extends the results of (1). Note that these cover the regular VRP (not capacitated), but they should prove a valuable starting point, at the least giving inspiration.
SavingsAnts for the Vehicle Routing Problem (K. Doerner et al.)
An improved ant colony optimization for vehicle routing problem (Yu Bin et al.)
There also seems to exist literature specifically on the subject of ACO for CVRP. I cannot comment on the quality of these, but I'll list them for you to inspect yourself. Note that (3) is an extension by the authors of (1).
Parallel Ant Systems for the Capacitated Vehicle Routing Problem (K. Doerner et al.)
Ant Colony Optimization for Capacitated Vehicle Routing Problem (W.F. Tan et al.)
Sounds like an interesting research topic, good luck!
So, I've done a fair bit of reading into the generation of Sudoku puzzles. From what I can tell, the standard way to produce a Sudoku puzzle of a desired difficulty is to generate a puzzle, grade it afterwards, and repeat until you have one of an acceptable rating. This can be refined by generating via backtracking using some of the more complex solving patterns (XY-wing, swordfish, etc.), but that's not quite what I want to do here.
What I want to do, but have been unable to find any real resource on, is generate a puzzle from a "difficulty value" (0-1.0 value, 0 being the easiest, and 1.0 being the hardest).
For example, I want to create a moderately difficult puzzle, so I select the value .675. Using that value, I want to be able to generate a moderately difficult puzzle.
Anyone know of something like this? Or perhaps something with a similar methodology?
Adding another answer for generating a sudoku of desired difficulty on-the-fly.
This means that, unlike other approaches, the algorithm runs only once and returns a sudoku configuration matching the desired difficulty (with high probability within a range, or with probability = 1).
Various solutions for generating (and rating) sudoku difficulty rely on human-based solving techniques, which can be rated easily. One then (after having generated a sudoku configuration) re-solves the sudoku with the human-like solver, and depending on the techniques the solver used (e.g. pairs, x-wing, swordfish, etc.) a difficulty rating is assigned.
Problems with this approach
(and requirements for the use case I had)
In order to generate a sudoku of given difficulty with the previous method, one needs to solve the sudoku twice (once with the basic algorithm and once with the human-like solver).
One has to (pre-)generate many sudokus, which can only be rated for difficulty after being solved by the human-like solver. So one cannot generate a sudoku of desired difficulty on the fly, in a single pass.
The human-like solver can be complicated and in most cases (if not all) is tightly coupled to 9x9 sudoku grids. So no easy generalisation to other sudokus (e.g 4x4, 16x16, 6x6 etc.)
The difficulty rating of the human-like techniques is very subjective. For example, why is x-wing considered more difficult than hidden singles? (I have personally solved many difficult published sudoku puzzles manually and never used such techniques.)
Another approach was used which has the following benefits:
Generalises well to arbitrary sudokus (9x9, 4x4, 6x6, 16x16 etc..)
The sudoku configuration, with desired difficulty, is generated once and on-the-fly
The difficulty rating is objective.
How does it work?
First of all, it rests on the simple fact that the more difficult the puzzle, the more time it needs to be solved.
Solving time, in turn, is closely correlated with both the number of clues (givens) and the average number of alternatives to be investigated per empty cell.
Extending my previous answer: for any sudoku puzzle the minimum number of clues is an objective property of the puzzle (for example, for 9x9 grids the minimum number of clues for a valid sudoku is 17).
One can start from there and compute the minimum number of clues per difficulty level (a linear correlation).
Furthermore, at each step of the sudoku generation process, one can make sure the average number of alternatives (to be investigated) per empty cell stays within given bounds (as a function of the desired difficulty).
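A sketch of the clue-count part of the idea (my own simplification in Python; the 17-clue minimum holds for classic 9x9 grids, and the ceiling for a "trivial" puzzle is an assumption):

```python
def target_clue_count(difficulty, grid_size=9, min_clues=17):
    # difficulty in [0, 1]: 1.0 maps to the objective minimum number of clues
    # (hardest), 0.0 to a very easy puzzle with many givens. Linear in between.
    total_cells = grid_size * grid_size
    max_clues = int(total_cells * 0.6)  # assumed ceiling for a trivial puzzle
    return round(max_clues - difficulty * (max_clues - min_clues))

print(target_clue_count(0.675))  # -> 27 clues for a moderately hard 9x9 puzzle
```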
Depending on whether the algorithm uses backtrack or not (for the use case discussed the algorithm does no backtracking) the desired difficulty can be reached either with probability=1 or with high probability within bounds (respectively).
Tests of the sudokus generated with this algorithm and difficulty rating based on the previous approaches (human-like solver), show a correlation of desired and estimated difficulty rates, plus a greater ability for generalisation to arbitrary sudoku configurations.
(I have used this online sudoku solver (and also this one) to correlate the difficulty rates of the test sudokus.)
The code is available for free on GitHub as sudoku.js (along with a sample demo application); it is a scaled-down version of CrossWord.js, a professional crossword builder in JavaScript by the same author.
The sudoku difficulty is related in an interesting way to the (minimum) amount of information needed to specify a unique solution for a given grid.
Sounds like information theory, yes it has applications here too.
Sudoku puzzles should have a unique solution. Furthermore sudoku puzzles have certain symmetries, i.e by row, by column and by sub-square.
These symmetries determine the minimum number of clues (and, more or less, their positions) needed for the solution to be unique (e.g. as verified by a sudoku compiler or a backtracking search).
This would be the most difficult/hard sudoku puzzle level (i.e minimum needed number of clues). Then all other difficulty levels from less hard to easy are generated by allowing more clues than the minimum amount needed.
It should be noted that sudoku difficulty levels are not standardised; as explained above, one can have as many or as few difficulty levels as one wants. What is standard is the minimum number (and position) of clues (which defines the hardest level and is related to the sudoku symmetries); one can then generate as many difficulty levels as one wants simply by allowing extra/redundant clues to be visible as well.
It's not as elegant as what you ask, but you can simulate this behavior with caching:
Decide how many "buckets" you want for puzzles. For example, let's say you choose 20. Thus, your buckets will contain puzzles of different difficulty ranges: 0-.05, .05-.1, .1-.15, ..., .9-.95, .95-1
Generate a puzzle
Grade the puzzle
Put it in the appropriate bucket (or throw it away when the bucket is full)
Repeat till your buckets are "filled". The size of the buckets and where they are stored will be based on the needs of your application.
Then when a user requests a certain difficulty puzzle, give them a cached one from the bucket they choose. You might also want to consider swapping numbers and changing orientation of puzzles with known difficulties to generate similar puzzles with the same level of difficulty. Then repeat the above as needed when you need to refill your buckets with new puzzles.
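A rough sketch of the bucket cache in Python (generate_puzzle and grade_puzzle stand in for whatever generator and grader you already have):

```python
import random
from collections import defaultdict

NUM_BUCKETS = 20
BUCKET_SIZE = 50                # tune to your application's needs
buckets = defaultdict(list)     # bucket index -> cached puzzles

def bucket_of(difficulty):
    # Map 0.0-1.0 to buckets 0..19 (0-.05 -> bucket 0, .95-1 -> bucket 19).
    return min(int(difficulty * NUM_BUCKETS), NUM_BUCKETS - 1)

def fill_buckets():
    while any(len(buckets[b]) < BUCKET_SIZE for b in range(NUM_BUCKETS)):
        puzzle = generate_puzzle()            # your existing generator
        b = bucket_of(grade_puzzle(puzzle))   # your existing grader
        if len(buckets[b]) < BUCKET_SIZE:
            buckets[b].append(puzzle)         # full bucket: throw the puzzle away

def get_puzzle(difficulty):
    # Serve a cached puzzle from the bucket the requested difficulty falls into.
    return random.choice(buckets[bucket_of(difficulty)])
```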
Well, you can't know how difficult a puzzle is before you know how to solve it. And Sudoku solving (and therefore also difficulty rating) belongs to the NP-complete complexity class (for the generalised n²×n² case), which means it's (most likely) impossible to find an algorithm that is (asymptotically) faster than the proposed randomly-guess-and-check.
However, if you can find one, you have solved the P versus NP problem and should clear a cupboard for the Fields Medal... :)
An article has been making the rounds lately discussing the use of genetic algorithms to optimize "build orders" in StarCraft II.
http://lbrandy.com/blog/2010/11/using-genetic-algorithms-to-find-starcraft-2-build-orders/
The initial state of a StarCraft match is pre-determined and constant, and like chess, decisions made in this early stage of the match have long-standing consequences for a player's ability to perform in the mid and late game. So the various opening possibilities, or "build orders", are under heavy study and scrutiny. Until the above article started circulating, computer-assisted build order creation probably wasn't as popular as it has become recently.
My question is... Is a genetic algorithm really the best way to model optimizing build orders?
A build order is a sequence of actions. Some actions have prerequisites like, "You need building B before you can create building C, but you can have building A at any time." So a chromosome may look like AABAC.
I'm wondering if a genetic algorithm really is the best way to tackle this problem. Although I'm not too familiar with the field, I'm having a difficult time shoe-horning the concept of genes into a data structure that is a sequence of actions. These aren't independent choices that can be mixed and matched like a head and a foot. So what value is there to things like reproduction and crossing?
I'm thinking whatever chess AIs use would be more appropriate since the array of choices at any given time could be viewed as tree-like in a way.
Although I'm not too familiar with the field, I'm having a difficult time shoe-horning the concept of genes into a data structure that is a sequence of actions. These aren't independent choices that can be mixed and matched like a head and a foot. So what value is there to things like reproduction and crossing?
Hmm, that's a very good question. Perhaps the first few moves in Starcraft can indeed be performed in pretty much any order, since contact with the enemy is not as immediate as it can be in Chess, and therefore it is not as important to remember the order of the first few moves as it is to know which of the many moves are included in those first few. But the link seems to imply otherwise, which means the 'genes' are indeed not all that amenable to being swapped around, unless there's something cunning in the encoding that I'm missing.
On the whole, and looking at the link you supplied, I'd say that genetic algorithms are a poor choice for this situation, which could be accurately mathematically modelled in some parts and the search tree expanded out in others. They may well be better than an exhaustive search of the possibility space, but may not be - especially given that there are multiple populations and poorer ones are just wasting processing time.
However, what I mean by "a poor choice" is that it is inefficient relative to a more appropriate approach; that's not to say that it couldn't still produce 98% optimal results in under a second or whatever. In situations such as this where the brute force of the computer is useful, it is usually more important that you have modelled the search space correctly than to have used the most effective algorithm.
As TaslemGuy pointed out, Genetic Algorithms aren't guaranteed to be optimal, even though they usually give good results.
To get optimal results you would have to search through every possible combination of actions until you find the optimal path through the tree-like representation. However, doing this for StarCraft is difficult, since there are so many different paths to reach a goal. In chess you move a pawn from e2 to e4 and then the opponent moves. In StarCraft you can move a unit at instant x or x+1 or x+10 or ...
A chess engine can look at many different aspects of the board (e.g. how many pieces it has and how many the opponent has) to guide its search. It can ignore most of the available actions if it knows that they are strictly worse than others.
For a build-order creator only time really matters. Is it better to build another drone to get minerals faster, or is it faster to start that spawning pool right away? Not as straightforward as with chess.
These kinds of decisions happen pretty early on, so you will have to search each alternative to conclusion before you can decide on the better one, which will take a long time.
If I were to write a build-order optimizer myself, I would probably try to formulate a heuristic that estimates how good (how close to the goal state) the current state is, just as chess engines do:
Score = a*(Buildings_and_units_done/Buildings_and_units_required) - b*Time_elapsed - c*Minerals - d*Gas + e*Drone_count - f*Supply_left
This tries to keep the score tied to the completion percentage as well as to StarCraft common knowledge (keep your resources low, build drones, don't build more supply than you need). The variables a to f would need tweaking, of course.
After you've got a heuristic that can somewhat estimate the worth of a situation, I would use Best-first search or maybe IDDFS to search through the tree of possibilities.
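Roughly, in code (a Python sketch; the State fields just mirror the formula above, and expand() stands for whatever enumerates the legal next build actions):

```python
import heapq
import itertools
from dataclasses import dataclass

@dataclass(frozen=True)
class State:  # fields mirror the heuristic above; the names are mine
    units_done: int
    units_required: int
    time_elapsed: float
    minerals: float
    gas: float
    drone_count: int
    supply_left: int

def score(s, a=1.0, b=0.1, c=0.01, d=0.01, e=0.5, f=0.2):
    # Higher is better: reward progress and drones, penalise elapsed time,
    # banked resources and excess supply. The weights a..f need tweaking.
    return (a * s.units_done / s.units_required - b * s.time_elapsed
            - c * s.minerals - d * s.gas + e * s.drone_count - f * s.supply_left)

def best_first(start, expand, is_goal):
    # Greedy best-first search: always expand the most promising state.
    counter = itertools.count()  # tie-breaker so the heap never compares States
    frontier = [(-score(start), next(counter), start)]
    seen = set()
    while frontier:
        _, _, state = heapq.heappop(frontier)
        if is_goal(state):
            return state
        if state in seen:
            continue
        seen.add(state)
        for nxt in expand(state):  # legal next build actions
            heapq.heappush(frontier, (-score(nxt), next(counter), nxt))
    return None
```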
Edit:
I recently found a paper that actually describes build order optimization in StarCraft, in real time even. The authors use depth-first search with branch and bound and heuristics that estimate the minimum amount of effort required to reach the goal based on the tech tree (e.g. zerglings need a spawning pool) and the time needed to gather the required minerals.
A genetic algorithm may or may not find an optimal solution, depending on the complexity of the genetic algorithm: how much mutation there is, the forms of combination, and how the chromosomes are interpreted.
So, depending on how your AI is implemented, Genetic Algorithms can be the best.
You are looking at a SINGLE way to implement genetic algorithms, while forgetting about genetic programming, the use of math, higher-order functions, etc. Genetic algorithms can be EXTREMELY sophisticated and, with clever crossbreeding schemes, extremely effective.
For instance, neural networks are optimized by genetic algorithms quite often.
Look up "Genetic Programming." It's similar, but uses tree-structures instead of lines of characters, which allows for more complex interactions that breed better. For more complex stuff, they typically work out better.
There's been some research done using hierarchical reinforcement learning to build a layered ordering of actions that efficiently maximizes a reward. I haven't found much code implementing the idea, but there are a few papers describing MAXQ-based algorithms that have been used to explicitly tackle real-time strategy game domains, such as this and this.
This Genetic algorithm only optimizes the strategy for one very specific part of the game: The order of the first few build actions of the game. And it has a very specific goal as well: To have as many roaches as quickly as possible.
The only aspects influencing this system seem to be (I'm no StarCraft player):
build time of the various units and buildings,
allowed units and buildings given the available units and buildings, and
the larva regeneration rate.
This is a relatively limited, relatively well-defined problem with a large search space. As such it is very well suited for genetic algorithms (and quite a few other optimization algorithms, for that matter). A full gene is a specific sequence of build actions that ends in the 7th roach. From what I understand you can simply "play" this specific gene to see how fast it finishes, so you have a very clear fitness test.
You also have a few nice constraints on the build order, so you can combine different genes slightly smarter than just randomly.
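For instance, a crossover that respects those constraints could look like this (a toy Python sketch with my own encoding; a real implementation would repair against the full tech tree):

```python
import random

def one_point_crossover(parent_a, parent_b, prerequisites):
    # Build orders are sequences, so naive crossover can produce children that
    # violate prerequisites (e.g. C before B). Cut at one point, splice, then
    # repair by dropping actions whose prerequisites haven't been built yet.
    cut = random.randrange(1, min(len(parent_a), len(parent_b)))
    child, built = [], set()
    for action in parent_a[:cut] + parent_b[cut:]:
        if prerequisites.get(action, set()) <= built:
            child.append(action)
            built.add(action)
    return child

# Toy example from the question: "you need B before C, A at any time".
prereq = {"C": {"B"}}
print(one_point_crossover(list("AABAC"), list("ACBAA"), prereq))
```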
A genetic algorithm used in this way is a very good tool to find a more optimal build order for the first stage of a game of starcraft. Due to its random nature it is also good at finding a surprising strategy, which might have been an additional goal of the author.
To use a genetic algorithm as the algorithm in an RTS game you'd have to find a way to encode reactions to situations rather than just plain old build orders. This also involves correctly identifying situations which can be a difficult task in itself. Then you'd have to let these genes play thousands of games of starcraft, against each other and (possibly) against humans, selecting and combining winners (or longer-lasting losers). This is also a good application of genetic algorithms, but it involves solving quite a few very hard problems before you even get to the genetic algorithm part.
Tried doing a bit of research on the following with no luck. Thought I'd ask here in case someone has come across it before.
I help a volunteer-run radio station with their technology needs. One of the main things that has come up is that they would like to schedule their advertising programmatically.
There are a lot of neat and complex rule engines out there for advertising, but all we need is something pretty simple (along with any experience that's worth thinking about).
I would like to write something in SQL if possible to deal with these entities. Ideally if someone has written something like this for other advertising mediums (web, etc.,) it would be really helpful.
Entities:
Ads (consisting of a category, # of plays per day, start date, end date or permanent play)
Ad Category (Restaurant, Health, Food store, etc.)
To over-simplify the problem: ideally, this will be an elegant SQL statement. Getting there... :)
I would like to be able to generate a playlist per day using the above two entities where:
No two ads in the same category are played within x number of ads of each other.
(nice to have) high promotion ads can be pushed
At this time, there are no "ad slots" to fill. There is no "time of day" considerations.
We queue up the ads for the day and go through them between songs/shows, etc. We know how many per hour we have to fill, etc.
Any thoughts/ideas/links/examples? I'm going to keep on looking and hopefully come across something instead of learning it the long way.
Very interesting question, SMO. Right now it looks like a constraint programming problem, because you aren't looking for an optimal solution, just one that satisfies all the constraints you have specified. In response to those who wanted to close the question, I'd say they need to check out constraint programming a bit. It's far closer to Stack Overflow than any operations research site.
Look into constraint programming and scheduling - I'll bet you'll find an analogous problem toot sweet!
Keep us posted on your progress, please.
Ignoring the T-SQL request for the moment since that's unlikely to be the best language to write this in ...
One of my favorite approaches to tough 'layout' problems like this is simulated annealing. It's a good approach because you don't need to think about HOW to solve the actual problem: all you define is a measure of how good the current layout is (a score, if you will), and then you allow random changes that either increase or decrease that score. Over many iterations you gradually reduce the probability of moving to a worse score. This 'simulated annealing' approach reduces the probability of getting stuck in a local minimum.
So in your case the scoring function for a given layout might be based on the distance to the next advert in the same category and the distance to another advert of the same series. If you later have time of day considerations you can easily add them to the score function.
Initially you allocate the adverts sequentially, evenly, or randomly within their time windows (it doesn't really matter which). Now you pick two slots and consider what happens to the score when you switch the contents of those two slots. If either advert moves out of its allowed range, you can reject the change immediately. If both are still in range, does it move you to a better overall score? Initially you accept changes even if they make the score worse, but over time you reduce the probability of that happening, so that by the end you are moving monotonically towards a better score.
Easy to implement, easy to add new 'rules' that affect score, can easily adjust run-time to accept a 'good enough' answer, ...
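A minimal sketch of that loop (toy Python with made-up ads and a score that only encodes the category-gap rule; real code would fold the per-day play counts, date windows, and any promotion boost into the score):

```python
import math
import random

def penalty(playlist, min_gap=3):
    # Lower is better: count pairs of same-category ads within min_gap slots.
    bad = 0
    for i, (_, cat) in enumerate(playlist):
        for j in range(i + 1, min(i + min_gap, len(playlist))):
            if playlist[j][1] == cat:
                bad += 1
    return bad

def anneal(playlist, steps=50_000, start_temp=5.0):
    current = penalty(playlist)
    for step in range(steps):
        temp = max(start_temp * (1 - step / steps), 1e-9)  # cool down over time
        i, j = random.sample(range(len(playlist)), 2)
        playlist[i], playlist[j] = playlist[j], playlist[i]
        new = penalty(playlist)
        # Always keep improvements; keep worse swaps with shrinking probability.
        if new <= current or random.random() < math.exp((current - new) / temp):
            current = new
        else:
            playlist[i], playlist[j] = playlist[j], playlist[i]  # undo the swap
    return playlist

ads = [("Joe's Pizza", "Restaurant"), ("Vita-Min", "Health"),
       ("Corner Deli", "Food store"), ("Thai Spot", "Restaurant"),
       ("GymTime", "Health"), ("GreenGrocer", "Food store")] * 3
print(anneal(ads))
```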
Another approach would be to use a genetic algorithm; see this similar question: Best Fit Scheduling Algorithm. This is likely harder to program, but it will probably converge more quickly on a good answer.