How to make OptaPlanner take the fastest path

How can we optimize OptaPlanner to select the fastest route? See the highlighted point in the image below; it is taking the long route.
Note: vehicles do not need to return to the depot. I think I cannot use CVRPTW, as arrivalAfterDueTimeAtDepot is a built-in hard constraint (and besides, I do not have any time constraints).
How can we write a constraint to select the vehicle with the lower capacity?
For example, a customer needs only 3 items and we have two vehicles with capacities of 4 and 9. OptaPlanner seems to select the first vehicle in input order by default.

I presume it's taking the blue vehicle for the center of Bengaluru because the green one is already at full capacity.
Check what the score is (calculated through Solver.getScoreDirectorFactory()) if you manually put that location in the green trip and swap the vehicles of the green and blue trips. If it's worse (or breaks a hard constraint), then it's normal that OptaPlanner selects the other solution. In that case, either your score function has a bug (or you realize you don't want that solution at all). But if it does indeed have a better score, OptaPlanner's <localSearch> (such as Late Acceptance) should find it (especially when scaling out, because ironically local optima are a bigger problem when scaling down). You can try adding <subchainSwapMoveSelector> etc. to escape local optima faster.
If you want to guide the search more (which is often not a good idea), you can define a planning value strength comparator to sort small vehicles before big vehicles and use the Construction Heuristic WEAKEST_FIT(_DECREASING).
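To make the "small vehicles first" idea concrete, here is a minimal plain-Java sketch of such a strength ordering. The class and record names are illustrative; in a real OptaPlanner model you would wire the comparator in via strengthComparatorClass on the @PlanningVariable and pick the WEAKEST_FIT_DECREASING construction heuristic in the solver config.

```java
import java.util.Comparator;
import java.util.List;

// Sketch: order vehicles so the smallest-capacity vehicle is tried first.
// In OptaPlanner this Comparator would be referenced via
// @PlanningVariable(strengthComparatorClass = ...) and combined with the
// WEAKEST_FIT_DECREASING construction heuristic. Names here are illustrative.
public class VehicleStrengthSketch {
    record Vehicle(String name, int capacity) {}

    // Weaker value = lower capacity; WEAKEST_FIT tries weaker values first.
    static final Comparator<Vehicle> STRENGTH =
            Comparator.comparingInt(Vehicle::capacity);

    static List<Vehicle> sortByStrength(List<Vehicle> vehicles) {
        return vehicles.stream().sorted(STRENGTH).toList();
    }

    public static void main(String[] args) {
        List<Vehicle> sorted = sortByStrength(List.of(
                new Vehicle("big", 9), new Vehicle("small", 4)));
        System.out.println(sorted);
    }
}
```

With this ordering, the 4-capacity vehicle would be considered before the 9-capacity one when placing the 3-item customer.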

Related

Can OptaPlanner be used to do physical layout problems?

I'd like to use OptaPlanner (or something comparable that allows for constraint based optimization) to do vegetable garden bed layout. Ideally this would take into account the space needed per plant, which plants do well next to each other and which don't, and then also how long it takes for the plant to produce food, how much food, etc. The idea would be to help plan a full season where as soon as a plant has been harvested something else can be dropped into its place to max out the productivity and the continuity of food availability. Last, I'm hoping to create aesthetic rules which bias designs towards more pleasing layouts. For example symmetry & repeated (rhythmic) elements vs randomness and ease of harvesting large contiguous groupings.
I think most of these I can figure out how to create constraints in OptaPlanner, but I'm not sure how to represent the physical space and the proximity of each location to its neighbors. Would I break each bed into a set of grid cells, and then assign plants of different cell sizes to regions? Or maybe this needs to be turned into a graph representation? Are there any other physical layout example scenarios I can build off of?
At the moment, we do not have any examples for using OptaPlanner on a bin packing problem. (Which I think your situation could be mapped onto.)
Assuming you can separate your garden bed into a predefined set of fixed slots, in which a plant either does or does not fit, I think OptaPlanner could be used. These slots would then have a fixed set of geographic data, like which is close to which.
If, on the other hand, you cannot subdivide the garden beds beforehand and would like the solver to do that for you as well, then the problem becomes much harder to solve. You are really solving two problems, one of which is spatial, and you may actually find some success treating them individually: first find the ideal bed layout based on the number and type of plants you want to plant, and in a second step figure out which plants to put where.
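The "predefined set of fixed slots" idea above can be sketched as a grid of cells with static proximity data that constraints can look up. This is a plain-Java illustration, not OptaPlanner API; all names are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the fixed-slot representation: each bed is pre-divided into grid
// cells with fixed coordinates, so "which slot is close to which" is static
// problem data rather than something the solver has to decide.
public class GardenGridSketch {
    record Slot(int row, int col) {
        // Manhattan-adjacent cells count as neighbors (for companion-planting rules).
        boolean isNeighborOf(Slot other) {
            return Math.abs(row - other.row) + Math.abs(col - other.col) == 1;
        }
    }

    static List<Slot> buildBed(int rows, int cols) {
        List<Slot> slots = new ArrayList<>();
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                slots.add(new Slot(r, c));
        return slots;
    }

    public static void main(String[] args) {
        List<Slot> bed = buildBed(2, 3);
        System.out.println(bed.size() + " slots");
    }
}
```

The slots would be problem facts, and assigning a plant (or a plant-and-time-window pair, for succession planting) to each slot would be the planning variable.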

Is there a way for optaplanner to continue the last best solution to the next calculation

I am using optaplanner 8.4.1 and constraint flow API.
Does optaplanner have a way to output the best solution to the problem based on simple constraints, and then input the solution into complex constraints to find the best solution? Can this speed up the time to the best solution?
The constraints are too complex and slow, especially when I use the groupBy and toList methods. My current logic involves judging consecutive substrings.
For example: there are 10 positions and 10 numbered balls. The color of a ball can be white, black or red. The 10 balls must be lined up in the 10 positions, and the white balls must be grouped together, with the white run being 4 to 6 balls long. Here the position is the planning entity and the ball is the planning variable. We need to calculate how many consecutive white balls there are. Constraint Streams currently only support tuples of up to 4 facts, so this can only be expressed with groupBy and toList.
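The run-length rule described above can be sketched in plain Java, outside of Constraint Streams (class and method names are illustrative; a real model would compute this incrementally or via groupBy):

```java
import java.util.List;

// Sketch of the "consecutive white balls" measurement: the rule is that the
// white balls must form one run of 4 to 6 consecutive balls.
public class WhiteRunSketch {
    // Length of the longest run of consecutive "W" entries.
    static int longestWhiteRun(List<String> positions) {
        int best = 0, current = 0;
        for (String color : positions) {
            current = "W".equals(color) ? current + 1 : 0;
            best = Math.max(best, current);
        }
        return best;
    }

    // Hard penalty: 0 when the longest white run is within [4, 6],
    // otherwise the distance to that range (fine-grained, avoiding a score trap).
    static int penalty(List<String> positions) {
        int run = longestWhiteRun(positions);
        if (run < 4) return 4 - run;
        if (run > 6) return run - 6;
        return 0;
    }

    public static void main(String[] args) {
        System.out.println(penalty(List.of("B","W","W","W","W","R","B","B","R","B")));
    }
}
```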
You can't do that with a single solver currently, but you should be able to do it by running 2 Solvers (with different SolverConfigs) one after another. For your example, you could have a BasicConstraintProvider with the hard constraints and a FullConstraintProvider that extends the basic one and adds the soft constraints.
That being said, I'd first invest time in improving the score calculation speed (see the info log or a benchmark report) to avoid such workarounds altogether. That number should be above 10 000 per second.
Also, FIRST_FEASIBLE_FIT might be interesting to look at.
I don't know the details of your problem, but what I am doing is:
Run the solver for some time (maybe even with fewer constraints), then serialize the solution it provides.
Run the solver a second time (possibly with different settings), providing the previous solution as the problem; it will keep looking for a better one.
It works for me in version 8.30.
Maybe it will help somebody else.

Optaplanner - soft scoring rule not working as expected

I built an application which implements a function similar to task assignment. I thought it worked well until recently I noticed the solutions are not optimal. In detail, there is a score table for each possible pair of machine and task, and usually the number of machines is much smaller than the number of tasks. I used hard/medium/soft rules, where the soft rule is incremental, based on the score of each assignment from the score table.
However, when I reviewed the results after a 1-2 hour run, I found that among the unassigned tasks there are many better choices (which would achieve a higher soft score if assigned) than the current assignments. The benchmark reports indicate that the total soft score reached a plateau within an hour and then got stuck at that level.
I checked the logic of the rules - if the soft rule were working perfectly, it should eventually find an allocation which achieves the highest overall soft score while meeting the other hard/medium rules, shouldn't it?
I've been trying various things such as tuning algorithm parameters, scaling the score table, etc., but none delivers the optimal solution.
One problem might be that you are facing a score trap (see the docs). In that case, make your constraint score more fine-grained to deal with it.
If that's not the case and you're stuck in a local optimum, then I wouldn't play too much with the algorithm parameters - they will probably fix it, but you'll be overfitting on that dataset.
Instead, figure out the smallest possible move that gets you out of that local optimum and a step closer to the global optimum, and add that kind of move as a custom move. For example, if a normal swap move can't help, but you see a way of getting there with a 3-swap move, then implement that move.
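The score-trap point can be illustrated with a toy capacity constraint: a flat penalty gives the solver no gradient, while a penalty proportional to the overload rewards every step in the right direction. This is a plain-Java sketch with illustrative names, not the questioner's actual rules.

```java
// Score-trap sketch: compare a coarse, flat penalty with a fine-grained one.
public class ScoreTrapSketch {
    // Coarse: any overload costs the same, so reducing overload 5 -> 1 "gains" nothing.
    static int coarsePenalty(int load, int capacity) {
        return load > capacity ? -1000 : 0;
    }

    // Fine-grained: the score improves with every unit of overload removed,
    // so intermediate moves toward feasibility are rewarded.
    static int finePenalty(int load, int capacity) {
        return load > capacity ? -(load - capacity) : 0;
    }

    public static void main(String[] args) {
        System.out.println(coarsePenalty(15, 10) + " vs " + finePenalty(15, 10));
    }
}
```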

First Fit Decreasing Algorithm on CVRP in Optaplanner

I am using the FFD algorithm in OptaPlanner as a construction heuristic for my CVRP problem. I thought I understood the FFD algorithm from bin packing, but I don't understand the logic behind it when applied to CVRP in OptaPlanner. My thought was that it focuses on the demands (sort cities in decreasing order, starting with the highest demand). To test my assumption, I fixed the city coordinates to a single location, so the distance to the depot is the same for all cities. Then I varied the demands from big to small. But it doesn't take the cities in decreasing order in the result file.
The Input is: City 1: Demand 16, City 2: Demand 12, City 3: Demand 8, City 4: Demand 4,
City 5: Demand 2.
3 Vehicles with a capacity of 40 per vehicle.
What I thought: V1<-[C1,C2,C3,C4], V2<-[C5]
What happened: V1<-[C5,C4,C3,C2], V2<-[C1]
Could anyone please explain the theory behind this to me? Also, I would like to know what happens the other way around: same capacities, but different locations per customer. I tried this too, but it also doesn't sort the cities beginning with the farthest one.
Thank you!
(braindump)
Unlike with non-VRP problems, the choice of "difficulty comparison" to determine the "Decreasing Difficulty" of "First Fit Decreasing" isn't always clear. I've done experiments with several forms - such as based on distance to depot, angle to depot, latitude, etc. You can find all those forms of difficulty comparators in the examples, usually in TSP.
One common pitfall is to tweak the comparator before enabling Nearby Selection: tweak Nearby Selection first. If you're dealing with large datasets, an "angle to depot" comparator will seem to behave much better, just because Nearby Selection and/or Partitioned Search aren't active yet. In any case, as always: you need to be using optaplanner-benchmark for this sort of work.
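Two of those difficulty orderings can be sketched as follows. This is plain Java with illustrative names, not the exact comparator classes shipped in the OptaPlanner examples.

```java
import java.util.Comparator;
import java.util.List;

// Sketch of two "difficulty" orderings for First Fit Decreasing on VRP:
// distance to the depot and angle to the depot.
public class DifficultySketch {
    record City(String name, double x, double y) {}

    static final City DEPOT = new City("depot", 0, 0);

    static double distanceToDepot(City c) {
        return Math.hypot(c.x() - DEPOT.x(), c.y() - DEPOT.y());
    }

    static double angleToDepot(City c) {
        return Math.atan2(c.y() - DEPOT.y(), c.x() - DEPOT.x());
    }

    // FFD assigns the most "difficult" (here: farthest) cities first.
    static List<City> decreasingDifficulty(List<City> cities) {
        return cities.stream()
                .sorted(Comparator.comparingDouble(DifficultySketch::distanceToDepot).reversed())
                .toList();
    }

    public static void main(String[] args) {
        List<City> sorted = decreasingDifficulty(List.of(
                new City("near", 1, 0), new City("far", 10, 0)));
        System.out.println(sorted);
    }
}
```

Note that neither ordering looks at demand alone, which matches the observation in the question that sorting by demand is not what the CVRP example does.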
This being said, on a pure TSP use case, the First Fit Decreasing algorithm has worse results than the Nearest Neighbor algorithm (another construction heuristic for which we have limited support atm). Both require Local Search to improve further, of course. However, translating the Nearest Neighbor algorithm to VRP is difficult/ambiguous (to say the least): I was/am working on such a translation and I am calling it the Bouy algorithm (look for a class in the VRP example that starts with Bouy). It works, but it doesn't combine well with Nearby Selection yet, IIRC.
Oh, and there's also the Clarke and Wright savings algorithm, which is great on small pure CVRP cases, but it suffers from big-O (= scaling) problems on bigger datasets, and it too becomes difficult/ambiguous when other constraints are added (such as time windows, skill requirements, lunch breaks, ...).
Long story short: the jury's still out on what's the best construction heuristic for real-world, advanced VRP cases. optaplanner-benchmark will help us there. This despite all the academic papers talking about their perfect CH for a simple form of VRP on small datasets...

I am looking for a radio advertising scheduling algorithm / example / experience

Tried doing a bit of research on the following with no luck. Thought I'd ask here in case someone has come across it before.
I help a volunteer-run radio station with their technology needs. One of the main things that has come up is that they would like to schedule their advertising programmatically.
There are a lot of neat and complex rule engines out there for advertising, but all we need is something pretty simple (along with any experience that's worth thinking about).
I would like to write something in SQL if possible to deal with these entities. Ideally if someone has written something like this for other advertising mediums (web, etc.,) it would be really helpful.
Entities:
Ads (consisting of a category, # of plays per day, start date, end date or permanent play)
Ad Category (Restaurant, Health, Food store, etc.)
To over-simplify the problem, this should be an elegant SQL statement. Getting there... :)
I would like to be able to generate a playlist per day using the above two entities where:
No two ads in the same category are played within x number of ads of each other.
(nice to have) high promotion ads can be pushed
At this time, there are no "ad slots" to fill. There is no "time of day" considerations.
We queue up the ads for the day and go through them between songs/shows, etc. We know how many per hour we have to fill, etc.
Any thoughts/ideas/links/examples? I'm going to keep on looking and hopefully come across something instead of learning it the long way.
Very interesting question, SMO. Right now it looks like a constraint programming problem, because you aren't looking for an optimal solution, just one that satisfies all the constraints you have specified. In response to those who wanted to close the question, I'd say they need to check out constraint programming a bit. It's far closer to Stack Overflow than any operations research site.
Look into constraint programming and scheduling - I'll bet you'll find an analogous problem toot sweet !
Keep us posted on your progress, please.
Ignoring the T-SQL request for the moment since that's unlikely to be the best language to write this in ...
One of my favorites approaches to tough 'layout' problems like this is Simulated Annealing. It's a good approach because you don't need to think HOW to solve the actual problem: all you define is a measure of how good the current layout is (a score if you will) and then you allow random changes that either increase or decrease that score. Over many iterations you gradually reduce the probability of moving to a worse score. This 'simulated annealing' approach reduces the probability of getting stuck in a local minimum.
So in your case the scoring function for a given layout might be based on the distance to the next advert in the same category and the distance to another advert of the same series. If you later have time of day considerations you can easily add them to the score function.
Initially you allocate the adverts sequentially, evenly or randomly within their time window (doesn't really matter which). Now you pick two slots and consider what happens to the score when you switch the contents of those two slots. If either advert moves out of its allowed range you can reject the change immediately. If both are still in range, does it move you to a better overall score? Initially you take changes randomly even if they make it worse but over time you reduce the probability of that happening so that by the end you are moving monotonically towards a better score.
Easy to implement, easy to add new 'rules' that affect score, can easily adjust run-time to accept a 'good enough' answer, ...
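The annealing loop described above can be sketched in a few dozen lines. This is a minimal plain-Java illustration of the approach, not production code; the category-gap rule, the temperature schedule, and all names are assumptions made for the example.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Minimal simulated-annealing sketch for ad ordering: penalize two ads of the
// same category within MIN_GAP slots of each other, randomly swap slots, and
// accept worse layouts less and less often as the temperature drops.
public class AdAnnealingSketch {
    static final int MIN_GAP = 2; // the "x" from the question

    // Higher is better: 0 means no same-category ads within MIN_GAP of each other.
    static int score(List<String> categories) {
        int penalty = 0;
        for (int i = 0; i < categories.size(); i++)
            for (int j = i + 1; j <= i + MIN_GAP && j < categories.size(); j++)
                if (categories.get(i).equals(categories.get(j)))
                    penalty++;
        return -penalty;
    }

    static List<String> anneal(List<String> playlist, long seed) {
        Random rng = new Random(seed);
        List<String> current = new ArrayList<>(playlist);
        List<String> best = new ArrayList<>(current);
        int currentScore = score(current);
        int bestScore = currentScore;
        double temperature = 2.0;
        for (int step = 0; step < 20_000; step++) {
            int a = rng.nextInt(current.size()), b = rng.nextInt(current.size());
            Collections.swap(current, a, b);
            int newScore = score(current);
            if (newScore >= currentScore
                    || rng.nextDouble() < Math.exp((newScore - currentScore) / temperature)) {
                currentScore = newScore; // accept the move
                if (newScore > bestScore) {
                    bestScore = newScore;
                    best = new ArrayList<>(current);
                }
            } else {
                Collections.swap(current, a, b); // undo the move
            }
            temperature *= 0.9995; // cool down
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> ads = List.of("food", "food", "health", "food", "auto", "health");
        System.out.println(score(ads) + " -> " + score(anneal(ads, 42)));
    }
}
```

Time-of-day or start/end-date rules would slot into score (or into the move-rejection check) without touching the annealing loop itself, which is exactly the appeal of this approach.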
Another approach would be to use a genetic algorithm; see this similar question: Best Fit Scheduling Algorithm. This is likely harder to program but will probably converge more quickly on a good answer.