VRP with clustered routes using OptaPlanner

I am using OptaPlanner to solve a vehicle routing problem, and I apply different constraint providers to enforce weight and volume capacities, time windows, etc.
The routes I am getting are, as shown in the image, elongated in one direction.
My question is: how can I obtain clustered routes like the ones shown in the following image? What constraint could I implement, or what algorithm should I activate in OptaPlanner, to obtain similar behavior?
I would greatly appreciate any idea or input. Thanks!

You have to answer a question which I cannot answer for you. Why are clustered routes preferable? Are they shorter in driving distance? Are there legal requirements for a maximum distance driven from a depot? Something else?
Once you have an answer to that, write a constraint for that criterion.
You may possibly find some inspiration in the facility location quickstart.
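For illustration only, here is a minimal sketch of what such a constraint could look like with OptaPlanner's constraint streams, assuming a hypothetical distance criterion, a HardSoftLongScore score type, and domain methods (Customer.getVehicle(), Vehicle.getDepot(), distanceTo()) that are not part of your model or of OptaPlanner itself:

```java
import org.optaplanner.core.api.score.buildin.hardsoftlong.HardSoftLongScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.ConstraintProvider;

public class RadiusConstraintProvider implements ConstraintProvider {

    // Hypothetical threshold: how far (in meters) a customer may be from its vehicle's depot.
    private static final long MAX_METERS_FROM_DEPOT = 50_000L;

    @Override
    public Constraint[] defineConstraints(ConstraintFactory factory) {
        return new Constraint[] {
                maxDistanceFromDepot(factory)
        };
    }

    // Soft penalty proportional to how far past the threshold an assigned customer lies.
    // Customer.getVehicle(), Vehicle.getDepot() and distanceTo() are assumed domain methods.
    Constraint maxDistanceFromDepot(ConstraintFactory factory) {
        return factory.forEach(Customer.class)
                .filter(customer -> customer.getVehicle() != null
                        && customer.distanceTo(customer.getVehicle().getDepot()) > MAX_METERS_FROM_DEPOT)
                .penalizeLong(HardSoftLongScore.ONE_SOFT,
                        customer -> customer.distanceTo(customer.getVehicle().getDepot()) - MAX_METERS_FROM_DEPOT)
                .asConstraint("Customer too far from depot");
    }
}
```

Whether the criterion should be hard or soft, and what the threshold should be, depends entirely on your answer to the question above.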

This is an active area of research.
Some ideas on how you may be able to achieve this:
Add metrics to the optimization target that, when minimized, make the routes more clustered (e.g. the bounding box or area of a route, cross-overs between routes, average squared distance to the route's center); a rough sketch of one such constraint follows after this list. The downside of this approach is that these metrics often slow the solver down rather significantly.
Cluster-first, route-second algorithms. You can create the clusters first and add a hard constraint that these clusters must not be broken. The downside of this approach is that you may not use the "optimal" amount of resources.
Cluster-first, route-second algorithms, but with a soft objective not to break up the clusters. The downside of this approach is that the clusters may not be respected if there are better solutions in which a cluster needs to be broken up.
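For the first idea, here is a rough sketch of a soft "compactness" constraint in constraint streams. It penalizes the squared distance between every pair of customers on the same vehicle, which is closely related to the average squared distance to the route's center; Customer.getVehicle() and squaredDistanceTo() are assumed domain methods, and HardSoftLongScore is assumed as the score type:

```java
import org.optaplanner.core.api.score.buildin.hardsoftlong.HardSoftLongScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.Joiners;

public class CompactnessConstraints {

    // Soft penalty on the spread of each route: the larger the pairwise squared
    // distances between customers assigned to the same vehicle, the larger the penalty.
    Constraint routeCompactness(ConstraintFactory factory) {
        return factory.forEachUniquePair(Customer.class,
                        Joiners.equal(Customer::getVehicle))
                .filter((a, b) -> a.getVehicle() != null)
                .penalizeLong(HardSoftLongScore.ONE_SOFT,
                        (a, b) -> a.squaredDistanceTo(b))
                .asConstraint("Route compactness");
    }
}
```

The constraint weight usually needs tuning against the driving-distance objective, and as noted above, quadratic pairwise streams like this can noticeably slow the solver down on larger datasets.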
That's all I have for now. In my experience the time window constraint is often the most constraining factor, and narrow time windows often contribute to what is perceived as "messy" routes. See if you can relax one or more of those constraints.

Related

Looking for a base for a constraint solving problem

I am new to OptaPlanner, but I have a reasonable, albeit somewhat dated, understanding of constraint solving.
I have a problem I want to model. On the one hand, the National Grid have requirements to save electricity between defined time slots on specific days in specific locations (postcodes). On the other hand, individuals with static or mobile batteries charge their batteries at some point during a 24-hour cycle and need to get a specific amount of charge into those batteries. I need to model a set of constraints at the top (the grid) and constraints at the bottom (the individuals) to ensure the individuals get what they need and the grid saves what it requires.
What model should I pick and why?
I am just starting this so I have not tried anything yet. I would prefer a Java/SpringBoot solution.
Many thanks for any help.
Steve T
First read the domain modeling guide in the docs to understand my answer below.
https://www.optaplanner.org/docs/optaplanner/latest/design-patterns/design-patterns.html#domainModelingGuide
I think the maintenance scheduling quickstart might be a good start. Code is here:
https://github.com/kiegroup/optaplanner-quickstarts/tree/stable/use-cases/maintenance-scheduling
Motivation: it sounds like there could be gaps between charging sessions at the charging stations, so a chained-through-time model does not fit. You're not solving a VRP anyway. So I suspect a timegrain model is the way to go, which is what the maintenance scheduling quickstart actually uses.
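To make the timegrain idea concrete, here is a minimal sketch of what a planning entity could look like, loosely modeled on the maintenance scheduling quickstart; ChargingSession, Battery, TimeGrain and the "timeGrainRange" value range are made-up names for this problem, not code from the quickstart:

```java
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

@PlanningEntity
public class ChargingSession {

    // Problem facts: which battery needs charging and for how many grains.
    private Battery battery;
    private int durationInGrains;

    // The solver assigns the starting grain; constraints can then check that the
    // session fits inside the grid's saving windows and the battery's availability.
    @PlanningVariable(valueRangeProviderRefs = "timeGrainRange")
    private TimeGrain startingTimeGrain;

    public ChargingSession() {
        // No-arg constructor required by OptaPlanner.
    }

    // Getters and setters omitted for brevity.
}
```

Constraints written against startingTimeGrain and durationInGrains could then express both the grid's saving windows and each individual's required amount of charge.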

Why is the OR-Tools guided local search, which starts with a feasible solution, considered constraint programming?

I'm using the OR-Tools library for solving a VRP. I give it an initial feasible solution to my problem, satisfying all of its constraints but sub-optimal. Then OR-Tools performs a GUIDED_LOCAL_SEARCH heuristic, continuously perturbing parts of my solution (possibly making it infeasible at times) until it hopefully reaches a better solution than my initial one.
Why is it using a constraint programming solver? My understanding is that classic constraint programming starts with an infeasible (possibly empty) solution, propagates the constraints to narrow the domains of my variables until reaching a stationary state, and then makes a decision. It then iterates again until it solves the problem, or backtracks if it reaches a dead end (think Sudoku).
In what way are these capabilities (propagation, backtracking) needed when making the small perturbations?
There are two reasons.
1) The initial solution heuristic is a combination of a fast LS heuristic search and standard constraint programming search.
2) The whole local search implementation is built on top of a traditional constraint programming solver and uses constraints and propagators to validate solutions and complete them.
See: https://github.com/google/or-tools/issues/920
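For context, this is roughly what the setup described in the question looks like in the OR-Tools Java routing API; it is a toy sketch with a made-up distance matrix, initial route and time limit, but it sticks to the documented calls (readAssignmentFromRoutes, solveFromAssignmentWithParameters) for supplying an initial solution to guided local search:

```java
import com.google.ortools.Loader;
import com.google.ortools.constraintsolver.Assignment;
import com.google.ortools.constraintsolver.LocalSearchMetaheuristic;
import com.google.ortools.constraintsolver.RoutingIndexManager;
import com.google.ortools.constraintsolver.RoutingModel;
import com.google.ortools.constraintsolver.RoutingSearchParameters;
import com.google.ortools.constraintsolver.main;
import com.google.protobuf.Duration;

public class GlsFromInitialSolution {
    public static void main(String[] args) {
        Loader.loadNativeLibraries();

        // Toy symmetric distance matrix; 1 vehicle, depot at node 0.
        long[][] dist = {
                {0, 2, 9, 10},
                {2, 0, 6, 4},
                {9, 6, 0, 8},
                {10, 4, 8, 0},
        };
        RoutingIndexManager manager = new RoutingIndexManager(dist.length, 1, 0);
        RoutingModel routing = new RoutingModel(manager);

        int transitCallbackIndex = routing.registerTransitCallback((long fromIndex, long toIndex) -> {
            int fromNode = manager.indexToNode(fromIndex);
            int toNode = manager.indexToNode(toIndex);
            return dist[fromNode][toNode];
        });
        routing.setArcCostEvaluatorOfAllVehicles(transitCallbackIndex);

        RoutingSearchParameters searchParameters =
                main.defaultRoutingSearchParameters().toBuilder()
                        .setLocalSearchMetaheuristic(LocalSearchMetaheuristic.Value.GUIDED_LOCAL_SEARCH)
                        .setTimeLimit(Duration.newBuilder().setSeconds(5).build())
                        .build();

        // A feasible but deliberately poor initial route (node indices, depot omitted).
        long[][] initialRoutes = {{3, 2, 1}};
        routing.closeModelWithParameters(searchParameters);
        Assignment initialSolution = routing.readAssignmentFromRoutes(initialRoutes, true);

        // Guided local search starts from the supplied assignment and perturbs it;
        // the underlying CP solver is what validates and completes each candidate move.
        Assignment solution = routing.solveFromAssignmentWithParameters(initialSolution, searchParameters);
        System.out.println(solution == null ? "No solution found"
                : "Objective after GLS: " + solution.objectiveValue());
    }
}
```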

OptaPlanner partitioned search strategy

I'm considering OptaPlanner's partitioned search feature because of a large-scale VRPTW problem I need to deal with.
As far as I know, a custom implementation of SolutionPartitioner has to be provided, according to OptaPlanner's documentation. The example of a partitioner for the cloud balancing problem is straightforward, but I wonder how to partition planning entities in a VRPTW-class problem.
Should I use a kind of clustering algorithm in order to make a cluster-based partitioning, or should I just divide the input data as in the cloud balancing example? Sometimes a lot of customers are placed in a relatively small area, but multiple workers are scheduled to service them. On the other hand, there can be a service area where two clearly disjoint sub-areas are visible.
Thank you in advance!

Travelling Salesman and Map/Reduce: Abandon Channel

This is an academic rather than a practical question. In the Traveling Salesman Problem, or any other problem that involves finding a minimum ... if one were using a map/reduce approach, it seems like there would be some value in having a means for the current minimum result to be broadcast to all of the computational nodes, in some manner that allows them to abandon computations which already exceed it.
In other words, if we map the problem out, we'd like each node to know when to give up on a given partial result before it's complete, because it has already exceeded some other solution.
One approach that comes immediately to mind would be if the reducer had a means to provide feedback to the mapper. Consider if we had 100 nodes, and millions of paths being fed to them by the mapper. If the reducer feeds the best result back to the mapper, then that value could be included as an argument along with each new path (problem subset). In this approach the granularity is fairly rough ... the 100 nodes will each keep grinding away on their partition of the problem to completion and only get the new minimum with their next request from the mapper. (For a small number of nodes and a huge number of problem partitions/subsets to work across, this granularity would be inconsequential; also it's likely that one could apply heuristics to the sequence in which the possible routes or problem subsets are fed to the nodes to get rapid convergence towards the optimum and thus minimize the amount of "wasted" computation performed by the nodes.)
Another approach that comes to mind would be for the nodes to be actively subscribed to some sort of channel, or multicast or even broadcast from which they could glean new minimums from their computational loop. In that case they could immediately abandon a bad computation when notified of a better solution (by one of their peers).
So, my questions are:
Is this concept covered by any terms of art in relation to existing map/reduce discussions?
Do any of the current map/reduce frameworks provide features to support this sort of dynamic feedback?
Is there some flaw with this idea ... some reason why it's stupid?
That's a cool topic that doesn't have much literature written about it yet. So this is pretty much a brainstorming post rather than an answer to all your problems ;)
So every TSP can be expressed as a graph that possibly looks like this one (taken from the German Wikipedia):
Now you can run a graph algorithm on it. MapReduce can be used for graph processing quite well, although it has a lot of overhead.
You need a paradigm that is called "Message Passing". It was described in this paper here: Paper.
And I blogged about it in terms of graph exploration; it explains quite simply how it works. My Blogpost
This is how you can tell the mapper what the current minimum result is (maybe just for the vertex itself).
With all that knowledge in the back of your mind, it should be pretty straightforward to think of a branch-and-bound algorithm (like the one you described) to get to the goal: take a random start vertex and branch to every adjacent vertex. This causes a message to be sent to each of those adjacent vertices with the cost at which it can be reached from the start vertex (map step). A vertex only updates its stored cost if the incoming cost is lower than the currently stored one (reduce step); initially the stored cost should be set to infinity.
You do this over and over again until you've reached the start vertex again (obviously after you've visited every other one). So you have to somehow keep track of the currently best way to reach each vertex; this can be stored in the vertex itself, too. And every now and then you have to bound this branching and cut off branches that are too costly; this can be done in the reduce step after reading the messages.
Basically this is just a mix of graph algorithms in MapReduce and a kind of shortest-path computation.
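Here is a tiny sketch of that reduce step with made-up types (nothing here is a real Hadoop or BSP API): it keeps the cheapest partial path that reaches a vertex and prunes any incoming message that already costs more than the best complete tour found so far.

```java
import java.util.List;

// Illustrative message type only: a partial tour and its accumulated cost.
record PartialPath(List<Integer> visitedVertices, double cost) {}

class BoundedReducer {

    // Reduce step: of all messages arriving at one vertex, keep the cheapest
    // partial path that can still beat the best known complete tour.
    static PartialPath reduce(Iterable<PartialPath> messages, double bestCompleteTourCost) {
        PartialPath best = null;
        for (PartialPath candidate : messages) {
            if (candidate.cost() >= bestCompleteTourCost) {
                continue; // bound: this branch can no longer improve on the best tour
            }
            if (best == null || candidate.cost() < best.cost()) {
                best = candidate;
            }
        }
        return best; // null means every incoming branch was pruned
    }
}
```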
Note that this won't necessarily yield the optimal route between the nodes; it is still a heuristic. And you're just parallelizing an NP-hard problem.
BUT, a little self-advertising again: as you may have already read in the blog post I linked, there exists an abstraction over MapReduce that has far less overhead for this kind of graph processing. It is called BSP (bulk synchronous parallel). It is freer in its communication and computing model, so I'm sure this could be implemented a lot better with BSP than with MapReduce. You can also realize the channels you've spoken about better with it.
I'm currently involved in a Summer of Code project which targets these SSSP (single-source shortest path) problems with BSP. Maybe you want to take a look if you're interested. This could then be a partial solution; it is described very well in my blog, too. SSSP's in my blog
I'm excited to hear some feedback ;)
It seems that Storm implements what I was thinking of. It's essentially a computational topology (think of how each compute node might be routing results based on a key/hashing function to the specific reducers).
This is not exactly what I described, but it might be useful if one had a sufficiently low-latency way to propagate the current bound (i.e. local optimum information) which each node in the topology could update/receive in order to know which results to discard.

I am looking for a radio advertising scheduling algorithm / example / experience

Tried doing a bit of research on the following with no luck. Thought I'd ask here in case someone has come across it before.
I help a volunteer-run radio station with their technology needs. One of the main things that has come up is that they would like to schedule their advertising programmatically.
There are a lot of neat and complex rule engines out there for advertising, but all we need is something pretty simple (along with any experience that's worth thinking about).
I would like to write something in SQL, if possible, to deal with these entities. Ideally, if someone has written something like this for other advertising media (web, etc.), it would be really helpful.
Entities:
Ads (consisting of a category, # of plays per day, start date, end date or permanent play)
Ad Category (Restaurant, Health, Food store, etc.)
To over-simplify the problem, this will be an elegant SQL statement. Getting there... :)
I would like to be able to generate a playlist per day using the above two entities where:
No two ads in the same category are played within x number of ads of each other.
(nice to have) high promotion ads can be pushed
At this time, there are no "ad slots" to fill. There is no "time of day" considerations.
We queue up the ads for the day and go through them between songs/shows, etc. We know how many per hour we have to fill, etc.
Any thoughts/ideas/links/examples? I'm going to keep on looking and hopefully come across something instead of learning it the long way.
Very interesting question, SMO. Right now it looks like a constraint programming problem because you aren't looking for an optimal solution, just one that satisfies all the constraints you have specified. In response to those who wanted to close the question, I'd say they need to check out constraint programming a bit. It's far closer to Stack Overflow than any operations research site.
Look into constraint programming and scheduling - I'll bet you'll find an analogous problem toot sweet!
Keep us posted on your progress, please.
Ignoring the T-SQL request for the moment since that's unlikely to be the best language to write this in ...
One of my favorite approaches to tough 'layout' problems like this is Simulated Annealing. It's a good approach because you don't need to think about HOW to solve the actual problem: all you define is a measure of how good the current layout is (a score, if you will), and then you allow random changes that either increase or decrease that score. Over many iterations you gradually reduce the probability of moving to a worse score. This 'simulated annealing' approach reduces the probability of getting stuck in a local minimum.
So in your case the scoring function for a given layout might be based on the distance to the next advert in the same category and the distance to another advert of the same series. If you later have time of day considerations you can easily add them to the score function.
Initially you allocate the adverts sequentially, evenly or randomly within their time window (it doesn't really matter which). Now you pick two slots and consider what happens to the score when you switch the contents of those two slots. If either advert moves out of its allowed range you can reject the change immediately. If both are still in range, does the swap move you to a better overall score? Initially you accept changes even when they make the score worse, but over time you reduce the probability of that happening, so that by the end you are moving monotonically towards a better score.
Easy to implement, easy to add new 'rules' that affect score, can easily adjust run-time to accept a 'good enough' answer, ...
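Here is a bare-bones sketch of that annealing loop, assuming a made-up Ad type and a toy score() that only measures category spacing (lower score = better layout); the temperature schedule numbers are arbitrary:

```java
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class PlaylistAnnealer {

    // Illustrative ad type: just a name and a category.
    record Ad(String name, String category) {}

    // Lower score = better layout. This toy scorer only penalizes ads of the same
    // category that are fewer than minGap slots apart; real scoring would also
    // cover date ranges, promotion weighting, etc.
    static double score(List<Ad> playlist, int minGap) {
        double penalty = 0;
        for (int i = 0; i < playlist.size(); i++) {
            for (int j = i + 1; j < playlist.size() && j - i < minGap; j++) {
                if (playlist.get(i).category().equals(playlist.get(j).category())) {
                    penalty++;
                }
            }
        }
        return penalty;
    }

    static List<Ad> anneal(List<Ad> playlist, int minGap, Random rng) {
        double temperature = 1_000.0;   // arbitrary starting temperature
        double cooling = 0.999;         // geometric cooling per iteration
        double currentScore = score(playlist, minGap);
        while (temperature > 0.01) {
            // Pick two slots and try swapping their contents.
            int i = rng.nextInt(playlist.size());
            int j = rng.nextInt(playlist.size());
            Collections.swap(playlist, i, j);
            double delta = score(playlist, minGap) - currentScore;
            // Always accept improvements; accept worse layouts with a probability
            // that shrinks as the temperature drops.
            if (delta <= 0 || rng.nextDouble() < Math.exp(-delta / temperature)) {
                currentScore += delta;
            } else {
                Collections.swap(playlist, i, j); // undo the swap
            }
            temperature *= cooling;
        }
        return playlist;
    }
}
```

In practice you would extend score() with the date-range and promotion rules and stop early once the score is "good enough".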
Another approach would be to use a genetic algorithm; see this similar question: Best Fit Scheduling Algorithm. This is likely harder to program but will probably converge more quickly on a good answer.