Is there an example of an optaplanner vrp model that minimizes cost per unit? - optaplanner

I have a vrp variant that minimizes cost for a set of liquid deliveries. I have been asked to minimize cost per unit instead.
The costs are: hourly vehicle costs from standstill, back to depot (just the time to previous standstill and time to depot from the VRP example, multiplied by the vehicle hourly rate), plus the cost of product.
The amount delivered varies depending on the solution, but can be calculated by doing a sum of the deliveries of each vehicle.
So I have three streams for costs and one for unit count. Is there a way to join them and divide the two sums? Or is a shadow variable the only way to do it?
For a shadow variable approach, I would add a "cost" field to each customer and then have a single constraint, replacing all the soft constraints, that looks like:
protected Constraint costPerUnit(ConstraintFactory factory) {
    // assumes: import static org.optaplanner.core.api.score.stream.ConstraintCollectors.sumLong;
    return factory.forEach(Customer.class)
            .groupBy(sumLong(Customer::getCost), sumLong(Customer::getLitres))
            .penalizeLong(
                    HardSoftLongScore.ONE_SOFT,
                    (cost, amount) -> cost / amount)
            .asConstraint("costPerUnit");
}
It seems like it would be very slow though.
edit: thinking about this some more, is there a performance reason for using constraint streams instead of just calculating the score in listeners and then using one simple constraint stream rule for all soft constraints?

Even though, with a lot of care and attention, you could probably implement a very fast listener to tackle this sort of problem, I doubt it would be as fast as a properly incremental solution.
Now does that solution need to be implemented using Constraint Streams? No. For small problems, EasyScoreCalculator will be, well, easy - but for small problems, you wouldn't need OptaPlanner. For problems large in size but easy in how the score is calculated, you may want to look into IncrementalScoreCalculator - those are tricky to implement, but once you get it right, there is no way you could be any faster. Well-designed incremental calculators routinely beat Constraint Streams in terms of performance.
The main benefit of Constraint Streams is good performance without the need for complex code. And you get constraint justifications, and therefore score explanations. The downside is you have to learn to think in the API, and there is some overhead. It's up to you to weigh these factors against each other and make the choice that is best for your particular problem.

Related

Optaplanner: Penalize ConstraintStream at Multiple Levels

I have a domain model where I penalize a score at multiple levels within a single rule. Consider a cloud scheduling problem where we have to assign processes to computers, but each process can be split amongst several computers. Each process has a threshold (e.g. 75%), and we can only "win" the process if we can schedule up to its threshold. We get some small additional benefit from scheduling the remaining 25% of the process, but our solver is geared to "winning" as many processes as possible, so we should be scheduling as many processes as possible to their threshold before scheduling the remainder of the process.
Our hard rule counts hard constraints (we can't schedule more processes on a computer than it can handle)
Our medium rule is rewarded for how many processes have been scheduled up to the threshold (no additional reward for going above 75%).
Our soft rule is rewarded for how many processes have been scheduled total (here we do get additional reward for going above 75%).
This scoring implementation means that it is more important to schedule all processes up to their threshold than to waste precious computer space scheduling 100% of a process.
When we used a drools implementation, we had a rule which rewarded the medium and soft levels simultaneously.
when
    $process : Process()
    $percentAllocated : calculatePercentProcessAllocated($process) // uses an accumulator over all computers
then
    if ($percentAllocated > $process.getThreshold()) {
        mediumReward = $process.getThreshold();
    } else {
        mediumReward = $percentAllocated;
    }
    softReward = $percentAllocated;
    scoreHolder.addMultiConstraintMatch(0, mediumReward, softReward);
The above pseudo-drools is heavily simplified, just want to show how we were rewarded two levels at once.
The problem is that I don't see any good way to apply multi constraint matches using constraint streams. All the examples I see automatically add a terminator after applying a penalize or reward method, so that no further score modifications can be made. The only way I see to implement my rules is to make two rules which are identical outside of their reward calls.
I would very much like to avoid running the same constraint twice if possible, so is there a way to penalize the score at multiple levels at once?
Also to anticipate a possible answer to this question, it is not possible to split our domain model so that each process is two processes (one process from 0% to the threshold, and another from the threshold to 100%). Part of the accumulation that I have glossed over involves linking the two parts and would be too expensive to perform if they were separate objects.
There is currently no way in the constraint streams API to do that. Constraint weights are assumed to be constant.
If you must do this, you can take advantage of the fact that there really is no "medium" score. The medium part in HardMediumSoftScore is just another level of soft. Therefore 0hard/1medium/2soft would in practice behave the same as 0hard/1000002soft. If you pick a sufficiently high constant to multiply the medium part with, you can have HardSoftScore work just like HardMediumSoftScore and implement your use case at the same time.
Is it a hack? Yes. Do I like it? No. But it does solve your issue.
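The scaled-soft trick above can be illustrated with plain Java arithmetic (this is a standalone sketch, not OptaPlanner API; the weight constant is a made-up example):

```java
public class MediumAsSoft {
    // The multiplier must exceed any possible total soft score,
    // so one medium unit always outweighs all soft units combined.
    static final long MEDIUM_WEIGHT = 1_000_000L;

    // Collapse a (medium, soft) pair into a single soft value.
    static long combinedSoft(long medium, long soft) {
        return medium * MEDIUM_WEIGHT + soft;
    }

    public static void main(String[] args) {
        // 0hard/1medium/2soft behaves like 0hard/1000002soft:
        System.out.println(combinedSoft(1, 2)); // 1000002
        // Medium dominates soft: 2medium/0soft beats 1medium/999999soft.
        System.out.println(combinedSoft(2, 0) > combinedSoft(1, 999_999)); // true
    }
}
```

As long as the soft level can never accumulate a million points, the ordering of solutions under HardSoftScore matches what HardMediumSoftScore would produce.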
Another way to do that would be to take advantage of node sharing. I'll use an example that will show it better than a thousand words:
UniConstraintStream<Person> stream = constraintFactory.forEach(Person.class);
Constraint one = stream.penalize(HardSoftScore.ONE).asConstraint("Constraint 1");
Constraint two = stream.penalize(HardSoftScore.TEN).asConstraint("Constraint 2");
This looks like two constraints, each executing the stream separately. And in CS-Drools, that is exactly what happens. But CS-Bavet can actually optimize this and will only execute the stream once, applying the two different penalties at the end.
This is not a very nice programming model, you lose the fluency of the API and you need to switch to the non-default CS implementation. But again - if you absolutely need to do what you want to do, this would be another way.

OptaPlanner for large data sets

I have been asked by a customer to work on a project using Drools. Looking at the Drools documentation I think they are talking about OptaPlanner.
The company takes in transport orders from many customers and links these to bookings on multiple carriers. Orders last year exceeded 100,000. The "optimisation" that currently takes place is based on service, allocation and rate and is linear (each order is assigned to a carrier using the constraints but without any consideration of surrounding orders). The requirement is to hold non-critical orders in a pool for a number of days and optimize the orders in the pool for lowest cost using the same constraints.
Initially they want to run "what if's" over last year's orders to fine-tune the constraints. If this exercise is successful they want to use it in their live system.
My question is whether OptaPlanner is the correct tool for this task, and if so, if there is an example that I can use to get me started.
Take a look at the vehicle routing videos, as it sounds like you have a vehicle routing problem.
If you use just Drools to assign orders, you basically build a Construction Heuristic (= a greedy algorithm). If you use OptaPlanner to assign the orders (and Drools to calculate the quality (= score) of a solution), then you get a better solution. See false assumptions on vehicle routing to understand why.
To scale to 100k orders (= planning entities), use Nearby Selection (which is good up to 10k) and Partitioned Search (which is a sign of weakness but needed above 10k).
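A rough sketch of what the nearby-selection part of a solver config might look like (element structure recalled from the OptaPlanner docs; the distance meter class and distribution size are placeholders you would replace with your own, so verify against the documentation for your version):

```xml
<localSearch>
  <unionMoveSelector>
    <changeMoveSelector>
      <entitySelector id="entitySelector1"/>
      <valueSelector>
        <nearbySelection>
          <originEntitySelector mimicSelectorRef="entitySelector1"/>
          <!-- Your own NearbyDistanceMeter implementation, e.g. road distance between orders -->
          <nearbyDistanceMeterClass>org.example.OrderNearbyDistanceMeter</nearbyDistanceMeterClass>
          <parabolicDistributionSizeMaximum>40</parabolicDistributionSizeMaximum>
        </nearbySelection>
      </valueSelector>
    </changeMoveSelector>
  </unionMoveSelector>
</localSearch>
```

Partitioned search is configured separately with a `<partitionedSearch>` element and a custom solution partitioner.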

Giving Priority sequence to Constraints in Gurobi/Cplex (Linear programming)

I am working on a business problem for a factory and developing a linear programming solution. The problem has thousands of constraints and variables. I want to give the constraints a priority sequence so that lower-priority constraints can be violated if there is no optimal solution otherwise.
My question is how to set the constraint priority sequence for the CPLEX/Gurobi solver. I am using Java as the language. Is there a specific format/function for this?
This is usually done at the modeling level. Add slacks to the equations, and add a term to the objective that minimizes the slack using a penalty or cost coefficient. Sometimes you can even use some dollar figures for the cost (e.g. storage capacity constraint: then cost is something like the price of renting extra storage space). This process is sometimes called making the model elastic, or introducing hard and soft constraints and is quite often used in practical models.
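The elastic-model idea can be shown with plain arithmetic (the numbers are illustrative): instead of a hard capacity limit that makes the model infeasible, allow a non-negative slack and charge for it in the objective.

```java
public class ElasticConstraint {
    // Hard form:    usage <= capacity            (infeasible if exceeded)
    // Elastic form: usage <= capacity + slack,   slack >= 0,
    //               objective += penaltyPerUnit * slack
    static double elasticCost(double usage, double capacity, double penaltyPerUnit) {
        double slack = Math.max(0.0, usage - capacity);
        return penaltyPerUnit * slack;
    }

    public static void main(String[] args) {
        // Within capacity: no extra cost.
        System.out.println(elasticCost(80, 100, 12.5)); // 0.0
        // 20 units over capacity, priced at e.g. 12.5 per unit of rented storage:
        System.out.println(elasticCost(120, 100, 12.5)); // 250.0
    }
}
```

Priorities then fall out of the penalty coefficients: the higher the coefficient on a constraint's slack, the later that constraint gets breached.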

Optaplanner and real time replanning without simple backup planning, minimising changes

If I have the following situation - a kind of "Travelling Technician" problem modeled on the vehicle routing but instead of vehicles its is technicians traveling to sites.
We want to:
generate a plan for the week ahead
send that plan to each of the technicians and sites with who is visiting, why and when
So far all OK: we generate the plan for the week.
But on Tuesday a technician phones in ill (or at 11:30 the technicians car breaks down). Assume we do not have a backup (so simple backup planning will not work). How can I redo the plan minimising any changes? Basically keeping the original plan constraints but adding a constraint that rewards keeping as close to the original plan as possible and minimising the number of customers that we upset.
Yes: basically every entity has an extra field which holds the original planning variable value. That extra field is NOT a planning variable itself. Then you add a rule which says that if the planning variable != the original value, it inflicts a certain soft cost. The higher the soft cost, the less volatile your schedule is; the lower the soft cost, the more flexible your schedule is towards the new situation.
See the MachineReassignment example for an example implementation. That actually has 3 types of these soft costs.
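A minimal plain-Java sketch of that idea (not OptaPlanner API; the class and field names are made up for illustration): each visit keeps a snapshot of its originally assigned technician, and the score charges a fixed soft cost per changed assignment.

```java
import java.util.List;

public class StabilityPenalty {
    // Hypothetical entity: a site visit with a snapshot of the original assignee.
    record Visit(String originalTechnician, String technician) {}

    // Soft cost: one penalty unit per visit whose assignment changed.
    static long changePenalty(List<Visit> visits, long costPerChange) {
        return visits.stream()
                .filter(v -> !v.technician().equals(v.originalTechnician()))
                .count() * costPerChange;
    }

    public static void main(String[] args) {
        List<Visit> plan = List.of(
                new Visit("Alice", "Alice"),  // unchanged
                new Visit("Bob", "Carol"),    // reassigned (Bob is ill)
                new Visit("Bob", "Alice"));   // reassigned
        System.out.println(changePenalty(plan, 10)); // 20
    }
}
```

Tuning costPerChange against the other soft weights decides how many customers you are willing to upset to recover a better route.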

I am looking for a radio advertising scheduling algorithm / example / experience

Tried doing a bit of research on the following with no luck. Thought I'd ask here in case someone has come across it before.
I help a volunteer-run radio station with their technology needs. One of the main things that has come up is that they would like to schedule their advertising programmatically.
There are a lot of neat and complex rule engines out there for advertising, but all we need is something pretty simple (along with any experience that's worth thinking about).
I would like to write something in SQL if possible to deal with these entities. Ideally if someone has written something like this for other advertising mediums (web, etc.,) it would be really helpful.
Entities:
Ads (consisting of a category, # of plays per day, start date, end date or permanent play)
Ad Category (Restaurant, Health, Food store, etc.)
To over-simplify the problem, this would ideally be an elegant SQL statement. Getting there... :)
I would like to be able to generate a playlist per day using the above two entities where:
No two ads in the same category are played within x number of ads of each other.
(nice to have) high promotion ads can be pushed
At this time, there are no "ad slots" to fill. There is no "time of day" considerations.
We queue up the ads for the day and go through them between songs/shows, etc. We know how many per hour we have to fill, etc.
Any thoughts/ideas/links/examples? I'm going to keep on looking and hopefully come across something instead of learning it the long way.
Very interesting question, SMO. Right now it looks like a constraint programming problem because you aren't looking for an optimal solution, just one that satisfies all the constraints you have specified. In response to those who wanted to close the question, I'd say they need to check out constraint programming a bit. It's far closer to Stack Overflow than any operations research site.
Look into constraint programming and scheduling - I'll bet you'll find an analogous problem tout de suite!
Keep us posted on your progress, please.
Ignoring the T-SQL request for the moment since that's unlikely to be the best language to write this in ...
One of my favorite approaches to tough 'layout' problems like this is simulated annealing. It's a good approach because you don't need to think about HOW to solve the actual problem: all you define is a measure of how good the current layout is (a score, if you will) and then you allow random changes that either increase or decrease that score. Over many iterations you gradually reduce the probability of moving to a worse score. This 'simulated annealing' approach reduces the probability of getting stuck in a local minimum.
So in your case the scoring function for a given layout might be based on the distance to the next advert in the same category and the distance to another advert of the same series. If you later have time of day considerations you can easily add them to the score function.
Initially you allocate the adverts sequentially, evenly or randomly within their time window (doesn't really matter which). Now you pick two slots and consider what happens to the score when you switch the contents of those two slots. If either advert moves out of its allowed range you can reject the change immediately. If both are still in range, does it move you to a better overall score? Initially you take changes randomly even if they make it worse but over time you reduce the probability of that happening so that by the end you are moving monotonically towards a better score.
Easy to implement, easy to add new 'rules' that affect score, can easily adjust run-time to accept a 'good enough' answer, ...
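The loop described above can be sketched compactly (the scoring rule, cooling schedule, and categories are all illustrative choices, not a definitive implementation): the score counts pairs of same-category ads that sit within minGap slots of each other, and worse swaps are accepted with a probability that shrinks as the temperature cools.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class AdAnnealer {
    // Penalty: number of same-category ad pairs closer than minGap slots apart.
    static int conflicts(List<String> categories, int minGap) {
        int count = 0;
        for (int i = 0; i < categories.size(); i++) {
            for (int j = i + 1; j < Math.min(categories.size(), i + minGap); j++) {
                if (categories.get(i).equals(categories.get(j))) {
                    count++;
                }
            }
        }
        return count;
    }

    static List<String> anneal(List<String> playlist, int minGap, long seed) {
        Random rnd = new Random(seed);
        List<String> current = new ArrayList<>(playlist);
        int score = conflicts(current, minGap);
        double temperature = 2.0;
        for (int step = 0; step < 20_000 && score > 0; step++) {
            int a = rnd.nextInt(current.size());
            int b = rnd.nextInt(current.size());
            Collections.swap(current, a, b);
            int next = conflicts(current, minGap);
            // Always accept improvements; accept worse moves with a
            // probability that shrinks as the temperature cools.
            if (next <= score || rnd.nextDouble() < Math.exp((score - next) / temperature)) {
                score = next;
            } else {
                Collections.swap(current, a, b); // undo the swap
            }
            temperature *= 0.9995;
        }
        return current;
    }

    public static void main(String[] args) {
        List<String> ads = List.of("Restaurant", "Restaurant", "Health",
                "Food", "Health", "Food", "Restaurant", "Health");
        List<String> result = anneal(ads, 2, 42);
        System.out.println(conflicts(result, 2));
    }
}
```

Adding a "time of day" rule later just means adding another term to conflicts(); the annealing loop itself never changes.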
Another approach would be to use a genetic algorithm; see this similar question: Best Fit Scheduling Algorithm. This is likely harder to program but will probably converge more quickly on a good answer.