I have been asked by a customer to work on a project using Drools. Looking at the Drools documentation I think they are talking about OptaPlanner.
The company takes in transport orders from many customers and links these to bookings on multiple carriers. Orders last year exceeded 100,000. The "optimisation" that currently takes place is based on service, allocation and rate and is linear (each order is assigned to a carrier using the constraints but without any consideration of surrounding orders). The requirement is to hold non-critical orders in a pool for a number of days and optimize the orders in the pool for lowest cost using the same constraints.
Initially they want to run "what-ifs" over last year's orders to fine-tune the constraints. If this exercise is successful, they want to use it in their live system.
My question is whether OptaPlanner is the correct tool for this task, and if so, if there is an example that I can use to get me started.
Take a look at the vehicle routing videos, as it sounds like you have a vehicle routing problem.
If you use just Drools to assign orders, you basically build a Construction Heuristic (= a greedy algorithm). If you use OptaPlanner to assign the orders (and Drools to calculate the quality (= score) of a solution), then you get a better solution. See false assumptions on vehicle routing to understand why.
To scale to 100k orders (= planning entities), use Nearby Selection (which is good up to 10k) and Partitioned Search (which is a sign of weakness but needed above 10k).
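For reference, Nearby Selection needs to be told how "near" two planning objects are. Below is a minimal sketch, assuming OptaPlanner 8.x and a hypothetical Order entity with latitude/longitude getters (the class and method names are illustrative, not from your domain):

import org.optaplanner.core.impl.heuristic.selector.common.nearby.NearbyDistanceMeter;

// Tells Nearby Selection how close two orders are, so moves can favour nearby pairs.
public class OrderNearbyDistanceMeter implements NearbyDistanceMeter<Order, Order> {

    @Override
    public double getNearbyDistance(Order origin, Order destination) {
        // Hypothetical getters; use whatever "distance" fits your domain
        // (road distance, travel time, rate difference, ...).
        double latDiff = origin.getLatitude() - destination.getLatitude();
        double lonDiff = origin.getLongitude() - destination.getLongitude();
        return Math.sqrt(latDiff * latDiff + lonDiff * lonDiff);
    }
}

The meter class is then referenced from the nearbySelection section of the move selectors in the solver configuration.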
I have a vrp variant that minimizes cost for a set of liquid deliveries. I have been asked to minimize cost per unit instead.
The costs are: hourly vehicle costs from standstill, back to depot (just the time to previous standstill and time to depot from the VRP example, multiplied by the vehicle hourly rate), plus the cost of product.
The amount delivered varies depending on the solution, but can be calculated by doing a sum of the deliveries of each vehicle.
So I have three streams for costs and one for unit count. Is there a way to join them and divide the two sums? Or is a shadow variable the only way to do it?
For a shadow variable method, I would add "cost" to each customer and then have a single constraint, replacing all the soft constraints, that looks like:
protected Constraint costPerUnit(ConstraintFactory factory) {
    return factory.forEach(Customer.class)
            // Sum the cost and the delivered litres over all customers.
            .groupBy(ConstraintCollectors.sumLong(Customer::getCost),
                    ConstraintCollectors.sumLong(Customer::getLitres))
            // Penalize the total cost divided by the total amount delivered.
            .penalizeLong(HardSoftLongScore.ONE_SOFT,
                    (cost, litres) -> litres == 0 ? 0 : cost / litres)
            .asConstraint("costOfProduct");
}
It seems like it would be very slow though.
edit: thinking about this some more, is there a performance reason for using constraint streams instead of just calculating the score in listeners and then using one simple constraint stream rule for all soft constraints?
Even though, with a lot of care and attention, you could probably implement a very fast listener to tackle this sort of problem, I doubt it would be as fast as a properly incremental solution.
Now does that solution need to be implemented using Constraint Streams? No. For small problems, EasyScoreCalculator will be, well, easy - but for small problems, you wouldn't need OptaPlanner. For problems large in size but easy in how the score is calculated, you may want to look into IncrementalScoreCalculator - those are tricky to implement, but once you get it right, there is no way you could be any faster. Well-designed incremental calculators routinely beat Constraint Streams in terms of performance.
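To make that trade-off concrete, here is a minimal EasyScoreCalculator sketch for the cost-per-unit question above, assuming OptaPlanner 8.x and a hypothetical RoutePlan solution class with a getCustomerList() getter (plus the Customer getters from the earlier snippet). It recomputes everything from scratch on every move, which is exactly why it only suits small problems:

import org.optaplanner.core.api.score.buildin.hardsoftlong.HardSoftLongScore;
import org.optaplanner.core.api.score.calculator.EasyScoreCalculator;

public class CostPerUnitEasyScoreCalculator
        implements EasyScoreCalculator<RoutePlan, HardSoftLongScore> {

    @Override
    public HardSoftLongScore calculateScore(RoutePlan plan) {
        long totalCost = 0;
        long totalLitres = 0;
        // Walk the whole solution on every score calculation (non-incremental).
        for (Customer customer : plan.getCustomerList()) {
            totalCost += customer.getCost();
            totalLitres += customer.getLitres();
        }
        // Soft penalty = cost per delivered unit; hard constraints would be added here too.
        long costPerUnit = (totalLitres == 0) ? 0 : totalCost / totalLitres;
        return HardSoftLongScore.of(0, -costPerUnit);
    }
}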
The main benefit of Constraint Streams is good performance without the need for complex code. And you get constraint justifications, and therefore score explanations. The downside is you have to learn to think in the API, and there is some overhead. It's up to you to weigh these factors against each other and make the choice that is best for your particular problem.
The data available includes ERP data for real order quantity and revenue, as well as Adobe online analytics data for add-to-cart and online revenue.
We were asked to determine whether a content update will impact sales, so that we have some proof before rolling a similar update out to all content. However, sales will naturally increase over time anyway. How do we build a model that excludes the natural sales increase and provides statistical proof of an increase/decrease caused by the update?
Thanks,
If I got this right, I can think of two possible solutions:
If the natural growth is predictable, you should be able to clean it out by approximation. For instance, if you have a steady 2% monthly sales growth (this can easily be extracted from the ERP), you can roughly subtract it from the results of the updated site; a small sketch of this follows below. The details of the approach depend greatly on how precise you wish the model to be.
Perform A/B testing on the site. In this case you'll get the real figures. This requires involving your web team.
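The adjustment in the first option is just arithmetic. A tiny Java sketch (the 2% rate and the revenue figures are purely illustrative; in practice the baseline growth would be estimated from the ERP history):

// Rough illustration of option 1: strip an assumed steady natural growth out of
// the observed post-update sales before judging the update's effect.
public class UpliftEstimate {

    public static void main(String[] args) {
        double monthlyGrowthRate = 0.02;       // assumed natural growth, e.g. from ERP history
        double salesBeforeUpdate = 100_000.0;  // revenue in the month before the update
        double salesAfterUpdate = 107_000.0;   // revenue in the month after the update

        // What we would have expected with no update at all.
        double expectedWithoutUpdate = salesBeforeUpdate * (1 + monthlyGrowthRate);
        // Whatever is left over is the rough effect attributable to the update.
        double estimatedUplift = salesAfterUpdate - expectedWithoutUpdate;
        double upliftPercent = 100.0 * estimatedUplift / expectedWithoutUpdate;

        System.out.printf("Estimated uplift: %.0f (%.1f%%)%n", estimatedUplift, upliftPercent);
    }
}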
I'm developing a solver for a VRPTW problem using OptaPlanner, and I have run into a problem when a large number of customers needs to be serviced. By a large number I mean up to 10,000 customers. I have tried running the solver for about 48 hours, but no feasible solution was ever reached.
I use a highly customized VRPTW domain model that introduces an additional planning entity, the so-called "Workbreak". Workbreaks are like customers, but their location is actually another planning value - because every day a worker can return home or go to a hotel. Workbreaks have a fixed time of departure (usually the next morning) and a variable time of arrival (because it depends on the previous entity in the chain). A hard constraint ensures that a worker does not "arrive" at a Workbreak after a certain point in time. There are other hard constraints too, like:
multiple service time windows per customer
every week the last customer in the chain must be a special "storage space visit" customer (workers need to gather materials for the next week)
long job management (when a customer's service takes longer than a specified time, it should be serviced before a specific hour of the day)
max number of jobs per workday
max total job duration per workday (as a worker cannot work longer than a specified time)
a workbreak cannot be located at a hotel that is too close to the worker's home
jobs cannot be serviced on Sundays
... and many more - there is a total of 19 hard constraints that have to be applied. There are 3 soft constraints too.
All the aforementioned constraints were initially written as Drools rules, but because of the many accumulation-based constraints (max jobs per day, max hours per day, overtime hours per week), the overall speed of the solver (in benchmarks) was about 400 steps/sec.
At first I thought that the solver's speed was too slow to reach a feasible solution in a reasonable time, so I rewrote all the rules into an easy score calculator, which had a decent speed - about 4600 steps/sec. I knew that it would only perform well for a really small number of customers, but I wanted to know whether Drools was the cause of the poor performance. Then I rewrote all of these rules into an incremental score calculator (and survived the pain of corrupted-score bugs until all of them were fixed). Surprisingly, incremental score calculation is a bit slower for a small number of customers compared to the easy score calculator, but that is not an issue, because the overall speed is about 4000 steps/sec - no matter how many entities I have.
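For readers who have not seen one, this is roughly the shape of such an incremental calculator. It is only a sketch, assuming OptaPlanner 8.x and hypothetical VrptwSolution, CustomerVisit and Workday classes, and it tracks just one of the accumulation-based constraints (max jobs per workday); the real calculator behind this question covers 19 hard and 3 soft constraints:

import java.util.HashMap;
import java.util.Map;
import org.optaplanner.core.api.score.buildin.hardsoftlong.HardSoftLongScore;
import org.optaplanner.core.api.score.calculator.IncrementalScoreCalculator;

public class VrptwIncrementalScoreCalculator
        implements IncrementalScoreCalculator<VrptwSolution, HardSoftLongScore> {

    private static final int MAX_JOBS_PER_WORKDAY = 12; // illustrative limit

    private Map<Workday, Integer> jobCountPerWorkday;
    private long hardScore;

    @Override
    public void resetWorkingSolution(VrptwSolution solution) {
        jobCountPerWorkday = new HashMap<>();
        hardScore = 0;
        for (CustomerVisit visit : solution.getCustomerVisitList()) {
            insert(visit);
        }
    }

    @Override
    public void beforeVariableChanged(Object entity, String variableName) {
        if (entity instanceof CustomerVisit) {
            retract((CustomerVisit) entity); // the entity still holds its old value here
        }
    }

    @Override
    public void afterVariableChanged(Object entity, String variableName) {
        if (entity instanceof CustomerVisit) {
            insert((CustomerVisit) entity); // the entity now holds its new value
        }
    }

    @Override
    public void beforeEntityAdded(Object entity) {
        // nothing to do
    }

    @Override
    public void afterEntityAdded(Object entity) {
        if (entity instanceof CustomerVisit) {
            insert((CustomerVisit) entity);
        }
    }

    @Override
    public void beforeEntityRemoved(Object entity) {
        if (entity instanceof CustomerVisit) {
            retract((CustomerVisit) entity);
        }
    }

    @Override
    public void afterEntityRemoved(Object entity) {
        // nothing to do
    }

    private void insert(CustomerVisit visit) {
        Workday workday = visit.getWorkday();
        if (workday == null) {
            return; // not assigned yet
        }
        int countAfterInsert = jobCountPerWorkday.merge(workday, 1, Integer::sum);
        if (countAfterInsert > MAX_JOBS_PER_WORKDAY) {
            hardScore--; // one more violation of "max jobs per workday"
        }
    }

    private void retract(CustomerVisit visit) {
        Workday workday = visit.getWorkday();
        if (workday == null) {
            return;
        }
        int countAfterRetract = jobCountPerWorkday.merge(workday, -1, Integer::sum);
        if (countAfterRetract + 1 > MAX_JOBS_PER_WORKDAY) {
            hardScore++; // the retracted assignment was part of a violation
        }
    }

    @Override
    public HardSoftLongScore calculateScore() {
        return HardSoftLongScore.of(hardScore, 0);
    }
}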
The thing that bugs me the most is that above a certain number of customers (the problems start at around 1000 customers) the solver cannot reach a feasible solution. Currently I'm using the Late Acceptance and Step Counting algorithms, because they perform really well for this kind of problem (at least for smaller numbers of customers). I tried Simulated Annealing too, but without success, mostly because I could not find good values for its algorithm-specific parameters.
I have implemented some custom moves too:
A composite move that changes a workbreak's location when sibling entities are changed by other moves, like change/swap moves (it helps escape many score traps, as an improving step usually requires at least two moves to be performed in a single step)
A move factory for better long-job assignment (it generates moves that try to put customers with longer service times at the front of a workday chain)
A workbreak assignment move factory (it generates moves that help put workbreaks in the proper sequence)
Now I'm scratching my head and wondering what I should do to diagnose the source of my problem. I suspected it might be hitting a score trap, so I modified the solver to save a snapshot of the best score each minute. After reading these snapshots I realized that the score was still improving (the penalty kept shrinking), so the solver did not appear to be stuck. Can the number of hard constraints play a role? I suspect that many moves have to be tried before one that improves the score is found. Maybe 48 hours simply isn't that much for this kind of problem, and it should compute for a whole week? Unfortunately I have nothing to compare with.
I would like to know how to find out if it is solely a performance problem, or a solver (algorithm, custom moves, hard/soft score) configuration problem.
I really apologize for my bad English.
TL;DR but FWIW:
To scale above 1k locations you need to use Nearby Selection.
To scale above 10k locations, add Partitioned Search too.
I have the following situation - a kind of "Travelling Technician" problem modeled on vehicle routing, but instead of vehicles it is technicians traveling to sites.
We want to:
generate a plan for the week ahead
send that plan to each of the technicians and sites with who is visiting, why and when
So far all is OK; we generate the plan for the week.
But on Tuesday a technician phones in ill (or at 11:30 a technician's car breaks down). Assume we do not have a backup (so simple backup planning will not work). How can I redo the plan while minimising any changes? Basically, keep the original plan constraints but add a constraint that rewards staying as close to the original plan as possible and minimises the number of customers that we upset.
Yes, basically every entity has an extra field which holds the original planning variable value. That extra field is NOT a planning variable itself. Then you add rules which say that if the planning variable != the original value, it inflicts a certain soft cost. The higher the soft cost, the less volatile your schedule is. The lower the soft cost, the more flexible your schedule is towards the new situation.
See the MachineReassignment example for an example implementation. That actually has 3 types of these soft costs.
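For illustration, here is a minimal Constraint Streams sketch of such a rule, assuming a hypothetical Visit entity with a technician planning variable and a plain (non-planning) originalTechnician field holding the published plan; the names and weight are illustrative:

import java.util.Objects;
import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;

public class ReplanningConstraints {

    Constraint keepOriginalAssignment(ConstraintFactory factory) {
        return factory.forEach(Visit.class)
                // Only visits that were part of the published plan count.
                .filter(visit -> visit.getOriginalTechnician() != null
                        && !Objects.equals(visit.getOriginalTechnician(), visit.getTechnician()))
                // The weight controls how "sticky" the published plan is.
                .penalize(HardSoftScore.ofSoft(10))
                .asConstraint("Deviation from published plan");
    }
}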
Tried doing a bit of research on the following with no luck. Thought I'd ask here in case someone has come across it before.
I help a volunteer-run radio station with their technology needs. One of the main things that have come up is they would like to schedule their advertising programmatically.
There are a lot of neat and complex rule engines out there for advertising, but all we need is something pretty simple (along with any experience that's worth thinking about).
I would like to write something in SQL if possible to deal with these entities. Ideally if someone has written something like this for other advertising mediums (web, etc.,) it would be really helpful.
Entities:
Ads (consisting of a category, # of plays per day, start date, end date or permanent play)
Ad Category (Restaurant, Health, Food store, etc.)
To over-simplify the problem, this will be an elegant SQL statement. Getting there... :)
I would like to be able to generate a playlist per day using the above two entities where:
No two ads in the same category are played within x number of ads of each other.
(nice to have) high promotion ads can be pushed
At this time, there are no "ad slots" to fill. There is no "time of day" considerations.
We queue up the ads for the day and go through them between songs/shows, etc. We know how many per hour we have to fill, etc.
Any thoughts/ideas/links/examples? I'm going to keep on looking and hopefully come across something instead of learning it the long way.
Very interesting question, SMO. Right now it looks like a constraint programming problem because you aren't looking for an optimal solution, just one that satisfies all the constraints you have specified. In response to those who wanted to close the question, I'd say they need to check out constraint programming a bit. It's far closer to Stack Overflow than any operations research site.
Look into constraint programming and scheduling - I'll bet you'll find an analogous problem toot sweet!
Keep us posted on your progress, please.
Ignoring the T-SQL request for the moment since that's unlikely to be the best language to write this in ...
One of my favorite approaches to tough 'layout' problems like this is Simulated Annealing. It's a good approach because you don't need to work out HOW to solve the actual problem: all you define is a measure of how good the current layout is (a score, if you will) and then you allow random changes that either increase or decrease that score. Over many iterations you gradually reduce the probability of moving to a worse score. This 'simulated annealing' approach reduces the probability of getting stuck in a local minimum.
So in your case the scoring function for a given layout might be based on the distance to the next advert in the same category and the distance to another advert of the same series. If you later have time of day considerations you can easily add them to the score function.
Initially you allocate the adverts sequentially, evenly or randomly within their time window (doesn't really matter which). Now you pick two slots and consider what happens to the score when you switch the contents of those two slots. If either advert moves out of its allowed range you can reject the change immediately. If both are still in range, does it move you to a better overall score? Initially you take changes randomly even if they make it worse but over time you reduce the probability of that happening so that by the end you are moving monotonically towards a better score.
Easy to implement, easy to add new 'rules' that affect score, can easily adjust run-time to accept a 'good enough' answer, ...
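Here is a rough, self-contained Java sketch of that loop for the ad-playlist case. It assumes a hypothetical Ad class with a getCategory() method, scores only the "no same-category ads within a few slots of each other" rule, and all parameters are illustrative:

import java.util.Collections;
import java.util.List;
import java.util.Random;

public class AdPlaylistAnnealer {

    static final int MIN_GAP = 3;          // no same-category ads within 3 slots of each other
    static final Random RANDOM = new Random();

    // Score = number of same-category ads that are too close together (lower is better).
    static int score(List<Ad> playlist) {
        int clashes = 0;
        for (int i = 0; i < playlist.size(); i++) {
            for (int j = i + 1; j < playlist.size() && j <= i + MIN_GAP; j++) {
                if (playlist.get(i).getCategory().equals(playlist.get(j).getCategory())) {
                    clashes++;
                }
            }
        }
        return clashes;
    }

    static void anneal(List<Ad> playlist, int iterations) {
        double temperature = 10.0;
        int currentScore = score(playlist);
        for (int iter = 0; iter < iterations; iter++) {
            int a = RANDOM.nextInt(playlist.size());
            int b = RANDOM.nextInt(playlist.size());
            Collections.swap(playlist, a, b);          // try swapping two slots
            int newScore = score(playlist);
            int delta = newScore - currentScore;
            // Accept improvements always, worse layouts with a shrinking probability.
            if (delta <= 0 || RANDOM.nextDouble() < Math.exp(-delta / temperature)) {
                currentScore = newScore;
            } else {
                Collections.swap(playlist, a, b);      // undo the swap
            }
            temperature *= 0.9995;                     // cool down gradually
        }
    }
}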
Another approach would be to use a genetic algorithm; see this similar question: Best Fit Scheduling Algorithm. This is likely harder to program but will probably converge more quickly on a good answer.