We are using a customized VRP tutorial example to optimize daily routes for service engineers who travel to customers to carry out repair and installation tasks. We have time windows, and we optimize 1000+ tasks for multiple weeks into the future.
Our (simplified) domain model consists of:
Engineer - the guy doing all the work
Task - a single work assignment at a certain location
DailyRoute - an Engineer's route for a given day; consists of a linked list of Tasks
As a new requirement we must now support two engineers working in parallel on the same task.
Our current plan is to implement this by creating a subtask for the second engineer and adding a rule that its arrival time must be identical to the main task's.
However, this is problematic, since moving one of the interdependent tasks to a different time (e.g. a different DailyRoute) will usually violate the above constraint.
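For context, here is a minimal sketch of how our chained model roughly looks (names simplified, getters/setters omitted; this follows OptaPlanner's standard chained-variable pattern, with arrival times computed as shadow variables along the chain):

```java
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.AnchorShadowVariable;
import org.optaplanner.core.api.domain.variable.PlanningVariable;
import org.optaplanner.core.api.domain.variable.PlanningVariableGraphType;

// Common supertype for chain elements: the DailyRoute anchor or another Task.
interface Standstill {
}

// The anchor of a chain: one Engineer's route for one day.
class DailyRoute implements Standstill {
    // engineer, day, ... (problem facts)
}

@PlanningEntity
class Task implements Standstill {

    // Chained planning variable: the element this Task is appended after.
    @PlanningVariable(valueRangeProviderRefs = {"routeRange", "taskRange"},
            graphType = PlanningVariableGraphType.CHAINED)
    private Standstill previous;

    // Shadow variable: the DailyRoute at the head of this Task's chain.
    @AnchorShadowVariable(sourceVariableName = "previous")
    private DailyRoute dailyRoute;

    // arrivalTime would be a custom shadow variable updated along the chain.
}
```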
So far, we have come up with the following ideas:
Allow single task moves only to a DailyRoute on the same day as the other task's assigned route
can be done via a SelectionFilter (rough sketch below, after this list)
Use CompositeMoves to move both of the parallel tasks at once to different days
Do we need a custom MoveIteratorFactory to select the connected tasks?
Or can this be done with a CartesianProductMoveSelector instead?
Can we use nearby selection for the second move to prefer the same day as the first move's newly assigned day (has the first move already been applied at that point)?
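For idea 1, this is roughly the filter we have in mind (a sketch only: VrpSolution, getSibling(), getDailyRoute() and getDay() are placeholders from our domain, and ChangeMove's exact signature depends on the OptaPlanner version):

```java
import org.optaplanner.core.api.score.director.ScoreDirector;
import org.optaplanner.core.impl.heuristic.selector.common.decorator.SelectionFilter;
import org.optaplanner.core.impl.heuristic.selector.move.generic.ChangeMove;

// Accepts a ChangeMove only if it keeps a paired task on the same day
// as its sibling task's currently assigned DailyRoute.
public class SameDayAsSiblingFilter
        implements SelectionFilter<VrpSolution, ChangeMove<VrpSolution>> {

    @Override
    public boolean accept(ScoreDirector<VrpSolution> scoreDirector,
            ChangeMove<VrpSolution> move) {
        Task task = (Task) move.getEntity();
        if (task.getSibling() == null) {
            return true; // not a paired task, no restriction
        }
        // The day this move would put the task on must match the sibling's day.
        // getDailyRoute()/getDay()/getSibling() are hypothetical domain helpers.
        Standstill toValue = (Standstill) move.getToPlanningValue();
        return toValue.getDailyRoute().getDay()
                .equals(task.getSibling().getDailyRoute().getDay());
    }
}
```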
For two engineers working in parallel on the same task, see the docs' "design patterns" section, specifically "the delay till last pattern". There is no example, but our support services have helped implement it a few times - it works.
For the multiple stops at the same location: I've seen users split such visits up into smaller pieces to allow OptaPlanner to choose which of those pieces to aggregate. It works, but it's not perfect: the more fine-grained the pieces, the bigger the search space - and the more a custom move that focuses on moving all pieces together might help (but I wouldn't start out with that). Generally speaking: if the smallest vehicle has a capacity of 100, I'd run some experiments with splitting up to half that capacity - and then try a quarter too, just to see what works best through benchmarking with optaplanner-benchmark.
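A minimal harness for those experiments could look like this (benchmarkConfig.xml is a placeholder name; it would define one solver config per split granularity over the same datasets):

```java
import org.optaplanner.benchmark.api.PlannerBenchmark;
import org.optaplanner.benchmark.api.PlannerBenchmarkFactory;

public class SplitSizeBenchmark {

    public static void main(String[] args) {
        // One <solverBenchmark> per granularity: half capacity, quarter, ...
        PlannerBenchmarkFactory benchmarkFactory =
                PlannerBenchmarkFactory.createFromXmlResource("benchmarkConfig.xml");
        PlannerBenchmark benchmark = benchmarkFactory.buildPlannerBenchmark();
        // Runs all benchmarks and opens the HTML comparison report.
        benchmark.benchmarkAndShowReportInBrowser();
    }
}
```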
I'm playing the game Factorio, where you build a factory.
For the time being, I made a kind of flowchart using LibreOffice Calc to calculate how many machines I need to produce a certain material.
Example image from the spreadsheet
Each block has a recipe saved (blue). The recipe states what and how much it produces, what it consumes, and how much time it takes.
Each block takes the demand from the previous block (yellow) and, using the recipe, calculates how many machines (green) it needs to fulfill that demand.
Based on the number of machines, it calculates its own demands (orange).
Then the following blocks do the same, until the last block is reached.
Doing this in a spreadsheet does work, but it is quite a tedious task.
I showed this to my dad, as I'm quite proud of what I made, and he said that maybe a database would be more suitable.
I definitely see its advantages. For example I could easily summarize the final demands of raw resources, or the total power consumption, etc.
So I got myself Microsoft Access, and I'm pretty lost now. I know the basics of databases and some SQL, but I'm not quite sure how I would build this.
My first attempt was:
one table for machines. It includes the machine's production speed and other relevant stats.
one table for recipes. Each recipe clearly states what it produces, what it needs, the amount of each, and whether or not it is basic. Basic means that it is a raw resource, i.e. the production chain ends with it.
one table for units. Each unit has a machine, a recipe and an amount. For example, I would have one unit using basic assemblers to produce iron gears. The unit also says how many machines it contains, so with more machines it needs more and produces more.
I did manage to make a query that calculates the total inputs and outputs of all units based on their machine and recipe, as well as the total energy consumption.
However, that is nowhere near the spreadsheet I made.
For now we can probably set the graphical overlay aside; that would be quite a bit of overkill. However, what I do want to be able to do:
enter how much I want of a certain resource
based on that entry, the database would create a new table. The first entry would be the unit that produces the requested resource. The second would fulfill the first's demand, the third fulfills the second's demand, and so on.
So in the end I would end up with a list of units that will produce my requested resource.
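To make it concrete, here is the expansion I'm after in rough Java-like code (all names are made up; I just don't know how to express this with tables and queries):

```java
import java.util.List;
import java.util.Map;

// Illustration only: the recursive expansion the spreadsheet performs.
class Recipe {
    String product;
    double outputPerSecondPerMachine;
    Map<String, Double> inputsPerSecondPerMachine;
    boolean basic; // raw resource: the chain ends here
}

class ProductionPlanner {

    Map<String, Recipe> recipesByProduct; // contents of the recipes table

    // Adds the unit that covers the demand for one product,
    // then recurses into each ingredient's demand.
    void expand(String product, double demandPerSecond, List<String> plan) {
        Recipe recipe = recipesByProduct.get(product);
        if (recipe == null || recipe.basic) {
            plan.add(demandPerSecond + "/s of raw " + product);
            return;
        }
        double machines = Math.ceil(demandPerSecond / recipe.outputPerSecondPerMachine);
        plan.add(machines + " machines producing " + product);
        for (Map.Entry<String, Double> input : recipe.inputsPerSecondPerMachine.entrySet()) {
            expand(input.getKey(), machines * input.getValue(), plan);
        }
    }
}
```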
I hope someone can help me. There are programs out there that already do this kind of stuff, but I want to do this myself. If this is a problem that a database isn't suited for, then please tell me so.
Thanks for any help!
I have been learning OptaPlanner for some time, and I am trying to figure out a model design for my use case to progress the solution calculation. Here is my real-world case from a factory's production line:
A work order involves a list of sequential processes
Each kind of machine can handle a fixed set of process types (assume the machine quantity is sufficient)
The involved team has a number of available employees; each employee has the skills for a set of processes, each with their own working time
The production line has a fixed number of stations available
Each station holds one machine/employee pair or is left empty
Question: how do I design the model to calculate the maximum output of completed products in one day?
Confusion: in this case, a single station will have one employee and one machine populated, and a dynamically assigned process to work on, but the input factors refer to each other and are dynamic: employee => process skills, process skill => machines
Can someone please help guide me on how to design the model?
Maybe some of the examples are close to your requirements. See their docs here. Specifically the task assignment, cheap time scheduling or project job scheduling examples.
Otherwise, follow the domain modeling guidelines.
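If it helps, a first-cut entity along those guidelines might look like this (all class names here are made up for illustration, not taken from the examples): each process occurrence becomes a planning entity, and hard constraints enforce the employee-skill and machine-type compatibilities.

```java
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

@PlanningEntity
public class ProcessAssignment {

    private ProcessStep process; // problem fact: which process step this is

    @PlanningVariable(valueRangeProviderRefs = "stationRange")
    private Station station;

    @PlanningVariable(valueRangeProviderRefs = "machineRange")
    private Machine machine;

    @PlanningVariable(valueRangeProviderRefs = "employeeRange")
    private Employee employee;

    // getters/setters omitted; hard constraints would check
    // employee.hasSkill(process) and machine.supports(process.getType()).
}
```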
I am wondering if it's possible to handle scheduling problems using OptaPlanner where tasks have the following property: instead of having a fixed duration of 1 hour, a task takes 1 man-hour, i.e. if two employees work on that task, it can be done in half an hour.
If not, what other solvers could be used?
Model-wise, the easy approach is to split up that 1 task into 2 smaller tasks that get individually assigned. (When they're both assigned to the same person, sequentially after each other, you can add a soft constraint to reward that.) The downside is that you have to decide in advance for each task into how many pieces it can be split.
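A sketch of that soft reward using constraint streams (TaskPiece, getParentTask() and getEmployee() are placeholder names, and the exact reward API differs slightly between OptaPlanner versions):

```java
import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.ConstraintProvider;
import org.optaplanner.core.api.score.stream.Joiners;

public class TaskPieceConstraintProvider implements ConstraintProvider {

    @Override
    public Constraint[] defineConstraints(ConstraintFactory factory) {
        return new Constraint[] { rewardPiecesOnSamePerson(factory) };
    }

    // Reward every pair of pieces of the same original task
    // that ended up assigned to the same employee.
    Constraint rewardPiecesOnSamePerson(ConstraintFactory factory) {
        return factory.forEachUniquePair(TaskPiece.class,
                        Joiners.equal(TaskPiece::getParentTask),
                        Joiners.equal(TaskPiece::getEmployee))
                .reward(HardSoftScore.ONE_SOFT)
                .asConstraint("Pieces of one task on the same employee");
    }
}
```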
In reality, tasks are rarely arbitrarily divisible. Some parts of each task are atomic. For example, taking out the garbage can is a do-or-do-not task. Taking it half the way out, or taking half of it out and assigning someone else to do the rest, is not allowed because it would increase the time spent on it.
Some tasks need at least 2 persons to execute. For example, someone to hold the ladder while the other is standing on it. In the docs, see the auto delay until last pattern.
Alternatively to the simple model, you can also play with nullable=true and custom moves to allow multiple people to be assigned to the same task, but it's complicated. It can avoid having to tune the number of task pieces in advance too much. Once we support @PlanningVariableCollection, and do so fully, more and better options in this regard will become available.
I'm working on a project where a university course is represented as a to-do list, where:
the course owner (teacher of the course) can add tasks (containing the URL of the resource that needs to be learned and two datetime fields - when to start and when to complete the task)
a course subscriber (student) can mark tasks as complete or not complete, and these marks are saved individually for each account.
If a student marks a task as complete, their account and the marked element are shown in the course activity tab for the teacher, who can:
initiate a conversation with the student in a JavaScript-based chat
evaluate the result of the conversation
What optimization algorithm would you recommend for timetable rescheduling here (changing the datetime fields of a to-do element if the student procrastinates)?
We can use the student's activity on the resource, the fact that they marked the task as complete, and whether or not they clicked the URL on the to-do element leading to the external learning material (for example a Google Book).
For example, are genetic algorithms suitable for this model, and what pitfalls do they have: https://medium.com/@vijinimallawaarachchi/time-table-scheduling-2207ca593b4d ?
I'm not sure I completely understand your problem but it sounds like you have a feasible timetable to begin with and you just need to improve it.
If so, genetic algorithms can work well, but I think representing everything as binary 'chromosomes' as in the link might not be practical.
There are many other ways you can represent a timetable, such as in a 2D array, or giving an event a slot number.
You could look into algorithms such as Tabu Search, Simulated Annealing, Great Deluge and Hill Climbing. They are all based on similar ideas, but some work better on some problems than others. For example, if you have a very rough search space, Simulated Annealing won't be the best choice, and Hill Climbing usually only finds a local optimum.
The general architecture of the algorithms mentioned above, and of many other metaheuristics, is: select a neighbouring solution using a move operator (e.g. swapping the times of one, two or three events, or swapping the rooms of two events, etc.), check that the move doesn't violate any hard constraints, then use an acceptance strategy, such as Simulated Annealing or Great Deluge, to determine whether the move is accepted. If it is, keep the new solution, and repeat these steps until the termination criterion is met. This can be a maximum time, a maximum number of iterations, or no improving move having been found in x iterations.
While this is running, keep a log of the 'best' solution, so that when the algorithm terminates you have the best solution found. You can determine what is considered 'best' by how many soft constraints the timetable violates.
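In skeleton form (with the timetable representation and move operator left abstract), that loop looks roughly like this, here with a simulated-annealing acceptance:

```java
import java.util.Random;

public class LocalSearchLoop {

    // Abstract stand-ins for your timetable representation and move operator.
    interface Timetable {
        Timetable applyRandomMove(Random random); // returns a neighbouring solution
        int softViolations();
        boolean violatesHardConstraints();
    }

    static Timetable solve(Timetable initial, long maxIterations) {
        Random random = new Random();
        Timetable current = initial;
        Timetable best = initial;
        double temperature = 100.0; // simulated-annealing acceptance parameter

        for (long i = 0; i < maxIterations; i++) {
            Timetable candidate = current.applyRandomMove(random);
            if (candidate.violatesHardConstraints()) {
                continue; // reject infeasible neighbours outright
            }
            int delta = candidate.softViolations() - current.softViolations();
            // Always accept improvements; accept worsenings with a probability
            // that shrinks as the temperature cools.
            if (delta <= 0 || random.nextDouble() < Math.exp(-delta / temperature)) {
                current = candidate;
            }
            if (current.softViolations() < best.softViolations()) {
                best = current; // the log of the best solution found so far
            }
            temperature *= 0.99999; // cooling schedule
        }
        return best;
    }
}
```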
Hope this helps!
I'm developing a solver for a VRPTW problem using OptaPlanner, and I have run into a problem when a large number of customers needs to be serviced. By a large number I mean up to 10,000 customers. I have tried running the solver for about 48 hours, but no feasible solution was ever reached.
I use a highly customized VRPTW domain model that introduces an additional planning entity, the so-called "Workbreak". Workbreaks are like customers, but their location is actually another planning value - because every day a worker can return home or go to a hotel. Workbreaks have a fixed time of departure (usually the next morning) and a variable time of arrival (because it depends on the previous entity within the chain). A hard constraint ensures that a worker does not "arrive" at a Workbreak after a certain point in time. There are other hard constraints too, like:
multiple service time windows per customer
every week, the last customer in the chain must be a special "storage space visit" customer (workers need to gather materials for the next week)
long job management (when a customer needs to be serviced for longer than a specified time, the job should start before a specific hour of the day)
max number of jobs per workday
max total job duration per workday (as a worker cannot work longer than a specified time)
a workbreak cannot be located at a hotel that is too close to the worker's home
jobs cannot be serviced on Sundays
... and many more - there is a total of 19 hard constraints that have to be applied. There are 3 soft constraints too.
All the aforementioned constraints were initially written as Drools rules, but because of the many accumulation-based constraints (max jobs per day, max hours per day, overtime hours per week) the overall speed of the solver (in benchmarks) was about 400 steps/sec.
At first I thought the solver's speed was simply too slow to reach a feasible solution in a reasonable time, so I rewrote all the rules as an easy score calculator, and it had a decent speed - about 4600 steps/sec. I knew it would only perform well for a really small number of customers, but I wanted to know whether Drools was the cause of the poor performance. Then I rewrote all these rules as an incremental score calculator (and survived the pain of corrupted score bugs until all of them were successfully fixed). Surprisingly, incremental score calculation is a bit slower than the easy score calculator for a small number of customers, but that is not an issue, because the overall speed stays at about 4000 steps/sec no matter how many entities I have.
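For reference, this is the skeleton such an incremental calculator follows in recent OptaPlanner versions (the solution class name is illustrative, and all the per-constraint bookkeeping that caused the corrupted-score pain is elided):

```java
import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.calculator.IncrementalScoreCalculator;

public class VrptwIncrementalScoreCalculator
        implements IncrementalScoreCalculator<VrptwSolution, HardSoftScore> {

    private int hardScore;
    private int softScore;

    @Override
    public void resetWorkingSolution(VrptwSolution solution) {
        // rebuild all per-constraint counters from scratch
    }

    @Override
    public void beforeVariableChanged(Object entity, String variableName) {
        // retract the entity's contribution from each constraint counter
    }

    @Override
    public void afterVariableChanged(Object entity, String variableName) {
        // re-insert the entity's contribution with its new value
    }

    @Override
    public void beforeEntityAdded(Object entity) {
    }

    @Override
    public void afterEntityAdded(Object entity) {
    }

    @Override
    public void beforeEntityRemoved(Object entity) {
    }

    @Override
    public void afterEntityRemoved(Object entity) {
    }

    @Override
    public HardSoftScore calculateScore() {
        return HardSoftScore.of(hardScore, softScore);
    }
}
```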
The thing that bugs me the most is that above a certain number of customers (problems start at 1,000 customers) the solver cannot reach a feasible solution. Currently I'm using the Late Acceptance and Step Counting algorithms, because they perform really well on this kind of problem (at least for smaller numbers of customers). I used Simulated Annealing too, but without success, mostly because I could not find good values for its algorithm-specific parameters.
I have implemented some custom moves too:
A composite move that changes a workbreak's location when sibling entities are changed by other moves, like change/swap moves (it helps escape many score traps, as an improving step usually requires at least two moves to be performed in a single step)
A move factory for better long job assignment (it generates moves that try to put customers with longer service times at the front of a workday's chain)
A workbreak assignment move factory (it generates moves that help put workbreaks in the proper sequence)
Now I'm scratching my head, wondering what I should do to diagnose the source of my problem. I suspected that maybe it was hitting a score trap, so I modified the solver to save a snapshot of the best score every minute. After reading these snapshots, I realized that the score was still improving, so the solver does not seem to be stuck. Could the number of hard constraints play a role? I suspect that many moves need to be evaluated to find one that improves the score. Maybe 48 hours simply isn't that much for a problem of this size, and it needs to compute for a whole week? Unfortunately, I have nothing to compare with.
I would like to know how to find out if it is solely a performance problem, or a solver (algorithm, custom moves, hard/soft score) configuration problem.
I really apologize for my bad English.
TL;DR but FWIW:
To scale above 1k locations you need to use nearby selection.
To scale above 10k locations, add Partitioned Search too.
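For completeness, the main thing nearby selection needs from your code is a distance meter, along these lines (Customer and its location methods stand in for your own domain classes); it is then referenced from the move selector configuration:

```java
import org.optaplanner.core.impl.heuristic.selector.common.nearby.NearbyDistanceMeter;

// Tells nearby selection how "close" two customers are.
public class CustomerNearbyDistanceMeter
        implements NearbyDistanceMeter<Customer, Customer> {

    @Override
    public double getNearbyDistance(Customer origin, Customer destination) {
        return origin.getLocation().getDistanceTo(destination.getLocation());
    }
}
```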