I've just started learning OptaPlanner, so please pardon me if anything below is technically inaccurate.
Basically, I have a problem of assigning several tasks to a set of machines. Tasks have precedence restrictions, such that some tasks cannot start before another task ends. In addition, each task can only run on certain machines. The goal is to minimize the makespan of all these tasks.
I modeled this problem with the Chained Through Time pattern, in which each machine is an anchor. The problem is that tasks on a given machine might not be executed back to back because of the precedence restrictions. For example, suppose Task B can only start after Task A completes, while Tasks A and B run on machines I and II respectively. During the execution of Task A on machine I, if there is no other task that can run on machine II, then machine II can only stay idle until Task A completes, at which point Task B can start on it. The size of this gap is not deterministic, since in this example it depends on the duration of Task A. According to the OptaPlanner documentation, it seems that an additional planning variable for such gaps should be introduced for this kind of problem, but I'm having difficulty modeling this gap variable. In general, how do I integrate the gap variable into a model that uses the Chained Through Time pattern? A detailed explanation, or even a simple example, would be highly appreciated.
Moreover, I'm actually not sure whether the Chained Through Time pattern is suitable for modeling this kind of task-assignment problem at all, or whether I've used an entirely inappropriate method. Could someone please shed some light on this? Thanks in advance.
I'm using the Chained Through Time pattern to solve the same kind of problem as yours. To handle the precedence restriction, you can write Drools rules.
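To make that concrete, here is a minimal sketch of the precedence constraint. The answer above suggests a Drools rule; this sketch expresses the same check with OptaPlanner's Constraint Streams API in Java instead, and it assumes a hypothetical Task entity with predecessor, startTime and endTime fields (integer time units), so adapt the names to your own model. On older OptaPlanner versions, use from(...) instead of forEach(...).

```java
import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.ConstraintProvider;

// Hypothetical sketch: Task, getPredecessor(), getStartTime() and getEndTime()
// are illustrative names, not taken from the question.
public class PrecedenceConstraintProvider implements ConstraintProvider {

    @Override
    public Constraint[] defineConstraints(ConstraintFactory factory) {
        return new Constraint[] {
                // Hard constraint: a task must not start before its predecessor ends.
                factory.forEach(Task.class)
                        .filter(task -> task.getPredecessor() != null
                                && task.getStartTime() < task.getPredecessor().getEndTime())
                        .penalize("Task starts before its predecessor ends",
                                HardSoftScore.ONE_HARD)
        };
    }
}
```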
I have a large MILP that I build with cvxpy and want to solve with GUROBI. When I use the solve() function of cvxpy, it takes a really, really long time to set up and does not start solving for hours. While doing that, only 1 core of my cluster is being used, and that core is at 100%. I would like to use multiple cores to build the model so that the build-up does not take so long. Running grbprobe also shows that Gurobi knows about the other cores, and for solving the problem it does use multiple cores.
I have tried running with different flags, e.g. turning presolve off and on, or setting the number of Threads to be used (that did not seem to help, even for the solving).
I also reduced the number of constraints in the problem, and it started solving much faster, which means this is definitely not a problem of the model itself.
The problem in its normal state has about 2200 constraints; when I reduced it to 150, it only took a couple of seconds until it started searching for a solution.
The problem is that I can't see anything, since it takes so long just to reach the "set username parameters" message, and I get no information on what the computer is doing in the meantime.
Is there a way to tell GUROBI or CVXPY that it can use more CPUs for the build-up?
Is there another way to solve this problem?
Sorry. The first part of the solve (cvxpy model generation, setup, presolving, scaling, solving the root, preprocessing) is almost completely serial. The parallel part is when it really starts working on the branch-and-bound tree. For many problems, the parallel part is by far the most expensive, but not for all.
This is not only the case for Gurobi. Other high-end solvers have the same behavior.
There are options to do less presolving and preprocessing. That may get you into the branch-and-bound phase earlier. However, it is usually better not to touch these options.
Running things with verbose=True may give you more information. If you have more detailed questions, you may want to share the log.
To all,
Version of optaplanner: 7.48
For some time now, I have no longer been able to resume solving.
The process is:
thread 1: solver.solve();
thread 2: solver.terminateEarly();
thread 2: solver.solve(solver.getBestSolution());
The shorter the time spent between solve() and terminateEarly(), the less likely the resume is to work fine.
When it doesn't work, the symptoms are: after the Construction Heuristic finishes, only a few new best solutions are found, and then the solver stops finding new best solutions forever, even though it is still calculating at a significant CPU rate.
The problem is similar when solver.getBestSolution() is serialized and reloaded later.
Any suggestion?
Thanks.
Regards.
JLL
Based on the contents of the question, the title is wrong - OptaPlanner resumes just fine, it just cannot find any better solutions. There are two reasons why that could be the case:
There are no better solutions left to be found. The bigger your data set becomes, the less likely this is.
There are better solutions available, but OptaPlanner cannot get to them, as it is stuck in a local optimum. This is a common problem.
Escaping local optima is usually accomplished by a combination of the following:
Eliminating score traps from your constraints (see the sketch after this list).
Increasing variety in move selection. See the available generic moves, or consider implementing a custom move for any intricacies of your particular problem.
Iterative local search. We do not (yet) support that out of the box, but the general idea is that at a certain point, you ruin a part of your solution (perhaps by uninitializing it) and then recreate it (randomly or otherwise).
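To illustrate the first point: a score trap typically means a constraint hands out the same flat penalty no matter how badly it is violated, so the solver sees no gradient to climb. Below is a minimal Constraint Streams sketch of replacing a flat penalty with one proportional to the size of the violation; the Task class and its getEndTime()/getDeadline() accessors are hypothetical names used only to show the idea (on older versions, from(...) replaces forEach(...)).

```java
// Inside your ConstraintProvider. A flat penalty such as
//     .penalize("Task ends after its deadline", HardSoftScore.ONE_HARD)
// is a score trap: finishing 1 minute late scores the same as 5 hours late.
// Weighing the match by the overshoot gives the solver a gradient to follow.
Constraint taskEndsAfterDeadline(ConstraintFactory factory) {
    return factory.forEach(Task.class)
            .filter(task -> task.getEndTime() > task.getDeadline())
            .penalize("Task ends after its deadline", HardSoftScore.ONE_HARD,
                    task -> task.getEndTime() - task.getDeadline());
}
```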
Finally, I wholeheartedly recommend you to upgrade to OptaPlanner 8. The upgrade is easy, and the 7.x stream has been in maintenance mode for a very long time now.
I've got a problem which I think OptaPlanner may be able to solve, but I haven't seen a demo that quite fits what I'm looking to do. My problem is scheduling IoT node usage for a testbed. Each test execution (job) requires different sets of constraints on the nodes it will use. For example, a job may ask for M nodes with resource A and N nodes with resource B. It will also specify the length of time it needs the nodes for and a window in which the job start is acceptable. To successfully schedule a job, it must be able to claim enough resources to meet the job-specific requirements (i.e., hard limits).
I'm new to OptaPlanner, and my understanding is that most of the examples focus on needing only one resource per job. Any insight into whether this problem could be solved with OptaPlanner, and where to start, would be highly appreciated.
If you haven't already, look at the [cheap time scheduling example](https://www.youtube.com/watch?v=r6KsveB6v-g&list=PLJY69IMbAdq0uKPnjtWXZ2x7KE1eWg3ns) and the project job scheduling example.
The differentiating question is: when job J1 needs M nodes with resource A, can any of those M nodes also supply resource B (just not at the same time)?
If that's not the case, this is an easy model: you can treat resource A as a capacity, like in cloud balancing.
If that is the case, it's a complex model (but still possible): for example, the jobs are chained or time-grained (=> planning variable 1) and each job has tasks which are assigned to nodes (=> planning variable 2). All of this is likely to need custom moves for efficiency.
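To make the second case a bit more concrete, below is a minimal domain sketch of those two planning variables. All class and field names (Job, NodeClaim, TimeGrain, Node, ResourceType) are hypothetical illustrations, not taken from the question:

```java
import org.optaplanner.core.api.domain.entity.PlanningEntity;
import org.optaplanner.core.api.domain.variable.PlanningVariable;

// Planning variable 1: when the job starts, on a time-grained axis.
@PlanningEntity
class Job {
    private int durationInGrains;            // how long the job holds its nodes
    @PlanningVariable(valueRangeProviderRefs = "timeGrainRange")
    private TimeGrain startingTimeGrain;     // constrained to the job's allowed start window
    // getters and setters omitted
}

// Planning variable 2: which node satisfies each resource requirement of a job.
@PlanningEntity
class NodeClaim {
    private Job job;                         // the job this claim belongs to
    private ResourceType requiredResource;   // e.g. resource A or resource B
    @PlanningVariable(valueRangeProviderRefs = "nodeRange")
    private Node node;                       // must offer requiredResource (hard constraint)
    // getters and setters omitted
}
```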
I am trying to use real-time planning with a custom SolverThread implementing SolverEventListener and daemon mode.
I am not interested in inserting or deleting entities. I am just interested in "updating" them, for example, changing the "priority" for a particular entity in my PlanningEntityCollectionProperty collection.
At the moment, I am using:
scoreDirector.beforeProblemPropertyChanged(entity);
entity.setPriority(newPriority);
scoreDirector.afterProblemPropertyChanged(entity);
It seems that the solver runs and manages to improve the current solution, but it only spends a few ms on it:
org.optaplanner.core.impl.solver.DefaultSolver: Real-time problem fact changes done: step total (1), new best score (0hard/-100medium/-15soft).
org.optaplanner.core.impl.solver.DefaultSolver: Solving restarted: time spent (152), best score (0hard/-100medium/-15soft), environment mode (REPRODUCIBLE),
Therefore, the solver stops really soon, considering that my solver has a 10-second UnimprovedSecondsSpentLimit. So the first time the solver is executed, it stops after 10 seconds, but the following times it stops after a few ms and doesn't manage to get a good solution.
I am not sure I need to use "beforeProblemPropertyChanged" when the planning entity changes, but I can't find any alternative, because "beforeVariableChanged" is used when the planning variable changes, right? Maybe OptaPlanner just doesn't support updates to entities and I need to delete the old one using beforeEntityRemoved and insert it again using beforeEntityAdded?
I was using BRANCH_AND_BOUND; however, I have changed to local search (TABU_SEARCH) and the solver now does use the 10 seconds. However, it seems stuck in a local optimum, because it doesn't manage to improve the solution even with a really small collection (10 entities).
Anyone with experience with real time planning?
Thanks
The "Solving restarted" always follows very shortly after "Real-time problem fact changes done", because real-time problem facts effectively "stop & restart" the solver. The 10 sec unimproved termination only starts counting again after the restart.
DEBUG logging (or even TRACE) will show you what's really going on.
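For reference, a common way to hand such an update to a running solver is to wrap it in a ProblemFactChange, so the before/after calls run on the solver's own working solution and the "stop & restart" described above happens cleanly. A minimal sketch, reusing the entity and newPriority from the question (MyEntity is a placeholder class name; lookUpWorkingObject needs a @PlanningId or another lookup strategy configured on the entity):

```java
// Submit the priority update as a ProblemFactChange instead of mutating
// the score director directly from another thread.
solver.addProblemFactChange(scoreDirector -> {
    // Find the solver's own working copy of the entity to update.
    MyEntity workingEntity = scoreDirector.lookUpWorkingObject(entity);
    scoreDirector.beforeProblemPropertyChanged(workingEntity);
    workingEntity.setPriority(newPriority);
    scoreDirector.afterProblemPropertyChanged(workingEntity);
    scoreDirector.triggerVariableListeners();
});
```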
I have been given a multi-step Cascading program that takes about ten times as long to run as an equivalent M/R job. How do I go about figuring out which of the steps is running the slowest, so I can target it for optimization?
Not a complete answer, but enough to get you started, I think. You need to generate a graphical representation of the MapReduce workflow for your job. See this page for an example: http://www.cascading.org/multitool/. The graph should help you figure out where the bottleneck is.
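If it helps, Cascading can emit that graphical representation itself as Graphviz DOT files; a minimal sketch is below (flowConnector and flowDef stand for whatever flow setup you already have, and the exact methods available depend on your Cascading version):

```java
import cascading.flow.Flow;

// Write the planned flow as Graphviz DOT files before running it, then render
// them (e.g. `dot -Tpng flow.dot -o flow.png`) to see how the pipe assembly
// maps onto MapReduce steps and which steps look the heaviest.
Flow flow = flowConnector.connect(flowDef);   // your existing flow setup
flow.writeDOT("flow.dot");                    // logical pipe assembly
flow.writeStepsDOT("steps.dot");              // physical steps, if your version supports it
flow.complete();                              // run the flow as usual
```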