Composite Tasks using OptaPlanner

I am trying to build Pools of Workers in a chained TWVRP scenario with multiple anchors. One Composite Task will be split into multiple smaller Tasks and distributed onto the chains in an optimal manner. Now, how can I ensure that all tasks belonging to the same composite task have the same start time? Can I solve this using custom moves, or is using Drools to model this behaviour my only option?
I studied the documentation on custom moves but I just couldn't figure out how to use them in this case... Does anyone have a hint for me?

Make the startTime of a single Task a shadow variable that is the maximum previousTaskEndTime of all the single tasks that belong to the same CompositeTask.
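As a minimal sketch of that computation in plain Java (the OptaPlanner wiring, i.e. the @CustomShadowVariable annotation and the VariableListener that triggers on chain changes, is omitted; the Task class and its field are hypothetical names):

```java
import java.util.List;

public class CompositeStartTime {

    // Hypothetical planning entity: each Task knows when its predecessor
    // on its chain ends.
    static class Task {
        final long previousTaskEndTime;
        Task(long previousTaskEndTime) { this.previousTaskEndTime = previousTaskEndTime; }
    }

    // The value a VariableListener would push into every task's startTime
    // shadow variable: the latest predecessor end time among all tasks of
    // the same CompositeTask, so that they all start together.
    static long sharedStartTime(List<Task> tasksOfComposite) {
        return tasksOfComposite.stream()
                .mapToLong(t -> t.previousTaskEndTime)
                .max()
                .orElseThrow(() -> new IllegalArgumentException("empty composite"));
    }

    public static void main(String[] args) {
        List<Task> tasks = List.of(new Task(5), new Task(9), new Task(7));
        System.out.println(sharedStartTime(tasks)); // 9
    }
}
```

The listener would recompute this value and write it to all sibling tasks whenever any of their chain predecessors changes.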

Related

Implementing a planning optimization algorithm on hierarchical problem

I am working on a planning problem involving:
a collection of planning entities each containing the planning variable A
a global planning variable B (contained in the planning solution)
Since I am a beginner with OptaPlanner and planning optimization in general, I started with a simpler version of the problem, focusing on optimizing A with B modeled as a planning fact.
Now that I have a program successfully optimizing A given B, I want to implement a new solver optimizing both A and B. It turns out that the best optimization search strategy is to first select a B value and then optimize A given that B value. This process should be repeated until an optimum is found (the problem at hand is hierarchical).
I am looking for advice on how to implement this with OptaPlanner. I initially thought I would implement this as two phases (optimize B -> optimize A), but I now understand OptaPlanner phases are not meant for that. For example, the solver cannot loop over this ordered sequence of two phases.
Instead, I think I should implement a custom MoveSelector which starts with a move on B and then an infinite list of A moves.
What do you think? Am I on the right track?
Kind regards,
A and B are different stages, not different phases (in OptaPlanner terminology).
In multi-stage planning (see the short entry in the docs), there are basically 2 different solvers, with one feeding into the other. This is very common when A and B occur at different times (think strategic vs tactical vs operational planning) or due to Conway's law (the organization structure of the users). This is the easy solution and often also by far the most practical during change management for the business. It's the least risky. However, it's suboptimal (in theory at least).
The alternative is indeed to have multiple planning entities, which makes it one big planning problem. That is the perfect solution, but it is challenging; perfect can be the enemy of good. OptaPlanner's architecture supports it, but custom moves are indeed required (today, in OptaPlanner 7.35), as the default move selectors won't escape local optima often enough.
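The multi-stage shape ("pick a B value, then optimize A given B, repeat") can be sketched outside OptaPlanner with a toy objective. Everything here is invented for illustration; in a real multi-stage setup each stage would run its own Solver:

```java
public class TwoStageSketch {

    // Toy cost to minimize; stands in for the score of a full plan.
    static double cost(int b, int a) {
        return (b - 3) * (b - 3) + (a - b) * (a - b);
    }

    // Inner stage: optimize A for a fixed B (exhaustive over a toy range).
    static int bestA(int b) {
        int best = 0;
        for (int a = 0; a <= 10; a++) {
            if (cost(b, a) < cost(b, best)) best = a;
        }
        return best;
    }

    // Outer stage: loop over B values, solving the inner A problem for each.
    static int[] solve() {
        int bestB = 0, bestAForB = bestA(0);
        for (int b = 0; b <= 10; b++) {
            int a = bestA(b);
            if (cost(b, a) < cost(bestB, bestAForB)) {
                bestB = b;
                bestAForB = a;
            }
        }
        return new int[] { bestB, bestAForB };
    }

    public static void main(String[] args) {
        int[] r = solve();
        System.out.println(r[0] + "," + r[1]); // 3,3
    }
}
```

The single-big-problem alternative would instead make both A and B genuine planning variables and rely on custom moves that change B and A together.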

How to implement a Service Function Chain (SFC) in OptaPlanner?

I need to plan the placement of a set of virtualized functionalities over a set of servers using OptaPlanner. My functionalities need to be executed in a specific pre-defined order, forming a Service Function Chain.
For example, let's say 2 chains are defined:
F1->F2
F1->F2->F3
The goal is to place them on a set of servers while minimizing the costs (bandwidth, CPU, Storage,.. costs)
The examples I see in the OptaPlanner user guide for a set of chained planning entities include the Traveling Salesman Problem (TSP) and VRP; however, in these problems the planning entities do not need to be planned in a specific order.
If the sequence order of the functionalities is given, you can just give a solution a bad score if the planned sequence is not correct.
e.g. (pseudorule)
rule "keepServiceFunctionChainSequence"
when
    Functionality($chainId : chainId, $end : end, $orderPos : orderPos)
    // find another functionality in the same chain with a higher position
    // in the order that nevertheless starts before this one ends
    Functionality(chainId == $chainId, orderPos > $orderPos, start.before($end), $orderPos2 : orderPos)
then
    // negative match: penalize proportionally to the order distance
    scoreHolder.addHardConstraintMatch(kcontext, -($orderPos2 - $orderPos));
end
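The same pairwise check can be mirrored in plain Java, for example as the core of a hand-rolled score calculation. The Functionality fields here are hypothetical and simplified to long start/end times:

```java
import java.util.List;

public class ChainSequenceCheck {

    static class Functionality {
        final int chainId, orderPos;
        final long start, end;
        Functionality(int chainId, int orderPos, long start, long end) {
            this.chainId = chainId;
            this.orderPos = orderPos;
            this.start = start;
            this.end = end;
        }
    }

    // Sum of hard penalties: for each pair in the same chain where a
    // later-ordered functionality starts before an earlier one ends,
    // penalize by the order distance (mirrors the rule above).
    static int hardPenalty(List<Functionality> fs) {
        int penalty = 0;
        for (Functionality f1 : fs) {
            for (Functionality f2 : fs) {
                if (f1.chainId == f2.chainId
                        && f2.orderPos > f1.orderPos
                        && f2.start < f1.end) {
                    penalty -= f2.orderPos - f1.orderPos;
                }
            }
        }
        return penalty;
    }

    public static void main(String[] args) {
        // F2 (orderPos 2) starts at t=5, before F1 (orderPos 1) ends at t=10.
        List<Functionality> fs = List.of(
                new Functionality(1, 1, 0, 10),
                new Functionality(1, 2, 5, 15));
        System.out.println(hardPenalty(fs)); // -1
    }
}
```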
If you have a lot of functionalities to plan and you see too many useless moves, it may be smart to write a custom move that moves one whole chain at once and keeps the sequence.

Correct way to represent a while loop with one task in BPMN?

Which is the more correct way in BPMN to represent a simple while loop that redirects to one task only?
I would say that using the loop activity is the better option as it helps keep the process model tidy.
Also, be careful when creating loops in a process, as task definitions usually change between the first iteration and the second. E.g. the first iteration is the creation of a file, while the second is actually an edit of the file: two different actions (create and edit) should not share a single task definition.
Normally, BPMN represents activities marching through time in a linear fashion, similar to a Value Stream Map. Creating a backward loop would disrupt that timeline.

Parallel Lifelines in Sequence Diagrams in Enterprise Architect

I am using Enterprise Architect to make a sequence diagram. The sequence diagram contains some entities that actually run in parallel, because there are multiple cores and hardware peripherals running concurrently. When I try to draw the sequence diagram of a behavior containing entities that run in parallel, the program automatically shifts the messages and calls of the other entities down, because it thinks they run after each other. I actually mean for them to run at the same time.
How can I force Enterprise Architect to let me draw parallel sequences without shifting other events down?
Thanks in advance,
The example picture shows how to draw a sequence diagram describing parallel (concurrent) execution on two (or more) lifelines. The interaction of each lifeline can be defined in a separate section of a par combined fragment.
You can use a combined fragment of type par to denote this. Within the fragment, you specify two or more "conditions", which in the case of a par fragment should be read as separate threads of execution. You can name them or not, as you prefer, and you can also name the fragment itself.
There's a simple example at IBM developerWorks, look for Figure 17. In this example, neither the fragment nor the conditions are named.
Note that parallel / concurrent fragments are meant to show essentially individual messages being processed in parallel. If you have large, complex sequences that occur concurrently, you probably need to split them into separate diagrams - remember, one sequence diagram is intended to show one sequence of related events, so there is always an implicit strict timeline running top to bottom.

How would you create a cyclic task graph in TPL, and/or is this possible?

My project has a requirement to gather data from a number of sources, then do things in response to the completion of the gathering of that data. Some of the gathering tasks have dependencies on prior gathering tasks. TPL has been a good fit because it naturally continues with tasks from their antecedents, and the "final" tasks that use the results are again dependents. Great. However, we would like to have a "sleep and regather" task that starts upon completion of the "final" tasks; this task's job is logically to be the antecedent of the "final" tasks and kick off the next cycle. In effect, the TPL's DAG becomes cyclic, or, if thought of sequentially, a loop.
Is it possible to express this cyclic requirement completely within the TPL API? If so, how? Our current implementation instead does a WaitAll() on the antecedents, and then a Task.StartNew() given a delegate that does a sleep followed by rebuilding a task graph with the WaitAll(). This works, but seems a bit artificial.
There are a few options here. What you are doing now seems reasonable.
However, you could potentially setup the entire operation as a producer/consumer scenario using BlockingCollection<T>. If your consuming enumerable used a ManualResetEvent that was set after the WaitAll completed, it could allow a single "item" to be consumed at a time, using tasks as you have it written now.
That being said, this seems like a perfect candidate for the TPL Dataflow library (in CTP).
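The question is about .NET's TPL, but the "cycle expressed as a loop that rebuilds the continuation graph" shape is language-neutral. As an illustration only, here is a Java analog using CompletableFuture continuations; gatherA/gatherB and the cycle counter are placeholders, not part of the original question:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

public class GatherLoop {
    static final AtomicInteger cycles = new AtomicInteger();

    // Placeholder gathering tasks (would fetch data from the sources).
    static CompletableFuture<Void> gatherA() { return CompletableFuture.runAsync(() -> {}); }
    static CompletableFuture<Void> gatherB() { return CompletableFuture.runAsync(() -> {}); }

    // One cycle: gather in parallel, then run the "final" step,
    // then the sleep-before-regathering step.
    static CompletableFuture<Void> oneCycle() {
        return CompletableFuture.allOf(gatherA(), gatherB())
                .thenRun(cycles::incrementAndGet)   // "final" task
                .thenRun(() -> { /* sleep before the next gather */ });
    }

    // The cycle in the task graph becomes a loop that chains a fresh
    // sub-graph onto the completion of the previous one.
    static CompletableFuture<Void> run(int iterations) {
        CompletableFuture<Void> f = CompletableFuture.completedFuture(null);
        for (int i = 0; i < iterations; i++) {
            f = f.thenCompose(v -> oneCycle());
        }
        return f;
    }

    public static void main(String[] args) {
        run(3).join();
        System.out.println(cycles.get()); // 3
    }
}
```

This is essentially the WaitAll-then-StartNew approach from the question, restated as continuations: the graph stays acyclic, and the repetition lives in ordinary control flow.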