OptaPlanner strange behaviour with FULL_ASSERT

I am developing an incremental solver for rostering. The two planning entities are Assignment and Employee; the Employee has an @InverseRelationShadowVariable collection of assignments.
I have noticed a strange behaviour when using FULL_ASSERT.
At the start of the LS phase an EmployeeSwapMove is evaluated.
After that the listener is invoked, which first retracts all assignments from the Employee, then inserts the assignments into the shadow collection as per the move.
After that resetWorkingSolution is invoked, but the Employee still holds the Assignments from before the move.
OptaPlanner also logs that the above move has been selected, but the Employee's state (the shadow collection) does not reflect that.
After that the score gets corrupted, of course.
The issue is not there when using FAST_ASSERT.
Can someone give me a hint?

I found out at last. In my VariableListener I had mixed up the beforeVariableChanged/afterVariableChanged calls.
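The contract the asker tripped over can be shown in isolation. Below is a minimal sketch in plain Java (no OptaPlanner dependency; the class and method names mimic the real API but the implementation is only illustrative): the listener must fire the "before" notification before mutating the shadow collection and the "after" notification after it, never the other way around.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the notification contract a VariableListener must follow:
// "before" fires with the OLD state, "after" fires with the NEW state.
// Swapping the two calls leaves the score director's assert machinery
// with stale state, which FULL_ASSERT reports as score corruption.
class Employee {
    final List<String> assignments = new ArrayList<>();
}

class ShadowUpdater {
    final List<String> notificationLog = new ArrayList<>();

    void beforeVariableChanged(Employee employee) {
        // the score director snapshots the old value here
        notificationLog.add("before:" + employee.assignments.size());
    }

    void afterVariableChanged(Employee employee) {
        // the score director reads the new value here
        notificationLog.add("after:" + employee.assignments.size());
    }

    void insert(Employee employee, String assignment) {
        beforeVariableChanged(employee);      // 1. notify with old state
        employee.assignments.add(assignment); // 2. mutate
        afterVariableChanged(employee);       // 3. notify with new state
    }

    void retract(Employee employee, String assignment) {
        beforeVariableChanged(employee);
        employee.assignments.remove(assignment);
        afterVariableChanged(employee);
    }
}
```

The log makes the ordering visible: each mutation is bracketed by exactly one before/after pair.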

Related

Optaplanner, update shadow variables after every step

I am trying to add moves selectors which consider the state of the current working solution. For example, suppose in the cloud balancing problem I was trying to make a move which preferentially moved a process onto a computer which already holds few processes. I have a shadow variable which tracks the number of processes on the computer, then I have a valueSelector which implements SelectionProbabilityWeightFactory that gives a higher weight to computers with fewer processes.
This setup works fine and produces the moves that I want. But it is terribly slow because it is updating the shadow variable far more often than I need it to. Since I am not using this shadow variable for scoring, I don't need it to be updated after every move attempted during the step. I only need the shadow variable to be updated after each accepted move (i.e. the end of the step).
Alternately, I could use a custom move factory, but that requires that every computer have its process count fully re-calculated at each step. This means I would lose the incremental calculation benefit I get with the shadow variables.
So is there a way to force shadow variables to update after each step, rather than after each move? Or is there a better way to track the status of the working solution for use in move selectors?
Bad news first:
It's not possible to have a VariableListener update a shadow variable only per step and not per move. And it's unlikely we'll ever want to allow that particular change, as it would hurt the predictability and integrity of the state of the domain model between move iterations. This could create a lot of havoc, including multiple forms of corruption, if used even slightly incorrectly.
Good news next:
Yes, you need to calculate some state per step to generate moves efficiently. This is a common problem I've run into a few times before too.
But why put that on the domain model? It doesn't belong there.
It belongs in the move selector. For example, if you use a MoveIteratorFactory, it has a method phaseStarted() (called when the phase starts) and a method createRandomMoveIterator() (called when a step starts, even with SelectionCacheType.JIT).
So something like this should do the trick:
public class MyMoveIteratorFactory implements MoveIteratorFactory<...> {

    @Override
    public void phaseStarted(ScoreDirector<...> scoreDirector) {
    }

    @Override
    public Iterator<Move_> createRandomMoveIterator(ScoreDirector<...> scoreDirector, Random workingRandom) {
        List<Computer> alreadyUsedComputerList = ...; // runs once per step
        return new MyIterator(alreadyUsedComputerList, workingRandom);
    }
}
Now, the plot thickens when multiple move selectors need to reuse the same calculation. That's where SupplyManager comes into play, which is not public API. But this is definitely a good requirement for our "move streams API" experiment that we'll do next year.

OptaPlanner, in VariableListener change shadow variable of other entity

Working on my project based on the OptaPlanner example taskassigning. In the example, StartTimeUpdatingVariableListener.updateStartTime() changes the time of the source task. Would it be OK, right in that function, to change the shadow variable of the previous task instead of the source task? In my scenario, each task has a waiting time (shadow variable); only when a new task is added can the previous task's waiting time be calculated. A different source task will bring a different waiting time to its previous task. Eventually the sum of all employees' waiting times is minimized in a rule. Looking at the example, in the listener, only the source task's time is updated, surrounded by beforeVariableChanged and afterVariableChanged. Will there be any problem updating another task's shadow variable?
You can't create cycles that would cause infinite loops.
Across different shadow variable declarations
A custom shadow variable (VariableListener) can trigger another custom shadow variable (VariableListener); for example, C can trigger E. So the VariableListener of C that changes variable C triggers (delayed) events on the VariableListener of E.
But the dependency tree cannot contain a cycle, OptaPlanner validates this through the sources attribute.
Notice how all the variable listener methods of C are called before the first one of E is called. They are delayed; OptaPlanner gives you this guarantee behind the scenes.
For a single shadow variable declaration
The VariableListener for C is triggered when A or B changes. So when it changes a C variable, no new trigger events for that VariableListener are created.
When a single A variable changes, one event is triggered on the VariableListener for C, which can change multiple C variables. The loop that changes those multiple C variables must not be an infinite loop.
In practice, with VRP and task assignment scheduling, I've found that the only way to guarantee the absence of an infinite loop is to only make changes forward in time. So in a chained model, follow the next variables, but not the previous ones.
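The "forward in time" rule can be sketched in plain Java (no OptaPlanner classes; Task, startTime and duration loosely mirror the taskassigning example, and the changeLog stands in for the before/afterVariableChanged calls). Starting at the changed task, the listener walks the chain via the next pointers only, so termination is guaranteed by the end of the chain.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: recompute shadow values forward along a chain, never backward.
class Task {
    int startTime;       // shadow variable
    final int duration;
    Task nextTask;       // forward pointer along the chain

    Task(int duration) { this.duration = duration; }
}

class StartTimeUpdater {
    final List<String> changeLog = new ArrayList<>();

    // Walk forward from the changed task; previous tasks are never touched,
    // so this loop cannot cycle back and become infinite.
    void updateForwardFrom(Task sourceTask, int previousEndTime) {
        Task shadowTask = sourceTask;
        int endTime = previousEndTime;
        while (shadowTask != null) {
            changeLog.add("before");   // stands in for beforeVariableChanged(...)
            shadowTask.startTime = endTime;
            changeLog.add("after");    // stands in for afterVariableChanged(...)
            endTime = shadowTask.startTime + shadowTask.duration;
            shadowTask = shadowTask.nextTask;
        }
    }
}
```

Each changed task is still bracketed by its own before/after pair, which is what the real listener must do as well.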

Assigned task cannot be reassigned or unassigned in optaplanner task assignment process

I want help regarding Optaplanner task assignment changes.
I have 10 tasks, 5 customers and 4 employees. I ran OptaPlanner on the 10 tasks and it assigned them to the employees. Then I created 5 more tasks and ran OptaPlanner again. It removed already-assigned tasks and reassigned them to other employees.
How can I prevent already-assigned tasks from being reassigned or unassigned?
With @PlanningPin (see docs) or a movableEntitySelectionFilter (see docs), you can pin assignments during repeated planning (such as continuous planning) so OptaPlanner doesn't change those entities but still respects them for the constraints.
That being said, task assignment uses a chained variable, and there is a pitfall with pinning there: if an entity is pinned, the previous entity must be pinned too (all the way back to the anchor).
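The chained-pinning rule above can be checked mechanically. Here is a small illustrative helper (plain Java, not an OptaPlanner API): given the pinned flags of one chain ordered from the anchor outward, it verifies that no pinned entity appears after an unpinned one.

```java
import java.util.List;

// Sketch of the pitfall: a pinned entity is only valid if everything
// between it and the anchor is pinned too.
class ChainPinValidator {
    // pinnedFlags lists the entities of one chain, starting with the one
    // closest to the anchor; true means pinned (e.g. via @PlanningPin)
    static boolean isValidPinning(List<Boolean> pinnedFlags) {
        boolean unpinnedSeen = false;
        for (boolean pinned : pinnedFlags) {
            if (!pinned) {
                unpinnedSeen = true;
            } else if (unpinnedSeen) {
                return false; // pinned entity after an unpinned one: invalid
            }
        }
        return true;
    }
}
```

So `[pinned, pinned, unpinned]` is a valid chain, while `[pinned, unpinned, pinned]` is not.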

Optaplanner: check if chained planning variable has anchor

I'm modifying the vehicle routing OptaPlanner example. Vehicles are replaced with individuals who have to travel around the city, but they can do so using different modes of transportation. So I have an attribute on the anchor (Vehicle in the example, Employee in my modified code) called modeOfTransportation.
When calculating the arrival time using the custom shadow variable listener from the example, I want to take the mode of transportation into account, of course. But when OptaPlanner starts initialising my planning entities (consumers), it seems that at first they are not connected to an anchor. So I can't get my mode of transportation, and everything breaks down.
Any ideas on how to proceed?
Below is what I want to accomplish.
shadowVisit is my planning entity, shadowVisit.getEmployee() should give me the anchor.
Doing a shadowVisit.getEmployee()==null check seems to hang the entire solving process.
arrivalTime =
    previousStopDepartureTime.plus(
        shadowVisit.getLocation().getDistanceFrom(
            shadowVisit.getPreviousStop().getLocation(),
            shadowVisit.getEmployee().getModeOfTransportation()));
OK, so I figured out what the issue was.
My problem is overconstrained, and I implemented a dummy employee as the solution (see optaplanner: modifying vehicle routing to let customers be not served)
I had not set a modeOfTransportation for my dummy, causing the null pointers. Sometimes it's just good to write down a problem; it makes you think hard enough to solve it!
Thank you very much for your input Geoffrey
That's strange, because the chain principles guarantee that every chain has an anchor (see below).
Maybe your @CustomShadowVariable's sources attribute doesn't include the anchor shadow variable, and your custom variable listener is called before the anchor shadow variable listener is called.
OptaPlanner guarantees that it will call one type of variable listener for all domain classes before calling the next type. The order of those types of variable listeners is determined by that sources attribute.
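In code, that fix means listing the anchor shadow variable in the sources attribute of the declaration. A hedged sketch of such a declaration (the field, listener and variable names here are illustrative, not taken from the question's code):

```java
@CustomShadowVariable(variableListenerClass = ArrivalTimeUpdatingVariableListener.class,
        sources = {
                @PlanningVariableReference(variableName = "previousStop"),
                // declaring the anchor shadow variable as a source makes OptaPlanner
                // run the anchor's variable listener before this one
                @PlanningVariableReference(variableName = "employee")
        })
private LocalTime arrivalTime;
```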

When writing a game, should you make objects/enemies/etc. have unique ID numbers?

I have recently encountered some issues with merely passing references to objects/enemies in a game I am making, and am wondering if I am using the wrong approach.
The main issue I have is disposing of enemies and objects, when other enemies or players may still have links to them.
For example, if you have a Rabbit, and a Wolf, the Wolf may have selected the Rabbit to be its target. What I am doing, is the wolf has a GameObject Target = null; and when it decides it is hungry, the Target becomes the Rabbit.
If the Rabbit then dies, such as another wolf killing it, it cannot be removed from the game properly because this wolf still has a reference to it.
In addition, if you are using a decoupled approach, the rabbit could be hit by lightning, reducing its health to below zero. When it next updates itself, it realises it has died and is removed from the game... but there is no way to update everything that is interested in it.
If you gave every enemy a unique ID, you could simply use references to that instead, and use a central lookup class that handled it. If the monster died, the lookup class could remove it from its own index, and subsequently anything trying to access it would be informed that it's dead, and then they could act accordingly.
Any thoughts on this?
One possible approach is to have objects register an interest with the object they're tracking. So the tracked object can inform the trackers of state changes dynamically. e.g. the Wolf registers with the Rabbit (that has a list of interested parties), and those parties are notified by the Rabbit whenever there's a state change.
This approach means that each object knows about its clients and that state is directly tied to that object (and not in some third-party manager class).
This is essentially the Observer pattern.
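The register-interest approach described above can be sketched in a few lines of Java (the Rabbit/Wolf names come from the question; the rest is illustrative): the tracked object keeps a list of interested parties and notifies them of state changes, so trackers can drop their references.

```java
import java.util.ArrayList;
import java.util.List;

// Observer-pattern sketch: the Rabbit notifies registered parties when it
// dies, so a Wolf can clear its stale target without a central lookup class.
interface DeathListener {
    void onDeath(Rabbit rabbit);
}

class Rabbit {
    private final List<DeathListener> listeners = new ArrayList<>();

    void registerInterest(DeathListener listener) {
        listeners.add(listener);
    }

    void die() {
        // iterate over a copy so listeners may unregister during notification
        for (DeathListener listener : new ArrayList<>(listeners)) {
            listener.onDeath(this);
        }
        listeners.clear();
    }
}

class Wolf implements DeathListener {
    Rabbit target;

    void selectTarget(Rabbit rabbit) {
        target = rabbit;
        rabbit.registerInterest(this);
    }

    @Override
    public void onDeath(Rabbit rabbit) {
        if (target == rabbit) {
            target = null; // drop the stale reference
        }
    }
}
```

Note the state lives on the tracked object itself, not in a third-party manager class, matching the design described above.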
Your approach sounds reasonable, why not? Registering all your objects in a hashmap shouldn't be too expensive. You could then have a sort of event bus where objects could register for different events.
Other than that, there is another approach that comes to mind. You could have the rabbit expose the event directly and have the wolf register on it.
The second approach is appealing for its simplicity; however, it will to some extent couple the event publishers to the subscribers. The first approach is technically more complex but has the benefit of allowing other kinds of lookups too.
In practice I hardly ever find situations where I ever need to hold a reference or pointer to game objects from other game objects. There are a few however, such as the targeting example you give, and in those situations that's where unique ID numbers work great.
I suppose you could use the observer pattern for such things to ensure that references get cleared when necessary, but I think that will start to get messy if you need more than 1 reference per object, for example. You might have a target gameobject, you might have gameobjects in your current group, you might be following a gameobject, talking to one, fighting one, etc. This probably means your observing object needs to have a monolithic clean-up function that checks all the outgoing object references and resets them.
I personally think it's easier just to use an ID and validate the object's continued existence at the point of use, although the price is a bit of boilerplate code to do that and the performance cost of the lookup each time.
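The ID-plus-lookup approach from this answer can be sketched with a central registry (illustrative names; a real game would likely key by a typed handle rather than a raw long): objects store IDs instead of references and validate existence at the point of use.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of a central lookup class: a dead object is removed from the
// registry, so any stale ID simply resolves to empty at the point of use.
class GameObjectRegistry<T> {
    private final Map<Long, T> objectsById = new HashMap<>();
    private long nextId = 1;

    long register(T object) {
        long id = nextId++;
        objectsById.put(id, object);
        return id;
    }

    void remove(long id) {
        objectsById.remove(id);
    }

    // validate the object's continued existence each time it is used
    Optional<T> lookup(long id) {
        return Optional.ofNullable(objectsById.get(id));
    }
}
```

The boilerplate this answer mentions is the Optional check at each use site; the payoff is that no clean-up pass over outgoing references is ever needed.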
References only work while the design stays monolithic.
First, passing references to other modules (notably, scripting) leads to security and technical problems.
Second, if you want to extend an existing object by implementing some behaviour and related properties in a new module, you won't have a single reference for all occasions.