How to handle a huge amount of changes in real-time planning? - optaplanner

If real-time planning and daemon mode are enabled, a problem fact change must be invoked whenever a planning entity is updated or added.
So let's say the average rate of change is 1 per second: a problem fact change must then be invoked every second, restarting the solver every second.
Do we just invoke or schedule a problem fact change every second, restarting the solver every second? Or, if we know there will be a huge amount of changes, should we stop the solver first, apply the changes, and then start the solver again?

In the scenario you describe, the solver will likely be restarted every time. It's not a complete restart, as if you had called Solver.solve() again with the updated last known solution; instead, the ScoreDirector, the component responsible for score calculation, is restarted each time a problem fact change is applied.
If problem changes arrive faster, they may be processed in a batch. The solver checks for problem changes between the evaluation of individual moves, so if multiple changes arrive before the solver finishes evaluating the current move, they are all applied together and the solver restarts just once. In the opposite case, when changes arrive only rarely, the restarts don't matter much, as there is enough time for the solver to improve the solution between them.
But the rate of 1 change/sec will likely lead to frequent solver restarts and will affect its ability to produce better solutions.
The solver does not know whether a bigger batch of changes is coming in the next second. The current behavior could be improved by processing problem changes periodically, at a predefined time interval, rather than between move evaluations.
Of course, the periodic grouping of problem changes can be done outside the solver as well.
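
For illustration, here is a minimal sketch of that outside-the-solver grouping, assuming the OptaPlanner 7.x ProblemFactChange API; MySolution, Task and findTask(...) are hypothetical placeholders for your own domain classes:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.impl.solver.ProblemFactChange;

public class ChangeBatcher {

    private final Solver<MySolution> solver;
    private final List<ProblemFactChange<MySolution>> pending = new ArrayList<>();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public ChangeBatcher(Solver<MySolution> solver) {
        this.solver = solver;
        // Flush the accumulated changes every 5 seconds: the solver applies
        // them as one batch and restarts once per batch, not once per change.
        scheduler.scheduleAtFixedRate(this::flush, 5, 5, TimeUnit.SECONDS);
    }

    // Called by the outside world, e.g. once per second on average.
    public synchronized void enqueuePriorityUpdate(long taskId, int newPriority) {
        pending.add(scoreDirector -> {
            // Look up the solver's working copy; never mutate the external instance.
            Task task = scoreDirector.getWorkingSolution().findTask(taskId);
            scoreDirector.beforeProblemPropertyChanged(task);
            task.setPriority(newPriority);
            scoreDirector.afterProblemPropertyChanged(task);
            scoreDirector.triggerVariableListeners();
        });
    }

    private synchronized void flush() {
        if (!pending.isEmpty()) {
            solver.addProblemFactChanges(new ArrayList<>(pending));
            pending.clear();
        }
    }
}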

Related

Optaplanner - Why does switching from FULL_ASSERT mode to another give totally different results?

I am developing a solver using Drools to find the most likely date (start date = planningVariable) on which a medical analysis (planningEntity) will start. I then calculate other dates from it (the end date and other intermediate dates) whose calculation does not depend on the OptaPlanner mechanism. For my problem, I need each rule to fire whenever the planningVariable value changes.
When using FULL_ASSERT mode everything works, but when I change to another mode, the results are a mess: almost no rule is respected, or the solution contains null values, and I don't understand why.
Is it because FULL_ASSERT mode is the only one that guarantees to fire all rules each time the planningVariable value changes?
Try running NON_INTRUSIVE_FULL_ASSERT; that mode doesn't trigger additional fireAllRules() calls.
Either way, switching on FULL_ASSERT etc. shouldn't change the behavior (unless you're explicitly using simulated annealing with wall clock time, because it is time-gradient sensitive). If it does change the behavior, it's probably due to some sort of score corruption. All the more reason to run NON_INTRUSIVE_FULL_ASSERT and figure out where.
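
For reference, a minimal sketch of setting the environment mode programmatically, assuming OptaPlanner 8's SolverConfig API; "mySolverConfig.xml" and MySolution are placeholders:

import org.optaplanner.core.api.solver.Solver;
import org.optaplanner.core.api.solver.SolverFactory;
import org.optaplanner.core.config.solver.EnvironmentMode;
import org.optaplanner.core.config.solver.SolverConfig;

public class AssertModeExample {
    public static Solver<MySolution> buildSolver() {
        SolverConfig config = SolverConfig.createFromXmlResource("mySolverConfig.xml");
        // Detects score corruption without triggering extra fireAllRules() calls.
        config.setEnvironmentMode(EnvironmentMode.NON_INTRUSIVE_FULL_ASSERT);
        return SolverFactory.<MySolution>create(config).buildSolver();
    }
}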

Is there a function for aborting routing calculation in optaplanner?

I want to have a function such that, if the calculation time gets too long, we abort the routing calculation and submit the best solution found at that point in time. Is there such a function in OptaPlanner?
For example, in a GUI application you would start solving on a background (worker) thread. In this scenario you can stop the solver asynchronously by calling solver.terminateEarly() from another thread, typically the UI thread, when a stop button is clicked.
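A minimal sketch of that asynchronous setup (MySolution is a placeholder for your planning solution class):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.optaplanner.core.api.solver.Solver;

public class BackgroundSolving {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    public Future<MySolution> startSolving(Solver<MySolution> solver, MySolution problem) {
        // solve() blocks until the solver terminates, so run it on a worker thread.
        return executor.submit(() -> solver.solve(problem));
    }

    public void onStopButtonClicked(Solver<MySolution> solver) {
        // Safe to call from another thread (e.g. the UI thread); the solver
        // then returns the best solution found so far.
        solver.terminateEarly();
    }
}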
If this is not what you're looking for, read on.
Provided that by calculation you actually mean the time spent solving, you have several options for stopping the solver. Besides the asynchronous termination described in the first paragraph, you can use synchronous termination:
Use time spent termination if you know beforehand how much time you want to dedicate to solving.
Use unimproved time spent termination if you want to stop solving when the solution hasn't improved for a specified amount of time.
Use best score termination if you want to stop solving once a certain score has been reached.
Synchronous termination is defined before starting the solver, either in the XML solver configuration or via the SolverConfig API, as sketched below. See the OptaPlanner documentation for other termination conditions.
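
For illustration, a minimal sketch of those three terminations via the SolverConfig API, assuming OptaPlanner 8's fluent config methods; the limits shown are arbitrary examples:

import org.optaplanner.core.config.solver.SolverConfig;
import org.optaplanner.core.config.solver.termination.TerminationConfig;

public class TerminationExample {
    public static SolverConfig withTermination(SolverConfig config) {
        TerminationConfig termination = new TerminationConfig()
                .withSecondsSpentLimit(60L)           // time spent termination
                .withUnimprovedSecondsSpentLimit(10L) // unimproved time spent termination
                .withBestScoreLimit("0hard/0soft");   // best score termination
        return config.withTerminationConfig(termination);
    }
}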
Lastly, in case by calculation you mean score calculation, and it takes too long to calculate the score for a single move (solution change), then you're most certainly doing something wrong. For OptaPlanner to be able to search the solution space effectively, score calculation must be fast (at least 1000 calculations per second).
For example, in a vehicle routing problem, driving times or road distances must be known by the time you start solving. You shouldn't slow down score calculation with heavy computation that can be done beforehand.
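
As an illustration of that precomputation, a tiny sketch (plain Java, names are illustrative): compute the matrix once before Solver.solve(), so score calculation only performs an array lookup:

public class DistanceMatrix {

    private final long[][] drivingTimeMillis; // [fromLocationIndex][toLocationIndex]

    // Computed once, before solving starts (e.g. from a routing engine).
    public DistanceMatrix(long[][] drivingTimeMillis) {
        this.drivingTimeMillis = drivingTimeMillis;
    }

    // O(1) lookup, cheap enough to call thousands of times per second
    // during score calculation.
    public long between(int fromIndex, int toIndex) {
        return drivingTimeMillis[fromIndex][toIndex];
    }
}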

Optaplanner - Real-time planning doesn't find a good solution

I am trying to use real-time planning, using a custom SolverThread implementing SolverEventListener, and daemon mode.
I am not interested in inserting or deleting entities. I am just interested in "updating" them, for example, changing the "priority" for a particular entity in my PlanningEntityCollectionProperty collection.
At the moment, I am using:
scoreDirector.beforeProblemPropertyChanged(entity);
entity.setPriority(newPriority);
scoreDirector.afterProblemPropertyChanged(entity);
It seems that the solver is executed and manages to improve the current solution, but it only spends a few ms on it:
org.optaplanner.core.impl.solver.DefaultSolver: Real-time problem fact changes done: step total (1), new best score (0hard/-100medium/-15soft).
org.optaplanner.core.impl.solver.DefaultSolver: Solving restarted: time spent (152), best score (0hard/-100medium/-15soft), environment mode (REPRODUCIBLE),
Therefore, the solver stops very soon, even though it has a 10-second UnimprovedSecondsSpentLimit. The first time the solver is executed, it stops after 10 seconds, but on the following runs it stops after a few ms and doesn't manage to reach a good solution.
I am not sure I need to use "beforeProblemPropertyChanged" when the planning entity changes, but I can't find any alternative, because "beforeVariableChanged" is used when the planning variable changes, right? Maybe OptaPlanner just doesn't support updates to entities, and I need to delete the old one using beforeEntityRemoved and insert it again using beforeEntityAdded?
I was using BRANCH_AND_BOUND; however, I have changed to local search with TABU_SEARCH, and it seems that the scheduler uses the 10 seconds now. However, it seems stuck in a local optimum, because it doesn't manage to improve the solution even with a really small collection (10 entities).
Anyone with experience with real time planning?
Thanks
The "Solving restarted" always follows very shortly after "Real-time problem fact changes done", because real-time problem facts effectively "stop & restart" the solver. The 10 sec unimproved termination only starts counting again after the restart.
DEBUG logging (or even TRACE) will show you what's really going on.
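
For what it's worth, a problem property update like this is normally wrapped in a ProblemFactChange and submitted through the Solver, rather than applied to the ScoreDirector directly from another thread; a minimal sketch, assuming the OptaPlanner 7.x API, with MySolution, MyEntity and findEntity(...) as hypothetical placeholders:

import org.optaplanner.core.api.solver.Solver;

public class PriorityUpdater {

    public static void updatePriority(Solver<MySolution> solver, long entityId, int newPriority) {
        solver.addProblemFactChange(scoreDirector -> {
            // Mutate the solver's working copy of the entity, not the external instance.
            MyEntity workingEntity = scoreDirector.getWorkingSolution().findEntity(entityId);
            scoreDirector.beforeProblemPropertyChanged(workingEntity);
            workingEntity.setPriority(newPriority);
            scoreDirector.afterProblemPropertyChanged(workingEntity);
            scoreDirector.triggerVariableListeners();
        });
    }
}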

Understanding simple simulation and rendering loop

This is an example (pseudo code) of how you could simulate and render a video game.
//simulate 20 ms into the future per update step
const long delta = 20;
long simulationTime = 0;
while (true)
{
    while (simulationTime < GetMilliSeconds()) // GetMilliSeconds = wall clock time
    {
        // the frame we simulated is still in the past
        input = GetUserInput();
        UpdateSimulation(delta, input);
        // we are trying to catch up and eventually pass the wall clock time
        simulationTime += delta;
    }
    // since my current simulation is in the future and
    // my last simulation is in the past,
    // the current look of the world has to be somewhere in between
    RenderGraphics(InterpolateWorldState(GetMilliSeconds() - simulationTime));
}
That's my question:
I have 40 ms to go through the outer 'while true' loop (which means 25 FPS).
The RenderGraphics method takes 10 ms, so that leaves 30 ms for the inner loop. The UpdateSimulation method takes 5 ms. Everything else can be ignored, since it takes under 0.1 ms.
What is the maximum I can set the variable 'delta' to in order to stay in my time schedule of 40ms (outer loop)?
And why?
This largely depends on how often you want and need to update your simulation state and user input, given the constraints mentioned below. For example, if your game contains internal state based on physical behavior, you will need a smaller delta to ensure that movements and collisions, if any, are properly evaluated and reflected in the game state. Likewise, if your user input requires fine-grained evaluation and state updates, you will need smaller delta values. For example, a shooting game with analogue user input (e.g. mouse, joystick) would benefit from update frequencies above 30 Hz. If your game does not need such high-frequency evaluation of input and game state, then you can get away with larger delta values, or even with simply updating your game state only when player input is detected.
In your specific pseudo-code, the simulation updates in fixed time slices of length delta, which requires each simulation update to be processed in less wall clock time than the wall clock time being simulated. Otherwise, wall clock time proceeds faster than your simulation time can be updated. This ultimately limits how large delta can be, depending on how quickly a simulation update covering delta of simulation time can actually be computed.

This relationship also depends on your use case and may not be linear or constant. For example, physics engines often internally divide the delta time they are given into an update rate they can reasonably process, since longer delta times may cause numerical instabilities and harder-to-solve linear systems, raising the processing effort non-linearly. In other use cases, simulation updates may take linear or even constant time.

Even so, many (possibly external) events can cause a simulation update to be processed too slowly if it is inherently demanding: loading resources during simulation updates, the operating system setting your execution thread aside, another process run by the user, anti-virus software kicking in, memory pressure, a slow CPU, and so on.

So far, I have mostly seen two strategies to evade this problem or remedy its effects. The first is to simply ignore it, which can work if the simulation update effort is low and the cause of the slowdown is assumed to be temporary. This results in more or less noticeable "slow motion" behavior of the simulation, which could, in the worst case, lead to simulation time lag piling up forever. The second strategy I have often seen is to cap the measured frame time to be simulated at some artificial value, say 1000 ms. This gives smooth behavior as soon as the cause of the slowdown disappears, but has the drawback that the capped simulation time is "lost", which may lead to animation hiccups if not handled or accounted for.

To choose a strategy, analyze your use case: measure the wall clock time it takes to process simulation updates of delta and x * delta simulation time, and observe how changing the delta time and the simulation load is reflected in the wall clock time needed to compute the update. This will indicate the maximum value of delta for your specific hardware and software environment.
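
As a back-of-the-envelope check with the numbers given: each pass of the outer loop must cover 40 ms of wall clock time, rendering takes 10 ms, and each update takes 5 ms, so at most (40 - 10) / 5 = 6 updates fit into one pass. Those updates advance simulation time by 6 * delta, which must be at least the 40 ms of wall clock time that elapsed, giving delta >= 40 / 6, roughly 6.7 ms. In other words, the 40 ms schedule imposes a lower bound on delta rather than an upper one: the 20 ms delta from the example needs only 2 updates per pass and fits comfortably, and the practical maximum for delta is set by the fidelity and stability constraints described above.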

Is it possible not to recreate whole model every timer Tick in Elm?

For example, I need a timer on the page, so I can have an action every 100 ms:
type Action = Tick Time
and I have a time field in my Model. The Model could be big, but I need to recreate it and the whole view every 100 ms because of the time field. I think it would not be efficient performance-wise.
Is there another approach or I shouldn't worry about such thing?
The whole view isn't necessarily being recreated every time. Elm uses a Virtual DOM and does diffs to change only the bare minimum of the actual DOM. If large parts of your view are actually changing on every 100 ms tick, that could obviously cause problems, but I'm guessing you're only making small adjustments every 100 ms, so you probably have nothing to worry about. Take a look at your developer tools to see whether processor utilization is spiking.
Your model isn't being recreated every 100 ms either. There are optimizations around the underlying data structures (see this related conversation about foldp internals) that let you think in terms of immutability and purity while things are optimized under the hood.