Getting ScoreExplanation during/after a Custom Move - optaplanner

My solution is for VRPTW and I've created a Custom Move.
After a custom move has been tried (prior to being accepted), I would like to see a score breakdown of it (not just the score itself).
Where is a good location to use ScoreExplanation to see the detailed score breakdown? Eg. in my AbstractMove implementation somewhere?
I have TRACE logging on and can see the score. I tried pulling the explanation up when the next custom move ran, and it seemed to be working since it retrieved the correct score from the previous custom move, but the ScoreExplanation itself looked completely off (it didn't add up to the score).

Score explanations are not designed to be used inside a step; they are far too slow for that. You are free to use the ScoreManager API any time you like, but you will pay a heavy performance penalty if you call it on the solver thread or the move threads.
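If you do need the breakdown for diagnostics, pull it from a solution snapshot (for example the planning clone delivered by a BestSolutionChangedEvent, or the final best solution) off the solver thread. A minimal sketch using the OptaPlanner 8 API, assuming a VehicleRoutingSolution solution class scored with HardSoftLongScore and an XML solver config (all placeholder names, not your actual classes):
    SolverFactory<VehicleRoutingSolution> solverFactory =
            SolverFactory.createFromXmlResource("vehicleRoutingSolverConfig.xml");
    ScoreManager<VehicleRoutingSolution, HardSoftLongScore> scoreManager =
            ScoreManager.create(solverFactory);
    // Explain a snapshot, never the live working solution from inside a Move:
    ScoreExplanation<VehicleRoutingSolution, HardSoftLongScore> explanation =
            scoreManager.explainScore(bestSolution);
    System.out.println(explanation.getSummary()); // human-readable per-constraint breakdown
    explanation.getConstraintMatchTotalMap().forEach(
            (constraintId, total) -> System.out.println(constraintId + " = " + total.getScore()));
Doing this once per new best solution (rather than once per move) keeps the cost out of the solver's inner loop.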

Related

Optaplanner: How to calculate score delta for move

I'm using Optaplanner to automatically solve school timetables. After a timetable has been solved the user will manually change some lessons and will get feedback on how this affects the score via the following call:
scoreManager.updateScore(timetable);
This call takes some 200 ms and will, I assume, do a complete evaluation. I'm trying to optimize this and want to only pass in a Move object so that OptaPlanner only has to recalculate the delta, like:
scoreManager.updateScore(previousTimetable,changeMove);
Is there a way to do this?
There really is no way to apply just a single move from the outside. You don't do moves - the solver does moves. You can only make external problem changes to the solution. You should look into the ProblemChange interface and its use with the SolverManager.
However, the problem change will likely reset the entire working solution anyway. And after the external change is done, you're not guaranteed that the solution will still make sense. (What if it breaks hard constraints now?) You simply need to expect and account for the fact that, after users submit their changes, the solver will need to run; possibly even for a prolonged period of time.
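For illustration, such a problem change for the timetable case could look roughly like the sketch below (OptaPlanner 8 API); Timetable, Lesson, Timeslot, the "timeslot" variable and timetableId are placeholder names for your own model, not a prescribed implementation:
    public class MoveLessonProblemChange implements ProblemChange<Timetable> {
        private final Lesson lesson;        // instance as known to the caller
        private final Timeslot newTimeslot;

        public MoveLessonProblemChange(Lesson lesson, Timeslot newTimeslot) {
            this.lesson = lesson;
            this.newTimeslot = newTimeslot;
        }

        @Override
        public void doChange(Timetable workingTimetable, ProblemChangeDirector problemChangeDirector) {
            // Translate the external instances to the solver's working copies first.
            Lesson workingLesson = problemChangeDirector.lookUpWorkingObjectOrFail(lesson);
            Timeslot workingTimeslot = problemChangeDirector.lookUpWorkingObjectOrFail(newTimeslot);
            // Change the planning variable through the director so the solver's bookkeeping stays in sync.
            problemChangeDirector.changeVariable(workingLesson, "timeslot",
                    l -> l.setTimeslot(workingTimeslot));
        }
    }
    // Submitted against a running (or restarting) solve:
    solverManager.addProblemChange(timetableId, new MoveLessonProblemChange(lesson, newTimeslot));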

Optaplanner early termination deltas between Incremental Score state and solution

I'm using an incremental score calculator class because of the heavy use of maps and calculations, which did not scale well in Drools.
It seems to work well, but while debugging I've noticed differences between the number of moves used in the final solution and the moves processed by my before/afterVariableChanged handlers. This causes a difference between what is actually assigned in the solution and the state in my incremental score objects. Based on some logs, it looks like the incremental score calculator does not know when the solution has stopped accepting moves because of early termination (I'm using secondsSpentLimit).
How can I stop my incremental score implementation from receiving before/afterVariableChanged events that would not be considered in the solution because of early termination?
Can you turn on TRACE logging to confirm this behavior?
In essence, LocalSearch looks like this (with moveThreadCount=NONE (= default)):
for (each step && not terminated) {
    stepStarted()
    for (each move && not terminated) {
        doMove(); // Triggers variable listeners
        if (better move) updateBestSolution();
        undoMove(); // Triggers variable listeners
    }
    stepEnded()
}
So termination is atomic with respect to move evaluation, which means your issue must have another cause.
With moveThreadCount=4 etc. this story becomes a bit more complex, but each move thread has its own ScoreDirector, so they pretty much work in isolation.
That being said, incremental Java score calculation and shadow variables together can drive people insane... This is mainly because the before events happen in the middle of doMove() calls, right before a variable changes. At that time, half of the model can already be in an intermediate state that is impossible to understand, because of other variables the move has already changed. FWIW, take a look at VariableListener.requiresUniqueEntityEvents(); it might help a bit...
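For illustration, a shadow-variable listener that opts into unique events could look roughly like this (OptaPlanner 8 API); VehicleRoutingSolution, Customer and the arrival-time recalculation are placeholder names, not your actual classes:
    public class ArrivalTimeVariableListener implements VariableListener<VehicleRoutingSolution, Customer> {

        @Override
        public boolean requiresUniqueEntityEvents() {
            // Default is false: one move may fire several before/after pairs for the same entity,
            // each seen in a half-changed intermediate state. Returning true makes the solver
            // buffer the events so each entity is delivered at most once per operation.
            return true;
        }

        @Override
        public void beforeVariableChanged(ScoreDirector<VehicleRoutingSolution> scoreDirector, Customer customer) {
            // Nothing to retract; all work happens in the after-events.
        }

        @Override
        public void afterVariableChanged(ScoreDirector<VehicleRoutingSolution> scoreDirector, Customer customer) {
            updateArrivalTime(scoreDirector, customer);
        }

        @Override
        public void beforeEntityAdded(ScoreDirector<VehicleRoutingSolution> scoreDirector, Customer customer) {
        }

        @Override
        public void afterEntityAdded(ScoreDirector<VehicleRoutingSolution> scoreDirector, Customer customer) {
            updateArrivalTime(scoreDirector, customer);
        }

        @Override
        public void beforeEntityRemoved(ScoreDirector<VehicleRoutingSolution> scoreDirector, Customer customer) {
        }

        @Override
        public void afterEntityRemoved(ScoreDirector<VehicleRoutingSolution> scoreDirector, Customer customer) {
        }

        private void updateArrivalTime(ScoreDirector<VehicleRoutingSolution> scoreDirector, Customer customer) {
            // Placeholder: walk the downstream chain, wrapping each shadow-variable write in
            // scoreDirector.beforeVariableChanged(...) / scoreDirector.afterVariableChanged(...).
        }
    }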
Or try ConstraintStreams (BAVET implementation if you favor speed over functionality).

Is there a way to save all feasible scores found?

I'm building a student schedule generator and I need a way of producing more than one solution. Is there some way to save off feasible scores or scores of Xhard/Ysoft?
I need to be able to output more than one potential schedule, that way the student will have a choice for one schedule over the other if for whatever reason they don't want the "best" schedule (maybe they don't like one of the professors, maybe they don't want an 8am class, whatever)
My original idea was to save off all feasible solutions using the bestSolutionChanged event listener. The problem with this is that once it finds a 0hard/0soft score, it ignores all scores after that, including scores that are equal.
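For reference, that listener approach looks roughly like this (a sketch assuming a StudentSchedule solution class scored with HardSoftScore; both names are placeholders):
    List<StudentSchedule> feasibleSchedules = new ArrayList<>();
    solver.addEventListener(event -> {
        HardSoftScore newBestScore = (HardSoftScore) event.getNewBestScore();
        if (newBestScore.isFeasible()) {
            // getNewBestSolution() is a planning clone, so it is safe to keep it around.
            feasibleSchedules.add(event.getNewBestSolution());
        }
    });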
Ideally I'd like to save off all scores of 0hard/-3soft or better, but just being able to save any feasible scores or force optaplanner to look for a new best score would be useful as well.
This is not a solution, but an analysis of the problem:
Hacking the BestSolutionRecaller is obviously not just a big pain, it's also behaviour we don't want to encourage, as it makes upgrading to a newer version an even bigger pain. So don't expect us to solve this by adding an easy way to configure it in the solver config any time soon. That being said, a solution for this common problem is clearly needed.
When a new best solution is found, it is planning cloned (see the docs for a definition) from the working solution (the internal solution in OptaPlanner). This allows us to remember that new best solution as the working solution changes. It also means each BestSolutionChangedEvent gets a planning clone and can safely ship it to another thread, for example to marshal it to a client (presuming any ProblemFactChanges you create do copies instead of alterations), without it being corrupted by the solver thread that modifies the working solution.
A new best solution implies that workingScore > bestScore. The moment we instead use workingScore >= bestScore, we need far more planning clones (which are a bit CPU expensive), but we could then send out BestSolutionChangedEvents for those too - if and only if a flag is enabled, of course, because most users (unlike yourself) don't want this behaviour.
One proposal is to create a separate BestSolutionChangedOrSameEvent, next to the BestSolutionChangedEvent. This might not be ideal, because we need to be able to detect whether or not someone needs those extra planning clones.
Another proposal is to just have a flag in the <solver> config that switches from > to >= behavior for BestSolutionChangedEvent.
Please create a jira (see "get help" on the webpage) and link it here, or create a support ticket (also see "get help" on the webpage).

Optaplanner select only entities in conflict

In the change and swap move selector, I would like to only consider moves that involve entities in conflict as they are more likely to improve the heuristic score.
How should this be done? What classes and interfaces do I have to reuse/extend? I looked at ScoreDirector and PhaseLifecycleListener.
A MoveFilter might do that (as long as it's not phase or solver cached, since the set of conflicting entities changes every step). See the course scheduling example and the docs for how to use a filter.
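As a rough sketch (OptaPlanner 8 API), a step-scoped filter on a change move selector could look like the following; CourseSchedule, Lecture and hasConflict() are placeholder names for your own model, and the class would be referenced via a <filterClass> element inside <changeMoveSelector> in the solver config:
    public class ConflictingLectureChangeMoveFilter
            implements SelectionFilter<CourseSchedule, ChangeMove<CourseSchedule>> {

        @Override
        public boolean accept(ScoreDirector<CourseSchedule> scoreDirector, ChangeMove<CourseSchedule> move) {
            // Only let through moves whose entity is currently involved in at least one conflict.
            Lecture lecture = (Lecture) move.getEntity();
            return lecture.hasConflict();
        }
    }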
I wouldn't recommend it though, as you still want to move non-conflicting entities at times. You might just want to focus more on those conflicting lectures. So I would keep a vanilla move selector in the mix.
The move filter isn't perfect either - the Guided Local Search feature (not yet available) is a better way to deal with this.
However, given the other question about the model and similar cases I've seen, I'd say moves are not your problem. A better model will make all this kind of move tweaking obsolete.

Source core repositories and sticky notes

An interesting problem occurred recently, and I've been thinking of the "best" way (for a given value of "best") to implement this.
In essence, it's one of tracking notes against source code. The example that flagged this was getting a problem fixed in live within SLAs, and how best to achieve this. Without going into all the details, it came down to finding a function that's used in a number of places which may or may not be buggy, yet the problem was being reported in only a single location.
The fix to meet the SLAs was simply to add a check into the location where the problem was reported, rather than tweaking the common code and having to test everything that touches that function.
The interesting issue is then upstreaming. The "correct" method would be to go back and check the original function, validate that it's correct everywhere it's called, and then make the change "properly" if it's determined the library function is wrong.
The problem is this takes time, so upstreaming may simply take the workaround, etc. However, if the problem occurs again (say six months later) in another location calling the same library function, there isn't an easy way to link the two problems together. You can search the bug tracking database, but this isn't guaranteed to help - it depends on whether a note has been added saying something along the lines of "this library function needs more thorough checking, but no time to investigate now".
So the question is this: within a large team of developers (30 plus, split into teams of both support and on-going development), what methods do you use to manage (what are effectively) "sticky notes" against source code, short of adding a comment to the suspicious function's source code saying "this might be a bit dodgy"?
The problem with committing a comment is one of process: a change is a change, so committing a zero-change change (i.e., one where just comments are added) is not ideal; developers can make mistakes even adding a comment (hit a stray key or something), so it's always (IMO) better to commit only where actual changes are made.
Now a wiki could be used to track per-file notes, but we've got a minimum of four branches and in excess of a few hundred files (SQL objects, source code, XML files, etc.), so a wiki will get unmanageable quite quickly.
This is the sort of thing it would be nice if SCMs could support - bits of metadata against files that are simply notes, which don't add to the SCM's version history - that could be displayed when doing (say) an svn update, or viewed manually.
There may already be solutions out there -- so how do you manage this type of knowledge sharing?
Well we're now using this method: in each folder checked into SVN, we've created a .url shortcut (this is Windows we're dev'ing on) that links to a page on our development wiki about that folder. Thus we can update the Wiki info freely, and on checkout/update everyone gets a link that will take them to the appropriate Wiki page for that folder/module.
We've not long instigated it so we'll have to see how well it works long term -- but it's better than what we had before (i.e., nothing :-) ).