Methods for incremental score calculations - optaplanner

Can someone explain the purpose of the methods that need to be implemented for incremental score calculation? I understand all the after...() methods, but why should I adjust the score before an entity is added or removed, or before a variable is changed (beforeEntityAdded, beforeVariableChanged, beforeEntityRemoved)?

See this image from the 6.0.0.Final docs:
Also see the section "incremental score calculation" (which also explains why this is so much faster than SimpleScoreCalculator). Look at the example implementations. You'll see that beforeVariableChanged() is needed to retract the violated constraint matches that no longer match.
In the diagram above, the ChangeMove needs to get +1 because AB no longer match during the beforeVariableChanged() method, and -1 because AC now match during the afterVariableChanged() method.
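To make the retract/insert pattern concrete, here is a minimal sketch (not the docs' example) assuming the IncrementalScoreCalculator interface; the method names are stable across versions, but the package and generics differ per version, and MySolution, MyEntity, getEntityList() and getValue() are placeholder names. The only constraint counted here is "two entities share the same value".

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.optaplanner.core.api.score.Score;
import org.optaplanner.core.api.score.buildin.simple.SimpleScore;
import org.optaplanner.core.impl.score.director.incremental.IncrementalScoreCalculator;

public class SameValueIncrementalScoreCalculator implements IncrementalScoreCalculator<MySolution> {

    private Map<Object, Integer> valueCountMap;
    private int conflictCount;

    public void resetWorkingSolution(MySolution solution) {
        valueCountMap = new HashMap<Object, Integer>();
        conflictCount = 0;
        for (MyEntity entity : solution.getEntityList()) {
            insert(entity);
        }
    }

    public void beforeEntityAdded(Object entity) {
        // Nothing to retract: the entity is not part of the working solution yet.
    }

    public void afterEntityAdded(Object entity) {
        insert((MyEntity) entity);
    }

    public void beforeVariableChanged(Object entity, String variableName) {
        // The variable still holds its OLD value here, so retract the matches it caused.
        retract((MyEntity) entity);
    }

    public void afterVariableChanged(Object entity, String variableName) {
        // The variable now holds its NEW value, so insert the matches it causes.
        insert((MyEntity) entity);
    }

    public void beforeEntityRemoved(Object entity) {
        retract((MyEntity) entity);
    }

    public void afterEntityRemoved(Object entity) {
        // Nothing to insert: the entity is gone and its matches were already retracted.
    }

    private void insert(MyEntity entity) {
        Object value = entity.getValue();
        if (value == null) {
            return; // uninitialized entities (e.g. during construction) match nothing
        }
        Integer count = valueCountMap.get(value);
        if (count == null) {
            count = 0;
        }
        conflictCount += count; // every other entity already on this value is one conflict
        valueCountMap.put(value, count + 1);
    }

    private void retract(MyEntity entity) {
        Object value = entity.getValue();
        if (value == null) {
            return;
        }
        int count = valueCountMap.get(value) - 1;
        conflictCount -= count;
        valueCountMap.put(value, count);
    }

    public Score calculateScore() {
        return SimpleScore.valueOf(-conflictCount); // SimpleScore.of(...) in newer versions
    }
}

Because the before...() call still sees the old value and the after...() call sees the new one, the running total only ever changes by the delta of the move, which is what makes this so much faster than recalculating everything.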

Related

Why are the facts and planning entities null in the workingSolution during planning?

I'm developing a project similar to the "meetingscheduling" example. When I have a MeetingAssignment, I want to get its "previous MeetingAssignment" in the same room.
In other words, when I have a MeetingAssignment TA1, I want to find the MeetingAssignment which is in the same room as TA1 and is the nearest one to the left of TA1.
My idea is, when I have TA1:
1. Get all the MeetingAssignments that have the same room as TA1.
2. From the list generated by the previous step, get the MeetingAssignments whose startingTimeGrain is less than TA1's.
3. Find the MeetingAssignment with the largest startingTimeGrain value; that one is the "previous MeetingAssignment".
But when I get the MeetingAssignment list from the solution class during planning (the workingSolution), all of the rooms are null. Am I getting the wrong solution?
Is there a better way to do this? Many thanks.
At the end of the Construction Heuristic (if it has a chance to complete, see DEBUG logging), all planning variables will be non-null. If the CH takes too long, see the "scaling CH" chapter in the docs.
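Once the construction heuristic has finished and the variables are initialized, the three steps from the question can be a plain scan over the assignment list. A rough sketch, assuming the meetingscheduling example's getRoom(), getStartingTimeGrain() and TimeGrain.getGrainIndex() accessors (check them against your own domain classes):

import java.util.List;

public class PreviousAssignmentFinder {

    public MeetingAssignment findPrevious(MeetingAssignment ta1, List<MeetingAssignment> allAssignments) {
        MeetingAssignment previous = null;
        for (MeetingAssignment other : allAssignments) {
            if (other == ta1 || other.getRoom() == null || other.getStartingTimeGrain() == null) {
                continue; // skip TA1 itself and uninitialized assignments (possible during the CH)
            }
            // Step 1: same room as TA1.
            if (!other.getRoom().equals(ta1.getRoom())) {
                continue;
            }
            // Step 2: starts before TA1.
            if (other.getStartingTimeGrain().getGrainIndex() >= ta1.getStartingTimeGrain().getGrainIndex()) {
                continue;
            }
            // Step 3: keep the latest of those candidates.
            if (previous == null || other.getStartingTimeGrain().getGrainIndex()
                    > previous.getStartingTimeGrain().getGrainIndex()) {
                previous = other;
            }
        }
        return previous;
    }
}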

How to use min-conflict algorithm with optaplanner

Is there a min-conflicts algorithm in OptaPlanner, or how would I implement one?
What about using it as part of the neighborhood selection, like this: define a custom swap factory that constructs the neighborhood as follows:
1. Get all violations per variable to optimize; this requires a call to scoreDirector.calculateScore() and then parsing/processing the constraintMatches.
2. Order the variables by lowest score or highest number of violations.
3. Construct the neighborhood by swapping those variables first.
If that's viable, is there a way to get the constraintMatches without having to re-call calculateScore(), in order to speed up the process?
This algorithm isn't supported out of the box by OptaPlanner yet. I'd call it Guided Local Search. But it's not that hard to add yourself. In fact, it's not a matter of changing the algorithm, but of changing the entity selectors.
Something like this should work:
<swapMoveSelector>
  <entitySelector>
    <cacheType>STEP</cacheType>
    <probabilityWeightFactoryClass>...MyProbabilityWeightFactory</probabilityWeightFactoryClass>
  </entitySelector>
</swapMoveSelector>
Read about the advanced swapMoveSelector configuration, entity selector, sorted selection and probability selection.
The callback class you implement for the probabilistic selection or sorted selection should prioritize entities that are part of a conflict.
I would definitely use sorted or probabilistic selection on the entity selector, not the entire swapMoveSelector because that is overkill, cpu hungry and memory hungry.
I would prefer probabilistic selection over sorted selection. Even though sorted selection better reflects your pseudo code, I believe (but haven't proven) that probabilistic selection will do better, given the nature of Metaheuristics. Try both, run some benchmarks with the Benchmarker and let us know what works best ;)
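The callback for probabilistic selection could look roughly like this. It is only a sketch, assuming the 6.x SelectionProbabilityWeightFactory signature (later versions add a solution type parameter and move some packages) and a score director with constraintMatchEnabled; MyEntity is a placeholder for your planning entity class.

import org.optaplanner.core.api.score.constraint.ConstraintMatch;
import org.optaplanner.core.api.score.constraint.ConstraintMatchTotal;
import org.optaplanner.core.impl.heuristic.selector.common.decorator.SelectionProbabilityWeightFactory;
import org.optaplanner.core.impl.score.director.ScoreDirector;

public class MyProbabilityWeightFactory implements SelectionProbabilityWeightFactory<MyEntity> {

    @Override
    public double createProbabilityWeight(ScoreDirector scoreDirector, MyEntity entity) {
        int matchCount = 0;
        for (ConstraintMatchTotal total : scoreDirector.getConstraintMatchTotals()) {
            for (ConstraintMatch match : total.getConstraintMatchSet()) {
                if (match.getJustificationList().contains(entity)) {
                    matchCount++; // this entity participates in a broken constraint
                }
            }
        }
        // Conflicted entities get a larger weight; +1 keeps conflict-free ones selectable too.
        return matchCount + 1.0;
    }
}

Entities involved in more broken constraints are therefore swapped more often, which is the min-conflicts idea expressed probabilistically rather than as a strict ordering.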
Not sure how to solve your overall problem, but for your last point:
You can create a PhaseLifecycleListener and attach it via ((DefaultSolver) solver).addPhaseLifecycleListener.
In stepStarted() or stepEnded() (depending on your need) you can then call
stepScope.getScoreDirector().getConstraintMatchTotals()
to get the constraint totals.
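Put together, a minimal sketch of such a listener (assuming the PhaseLifecycleListenerAdapter base class and a score director with constraint matches enabled; exact packages and generics vary between OptaPlanner versions, so raw types are used here):

import org.optaplanner.core.impl.phase.event.PhaseLifecycleListenerAdapter;
import org.optaplanner.core.impl.phase.scope.AbstractStepScope;

public class ConstraintMatchLoggingListener extends PhaseLifecycleListenerAdapter {

    @Override
    public void stepEnded(AbstractStepScope stepScope) {
        // Inspect or cache the broken constraints after every step.
        for (Object constraintMatchTotal : stepScope.getScoreDirector().getConstraintMatchTotals()) {
            System.out.println(constraintMatchTotal);
        }
    }
}

// Attaching it (solver is your Solver instance):
// ((DefaultSolver) solver).addPhaseLifecycleListener(new ConstraintMatchLoggingListener());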
Hope this somewhat helps.

How to convert Greensock's CustomEase functions to be usable in CreateJS's Tween system?

I'm currently working on a project that does not include GSAP (Greensock's JS tweening library), but since it's super easy to create your own custom easing functions with its visual editor, I was wondering if there is a way to break down the desired ease function so that it can be reused in a CreateJS Tween?
Example:
var myEase = CustomEase.create("myCustomEase", [
    {s:0,cp:0.413,e:0.672},{s:0.672,cp:0.931,e:1.036},
    {s:1.036,cp:1.141,e:1.036},{s:1.036,cp:0.931,e:0.984},
    {s:0.984,cp:1.03699,e:1.004},{s:1.004,cp:0.971,e:0.988},
    {s:0.988,cp:1.00499,e:1}
]);
So that it turns it into something like:
var myEase = function(t, b, c, d) {
    // Some magic algorithm performed on the 7 bezier/control points above...
};
(Here is what the graph would look like for this particular easing method.)
I took the time to port and optimize the original GSAP-based CustomEase class... but due to license restrictions / legal matters (basically a grizzly bear that I do not want to poke with a stick...), posting the ported code would violate it.
However, it's fair for my own use. Therefore, I believe it's only fair that I guide you and point you to the resources that made it possible.
The original code (not directly compatible with CreateJS) can be found here:
https://github.com/art0rz/gsap-customease/blob/master/CustomEase.js (looks like the author was also asked to take down the repo on github - sorry if the rest of this post makes no sense at all!)
Note that CreateJS's easing methods only take a "time ratio" value (not time, start, end, duration like GSAP's easing methods do). That time ratio is really all you need, given it goes from 0.0 (your start value) to 1.0 (your end value).
With a little bit of effort, you can discard those parameters from the ease() method and trim down the final returned expression.
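To illustrate just the ratio-based part: below is a sketch of what such an ease boils down to. It is not the ported class, it is written in Java rather than JavaScript purely for illustration, and it rests on my assumption that each {s, cp, e} triple is the start/control/end value of a quadratic Bezier segment covering an equal slice of the 0..1 time ratio.

public class PiecewiseQuadEase {

    private final double[][] segments; // each row: {s, cp, e}

    public PiecewiseQuadEase(double[][] segments) {
        this.segments = segments;
    }

    // ratio is the elapsed-time ratio in [0, 1] that a CreateJS ease receives.
    public double ease(double ratio) {
        int qty = segments.length;
        int i = Math.min((int) (ratio * qty), qty - 1); // which segment are we in?
        double t = ratio * qty - i;                     // local 0..1 position inside it
        double[] seg = segments[i];
        double u = 1 - t;
        // quadratic Bezier: (1-t)^2 * s + 2t(1-t) * cp + t^2 * e
        return u * u * seg[0] + 2 * t * u * seg[1] + t * t * seg[2];
    }
}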
Optimizations:
I took a few extra steps to optimize the above code.
1) In the constructor, you can store the segments.length value directly as this.length in a property of the CustomEase instance, to cut down a bit on the number of accessor / property lookups in the ease() method (where qty is set).
2) There are a few redundant calculations done per Segment that can be eliminated in the ease() method. For instance, the s.cp - s.s and s.e - s.s operations can be precalculated and stored in a couple of properties on each Segment (in its constructor).
3) Finally, I'm not sure why it was designed this way, but you can unwrap the function() {...}(); wrappers that return the constructors for each class. Perhaps it was used to trap the scope of some variables, but I don't see why it couldn't have wrapped the entire thing instead of encapsulating each one separately.
Need more info? Leave a comment!

Optimized binary search

I have seen many examples of binary search and many methods for optimizing it. Yesterday my lecturer wrote this code (assume the first index is 1 and the last one is N, where N is the length of the array; consider it pseudo code):
L := 1;
R := N;
while (L < R)
{
    m := div(R + L, 2);
    if A[m] > x
    {
        L := m + 1;
    }
    else
    {
        R := m;
    }
}
Here we assume the array is A. The lecturer said that we don't waste time comparing whether the element is at the middle of the array on every iteration; another benefit is that if the element is not in the array, the index tells us where it would be located, so it is optimal. Is he right? I mean, I have seen many kinds of binary search, from Jon Bentley (Programming Pearls) for example, and so on. Is this code really optimal? In my case it is written in Pascal, but the language doesn't matter.
It really depends on whether you find the element. If you don't, this will have saved some comparisons. If you could find the element in the first couple of hops, then you've saved the work of all the later comparisons and arithmetic. If all the values in the array are distinct, it's obviously fairly unlikely that you hit the right index early on - but if you have broad swathes of the array containing the same values, that changes the maths.
This approach also means you can't narrow the range quite as much as you would otherwise - this:
R:=m;
would normally be
R:=m-1;
... although that would reasonably rarely make a significant difference.
The important point is that this doesn't change the overall complexity of the algorithm - it's still going to be O(log N).
also benefit is that if element is not in array,index says about where it would be located
That's true whether you check for equality or not. Every binary search implementation I've seen would give that information.
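For contrast, here is a quick sketch of both variants in Java (0-based indices and an ascending array, unlike the 1-based pseudo code above); the names lowerBound and classic are just illustrative.

public final class BinarySearchVariants {

    // Deferred-equality variant: no equality test inside the loop. Returns the index
    // where x is, or where it would be inserted if it is absent.
    static int lowerBound(int[] a, int x) {
        int lo = 0, hi = a.length;          // half-open range [lo, hi)
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;      // unsigned shift avoids overflow for large indices
            if (a[mid] < x) {
                lo = mid + 1;
            } else {
                hi = mid;                   // cannot exclude mid: a[mid] might equal x
            }
        }
        return lo;
    }

    // Classic variant: tests for equality every iteration, so it can stop early and
    // can narrow to mid - 1 / mid + 1. Returns -1 when x is absent.
    static int classic(int[] a, int x) {
        int lo = 0, hi = a.length - 1;      // closed range [lo, hi]
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (a[mid] == x) {
                return mid;
            } else if (a[mid] < x) {
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        return -1;
    }
}

Both are O(log N); the difference is only in how many comparisons are done per iteration and how early a hit can terminate the loop.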

Custom EQ AudioUnit on iOS

The only effect AudioUnit on iOS is the "iTunes EQ", which only lets you use EQ presets. I would like to use a customized EQ in my audio graph.
I came across this question on the subject and saw an answer suggesting using this DSP code in the render callback. This looks promising, and people seem to be using it effectively on various platforms. However, my implementation has a ton of noise even with a flat EQ.
Here's my 20 line integration into the "MixerHostAudio" class of Apple's "MixerHost" example application (all in one commit):
https://github.com/tassock/mixerhost/commit/4b8b87028bfffe352ed67609f747858059a3e89b
Any ideas on how I could get this working? Any other strategies for integrating an EQ?
Edit: Here's an example of the distortion I'm experiencing (with the EQ flat):
http://www.youtube.com/watch?v=W_6JaNUvUjA
In the code in EQ3Band.c, the filter coefficients are used without being initialized. The init_3band_state method initializes only the gains and frequencies, but the coefficients themselves (es->f1p0 etc.) are not initialized, and therefore contain garbage values. That might be the reason for the bad output.
This code seems wrong in more than one way.
A digital filter is normally represented by the filter coefficients (which are constant), the filter's inner state history (since in most cases the output depends on history), and the filter topology, which is the arithmetic used to calculate the output given the input and the filter (coefficients + state history). In most cases, and certainly when filtering audio data, you expect to get 0's at the output if you feed 0's to the input.
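To make those three terms concrete, here is a small sketch of a direct form I biquad (written in Java here for readability, but the structure is identical in C): constant coefficients, a short state history, and a fixed topology. Feed it zeros and, once the state history flushes, it outputs zeros.

public final class Biquad {

    // Coefficients: computed once from the desired response, then left constant.
    private final double b0, b1, b2, a1, a2;

    // State history: the last two inputs and outputs. Only this changes per sample.
    private double x1, x2, y1, y2;

    public Biquad(double b0, double b1, double b2, double a1, double a2) {
        this.b0 = b0; this.b1 = b1; this.b2 = b2; this.a1 = a1; this.a2 = a2;
    }

    // Topology: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
    public double process(double x0) {
        double y0 = b0 * x0 + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x0;   // shift the input history
        y2 = y1; y1 = y0;   // shift the output history
        return y0;
    }
}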
The problems in the code you linked to:
The filter coefficients are changed in each call to the processing method:
es->f1p0 += (es->lf * (sample - es->f1p0)) + vsa;
The input sample is usually multiplied by the filter coefficients, not added to them. It doesn't make any physical sense - the sample and the filter coeffs don't even have the same physical units.
If you feed in 0's, you do not get 0's at the output, just some values which do not make any sense.
I suggest you look for other code; the other option is debugging this one, which would be harder.
In addition, you'd benefit from reading about digital filters:
http://en.wikipedia.org/wiki/Digital_filter
https://ccrma.stanford.edu/~jos/filters/