OptaPlanner score calculation types questions

I have some questions about how OptaPlanner works.
OptaPlanner offers the following score calculation types: Drools score calculation and Constraint Streams score calculation. Both support incremental score calculation.
My question is about incremental score calculation:
Demo:
import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScoreHolder;
import org.optaplanner.examples.cloudbalancing.domain.CloudBalance;
import org.optaplanner.examples.cloudbalancing.domain.CloudComputer;
import org.optaplanner.examples.cloudbalancing.domain.CloudProcess;

global HardSoftScoreHolder scoreHolder;

// ############################################################################
// Hard constraints
// ############################################################################

rule "requiredCpuPowerTotal"
    when
        $computer : CloudComputer($cpuPower : cpuPower)
        accumulate(
            CloudProcess(
                computer == $computer,
                $requiredCpuPower : requiredCpuPower);
            $requiredCpuPowerTotal : sum($requiredCpuPower);
            $requiredCpuPowerTotal > $cpuPower
        )
    then
        scoreHolder.addHardConstraintMatch(kcontext, $cpuPower - $requiredCpuPowerTotal);
end
Assume the requiredCpuPowerTotal condition matches, so the then block executes and a hard penalty is added, say -100.
Now the solution changes: some CloudProcess instances are moved off that CloudComputer, so it no longer exceeds its CPU limit and the rule's condition no longer holds.
My questions are:
In the first solution, the processes assigned to computer A need more CPU than it has (say they need 4 cores but computer A only has 2), so the condition matches and the solution gets a hard score of -100.
In the second solution, fewer processes are assigned to computer A and it is no longer over capacity, so the condition does not match and no penalty is added; the result is a hard score of 0. Questions: 1. How does the previous negative score of -100 get removed?
2. Or is the score recalculated from scratch for every new solution?

The previous negative score gets removed because addHardConstraintMatch() does some black magic: it registers a rule unmatch listener that undoes that negative addition as soon as the rule no longer matches.
A score DRL is incremental, so only the delta of the score change is recalculated.
PS: Take a look at Constraint Streams too, they are also incremental :)
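For illustration, here is roughly what that same constraint looks like with Constraint Streams (a sketch against the OptaPlanner 8.x ConstraintProvider API; exact method signatures vary between versions). It is incremental in the same way: when a CloudProcess moves, only the affected computers' sums are recalculated.

import static org.optaplanner.core.api.score.stream.ConstraintCollectors.sum;

import org.optaplanner.core.api.score.buildin.hardsoft.HardSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.ConstraintProvider;
import org.optaplanner.examples.cloudbalancing.domain.CloudProcess;

// Sketch of the same hard constraint as a Constraint Stream.
public class CloudBalancingConstraintProvider implements ConstraintProvider {

    @Override
    public Constraint[] defineConstraints(ConstraintFactory constraintFactory) {
        return new Constraint[] { requiredCpuPowerTotal(constraintFactory) };
    }

    Constraint requiredCpuPowerTotal(ConstraintFactory constraintFactory) {
        return constraintFactory.forEach(CloudProcess.class)
                // Group the processes per computer and sum their CPU requirements.
                .groupBy(CloudProcess::getComputer, sum(CloudProcess::getRequiredCpuPower))
                // Only computers whose assigned processes exceed their capacity match.
                .filter((computer, requiredCpuPower) -> requiredCpuPower > computer.getCpuPower())
                // Penalize by the amount of overload, exactly like the DRL rule's match weight.
                .penalize(HardSoftScore.ONE_HARD,
                        (computer, requiredCpuPower) -> requiredCpuPower - computer.getCpuPower())
                .asConstraint("requiredCpuPowerTotal");
    }
}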

Related

How to determine when to accelerate and decelerate to reach a given destination?

I'm creating a computer game in which there is a computer-controlled car that needs to travel along a straight line (thus making this problem effectively 1-dimensional) and reach a destination coming to a rest at 0 velocity. This car "thinks" every second and decides whether (and by how much) to accelerate or decelerate.
To summarize, I want the car to accelerate as strongly as possible and then stop as rapidly as possible.
Here are the variables that the car must respect:
- RemainingDistance = Our current remaining distance to the destination in meters.
- Velocity = Our current velocity towards the destination in meters/second.
- MaxVelocity = The maximum speed the car can reach.
- Acceleration = The maximum increase in velocity per second. The car can set its acceleration each second to any number in the range [0, Acceleration].
- Deceleration = The maximum decrease in velocity per second. The car can set its deceleration each second to any number in the range [0, Deceleration].
So to be as clear as I can, here's the math that runs every second to update the car simulation:
Acceleration = some amount in the range [-Deceleration, Acceleration], as chosen by the computer
Velocity = Velocity + Acceleration
RemainingDistance = RemainingDistance - Velocity
So my question is: Every time the car "thinks", what formula(s) should it use to determine the ideal value of Acceleration in order to reach its destination (with a 0 final velocity) in as little time as possible?
(In the event that the car's initial velocity is too high and it can't decelerate fast enough to achieve 0 velocity by the time it reaches its destination, then it should come to a rest as close as possible to the destination.)
Please let me know if you have any questions.
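(For what it's worth, one standard way to approach this kind of 1-D stop-at-target problem is "bang-bang" control: accelerate at full power while the stopping distance Velocity^2 / (2 * Deceleration) is still smaller than RemainingDistance, then brake at full Deceleration. Below is a minimal Java sketch of the per-second update above combined with that decision rule; the class name and numeric constants are assumptions, not part of the question, and the discrete 1-second steps mean the stop point is only approximate.)

// Minimal 1-D simulation sketch; constants and names are assumptions, not from the question.
public class CarBrakingSketch {

    static final double MAX_VELOCITY = 30.0;      // MaxVelocity (assumed value)
    static final double MAX_ACCELERATION = 3.0;   // Acceleration: max increase in velocity per second (assumed)
    static final double MAX_DECELERATION = 5.0;   // Deceleration: max decrease in velocity per second (assumed)

    static double remainingDistance = 1000.0;     // RemainingDistance in meters (assumed)
    static double velocity = 0.0;                 // Velocity in m/s

    // Decision rule: brake once the distance needed to stop, v^2 / (2 * Deceleration),
    // reaches the remaining distance; otherwise accelerate as hard as the limits allow.
    static double chooseAcceleration() {
        double stoppingDistance = (velocity * velocity) / (2.0 * MAX_DECELERATION);
        if (stoppingDistance >= remainingDistance) {
            return -MAX_DECELERATION;
        }
        return Math.min(MAX_ACCELERATION, MAX_VELOCITY - velocity); // never exceed MaxVelocity
    }

    public static void main(String[] args) {
        // The per-second update from the question, run until the car has stopped at (or just past) the target.
        for (int second = 0; second < 10000 && (remainingDistance > 0 || velocity > 0); second++) {
            double acceleration = chooseAcceleration();         // in [-Deceleration, Acceleration]
            velocity = Math.max(0.0, velocity + acceleration);  // Velocity = Velocity + Acceleration
            remainingDistance -= velocity;                      // RemainingDistance = RemainingDistance - Velocity
            System.out.printf("t=%4d  v=%6.2f  remaining=%9.2f%n", second, velocity, remainingDistance);
        }
    }
}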

OptaPlanner: how to catch the current time at the beginning of the rule to use it in the score?

I have something like this:
scoreHolder.addSoftConstraintMatch(kcontext, (System.currentTimeMillis()-$time.getTime()));
I want to use the current time from the moment the rule first fires, and not have it update while solving continues: just capture the current time at that first moment and keep it unchanged until the end of solving.
I'm using optaplanner 6.1.
thanks in advance.
That would break OptaPlanner, because the Score of the same Solution would change over time. That also implies that two different Solutions can no longer be compared fairly: if a new working score is compared to the best score (which was calculated x seconds ago), the comparison breaks.
Instead, before the solver starts, set the current time millis in a singleton:
myParametrization.setStartingMillis(System.currentTimeMillis());
... = solver.solve(...);
and add that object as a problem fact, then use it in the score rules (see the examination example's InstitutionParameterization).
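A rough Java sketch of that wiring (SolverParametrization, setParametrization() and myPlanningProblem are invented names for this illustration, not OptaPlanner API; only System.currentTimeMillis() and the solver call itself are real):

// Illustration only: this plain class is the "singleton"/problem fact that freezes
// the wall-clock time at which solving starts.
public class SolverParametrization {

    private long startingMillis;

    public long getStartingMillis() {
        return startingMillis;
    }

    public void setStartingMillis(long startingMillis) {
        this.startingMillis = startingMillis;
    }
}

// Before the solver starts:
//   SolverParametrization myParametrization = new SolverParametrization();
//   myParametrization.setStartingMillis(System.currentTimeMillis());
//   myPlanningProblem.setParametrization(myParametrization); // expose it as a problem fact
//   ... = solver.solve(myPlanningProblem);
//
// In the score DRL, match that fact and use its frozen value instead of System.currentTimeMillis():
//   $parametrization : SolverParametrization()
//   ...
//   scoreHolder.addSoftConstraintMatch(kcontext,
//           (int) ($parametrization.getStartingMillis() - $time.getTime()));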

OptaPlanner: stuck at a local optimum depending on the input data sorting

Using Optaplanner version 6.2.0
My Drools rules:

rule "Transition Time Constraint"
    when
        $leftImageStrip : ImageStrip($selected : selected,
            $satellite : satellite,
            selected != null,
            $timeslot : timeslot,
            leftId : id,
            lGain : gain,
            lRollAngle : rollAngle,
            $duration : duration)
        $rightImageStrip : ImageStrip(selected == $selected,
            satellite == $satellite,
            Math.abs(timeslot.getTime() - $timeslot.getTime()) <= 460000,
            this != $leftImageStrip,
            rGain : gain,
            rRollAngle : rollAngle)
    then
        java.text.SimpleDateFormat x = new java.text.SimpleDateFormat("dd-MM-yyyy HH:mm:ss.SSS");
        scoreHolder.addHardConstraintMatch(kcontext, -1);
end
rule "Shoot Strip once"
when
$leftImageStrip: ImageStrip($selected : selected, $stripList : stripList,
leftId : id, selected != null)
$rightImageStrip: ImageStrip(selected == $selected, stripList == $stripList,
this != $leftImageStrip)
then
scoreHolder.addMediumConstraintMatch(kcontext, -1);
end
rule "Maximization of Selected Parameters"
when
$imageStrip: ImageStrip(selected != null)
then
scoreHolder.addSoftConstraintMatch(kcontext, $imageStrip.gain);
end
I am stuck at a local optimum that differs depending on how the input data is sorted.
How can I overcome this problem and obtain the same optimal solution regardless of the input data ordering? Ideally it should be the global optimum.
Can one assure that the result obtained with OptaPlanner is the global optimum, independent of the input data ordering?
Looks like you have a score trap in the rule "Transition Time Constraint"; see the docs for the keyword "score trap". Fixing that can make a very big difference in escaping local optima and getting good results. Something like this would fix it (off the top of my head, it might be inverted):
scoreHolder.addHardConstraintMatch(kcontext, ... 460000 - Math.abs(timeslot.getTime() - $timeslot.getTime()) ...);
Once that's fixed, look at your average score calculation count per second (last log line or the benchmarker report).
That being said, metaheuristics (such as Local Search) aren't about finding the global optimum. They're about finding the best possible solution in reasonable time as the problem scales out, better than any other technology can deliver in the time available (see research competitions such as ROADEF, ITT and INRC for proof: the technologies that claim to find the global optimum don't scale). The downside is that metaheuristics don't always scale down well (only up).
Note: a common sales trick is to present a technology that finds the global optimum and then, later during development, apply partitioning to scale out. The result is of course no longer the global optimum (due to the nature of NP-complete problems), and relatively speaking it isn't any good: the quality loss due to partitioning is huge.
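For what it's worth, the same graduated penalty can also be expressed with the newer Constraint Streams API (not available in 6.2; sketched here only to illustrate the idea, with the ImageStrip getter names assumed from the question's fields):

import org.optaplanner.core.api.score.buildin.hardmediumsoft.HardMediumSoftScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.Joiners;

// A constraint method inside a ConstraintProvider (see the provider structure sketched in the
// first Q&A above). ImageStrip is the question's planning entity; its import is omitted because
// its package is unknown.
Constraint transitionTimeConstraint(ConstraintFactory constraintFactory) {
    return constraintFactory.forEachUniquePair(ImageStrip.class,
                    Joiners.equal(ImageStrip::getSelected),
                    Joiners.equal(ImageStrip::getSatellite))
            .filter((left, right) -> Math.abs(
                    left.getTimeslot().getTime() - right.getTimeslot().getTime()) <= 460000L)
            // Graduated penalty: the closer two strips are in time, the heavier the penalty,
            // instead of a flat -1 for every violating pair (the score trap).
            .penalize(HardMediumSoftScore.ONE_HARD,
                    (left, right) -> (int) (460000L - Math.abs(
                            left.getTimeslot().getTime() - right.getTimeslot().getTime())))
            .asConstraint("Transition Time Constraint");
}

Note that forEachUniquePair evaluates each pair once, whereas the DRL pattern above matches both orderings of a pair; that only scales the penalty by a constant factor.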

Measuring time between two rising edges on a BeagleBone

I am reading a sensor's output as a square wave (0-5 V) on an oscilloscope. Now I want to measure the signal's frequency (one period) with a BeagleBone, so I need to measure the time between two rising edges. However, I don't have any experience working with the BeagleBone. Can you give me some advice or sample code for measuring the time between rising edges?
How deterministic do you need this to be? If you can tolerate some inaccuracy, you can probably do it on the main Linux OS; if you want to be fancy, this seems like a potential use case for the BBB's PRUs (which I unfortunately haven't used, so take this with substantial amounts of salt). I would expect you could write PRU code that sits in an infinite outer loop; inside that loop, it spins until the pin reads 0, then spins until the pin reads 1 (the first rising edge), then counts until either the pin reads 0 again (the falling edge) or, with another loop, until the next rising edge. Either way, you can take the counter value and convert it directly into time: the PRU is stated to have a fixed execution time per instruction, running at 200 MHz (5 ns per instruction). Assuming your loop is something like
# starting with pin low
inner loop 1:
    registerX = loadPin
    increment counter
    jump if zero registerX to inner loop 1
# pin is now high
inner loop 2:
    registerX = loadPin
    increment counter
    jump if one registerX to inner loop 2
# pin is now low again
That should take 3 instructions per counter increment, so you can get the time as counter * 3 * 5 ns = counter * 15 ns.
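For example (illustrative numbers only): if the counter reads 2,000,000 when the edge arrives, the measured interval is 2,000,000 * 15 ns = 30 ms, i.e. an input frequency of about 33 Hz.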
As suggested by Foon in his answer, the PRUs are a good fit for this task (although depending on your requirements it may be fine to use the ARM processor and standard GPIO). Please note that (as far as I know) both the regular GPIOs and the PRU inputs are based on 3.3V logic, and connecting a 5V signal might fry your board! You will need an additional component or circuit to convert from 5V to 3.3V.
I've written a basic example that measures timing between rising edges on the header pin P8.15 for my own purpose of measuring an engine's rpm. If you decide to use it, you should check the timing results against a known reference. It's about right but I haven't checked it carefully at all. It is implemented using PRU assembly and uses the pypruss python module to simplify interfacing.

Enumerable.Count not working

I am monitoring the power of a laser and I want to know when n consecutive measurements are outside a safe range. I have a Queue(Of Double) which has n items (2 in my example) at the time it's being checked. I want to check that all items in the queue satisfy a condition, so I pass the items through a Count() with a predicate. However, the count function always returns the number of items in the queue, even if they don't all satisfy the predicate.
ElseIf _consecutiveMeasurements.AsEnumerable().Count(Function(m) m <= Me.CriticalLowLevel) = ConsecutiveCount Then
    _ownedISetInstrument.Disable()
    ' do other things
[Screenshot: a view of the debugger with execution moving into the If branch.]
Clearly, there are two measurements in the queue, and they are both greater than the CriticalLowLevel, so the count should be zero. I first tried Enumerable.Where(predicate).Count() and I got the same result. What's going on?
Edit:
Of course the values are below the CriticalLowLevel, which I had mistakenly set to 598 instead of 498 for testing. I had over-complicated the problem by focusing my attention on the code when it was my test case that was faulty. I guess I couldn't see the forest for the trees, as they say. Thanks Eric for pointing it out.
Based on your debug snapshot, it looks like both of your measurements are less than the critical level of 598.0, so I would expect the count to match the queue length.
Both data points are <= Me.CriticalLowLevel.
Can you share an example where one of the data points is > Me.CriticalLowLevel that still exhibits this behavior?
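For reference, the counting pattern itself is sound once the threshold is correct. Here is a minimal Java analogue of the check (class, field and constant names are invented for illustration, not taken from the question):

import java.util.ArrayDeque;
import java.util.Deque;

class LaserPowerMonitor {

    private static final int CONSECUTIVE_COUNT = 2;          // n consecutive samples required
    private static final double CRITICAL_LOW_LEVEL = 498.0;  // the intended threshold from the edit

    private final Deque<Double> consecutiveMeasurements = new ArrayDeque<>();

    // Returns true when the last CONSECUTIVE_COUNT measurements are all at or below the critical level.
    boolean shouldDisable(double measurement) {
        consecutiveMeasurements.addLast(measurement);
        if (consecutiveMeasurements.size() > CONSECUTIVE_COUNT) {
            consecutiveMeasurements.removeFirst();            // keep only the last n samples
        }
        long lowCount = consecutiveMeasurements.stream()
                .filter(m -> m <= CRITICAL_LOW_LEVEL)
                .count();
        return consecutiveMeasurements.size() == CONSECUTIVE_COUNT && lowCount == CONSECUTIVE_COUNT;
    }
}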