OptaPlanner: shadow variable's corrupted value changed to uncorrupted value

Using optaplanner-bom version 7.45.0.Final.
@PlanningEntity
public class Task {
    @PlanningVariable(valueRangeProviderRefs = "timeGrainRange")
    private TimeGrain startingTimeGrain;

    @CustomShadowVariable(variableListenerClass = DurationUpdatingVariableListener.class,
            sources = { @PlanningVariableReference(variableName = "startingTimeGrain") })
    private Long durationInGrains;
    ......
In class DurationUpdatingVariableListener:
public void afterVariableChanged(ScoreDirector scoreDirector, Task e) {
    if (null == e.getStartingTimeGrain()) {
        return;
    }
    Schedule s = (Schedule) scoreDirector.getWorkingSolution();
    List<Task> tasksToBeUpdated = s.getTasksToBeUpdated(e); // calculate all updates
    for (Task t : tasksToBeUpdated) {
        scoreDirector.beforeVariableChanged(t, NAME_DURATION);
        t.setDurationInGrains(convertToGrain(t.getDurationInSecs()));
        scoreDirector.afterVariableChanged(t, NAME_DURATION);
    }
}
The logic is: when one task's startingTimeGrain changes, some other tasks' durations are affected. The problem is that when the variable tasksToBeUpdated contains only one task, there is no error, but when it contains more than one task, I get the following error:
ERROR 30844 --- [pool-1-thread-1] o.o.c.impl.solver.DefaultSolverManager : Solving failed for problemId (1).
java.lang.IllegalStateException: The move thread with moveThreadIndex (3) has thrown an exception. Relayed here in the parent thread.
at org.optaplanner.core.impl.heuristic.thread.OrderByMoveIndexBlockingQueue.take(OrderByMoveIndexBlockingQueue.java:147) ~[optaplanner-core-7.45.0.Final.jar:7.45.0.Final]
at org.optaplanner.core.impl.constructionheuristic.decider.MultiThreadedConstructionHeuristicDecider.forageResult(MultiThreadedConstructionHeuristicDecider.java:186) ~[optaplanner-core-7.45.0.Final.jar:7.45.0.Final]
......
Caused by: java.lang.IllegalStateException: VariableListener corruption after completedAction (Undo(id=1, ...., startingTimeGrain=null {null -> {"grainIndex":1,"id":1}})):
The entity (id=2, ..., startingTimeGrain=1)'s shadow variable (Task.durationInGrains)'s corrupted value (4) changed to uncorrupted value (3) after all VariableListeners were triggered without changes to the genuine variables.
Maybe the VariableListener class (DurationUpdatingVariableListener) for that shadow variable (Task.durationInGrains) forgot to update it when one of its sources changed.
at org.optaplanner.core.impl.score.director.AbstractScoreDirector.assertShadowVariablesAreNotStale(AbstractScoreDirector.java:545) ~[optaplanner-core-7.45.0.Final.jar:7.45.0.Final]
......
Task.setDurationInGrains() is definitely only called from afterVariableChanged(). Why does the error happen? Here is the code of Schedule.getTasksToBeUpdated(Task task).

Make sure your duration updating listener is able to properly "clean up" the shadow variable of all tasks that are affected by an undo move. So, for example, if you have:
Task(id=1, startingTimeGrain=null, durationInGrains=null)
Task(id=2, startingTimeGrain={"grainIndex":2,"id":1}, durationInGrains=3)
and you do
Move(id=1, ...., startingTimeGrain=null {null -> {"grainIndex":1,"id":1}})
your duration updating listener should probably result in something like:
Task(id=1, startingTimeGrain={"grainIndex":1,"id":1}, durationInGrains=1)
Task(id=2, startingTimeGrain={"grainIndex":2,"id":1}, durationInGrains=4)
Notice that Task 2 was affected by the move that changed Task 1 and its duration was updated by the listener.
If this is true, then after the move above is undone:
Undo(id=1, ...., startingTimeGrain=null {null -> {"grainIndex":1,"id":1}})
this is what absolutely must happen in order to avoid the listener corruption:
Task(id=1, startingTimeGrain=null, durationInGrains=null)
Task(id=2, startingTimeGrain={"grainIndex":2,"id":1}, durationInGrains=3)
Notice that the duration updating listener is responsible for:
Setting durationInGrains to null on Task 1 after the undo move unassigned it from Grain 1.
Recalculating the duration of Task 2, which was affected by the moves on Task 1, and setting it back to the same value (3) it had before the move on Task 1 was done.
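The do/undo symmetry described above can be demonstrated with a small self-contained sketch (plain Java, hypothetical duration rule, no OptaPlanner classes): if the listener always recomputes the shadow variable as a pure function of the genuine variables, including cleaning unassigned tasks back to null, then undoing a move necessarily restores every shadow value and no corruption can occur.

```java
import java.util.*;

class ShadowUndoSketch {
    static class Task {
        Integer startingTimeGrain; // genuine variable
        Integer durationInGrains;  // shadow variable
        Task(Integer start) { startingTimeGrain = start; }
    }

    // Stand-in for the VariableListener: recompute shadows for ALL tasks the
    // change affects, cleaning up unassigned tasks to null. The duration rule
    // (2 + number of assigned tasks) is purely illustrative.
    static void updateDurations(List<Task> tasks) {
        int assigned = 0;
        for (Task t : tasks) if (t.startingTimeGrain != null) assigned++;
        for (Task t : tasks) {
            t.durationInGrains = (t.startingTimeGrain == null) ? null : 2 + assigned;
        }
    }

    public static void main(String[] args) {
        Task t1 = new Task(null);
        Task t2 = new Task(2);
        List<Task> tasks = Arrays.asList(t1, t2);

        updateDurations(tasks);                 // initial state: t2 -> 3
        Integer before = t2.durationInGrains;

        t1.startingTimeGrain = 1;               // do the move
        updateDurations(tasks);                 // t1 -> 4, t2 -> 4

        t1.startingTimeGrain = null;            // undo the move
        updateDurations(tasks);                 // must restore t1 -> null, t2 -> 3

        if (t1.durationInGrains != null || !Objects.equals(t2.durationInGrains, before))
            throw new AssertionError("shadow state not restored after undo");
        System.out.println("shadow state restored");
    }
}
```

If instead the shadow value depends on hidden state (e.g. the order in which earlier moves ran), the undo move cannot restore it, which is exactly what the corruption check detects.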

Related

Chained Through Time Implementation

I have been developing a project that is similar to the task assigning project, which uses the chained through time pattern. I have a difficulty comparator that prioritizes the tasks that must be finished earlier. My problem is that whenever a new entity is added to the chain, it's added to the top, pushing down the tasks that must be done earlier.
Is that supposed to happen, or is my implementation bad?
This is my code in the listener. It's basically the same as in the task assigning project.
protected void updateStartTime(ScoreDirector scoreDirector, Calcado calcado) {
    CalcadoA calcadoAnterior = calcado.getCalcadoAnterior();
    Calcado shadowCalcado = calcado;
    Integer previousEndTime = (calcadoAnterior == null ? null : calcadoAnterior.getEndTime());
    Integer startTime = previousEndTime;
    while (shadowCalcado != null && !Objects.equals(shadowCalcado.getStartTime(), startTime)) {
        scoreDirector.beforeVariableChanged(shadowCalcado, "startTime");
        shadowCalcado.setStartTime(startTime);
        scoreDirector.afterVariableChanged(shadowCalcado, "startTime");
        previousEndTime = shadowCalcado.getEndTime();
        shadowCalcado = shadowCalcado.getNextCalcado();
        startTime = previousEndTime;
    }
}
I have tried changing it to this...
protected void updateStartTime(ScoreDirector scoreDirector, Calcado calcado) {
    CalcadoA calcadoAnterior = calcado.getCalcadoAnterior();
    Calcado shadowCalcado = calcado;
    Integer previousEndTime = (calcadoAnterior == null ? null : calcadoAnterior.getEndTime());
    Integer startTime = previousEndTime;
    scoreDirector.beforeVariableChanged(shadowCalcado, "startTime");
    shadowCalcado.setStartTime(startTime);
    scoreDirector.afterVariableChanged(shadowCalcado, "startTime");
}
but I end up getting a score corruption error.
The entity (Calcado{ 472 descricao= calcado22 tempoInicial: 2 Maquina{nome='Pos 1'}'})'s shadow variable (Calcado.startTime)'s corrupted value (1) changed to uncorrupted value (2) after all VariableListeners were triggered without changes to the genuine variables.
Maybe the VariableListener class (StartTimeUpdatingVariableListener) for that shadow variable (Calcado.startTime) forgot to update it when one of its sources changed.
Basically, my goal is that whenever an entity is added to the chain, it should be added to the end of the chain instead of the top.
Yes, that's supposed to happen.
An entity isn't really added to a chain, that's a side effect.
It's inserted after another entity or anchor. That entity or anchor is already in a chain, so it implies that the entity is also added to a chain.
Why does that matter? Well, given a chain of [Anchor-A, Entity-A1, Entity-A2, Entity-A3], OptaPlanner can choose to add an Entity-X after Anchor-A, Entity-A1, Entity-A2 or Entity-A3, resulting in 4 different possible new solution states. It will pick the best one. Turn on TRACE logging to see that happen.
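The enumeration described above can be sketched in a few lines of plain Java (illustrative only, not OptaPlanner API; the score function is a hypothetical stand-in for your constraints): every position after an existing chain element is a candidate, and the best-scoring one wins. If your constraints reward end-of-chain placement, that is the candidate OptaPlanner will pick.

```java
import java.util.*;

class ChainInsertionSketch {
    // Hypothetical score: constraints that prefer "X" as late in the chain
    // as possible (e.g. because earlier tasks must finish first).
    static int score(List<String> chain) {
        return chain.indexOf("X");
    }

    // Try inserting the entity after the anchor and after each entity,
    // and keep the highest-scoring resulting chain.
    static List<String> insertBest(List<String> chain, String entity) {
        List<String> best = null;
        for (int i = 1; i <= chain.size(); i++) {
            List<String> candidate = new ArrayList<>(chain);
            candidate.add(i, entity);
            if (best == null || score(candidate) > score(best)) best = candidate;
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> chain = Arrays.asList("Anchor-A", "Entity-A1", "Entity-A2", "Entity-A3");
        System.out.println(insertBest(chain, "X"));
        // prints [Anchor-A, Entity-A1, Entity-A2, Entity-A3, X]
    }
}
```

The takeaway: you don't control the insertion point directly; you shape it through the score, and OptaPlanner evaluates all candidate positions for you.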

Check the condition when a timer is running in CAPL (CANoe)

I am running a script in CAPL where I am supposed to notice a change in the value of a signal (for example, signal B) coming from the ECU. At the start of the timer, I change the value of another signal (for example, signal A) and send it to the ECU over the CAN bus. While the timer is running, I want to see the changed value of signal B coming from the ECU as a response to the changed value of signal A. After the timer has run out, I want to reset signal A back to its original value.
Note: I have called the signals Signal A and Signal B only to make the question clearer.
Signal A changes the value from 2 to 0.
Signal B has original value of 61, and the changed value can be any number between 0-60.
Timer runs for 4 seconds.
I am using a while loop with the condition (isTimerActive(timer) == 1) to check for the change in the value of signal B while the timer is running.
Below is the code:
variables
{
  msTimer Execute;
}

on key 'c'
{
  setTimer(Execute, 4000);
  write("Test starts");
  setSignal(Signal A, 2);
  while (isTimerActive(Execute) == 1)
  {
    if ($Signal B != 61)
    {
      write("Test pass");
    }
    else
    {
      write("Test fail");
    }
  }
}

on timer Execute
{
  write("Test over");
  setSignal(Signal A, 0);
}
I am executing this code and the value of signal A changes to 2, but there is no change in the value of signal B. I am using (isTimerActive(timer) == 1) in the while loop; is that the correct command for my problem?
Also, when I run (isTimerActive(timer) == 1), CANoe becomes inactive and I have to stop CANoe using the Task Manager.
Any ideas how I can correct my code and get the desired response?
Thanks and Best
CAPL is event-driven. Your only choice is to react to events by programming event handlers, i.e. the functions starting with on ....
During execution of an event handler, the system basically blocks everything until the event handler has finished.
Literally nothing else happens: no sysvars change, no signals change, no timers expire, no bus messages are handled, and so on.
For test modules and test units the story is a little bit different. There you have the possibility to wait during execution of your code using the various testWaitFor... methods.
With your current implementation of on key 'c' you basically block the system, since you have a while loop there waiting for a timer to expire.
As stated above, this blocks everything and you have to kill CANoe.
Fortunately changes of signals are also events that can be handled.
Something like this should do:
Remove the while block and instead add another event handler like this:
on signal SignalB
{
  if (isTimerActive(Execute))
  {
    if ($SignalB != 61)
    {
      write("Test pass");
    }
    else
    {
      write("Test fail");
    }
  }
}
The code is called whenever SignalB changes. It then checks whether the timer is still running and checks the value of the signal.
Instead of $SignalB inside the handler, you can also write this.
In an event handler, this always refers to the object that caused the event.

Custom command to go back in a process instance (execution)

I have a process with 3 sequential user tasks (something like Task 1 -> Task 2 -> Task 3). So, to validate Task 3, I have to validate Task 1, then Task 2.
My goal is to implement a workaround to go back in an execution of a process instance, using a Command as suggested in this link. The problem is that I started to implement the command, but it does not work as I want. The algorithm should be something like:
Retrieve the task with the passed id
Get the process instance of this task
Get the historic tasks of the process instance
From the list of the historic tasks, deduce the previous one
Create a new task from the previous historic task
Make the execution to point to this new task
Maybe clean the task pointed before the update
So, the code of my command is like that:
public class MoveTokenCmd implements Command<Void> {

    protected String fromTaskId = "20918";

    public MoveTokenCmd() {
    }

    public Void execute(CommandContext commandContext) {
        HistoricTaskInstanceEntity currentUserTaskEntity = commandContext.getHistoricTaskInstanceEntityManager()
                .findHistoricTaskInstanceById(fromTaskId);
        ExecutionEntity currentExecution = commandContext.getExecutionEntityManager()
                .findExecutionById(currentUserTaskEntity.getExecutionId());

        // Get the process instance
        HistoricProcessInstanceEntity historicProcessInstanceEntity = commandContext
                .getHistoricProcessInstanceEntityManager()
                .findHistoricProcessInstance(currentUserTaskEntity.getProcessInstanceId());

        HistoricTaskInstanceQueryImpl historicTaskInstanceQuery = new HistoricTaskInstanceQueryImpl();
        historicTaskInstanceQuery.processInstanceId(historicProcessInstanceEntity.getId()).orderByExecutionId().desc();
        List<HistoricTaskInstance> historicTaskInstances = commandContext.getHistoricTaskInstanceEntityManager()
                .findHistoricTaskInstancesByQueryCriteria(historicTaskInstanceQuery);

        int index = 0;
        for (HistoricTaskInstance historicTaskInstance : historicTaskInstances) {
            if (historicTaskInstance.getId().equals(currentUserTaskEntity.getId())) {
                break;
            }
            index++;
        }

        if (index > 0) {
            HistoricTaskInstance previousTask = historicTaskInstances.get(index - 1);
            TaskEntity newTaskEntity = createTaskFromHistoricTask(previousTask, commandContext);
            currentExecution.addTask(newTaskEntity);
            commandContext.getTaskEntityManager().insert(newTaskEntity);
            AtomicOperation.TRANSITION_CREATE_SCOPE.execute(currentExecution);
        } else {
            // TODO: find the last task of the previous process instance
        }

        // To overcome the "Task cannot be deleted because is part of a running process"
        TaskEntity currentUserTask = commandContext.getTaskEntityManager().findTaskById(fromTaskId);
        if (currentUserTask != null) {
            currentUserTask.setExecutionId(null);
            commandContext.getTaskEntityManager().deleteTask(currentUserTask, "jumped to another task", true);
        }
        return null;
    }

    private TaskEntity createTaskFromHistoricTask(HistoricTaskInstance historicTaskInstance,
            CommandContext commandContext) {
        TaskEntity newTaskEntity = new TaskEntity();
        newTaskEntity.setProcessDefinitionId(historicTaskInstance.getProcessDefinitionId());
        newTaskEntity.setName(historicTaskInstance.getName());
        newTaskEntity.setTaskDefinitionKey(historicTaskInstance.getTaskDefinitionKey());
        newTaskEntity.setProcessInstanceId(historicTaskInstance.getExecutionId());
        newTaskEntity.setExecutionId(historicTaskInstance.getExecutionId());
        return newTaskEntity;
    }
}
But the problem is that although I can see my task is created, the execution does not point to it but to the current one.
I had the idea to use the activity (via the ActivityImpl object) to set it on the execution, but I don't know how to retrieve the activity of my new task.
Can someone help me, please?
Unless something has changed in the engine significantly, the code in the link you reference should still work (I have used it on a number of projects).
That said, when scanning your code I don't see the most important call.
Once you have the current execution, you can move the token by setting the current activity.
Like I said, the code in the referenced article used to work and still should.
Greg
Referring to the same link in your question, I would personally recommend working with the design of your process: use an exclusive gateway to decide whether the process should end or return to the previous task. If the generation of the task is dynamic, you can point to the same task and delete the local variable. Activiti has constructs that save you from implementing this yourself. :)
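The gateway-based design suggested above can be sketched as a minimal BPMN fragment (hypothetical ids, task names, and "approved" variable; the xmlns/xsi declarations on the enclosing definitions element are omitted). After Task 2, an exclusive gateway either proceeds to Task 3 or routes back to Task 1, so "going back" is modeled in the process itself instead of moving the token programmatically:

```xml
<!-- Hypothetical fragment: ids and the "approved" process variable are illustrative. -->
<process id="threeTaskProcess">
  <userTask id="task1" name="Task 1"/>
  <sequenceFlow id="flow1" sourceRef="task1" targetRef="task2"/>
  <userTask id="task2" name="Task 2"/>
  <sequenceFlow id="flow2" sourceRef="task2" targetRef="decision"/>
  <exclusiveGateway id="decision" name="Go back?"/>
  <!-- Continue forward when approved... -->
  <sequenceFlow id="flow3" sourceRef="decision" targetRef="task3">
    <conditionExpression xsi:type="tFormalExpression">${approved}</conditionExpression>
  </sequenceFlow>
  <!-- ...or loop back to the previous task. -->
  <sequenceFlow id="flow4" sourceRef="decision" targetRef="task1">
    <conditionExpression xsi:type="tFormalExpression">${!approved}</conditionExpression>
  </sequenceFlow>
  <userTask id="task3" name="Task 3"/>
</process>
```

With this shape, the engine's history and task lifecycle stay consistent, which is exactly what the programmatic token-move command struggles with.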

How to limit JProfiler to a subtree

I have a method called com.acmesoftware.shared.AbstractDerrivedBean.getDerivedUniqueId(). When I profile the application with JProfiler, this method, getDerivedUniqueId(), is essentially buried 80 methods deep, as expected. The method is invoked on behalf of every bean in the application. I'm trying to record the CPU call tree starting with this method down to a leaf node (i.e., one of the excluded classes).
I tried the following, but it didn't produce the expected outcome:
Find a method above the method targeted for profiling, e.g. markForDeletion().
Set a trigger to start recording at getDerivedUniqueId().
Set a trigger to STOP recording at markForDeletion().
I was expecting to only see everything below markForDeletion(), but I saw everything up to but not INCLUDING getDerivedUniqueId(), which is the opposite of my intended goal. Worse yet, even with 5 ms sampling, this trigger increased the previous running time from 10 minutes to "I terminated it after 3 hours of running". It seems the trigger adds a giant amount of overhead on top of the overhead. Hence, even if I figure out how to correctly enable the trigger, the added overhead would seem to render it ineffective.
The reason I need to limit the recording to just this method is: when running in 5 ms sampling mode, the application completes in 10 minutes. When I run it with full instrumentation, I've waited 3 hours and it still hasn't completed. Hence, I need to turn on full instrumentation ONLY after getDerivedUniqueId() is invoked and pause profiling when getDerivedUniqueId() exits.
-- Update/Edit:
Thank you Ingo Kegel for your assistance.
I am likely not clear on how to use triggers. In the code below, I set triggers as shown after the code. My expectation is that when I profile the application (with both sampling and full instrumentation) using the triggers configured below, if the boolean isCollectMetrics is false, I should see 100% or 99.9% of CPU time in filtered classes. However, that is not the case. The CPU tree seems not to take the triggers into account.
Secondly, when isCollectMetrics is true, I expect the JProfiler call tree to start with startProfiling() and end at stopProfiling(). Again, this is not the case either.
The method contains() is the bottleneck. It eventually calls one of 150 implementations of getDerivedUniqueId(). I am trying to pinpoint which getDerivedUniqueId() is causing the performance degradation.
public static final AtomicLong doEqualContentTime = new AtomicLong();
public static final AtomicLong instCount = new AtomicLong();

protected boolean contentsEqual(final InstanceSetValue that) {
    if (isCollectMetrics) {
        // initialization code removed for clarity
        // ..........
        final Set<Instance> c1 = getReferences();
        final Set<Instance> c2 = that.getReferences();
        long st = startProfiling(); /// <------- start here
        for (final Instance inst : c1) {
            instCount.incrementAndGet();
            if (!c2.contains(inst)) {
                long et = stopProfiling(); /// <------- stop here
                doEqualContentTime.addAndGet(et - st);
                return false;
            }
        }
        long et = stopProfiling(); /// <------- stop here
        doEqualContentTime.addAndGet(et - st);
        return true;
    } else {
        // same code path as above but without the profiling; code removed for brevity
        // ......
        return true;
    }
}

public long startProfiling() {
    return System.nanoTime();
}

public long stopProfiling() {
    return System.nanoTime();
}

public static void reset() {
    doEqualContentTime.set(0);
    instCount.set(0);
}
The enabled triggers:
startProfiling trigger:
stopProfiling trigger:
I've tried the "Start Recordings" and "Record CPU" buttons separately to capture only the call tree.
If the overhead with instrumentation is large, you should refine your filters. With good filters, the instrumentation overhead can be very small.
As for the trigger setup, the correct actions are:
"Start recording" with CPU data selected
"Wait for the event to finish"
"Stop recording" with CPU data selected

Asynchronously start only one Task to process a static Queue, stopping when it's done

Basically I have a static custom queue of objects I want to process. From multiple threads, I need to kick off a singular Task that will process the queued objects, stopping the task when all items are dequeued.
Some pseudo code:
static CustomQueue _customqueue;
static Task _processQueuedItems;

public static void EnqueueSomething(object something) {
    _customqueue.Enqueue(something);
    StartProcessingQueue();
}

static void StartProcessingQueue() {
    if (_processQueuedItems == null) { // create the task on first use
        _processQueuedItems = new Task(() => {
            while (_customqueue.Any()) {
                var stuffToDequeue = _customqueue.Dequeue();
                /* do stuff */
            }
        });
        _processQueuedItems.Start();
    }
    if (_processQueuedItems.Status != TaskStatus.Running) {
        _processQueuedItems.Start();
    }
}
If it makes a difference, my custom queue is a queue that essentially holds items until they reach a certain age, then allows them to be dequeued. Every time an item is touched, its timer starts again. I know this piece works fine.
The part I'm struggling with is the parallelism. (Clearly, I don't know what I'm doing here.) What I want is one thread that processes the queue until it's complete, then goes away. If another call comes in, it doesn't start a new thread unless it has to.
I hope that explains my issue okay.
You might want to consider using BlockingCollection<T> here. You could make your custom queue implement IProducerConsumerCollection<T>, in which case BlockingCollection could use it directly.
You'd then just need to start a long-running Task that calls blockingCollection.GetConsumingEnumerable() and processes the items in a foreach. The task will automatically block when the collection is empty and resume when a new item is enqueued.
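The single-consumer pattern behind that answer can be sketched in Java for illustration (not the .NET API; BlockingQueue plays the role of BlockingCollection<T>, and interrupting the consumer stands in for CompleteAdding): one long-running consumer blocks on an empty queue instead of being repeatedly created and restarted by producers, which removes the race in StartProcessingQueue entirely.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class SingleConsumerSketch {
    static final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    // Start exactly one consumer up front; producers only ever enqueue.
    static Thread startConsumer() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    // take() blocks while the queue is empty: no spinning, and
                    // producers never need to check whether a worker is running.
                    queue.take().run();
                }
            } catch (InterruptedException e) {
                // interrupt is the shutdown signal (analogue of CompleteAdding)
            }
        });
        consumer.setDaemon(true);
        consumer.start();
        return consumer;
    }

    public static void main(String[] args) throws Exception {
        java.util.concurrent.atomic.AtomicInteger processed =
                new java.util.concurrent.atomic.AtomicInteger();
        Thread consumer = startConsumer();

        // Multiple producers can safely enqueue from any thread.
        for (int i = 0; i < 5; i++) queue.put(processed::incrementAndGet);

        while (processed.get() < 5) Thread.sleep(10); // wait for the drain
        consumer.interrupt();
        System.out.println("processed " + processed.get()); // prints "processed 5"
    }
}
```

The design point is the same as in the BlockingCollection answer: move the "is a worker running?" decision out of the producers and into a blocking dequeue operation.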