Is this the correct way to stop a function and restart it with new data? - kotlin

I have a function which does a bunch of searching and sets some fields. Now I want it to stop doing that and restart as soon as I change anything. Is this the correct way of doing that?
The plan is that no work is done whose result is no longer used, and that a slower update doesn't finish after (and overwrite) a more recent, faster one.
private var autoUpdateJob: Job = Job()

(...)

// Something happens (e.g. an onTextChangedListener in an EditText gets triggered)
launch {
    autoUpdateJob.cancelAndJoin()
    autoUpdateJob = launch { flight = autoValues() }
}
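For context, here is the same pattern as a self-contained sketch (the surrounding CoroutineScope, the flight field, and the body of autoValues() are assumptions based on the snippet above):

import kotlinx.coroutines.*

class FieldUpdater(private val scope: CoroutineScope) {
    private var autoUpdateJob: Job = Job()
    private var flight: String? = null

    // Called whenever the input changes, e.g. from an onTextChanged listener.
    fun restartUpdate() {
        scope.launch {
            autoUpdateJob.cancelAndJoin()      // stop the in-flight update
            autoUpdateJob = scope.launch {     // restart with the new data
                flight = autoValues()
            }
        }
    }

    private suspend fun autoValues(): String {
        delay(1_000) // stand-in for the slow searching/field-setting work
        return "computed value"
    }
}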

How to nicely init state from database in Jetpack Compose?

I have a Jetpack Compose (desktop) app with a database, and I want to show some UI based on data from the db:
val data = remember { mutableStateListOf<Dto>() }

Column {
    data.forEach { /* make UI */ }
}
My question is, at which point should I execute my database query to fill the list?
I could do
val data = remember { mutableStateListOf<Dto>() }
if (data.isEmpty()) data.addAll(database.queryDtos())
The isEmpty check is only there to prevent requerying on every recomposition, so this is obviously not the way to go.
Another option would be
val data = remember {
    val state = mutableStateListOf<Dto>()
    state.addAll(database.queryDtos())
    state
}
This way I can't reuse a database connection, since it's scoped inside the remember block. Also, queries should probably happen asynchronously, not inside this initializer.
So, how to do this nicely?
On Android the cleanest way is to use a ViewModel and run such code in its init block.
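Roughly like this (a sketch; Database and queryDtos() are the question's names, viewModelScope comes from androidx.lifecycle):

class DtoViewModel(private val database: Database) : ViewModel() {
    val data = mutableStateListOf<Dto>()

    init {
        // Runs once when the ViewModel is created and survives configuration changes.
        viewModelScope.launch {
            data.addAll(withContext(Dispatchers.IO) { database.queryDtos() })
        }
    }
}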
On Desktop it depends on the operation. The main benefit of this platform is that there's no such thing as an Android configuration change, so remember/LaunchedEffect blocks are not going to be re-created.
If the initialization code is not heavy, you can run it right inside remember.
val data = remember { database.queryDtos() }
In case you need to update the list later, add .toMutableStateList()
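That is (assuming queryDtos() returns a List):

val data = remember { database.queryDtos().toMutableStateList() }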
If it's something heavy, it's better to go for LaunchedEffect. It has the same lifecycle as remember: it runs the contained code only the first time the view appears:
val data = remember { mutableStateListOf<Dto>() }
LaunchedEffect(Unit) {
    data.addAll(database.queryDtos())
}
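Regarding the question's note that queries should probably happen asynchronously: LaunchedEffect already provides a coroutine scope, so a blocking query can be shifted off the UI thread with withContext. A minimal sketch, assuming database.queryDtos() is a blocking call and using kotlinx.coroutines:

val data = remember { mutableStateListOf<Dto>() }
LaunchedEffect(Unit) {
    // Run the (blocking) query on the IO dispatcher, then apply the result.
    val dtos = withContext(Dispatchers.IO) { database.queryDtos() }
    data.addAll(dtos)
}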

Custom command to go back in a process instance (execution)

I have a process with 3 sequential user tasks (something like Task 1 -> Task 2 -> Task 3). So, to validate Task 3, I have to validate Task 1, then Task 2.
My goal is to implement a workaround to go back in the execution of a process instance using a Command, as suggested in this link. The problem is that I started to implement the command but it does not work as I want. The algorithm should be something like:
Retrieve the task with the passed id
Get the process instance of this task
Get the historic tasks of the process instance
From the list of the historic tasks, deduce the previous one
Create a new task from the previous historic task
Make the execution point to this new task
Maybe clean up the task that was pointed to before the update
So, the code of my command is like that:
public class MoveTokenCmd implements Command<Void> {

    protected String fromTaskId = "20918";

    public MoveTokenCmd() {
    }

    public Void execute(CommandContext commandContext) {
        HistoricTaskInstanceEntity currentUserTaskEntity = commandContext.getHistoricTaskInstanceEntityManager()
                .findHistoricTaskInstanceById(fromTaskId);
        ExecutionEntity currentExecution = commandContext.getExecutionEntityManager()
                .findExecutionById(currentUserTaskEntity.getExecutionId());

        // Get the process instance
        HistoricProcessInstanceEntity historicProcessInstanceEntity = commandContext
                .getHistoricProcessInstanceEntityManager()
                .findHistoricProcessInstance(currentUserTaskEntity.getProcessInstanceId());

        // Get the historic tasks of the process instance
        HistoricTaskInstanceQueryImpl historicTaskInstanceQuery = new HistoricTaskInstanceQueryImpl();
        historicTaskInstanceQuery.processInstanceId(historicProcessInstanceEntity.getId()).orderByExecutionId().desc();
        List<HistoricTaskInstance> historicTaskInstances = commandContext.getHistoricTaskInstanceEntityManager()
                .findHistoricTaskInstancesByQueryCriteria(historicTaskInstanceQuery);

        // Find the current task in the list to deduce the previous one
        int index = 0;
        for (HistoricTaskInstance historicTaskInstance : historicTaskInstances) {
            if (historicTaskInstance.getId().equals(currentUserTaskEntity.getId())) {
                break;
            }
            index++;
        }

        if (index > 0) {
            HistoricTaskInstance previousTask = historicTaskInstances.get(index - 1);
            TaskEntity newTaskEntity = createTaskFromHistoricTask(previousTask, commandContext);
            currentExecution.addTask(newTaskEntity);
            commandContext.getTaskEntityManager().insert(newTaskEntity);
            AtomicOperation.TRANSITION_CREATE_SCOPE.execute(currentExecution);
        } else {
            // TODO: find the last task of the previous process instance
        }

        // To overcome the "Task cannot be deleted because is part of a running process"
        TaskEntity currentUserTask = commandContext.getTaskEntityManager().findTaskById(fromTaskId);
        if (currentUserTask != null) {
            currentUserTask.setExecutionId(null);
            commandContext.getTaskEntityManager().deleteTask(currentUserTask, "jumped to another task", true);
        }

        return null;
    }

    private TaskEntity createTaskFromHistoricTask(HistoricTaskInstance historicTaskInstance,
            CommandContext commandContext) {
        TaskEntity newTaskEntity = new TaskEntity();
        newTaskEntity.setProcessDefinitionId(historicTaskInstance.getProcessDefinitionId());
        newTaskEntity.setName(historicTaskInstance.getName());
        newTaskEntity.setTaskDefinitionKey(historicTaskInstance.getTaskDefinitionKey());
        newTaskEntity.setProcessInstanceId(historicTaskInstance.getProcessInstanceId());
        newTaskEntity.setExecutionId(historicTaskInstance.getExecutionId());
        return newTaskEntity;
    }
}
The problem is that I can see my task is created, but the execution does not point to it; it still points to the current one.
I had the idea of setting the activity on the execution (via an ActivityImpl object), but I don't know how to retrieve the activity for my new task.
Can someone help me, please?
Unless something has changed significantly in the engine, the code in the link you reference should still work (I have used it on a number of projects).
That said, when scanning your code I don't see the most important call.
Once you have the current execution, you can move the token by setting the current activity.
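In code, the core of the command from the referenced article looks roughly like this (a sketch against Activiti 5 internals; executionId and activityId are hypothetical parameters, so verify the exact API against your engine version):

public class SimpleMoveTokenCmd implements Command<Void> {

    protected String executionId;
    protected String activityId;

    public SimpleMoveTokenCmd(String executionId, String activityId) {
        this.executionId = executionId;
        this.activityId = activityId;
    }

    public Void execute(CommandContext commandContext) {
        ExecutionEntity execution = commandContext.getExecutionEntityManager()
                .findExecutionById(executionId);
        // The important call: resolve the target activity from the process
        // definition and point the execution at it before re-creating the scope.
        ActivityImpl activity = execution.getProcessDefinition().findActivity(activityId);
        execution.setActivity(activity);
        AtomicOperation.TRANSITION_CREATE_SCOPE.execute(execution);
        return null;
    }
}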
Like I said, the code in the referenced article used to work and still should.
Greg
Referring to the same link in your question, I would personally recommend working with the design of your process: use an exclusive gateway to decide whether the process should end or return to the previous task. If the task generation is dynamic, you can point back to the same task and delete the local variables. Activiti has constructs that save you the time of implementing this yourself. :)

How to limit JProfiler to a subtree

I have a method called com.acmesoftware.shared.AbstractDerrivedBean.getDerivedUniqueId(). When I profile the application with JProfiler, this method, getDerivedUniqueId(), is essentially buried 80 methods deep, as expected. The method is invoked on behalf of every bean in the application. I'm trying to record the CPU call tree starting at this method down to the leaf nodes (i.e., one of the excluded classes).
I tried the following but it didn't produce the expected outcome:
Find a method above the method targeted for profiling, e.g., markForDeletion().
Set a trigger to start recording at getDerivedUniqueId().
Set a trigger to STOP recording at markForDeletion().
I was expecting to only see everything below markForDeletion(), but I saw everything up to but not INCLUDING getDerivedUniqueId(), which is the opposite of my intended goal. Worse yet, even with 5ms sampling, this trigger increased the previous running time from 10 minutes to "I terminated after 3 hours of running". It seems the trigger is adding a giant amount of overhead on top of the overhead. Hence, even if I figure out how to correctly enable the trigger, the added overhead would seem to render it ineffective.
The reason I need to limit the recording to just this method is: When running in 5ms sampling mode, the application completes in 10 minutes. When I run it in full instrumentation, I've waited 3 hours and it still hasn't completed. Hence, I need to turn on full instrumentation ONLY after getDerivedUniqueId() is invoked and pause profiling when getDerivedUniqueId() is exited.
Update:
Thank you Ingo Kegel for your assistance.
I am likely not clear on how to use triggers. In the code below, I set the triggers as shown after the code. My expectation is that when I profile the application (both sampling and full instrumentation) with the triggers configured as below, if the boolean isCollectMetrics is false, I should see 100% or 99.9% of the CPU time in filtered classes. However, that is not the case. The CPU tree seems not to take the triggers into account.
Secondly, when isCollectMetrics is true, I expect the JProfiler call tree to start at startProfiling() and end at stopProfiling(). Again, this is not the case either.
The method contains() is the bottleneck. It eventually calls one of 150 getDerivedUniqueId() implementations. I am trying to pinpoint which getDerivedUniqueId() is causing the performance degradation.
public static final AtomicLong doEqualContentTime = new AtomicLong();
public static final AtomicLong instCount = new AtomicLong();

protected boolean contentsEqual(final InstanceSetValue that) {
    if (isCollectMetrics) {
        // initialization code removed for clarity
        // ..........
        // ..........
        final Set<Instance> c1 = getReferences();
        final Set<Instance> c2 = that.getReferences();
        long st = startProfiling(); /// <------- start here
        for (final Instance inst : c1) {
            instCount.incrementAndGet();
            if (!c2.contains(inst)) {
                long et = stopProfiling(); /// <------- stop here
                doEqualContentTime.addAndGet(et - st);
                return false;
            }
        }
        long et = stopProfiling(); /// <------- stop here
        doEqualContentTime.addAndGet(et - st);
        return true;
    } else {
        // same code path as above but without the profiling; code removed for brevity
        // ......
        // ......
        return true;
    }
}

public long startProfiling() {
    return System.nanoTime();
}

public long stopProfiling() {
    return System.nanoTime();
}

public static void reset() {
    doEqualContentTime.set(0);
    instCount.set(0);
}
The enabled triggers are a start trigger on startProfiling() and a stop trigger on stopProfiling().
I've tried the 'Start Recordings' and 'Record CPU' trigger actions separately to capture only the call tree.
If the overhead with instrumentation is large, you should refine your filters. With good filters, the instrumentation overhead can be very small.
As for the trigger setup, the correct actions are:
"Start recording" with CPU data selected
"Wait for the event to finish"
"Stop recording" with CPU data selected

Asynchronously start only one Task to process a static Queue, stopping when it's done

Basically I have a static custom queue of objects I want to process. From multiple threads, I need to kick off a single Task that will process the queued objects, stopping when all items are dequeued.
Some pseudo code:
static CustomQueue _customqueue;
static Task _processQueuedItems;

public static void EnqueueSomething(object something) {
    _customqueue.Enqueue(something);
    StartProcessingQueue();
}

static void StartProcessingQueue() {
    if (_processQueuedItems == null) {
        _processQueuedItems = new Task(() => {
            while (_customqueue.Any()) {
                var stuffToDequeue = _customqueue.Dequeue();
                /* do stuff */
            }
        });
        _processQueuedItems.Start();
    } else if (_processQueuedItems.Status != TaskStatus.Running) {
        _processQueuedItems.Start();
    }
}
If it makes a difference, my custom queue is a queue that essentially holds items until they reach a certain age, then allows them to dequeue. Every time an item is touched, its timer starts again. I know this piece works fine.
The part I'm struggling with is the parallelism. (Clearly, I don't know what I'm doing here.) What I want is one thread that processes the queue until it's complete, then goes away. If another call comes in, it shouldn't start a new thread unless it has to.
I hope that explains my issue okay.
You might want to consider using BlockingCollection<T> here. You could make your custom queue implement IProducerConsumerCollection<T>, in which case BlockingCollection could use it directly.
You'd then just need to start a long-running Task that calls blockingCollection.GetConsumingEnumerable() and processes the items in a foreach. The task will automatically block when the collection is empty, and resume when a new item is added.
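A minimal sketch of that approach (using a plain BlockingCollection<object> rather than the custom queue, just to show the shape):

using System.Collections.Concurrent;
using System.Threading.Tasks;

static class QueueProcessor
{
    static readonly BlockingCollection<object> _items = new BlockingCollection<object>();

    // One long-running consumer: GetConsumingEnumerable() blocks while the
    // collection is empty and resumes as soon as an item is added.
    static readonly Task _consumer = Task.Factory.StartNew(() =>
    {
        foreach (var item in _items.GetConsumingEnumerable())
        {
            /* do stuff */
        }
    }, TaskCreationOptions.LongRunning);

    public static void EnqueueSomething(object something)
    {
        _items.Add(something);
    }
}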

Dojo: Is there an event after drag & drop finished

I've got two dojo.dnd.Sources with items. Whenever an item is dropped, I need to persist the new order of the items in the Sources using an XHR.
Is there a Dojo event or topic that is fired after a dnd operation has (successfully) finished? What would be the best way to use it?
Probably I don't understand the problem in all its details, but I don't see why you need to process events or topics. The best way to record changes is to intercept the updating methods on the relevant sources. Specifically, you need to intercept insertNodes() for drops or any other additions.
Simple example (pseudo-code):
var source1, source2;
// ...
// initialize sources
// populate sources
// ...

function getAllItems(source){
    var items = source.getAllNodes().map(function(node){
        return source.getItem(node.id);
    });
    return items;
}

function dumpSource(source){
    var items = getAllItems(source);
    // XHR items here to your server
}

function recordChanges(){
    // now we know that some change has occurred
    // it could be a drop or some programmatic updates
    // we don't really care
    dumpSource(source1);
    dumpSource(source2);
}

dojo.connect(source1, "insertNodes", recordChanges);
dojo.connect(source2, "insertNodes", recordChanges);
// now any drop or other change will trigger recordChanges()
// after the change has occurred.
You can try to be smart about that and send some diff information instead of a whole list, but it is up to you to generate it — you have everything you need for that.
You can use dojo.subscribe to do something when a drop is finished like so:
dojo.subscribe("/dnd/drop", function(source, nodes, copy, target) {
    // do your magic here
});
There are examples of using subscribe on the Dojo Toolkit tests site, and more info about Dojo publish and subscribe as well.
Alternately, you could connect to the onDndDrop method.
var source = new dojo.dnd.Source( ... );
dojo.connect( source, "onDndDrop", function( source, nodes, copy, target ) {
    // make magic happen here
});
Methods hooked up with connect are called at the end, so the items will be there at that point.
I'm keeping this note for the Dojo Tree folks who, just like me, run into this problem. The solutions given here did not work well in my situation. I was using dijit.tree.dndSource with a Dojo Tree, and subscribing to "/dnd/drop" let me capture the event, but at that point my underlying data store had not yet been updated with the latest changes. So I tried waiting, as Wienczny explains; that doesn't solve the problem completely either, as I can't rely on a timeout to do the waiting job: the time taken for the store update can vary, i.e., be shorter or much longer depending on how complex your data structure is. I found the solution in overriding the onDndDrop method of the dndController: simply specify onDndDrop on your tree initialization. One thing I found odd, though: you cannot hitch() this method, or you will get weird behavior during dnd.
Tree:
this._tree = new MapConfigTree({
    checkAcceptance: this.dndAccept,
    onDndDrop: this.onDndDrop,
    betweenThreshold: 5,
    // ...
});

Method:
onDndDrop : function(source, nodes, copy, target){
    if(source.dropPosition === 'Over' && (target.targetAnchor.item.type[0] == 'Test layer')) {
        this.inherited(arguments);
        // do your bit here
    } else {
        this.onDndCancel();
    }
}