How to handle the compensation event in BPMN?

With a compensation event, can the process continue on its course after the compensation is thrown, or must it wait for the two compensation handlers to complete?
In the figure below, after Task 1, is Task 3 executed without waiting for the two compensation handlers 1.1 and 2.2?
Or, after Compensation 1 is thrown, is Task 3 executed and the two compensation handlers 1.1 and 2.2 cancelled?
Or, after Compensation 1 is thrown, is Task 3 not executed until the two compensation handlers 1.1 and 2.2 have run first, i.e. Task 3 waits and only executes after 1.1 and 2.2 have completed?
[figure: BPMN diagram omitted]

Related

Which BPMN gateway do I have to use?

I'm building a BPMN diagram in which a user has to fill in 6 forms (perform 6 tasks). After he has completed all 6 he should get some results, but if any of those tasks is missing, then we do not get the results.
Which gateway should I use? The one I thought suited best was the inclusive gateway, but all 6 tasks can be completed in any order.
Should I use a complex gateway and just describe the process? Or does the parallel gateway work just fine?
If exactly 6 tasks have to be performed in any desired order, and the flow must not continue before all 6 are completed, the simplest solution is a parallel gateway:
As soon as a token arrives at the first gateway, a token is sent on every outgoing flow, which activates each of the tasks. The gateway on the left will generate a token to continue the flow once every task has completed and passed its token to the gateway.
The complex gateway could also address your needs, since it allows complex conditions on the incoming and outgoing flows, including conditions involving multiple tokens on multiple branches. This would be necessary, for example, if 5 out of 6 tasks had to be completed, or if you only wanted certain combinations of completed tasks to continue the flow. But it seems overkill for your problem.
The inclusive gateway is not a good fit for your needs, since it activates only those outgoing branches whose conditions evaluate to true, which adds unnecessary conditions when all six branches must always be taken.
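The parallel join rule described above (the outgoing flow continues only once a token has arrived on every incoming flow) can be sketched in Python. This is just an illustration of the token semantics, not BPMN itself; the `ParallelJoin` class and the `form_i` names are invented for the example:

```python
import threading

class ParallelJoin:
    """Toy model of a BPMN parallel-gateway join: the outgoing token is
    produced only after a token has arrived on every incoming flow."""
    def __init__(self, incoming_flows):
        self._pending = set(incoming_flows)
        self._lock = threading.Lock()
        self._all_arrived = threading.Event()

    def token_arrived(self, flow):
        with self._lock:
            self._pending.discard(flow)
            if not self._pending:
                self._all_arrived.set()  # join fires: flow may continue

    def wait_for_join(self, timeout=None):
        return self._all_arrived.wait(timeout)

# The fork side simply activates all six tasks; the join waits for all of them.
join = ParallelJoin(incoming_flows=[f"form_{i}" for i in range(1, 7)])

def do_task(name):
    # ... user fills in the form, in any order ...
    join.token_arrived(name)

threads = [threading.Thread(target=do_task, args=(f"form_{i}",)) for i in range(1, 7)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert join.wait_for_join(timeout=1)  # continues only after all 6 tokens arrived
```

Until the last of the six tokens arrives, `wait_for_join` blocks, which is exactly why the flow cannot continue while any form is still missing.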

.net core multiple dynamic consumers of a BlockingCollection

I am looking for a reliable and easy pattern for consuming (executing) background tasks in parallel in .NET Core.
I found this answer: https://stackoverflow.com/a/49814520/1448545, but the problem is that there is always a single consumer of the tasks.
What if there is 1 new task to perform every 100 ms, while each task takes 500 ms to complete (e.g. a long-running API call)? In that case the tasks will pile up.
How can I make it dynamic, so that if there are more items in BlockingCollection<TaskSettings> _tasks, .NET Core creates more task executors (consumers) dynamically?
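One common pattern for this mismatch (tasks arriving every 100 ms, each taking 500 ms) is a fixed pool of several consumers sharing one queue; in .NET that would mean starting several consumer loops over the same `BlockingCollection.GetConsumingEnumerable()`. The idea is language-agnostic, so here is a sketch in Python; the 5-consumer count and all names are illustrative, not from the linked answer:

```python
import queue
import threading
import time

def consume(tasks: "queue.Queue", results: list):
    # Each consumer loops until it sees the sentinel, which plays the
    # role of BlockingCollection.CompleteAdding() in .NET.
    while True:
        item = tasks.get()
        if item is None:          # sentinel: no more work
            tasks.task_done()
            return
        time.sleep(0.01)          # stand-in for the 500 ms API call
        results.append(item)
        tasks.task_done()

tasks: "queue.Queue" = queue.Queue()
results: list = []

# Five consumers instead of one: with a task arriving every 100 ms and
# each taking 500 ms, at least 5 consumers are needed to keep up.
consumers = [threading.Thread(target=consume, args=(tasks, results)) for _ in range(5)]
for c in consumers:
    c.start()

for i in range(20):
    tasks.put(i)
for _ in consumers:
    tasks.put(None)               # one sentinel per consumer
for c in consumers:
    c.join()

assert sorted(results) == list(range(20))
```

The consumer count is sized from the arrival rate and service time (500 ms / 100 ms = 5); a truly dynamic pool would grow and shrink around that number.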

JMeter How to run two parallel thread sets consecutively

I need to create a JMeter test plan with 5 Thread Groups.
This is how I want the test to run:
Thread Groups 1 and 2 start in parallel (but Thread Group 1 runs only once, while Thread Group 2 runs until it gets a success).
Once Thread Group 2 finishes running, Thread Groups 3 and 4 should start in parallel (but Thread Group 3 runs only once, while Thread Group 4 runs until it gets a success).
Once Thread Group 4 finishes, Thread Group 5 needs to start.
I would really appreciate it if you could guide me on how to achieve this.
Thanks in advance.
If you don't need to pass anything between the Thread Groups, the easiest approach is to put all the requests under one Thread Group and control the concurrency using the Parallel Controller.
If you need to pass something between Thread Groups, i.e. Thread Group 3 requires some data from Thread Group 2, consider using the Inter-Thread Communication Plugin.
Both plugins can be installed using the JMeter Plugins Manager.

Why is the send rate lower than the rate configured in config.yaml (Hyperledger Caliper), even when using only one client?

I configured the send rate at 500 TPS and I am using only one client, so the send rate should be around 500 TPS, but in the generated report the send rate is around 130-140 TPS. Why is there so much deviation?
I am using the Fabric CCP version of Caliper.
I expected a send rate around 450-480 TPS, but the actual send rate is around 130-140 TPS.
Node.js is a single-threaded framework (async/await just means deferred execution, not parallel execution). Caliper runs a loop with the following steps:
Wait for the rate controller to enable the next TX.
Create an async operation in which the user module calls the blockchain adapter.
All of the pending TXs eat up some CPU time (when not waiting for I/O), and other operations are also scheduled (such as sending updates about TXs to the master process).
To reach 500 TPS, the rate controller must enable a TX every 2 ms. That's not a lot of time. Try spawning more than one local client so the load is shared among them (100 TPS per client for 5 clients, 50 TPS per client for 10 clients, etc.).
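The arithmetic in that answer can be spelled out as two small helper functions (illustrative only, not Caliper's API):

```python
def inter_tx_interval_ms(tps: int) -> float:
    # At a target rate of `tps` transactions per second, the rate
    # controller must release one TX every 1000/tps milliseconds.
    return 1000.0 / tps

def per_client_tps(total_tps: int, clients: int) -> float:
    # Spreading the load over several local client processes divides
    # the per-process scheduling pressure accordingly.
    return total_tps / clients

assert inter_tx_interval_ms(500) == 2.0   # one TX every 2 ms at 500 TPS
assert per_client_tps(500, 5) == 100.0    # 100 TPS per client with 5 clients
assert per_client_tps(500, 10) == 50.0    # 50 TPS per client with 10 clients
```

With a 2 ms budget per TX on a single-threaded event loop, any per-TX CPU work quickly eats the whole budget, which is why the observed rate drops well below the configured one.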

VxWorks signals

I have a question regarding a previous question asked in the VxWorks forum.
My goal is that when the high-priority task generates a signal, the low-priority task handles it immediately (the high-priority task must be preempted).
The code is:
void sig_hdr(int sig) { ... }
void task_low_priority(void) {
    ...
    /* Install signal handler for SIGUSR1 */
    signal(SIGUSR1, sig_hdr);
    ...
}
void task_high_priority(void) {
    ...
    kill(pid, SIGUSR1); /* pid is the ID of task_low_priority */
    ...
}
After the line
signal(SIGUSR1, sig_hdr);
I added
taskDelay(0);
I wanted to block the high-priority task so the low-priority task could gain the CPU and execute the signal handler, but that does not happen unless I use taskDelay(1).
Can anyone explain why it does not work with taskDelay(0)?
Indeed, taskDelay(0) will not let lower-priority tasks run, because of the following:
The high-priority task is executing.
The high-priority task issues taskDelay(0).
The scheduler is invoked and scans for the next task to run; it will select the highest-priority task that is ready.
The task that issued taskDelay(0) is ready, because the delay has already expired (i.e. 0 ticks have elapsed).
So the high-priority task is rescheduled immediately; in this case taskDelay(0) is effectively a waste of CPU cycles.
Now, in the case where you issue taskDelay(1), the same steps are followed, but the difference is that the high-priority task is not in the ready state, because one tick has not yet elapsed. A lower-priority task that is ready can therefore get one tick of CPU time, after which it will be preempted by the high-priority task.
Now there are some poorly designed systems out there that do things like:
taskLock();
...
taskDelay(0);
...
taskUnlock();
With the intention of having a low-priority task hog the CPU until some point where it then allows a high-priority task to take over by issuing taskDelay(0). However, if you play games like this, you should reconsider your design.
Also, in your case I would consider a more robust design. Rather than using a taskDelay() to let a low-priority task process an event, send a message to the low-priority task and have it process a message queue, while your high-priority task blocks on a semaphore that is given by your event handler or something similar. As it stands, you are hoping to force a ping-pong between two different tasks to get a job done; if you add a queue, it will act as a buffer, so as long as your system is schedulable (i.e. there is enough time to respond to all events, queue them up, and fully process them) it will work.
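The queue-based design suggested above can be sketched language-agnostically. The sketch below is Python; in VxWorks the queue would be a real message queue (msgQCreate()/msgQSend()/msgQReceive()), and the low-priority task blocking on it replaces the signal-plus-taskDelay dance. All names here are invented for illustration:

```python
import queue
import threading

events: "queue.Queue" = queue.Queue()   # stands in for a VxWorks msgQ
processed = []

def low_priority_worker():
    # Equivalent of a task pending on msgQReceive(): it blocks until the
    # high-priority producer queues an event, then processes it.
    while True:
        event = events.get()
        if event is None:               # sentinel: shut down
            return
        processed.append(f"handled:{event}")

def high_priority_producer():
    # Instead of signalling and forcing a context switch with taskDelay(),
    # just enqueue the event (msgQSend) and carry on; the queue buffers
    # bursts, so neither task has to ping-pong with the other.
    for i in range(3):
        events.put(i)
    events.put(None)

worker = threading.Thread(target=low_priority_worker)
worker.start()
high_priority_producer()
worker.join()

assert processed == ["handled:0", "handled:1", "handled:2"]
```

The producer never has to yield explicitly: once it eventually blocks (on a semaphore, on I/O, or simply by finishing), the low-priority consumer drains whatever has accumulated in the queue.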
Update
I assume your system is supposed to be something like this:
Event occurs (interrupt driven?).
High priority task runs to gather data.
Data is processed by low priority task.
If this is the case the pattern you want to follow is actually quite simple, and in fact could be accomplished with just 1 task:
The interrupt handler gathers data and sends a message (msgQSend()) to the task.
The task pends on the message queue with msgQReceive().
But it might help if I knew more about your system (what you are really trying to do), and also why you are using POSIX calls rather than native VxWorks calls.
If you are new to real-time systems, you should learn about rate-monotonic analysis; there is a brief summary on Wikipedia:
http://en.wikipedia.org/wiki/Rate-monotonic_scheduling
Also note that in VxWorks a "high" priority is 0 and a "low" priority is 255: the actual numbers are inversely related to their meaning :D
This is exactly the point I don't understand: how will the low-priority task get any CPU time while the high-priority task is running?
The high-priority task will continue to run until it gets blocked. Once it is blocked, the lower-priority tasks that are ready to run will run.
My answer has 2 parts:
1. How to use taskDelay correctly with VxWorks.
2. taskDelay is not the correct solution for your problem.
First part:
taskDelay in VxWorks can be confusing:
taskDelay(0) – performs no delay at all!!!
It is a request to the scheduler to remove the current task from the CPU. If it is still the highest-priority ready task in the system, it returns to the head of the queue with no delay at all. You would use this call when the scheduler is configured for FIFO among tasks of the same priority and your task runs a CPU-intensive real-time function: it can then try to release the CPU to other tasks of the same priority (being nice).
BTW, it is the same as taskDelay(NO_WAIT).
taskDelay(1) – delays the calling task for somewhere between zero (!!!) and 1 system tick; delays in VxWorks always end on a system-tick boundary.
taskDelay(2) – somewhere between 1 and 2 system ticks.
taskDelay(3) – ... (and so on).
taskDelay(-1) (a.k.a. taskDelay(WAIT_FOREVER)) – delays the task forever (not recommended).
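Those bounds (a delay of n ticks actually lasts between n-1 and n ticks, because the wake-up lands on a tick boundary) can be written as a toy model; this is purely illustrative, not a VxWorks API:

```python
def task_delay_bounds(ticks: int) -> tuple:
    # taskDelay(n) wakes the task on a system-tick boundary, so for n >= 1
    # the actual delay lies somewhere between (n - 1) and n ticks;
    # taskDelay(0) performs no delay at all.
    if ticks == 0:
        return (0, 0)
    return (ticks - 1, ticks)

assert task_delay_bounds(0) == (0, 0)   # no delay at all
assert task_delay_bounds(1) == (0, 1)   # may return almost immediately!
assert task_delay_bounds(2) == (1, 2)
```

The (0, 1) case is why taskDelay(1) can still wake the caller almost immediately if it is issued just before a tick boundary.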
Second part:
Using taskDelay to let the low-priority task run is probably the wrong approach. You didn't provide all of the details of the problem, but please note that delaying the high-priority task does not ensure your low-priority task will run, regardless of the sleep time you choose: other tasks with priorities between your high- and low-priority tasks might run for the whole 'sleep' period.
There are several synchronization mechanisms in VxWorks, such as binary semaphores, changing task priority, and signals.