What is the execution order for this BPD?

I have the following diagram:
Could you please help me understand why:
if C1 and C2 are true, the second inclusive gateway will first receive the output from Task 3/Task 4;
if C1 is true and C2 is false, the second inclusive gateway will receive the output from Task 3 before the output from Task 2?
What I don't understand is the execution order. How do we know which activity finishes first, given the above info?

Based on the diagram there is no way to predict which activity will be finished first.
What we know is that "Task 2" will be executed, and additionally "Task 3" and/or "Task 4" might be executed in parallel with "Task 2".
The second inclusive gateway will receive the tokens from the previously activated tasks in no particular order; it all depends on how long each task takes to execute.

Here is what the Camunda docs say about gateways:
Gateways control token flow in a process. They allow modeling
decisions based on data and events as well as fork / join concurrency.
So, complementing Antoine Mottier's answer, I would even say that the diagram deliberately specifies no particular execution order; it only shows that Task 2 and Task 3 and/or Task 4 all run in parallel.
Also refer to their documentation of parallel gateways and inclusive gateways.

AnyLogic: how to compare parameter and variable value within statechart?

In my AnyLogic model I have a hub which can store 5 containers, so it has a capacity parameter with value 5. I have also given it a variable, numberOfContainers, holding the number of containers stored at the hub at that moment. When I run the model I see that the variable works (it changes over time to the number of containers stored at that moment).
Now I want another agent in my model to make a decision based on whether the capacity of the hub is reached at that moment (within its statechart). I tried to create a branch with the following condition:
main.hub.numberOfContainers > main.hub.capacity
but it doesn't work: the statechart acts as if the capacity is never reached, even when the number of containers is much higher than the capacity. Does anybody know how to make this work?
Typically, condition branches are tricky because the condition may not be evaluated at the time you want it to be. Here is an example:
At time n there are 3 containers in the hub
At time n+1 there are 10 containers in the hub
At time n+2 there are 2 containers in the hub
The model may have missed evaluating the condition at time (n+1) which is why your transition would not be triggered.
To address this issue, I have 3 possible suggestions:
Do not use a condition transition. Instead, use a message (see the sketch after this list). For example, if you are storing the containers in a queue, then, in the "On enter" and "On exit" fields of the queue, add:
if (queue.size() >= main.hub.capacity)
    <send msg to the statechart>
Use a cyclic event to check whether the condition is met every second, millisecond, or whatever time period makes sense to you. When the condition is met, send a message to trigger the transition. The drawback of this method is that frequent checks may slow down your model's performance.
Use the onChange() function. This function signals to your model that a change happened and that condition triggers need to be re-evaluated. So you need to make sure to call onChange() whenever a change occurs that might cause the condition to become true. In the example provided under option 1 above, that would be in the "On enter" and "On exit" fields of the queue.
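For illustration, here is a rough sketch of what options 1 and 3 could look like in the queue's "On enter" and "On exit" action fields. The names queue, deciderAgent, and hubStatechart are assumptions standing in for your model's own blocks:
// "On enter" / "On exit" action of the Queue block holding the containers
if (queue.size() >= main.hub.capacity) {
    // option 1: fire a message-triggered transition in the other agent's statechart
    main.deciderAgent.hubStatechart.fireEvent("hubFull");
}
// option 3: tell the engine to re-evaluate condition-triggered transitions
// in the agent that owns the statechart
main.deciderAgent.onChange();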

Purpose of setting the loop count

What is the purpose of setting the loop count? Does it just depend on how many times I want to run the test, or does testing with different loop counts serve another purpose? Will it affect the final test result?
"If you give loop count as 2 then every request two times to the server"
I found this online, but I don't understand what it means.
Based on my understanding, the loop count is set to 2 because I want to repeat the test twice only. After the first test ends, the threads from the first round die before the second test starts, and then a new thread group sends the requests to the server. So why "every request two times to the server"?
The loop count means that each thread in your thread group will run the steps inside the loop twice if the iteration count is set to 2.
Threads start based on the delay and ramp-up settings; that is not related to the loop count.
If your server has a concurrent-user limit, for example 100, and you want to execute more requests, say 600, you can set the loop count to 6 and execute 600 requests while staying within the server's limit.
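As a quick sanity check of that arithmetic, assuming a single sampler in the thread group:
100 threads × 6 loops × 1 sampler = 600 requests in total, with at most 100 running concurrently.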
It's the number of times each JMeter thread (virtual user) executes the Samplers inside the Thread Group.
Each JMeter thread executes Samplers from top to bottom (or according to the Logic Controllers); when there are no more Samplers to execute, the thread shuts down. It might be the case that you won't be able to achieve the desired concurrency because some threads have already finished execution while others haven't started yet, as described in JMeter Test Results: Why the Actual Users Number is Lower than Expected. So you might want to increase the number of iterations, or even set the loop count to "Infinite" and control the test duration using the "Duration" section of the Thread Group or a Runtime Controller.

Collect statistics on current traffic with Bro

I want to collect statistics on traffic every 10 seconds, and the only tool that I found is the connection_state_remove event:
event connection_state_remove(c: connection)
{
    SumStats::observe("traffic", [$str="all"], [$num=c$orig$num_bytes_ip]);
}
How do I deal with connections that have not been removed by the end of the period? How do I get statistics from them?
The events you're processing are independent of the time interval at which the SumStats framework reports statistics. First, you need to define exactly which statistics you care about; for example, you may want to count the number of connections for which Bro completes processing in a given time interval. Second, you need to define the time interval (in your case, 10 seconds) and how to process the statistical observations in the SumStats framework. This latter part is missing from your snippet: you're only making an observation, not telling the framework what to do with it.
The examples in the SumStats documentation are very close to what you're looking for.
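For illustration, here is a rough sketch of that missing part, modeled on those examples; the stream name "traffic" matches your observe() call, while the stats name and the printed output are assumptions:
event bro_init()
{
    # sum the observed byte counts for each key
    local r = SumStats::Reducer($stream="traffic", $apply=set(SumStats::SUM));
    SumStats::create([$name="traffic-per-10s",
                      $epoch=10sec,    # report every 10 seconds
                      $reducers=set(r),
                      $epoch_result(ts: time, key: SumStats::Key, result: SumStats::Result) =
                          {
                          print fmt("%s: %.0f bytes", key$str, result["traffic"]$sum);
                          }]);
}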

Understanding Domain Class in Project Job Scheduling

I am new to OptaPlanner, and right now I am focusing on understanding the project job scheduling example. I am trying to run this example using the sample data from the OptaPlanner manual, as in the picture below:
I have some questions about the domain classes in this example:
What is the difference between GlobalResource and LocalResource? In the example, all the resources are GlobalResources, right? Then what is the use of LocalResource?
There are 3 JobTypes: SOURCE, STANDARD, SINK. What is the meaning of each one? Does SOURCE mean the job should be the first to start, before the others? Does STANDARD mean it should run after its predecessor jobs finish, but not after the SINK job? Does SINK mean it is the last job, run after all other jobs have finished?
What is the meaning of the releaseDate and criticalPathDuration properties in the Project class? Relating them to the picture above, what are the values for projects Book1 and Book2?
What is the meaning of requirement in ResourceRequirement?
I would be really thankful if someone could help me create XML sample data like that in the OptaPlanner distribution, as it would help me understand this example faster. Thanks & Regards.
A LocalResource belongs to a specific Project, a GlobalResource is shared between the projects.
So a LocalResource only has to worry about being used by other jobs in the same Project, while a GlobalResource has to worry about tasks from all projects.
That's an implementation trick. The source and sink jobs are basically dummies. Because a project might start with multiple jobs in parallel, a SOURCE job is put in front of them to provide a single root. Same for the end: a project can finish with multiple jobs, so a SINK job is put after them to provide a single tail. This makes it easier and faster to determine the makespan etc.
IIRC, releaseDate is the first date we are allowed to start the first job. For example: you have to create a book, but you'll only get the actual final content next Monday, so the releaseDate is next Monday (you can't start any work before that date).
The criticalPathDuration is a theoretical minimum duration (if we can happily ignore resources, IIRC). For example: if job A takes 5 days and job B takes 2 days and B has to be done AFTER A, then the critical path duration is 7 days. Adding a job C that takes 1 day and can be done in parallel with the others doesn't affect that.
ResourceRequirement is the many-to-many relationship between ExecutionMode and Resource. Remember that ExecutionMode belongs to a specific Job. For example: doing job A in executionMode A1 requires 1 laborer and 5 days; doing job A in executionMode A2 requires 2 laborers and 3 days.
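As a rough sketch of that relationship (the field names are assumed from the description above, not copied from the example's source code):
// hypothetical outline of the many-to-many relationship described above
class ResourceRequirement {
    ExecutionMode executionMode; // one way of doing a Job, e.g. A1: 1 laborer, 5 days
    Resource resource;           // e.g. a "laborer" resource, global or local
    int requirement;             // how many units of the resource this mode consumes
}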

How do I do the Delayed::Job equivalent of Process#waitall?

I have a large task that proceeds in several major steps: Step A must complete before Step B can be started, etc. But each major step can be divided up across multiple processes, in my case, using Delayed::Job.
The question: Is there a simple technique for starting Step B only after all the processes have completed working on Step A?
Note 1: I don't know a priori how many external workers have been spun up, so keeping a reference count of completed workers won't help.
Note 2: I'd prefer not to create a worker whose sole job is to busy wait for the other jobs to complete. Heroku workers cost money!
Note 3: I've considered having each worker examine the Delayed::Job queue in the after callback to decide if it's the last one working on Step A, in which case it could initiate Step B. This could work, but seems potentially fraught with gotchas. (In the absence of better answers, this is the approach I'm going with.)
I think it really depends on the specifics of what you are doing, but you could set priority levels such that any jobs from Step A run first. Depending on the specifics, that might be enough. From the GitHub page:
By default all jobs are scheduled with priority = 0, which is top
priority. You can change this by setting
Delayed::Worker.default_priority to something else. Lower numbers have
higher priority.
So if you set Step A to run at priority = 0, and Step B to run at priority = 100, nothing in Step B will run until Step A is complete.
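A minimal sketch of that setup, assuming the standard Delayed::Job enqueue API (StepAJob and StepBJob are placeholder job classes):
# Step A jobs are picked up first; Step B jobs only run once no
# lower-numbered (higher-priority) jobs remain in the queue.
Delayed::Job.enqueue(StepAJob.new, priority: 0)
Delayed::Job.enqueue(StepBJob.new, priority: 100)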
There are some cases where this will be problematic; in particular, if you have a lot of jobs and are running a lot of workers, you will probably have some workers running Step B before the work in Step A is finished. Ideally in this setup, Step B includes some sort of check to determine whether it can actually run yet.