I have a process containing a sub-process. The sub-process can end either normally (with an Untyped End Event) or through a Cancel End Event.
In the latter case I wanted to use an Interrupting Cancel Boundary Event to indicate the next action taken in that case. I can't find a way to do that, though. I can add other types of Boundary Events (both Interrupting and Non-Interrupting), but Cancel is not on the list.
This is a simplification of my process, with Escalation Events used in place of Cancel:
My situation is essentially similar to one described here.
I'm using Bizagi Modeler.
Am I overlooking something, or is it a limitation of Bizagi?
According to the BPMN standard, Cancel Intermediate Events can only be attached to the boundary of a Transaction Sub-Process. It looks like you have used a reusable Sub-Process in your model. Right-click on the sub-process shape and select "Is transaction". Afterwards you can right-click on it again and attach the cancel event.
You will notice that the Transaction Sub-Process has a double-lined boundary.
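In the underlying BPMN 2.0 XML, the same pairing might look roughly like this (a hedged sketch; all ids, names, and the target task are invented for illustration — a Transaction is its own element, distinct from an ordinary subProcess):

```xml
<!-- Illustrative sketch only; ids and names are made up -->
<transaction id="bookTrip" name="Book Trip">
  <!-- inside the transaction, a cancel end event triggers the cancellation -->
  <endEvent id="cancelBooking">
    <cancelEventDefinition/>
  </endEvent>
</transaction>

<!-- the catching side: a cancel boundary event is only valid when
     attachedToRef points at a <transaction>, not a plain <subProcess> -->
<boundaryEvent id="onCancel" attachedToRef="bookTrip">
  <cancelEventDefinition/>
</boundaryEvent>
<sequenceFlow id="flow1" sourceRef="onCancel" targetRef="handleCancellation"/>
```

This is why Bizagi only offers the cancel event on the boundary once the shape is marked as a transaction.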
I’m currently modeling a process with two exception statuses (a patient dies, and no neurologist is found).
If no Neurologist is found (this can only happen once in my process), the process stops.
Another exception status is triggered when a patient dies at any point during the process. If this exception status occurs, the process stops.
I’m having difficulty modeling these exception statuses. Attached you can find my current attempt; I’m not 100% sure it is correct.
Example of my attempt
Terminating events are rarely needed. There are usually more elegant, clearer solutions than this 'kill-all switch'. Their purpose is to terminate any parallel activities / consume any tokens which exist in the same scope. The same can usually be achieved with interrupting (e.g. conditional) boundary events, which get triggered e.g. by a data change. A boundary event makes it clearly visible in the process where a cancellation can occur and under which circumstances, and allows ending a process in a more controlled manner.
In your particular use case (the diagram you attached) you don't need terminating events at all. You are using two interrupting boundary events (escalation and error) on a scope created by the embedded sub-process. The scope of the embedded sub-process is already terminated when these interrupting events occur. A subsequent terminating event in the parent process' scope would cancel everything in that scope. In your case the parent scope is the root process instance, but since there is no token flow parallel to the embedded sub-process, there is nothing to cancel.
Also see:
https://docs.camunda.org/manual/latest/reference/bpmn20/events/terminate-event/
https://docs.camunda.org/manual/latest/reference/bpmn20/events/error-events/#error-boundary-event
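The pattern described above could be sketched in BPMN 2.0 XML roughly like this (ids and names are invented; the matching top-level error declaration is omitted). The point is that the interrupting boundary event already kills the embedded sub-process's scope, so a plain end event suffices afterwards:

```xml
<!-- Hedged sketch; ids are made up, error declaration omitted -->
<process id="treatPatient">
  <subProcess id="treatment" name="Treatment">
    <!-- the exception ends the sub-process with an error end event -->
    <endEvent id="patientDied">
      <errorEventDefinition errorRef="deathError"/>
    </endEvent>
  </subProcess>
  <!-- interrupting by default; terminates the sub-process scope -->
  <boundaryEvent id="onDeath" attachedToRef="treatment">
    <errorEventDefinition errorRef="deathError"/>
  </boundaryEvent>
  <!-- no terminate event needed: nothing else runs in the parent scope -->
  <sequenceFlow id="toEnd" sourceRef="onDeath" targetRef="endAfterDeath"/>
  <endEvent id="endAfterDeath"/>
</process>
```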
How do I model running multiple tasks/branches in parallel and waiting for just the first one to finish? The other (still running) branches should then be cancelled. To illustrate what I'm asking (what should I use instead of the X gateway?):
As far as I know, the exclusive gateway's join function is to proceed immediately. It neither stops/cancels the other branches, nor does it stop further executions of the output (so multiple tokens can pass through it).
Is this the answer?
Or perhaps this is even better?
I would do the following:
Starting off from your third diagram, wrap the tasks ‘a’ and ‘b’ inside your subprocess into another transaction subprocess (but still inside the bigger subprocess that you had already used).
At the boundary of this new sub-process, as well as the boundary of task ‘c’, add interrupting boundary signal events that lead to a None end event.
After tasks ‘b’ and ‘c’, add a signal end event. Each of these two signal end events should be caught by the interrupting boundary signal event of the other subprocess or task that you want to stop. So, if task ‘c’ is completed, the signal that is thrown right after it should be caught by the boundary event on the transaction subprocess of tasks ‘a’ and ‘b’. The signal end event after ‘b’ should be caught by the boundary event on task ‘c’.
After the bigger subprocess, which contains task ‘c’ as well as the inner subprocess for ‘a’ and ‘b’, you continue just like in your third diagram, with a merging exclusive gateway and the “Do once” task. I would keep the timer boundary event on the bigger subprocess like you did in your third diagram.
Here is how this would look like:
However, you could also draw a simpler diagram with an additional exclusive gateway before the "Do only once" activity that filters out all remaining process instances if that activity has already been carried out. Your diagram would be easier to understand, but the process would be slightly different from your requirements: you would allow a situation where activity b is carried out even though activity c has already been completed. So, instead of cancelling one process instance you would ignore it. Depending on your business context, this might have certain implications.
A third option would be to use a terminate end event instead of a none end event. That way, all remaining process instances will be deleted as soon as the first one reaches the end. However, semantically that might not be the most elegant solution, because a termination is intended to signal that your process has finished abnormally.
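The mutual "first one wins" signal pairing from the first suggestion could be sketched in BPMN 2.0 XML like this (all ids, signal names, and element references are invented; surrounding sub-process structure omitted):

```xml
<!-- Hedged sketch; ids/names invented, surrounding structure omitted -->
<signal id="sigBDone" name="b_completed"/>
<signal id="sigCDone" name="c_completed"/>

<!-- branch 1: after task b, throw a signal on completion -->
<endEvent id="afterB">
  <signalEventDefinition signalRef="sigBDone"/>
</endEvent>

<!-- branch 2: after task c, throw the other signal -->
<endEvent id="afterC">
  <signalEventDefinition signalRef="sigCDone"/>
</endEvent>

<!-- each branch is cancelled by the other branch's signal -->
<boundaryEvent id="stopC" attachedToRef="taskC">
  <signalEventDefinition signalRef="sigBDone"/>
</boundaryEvent>
<boundaryEvent id="stopAB" attachedToRef="subprocessAB">
  <signalEventDefinition signalRef="sigCDone"/>
</boundaryEvent>
```

The boundary events are interrupting (the default), so whichever branch finishes first cancels the other.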
After Task 1 is completed, we need to spawn an optional task, based on a condition. The process completion does not depend on the completion of this optional task.
What is the correct way to design this model?
The desired behaviour can be modeled like this:
After Task 1 completes, Task 2 is triggered; if the optional condition is true, the optional task is triggered as well.
The instance is terminated after Task 2 is finished. If the optional task is still active, it is terminated as well.
You should use a conditional marker for the optional flow.
The exclusive gateway in your diagram will always execute the mandatory Task 2; the optional task will always be ignored, even when the condition for its execution is true.
A parallel gateway cannot be used, as it would wait for the optional task to complete before merging successfully.
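A conditional sequence flow (the "conditional marker", the small diamond on the flow) can leave the task directly, without any gateway. A rough BPMN 2.0 XML sketch of this pattern (ids and the condition expression are invented):

```xml
<!-- Hedged sketch; ids and ${needsOptional} are made up -->
<task id="task1" name="Task 1"/>
<task id="task2" name="Task 2"/>
<task id="optionalTask" name="Optional Task"/>

<!-- unconditional flow to the mandatory task -->
<sequenceFlow id="toTask2" sourceRef="task1" targetRef="task2"/>

<!-- conditional flow: taken only when the expression evaluates to true -->
<sequenceFlow id="toOptional" sourceRef="task1" targetRef="optionalTask">
  <conditionExpression xsi:type="tFormalExpression">${needsOptional}</conditionExpression>
</sequenceFlow>

<!-- terminate end event: a still-running optional task is cancelled
     when Task 2 finishes -->
<sequenceFlow id="toDone" sourceRef="task2" targetRef="done"/>
<endEvent id="done">
  <terminateEventDefinition/>
</endEvent>
```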
Are conditional markers valid BPMN 2.0? I have never seen them before, though they remind me of good old UML.
I think this should be solved using an XOR gateway.
Using non-interrupting (message/signal/escalation) events will help in your scenario.
Alternatively, you could use an event sub-process in this process.
Let me know if you understand how to use it. Otherwise, I will draw an example for you.
UPDATE
NOTE:
I am only using bpmn.io to draw the example instead of Camunda. However, this is basic BPMN and I assume Camunda supports this type of model. I am only familiar with jBPM.
EXPLANATION:
Basically, you don't really have to use a message event; it can be a signal or escalation event, depending on your scenario. Typically, a message event is used when an incoming message triggers further activities, and it is the most common of the three. One thing you must consider, though, is whether the event is interrupting or not. In your case it shouldn't interrupt, so I used a non-interrupting message event.
An interrupting message event would abort Task 1 as soon as the event is triggered, while a non-interrupting one only adds the additional task/event without aborting Task 1.
Hope this example helps.
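For reference, the non-interrupting boundary event from the drawing might look like this in BPMN 2.0 XML (ids and the message reference are invented; the message declaration is omitted). The key detail is cancelActivity="false":

```xml
<!-- Hedged sketch; ids made up, message declaration omitted -->
<userTask id="task1" name="Task 1"/>

<!-- cancelActivity="false" makes the boundary event non-interrupting,
     so Task 1 keeps running when the event fires -->
<boundaryEvent id="spawnOptional" attachedToRef="task1"
               cancelActivity="false">
  <messageEventDefinition messageRef="optionalMsg"/>
</boundaryEvent>

<sequenceFlow id="toOptional" sourceRef="spawnOptional"
              targetRef="optionalTask"/>
<userTask id="optionalTask" name="Optional Task"/>
```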
I am learning how to use NSRunLoop and reading the Run Loops chapter of Apple's documentation.
I am confused by the doc's description of the run loop sequence of events:
Because observer notifications for timer and input sources are delivered before those events actually occur, there may be a gap between the time of the notifications and the time of the actual events. If the timing between these events is critical, you can use the sleep and awake-from-sleep notifications to help you correlate the timing between the actual events
Here is the doc link.
It says observer notifications for timer and input sources are delivered before those events actually occur. Since those events have not happened yet, how does the run loop know they are about to happen, and how can it send notifications for them in advance?
After much searching, I think this may help.
The doc says the following in the Custom Input Sources section:
In addition to defining the behavior of the custom source when an event arrives, you must also define the event delivery mechanism. This part of the source runs on a separate thread and is responsible for providing the input source with its data and for signaling it when that data is ready for processing. The event delivery mechanism is up to you but need not be overly complex.
More details link1 and link2
The event may occur, but it may not yet be ready for use: the actual data produced by that event is processed on a separate thread, and may not yet be sufficient to wake the thread that is listening for the notification. So there is a gap between the notification posted by the run loop and the event actually finishing.
Other sources can also lead to that gap, such as timer input sources.
Does anyone have a better explanation?
I am unclear as to whether it is permissible in a BPMN 2.0 model for a timer to be the Start Event for an event sub-process, such as in the simplified example below:
The BPMN 2.0 documentation (version 2.0.1 dated 2013-09-02) on page 174 (section 10.3.5, Event Sub-processes) suggests this is not permissible:
The Start Event of an Event Sub-Process MUST have a defined trigger. The Start Event trigger (EventDefinition) MUST be from the following types: Message, Error,
Escalation, Compensation, Conditional, Signal, and Multiple (see page 259 for more details)
On page 241 (section 10.5.2, Start Event), the specification states that a Timer is allowed as a Start Event:
A Start Event can also initiate an inline Event Sub-Process (see page 174). In that case, the same Event types as for boundary Events are allowed (see Table 10.86), namely: Message, Timer, Escalation, Error, Compensation, Conditional, Signal, Multiple, and Parallel.
Which of these sections would apply in the case of the above example?
Not a BPMN expert, but I have some experience using BPMN 2.0, so I'll give this a go.
The example you posted doesn't look like a completely spec-approved way of doing it, but I can't be entirely sure. I see a few different ways to do this that should be within bounds.
Here are my two suggestions:
Unless you want to model a third event like "Out of stock" I would prefer option A for its simplicity.
Also, I'd like to throw out a recommendation for "BPMN Method and Style, 2nd ed." by Bruce Silver.
I'm going to conclude this is almost certainly an error in §10.5.2 of the spec, and that the timer as the start event in an event sub-process is allowed.
Tables 10.86 and 10.93 are both explicit in that the timer can be the trigger for an event sub-process.
The non-interrupting timer start event is only useful in an event sub-process. That symbol would have no use if a timer event were not allowed to trigger an event sub-process.
Section 10.5.6 consistently allows the use of the timer as the start event trigger.
The issue was reported to OMG in 2010 (Issue 15532), although no further action was taken.
The same principle applies to Parallel Multiple events, which are similarly omitted from the same list in §10.5.2, but permitted in other sections.
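For concreteness, a timer-triggered event sub-process as discussed above might be written like this in BPMN 2.0 XML (ids, names, and the timer cycle are invented for illustration):

```xml
<!-- Hedged sketch; ids/names and the cycle expression are made up -->
<subProcess id="reminder" name="Send reminders" triggeredByEvent="true">
  <!-- timer start event; isInterrupting="false" gives the
       non-interrupting variant (dashed circle) -->
  <startEvent id="everyDay" isInterrupting="false">
    <timerEventDefinition>
      <timeCycle>R/P1D</timeCycle>
    </timerEventDefinition>
  </startEvent>
  <task id="sendReminder" name="Send reminder"/>
  <sequenceFlow id="f1" sourceRef="everyDay" targetRef="sendReminder"/>
  <endEvent id="reminderDone"/>
  <sequenceFlow id="f2" sourceRef="sendReminder" targetRef="reminderDone"/>
</subProcess>
```

This is exactly the construct that would be meaningless if §10.5.2's omission of Timer were taken literally.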
I don't remember the terminology now, but what I would do to achieve what you want is put purchase parts + unpack parts in a subprocess (or sub-task?) and put a timer on its boundary. This seems easier and clearer to read, and does what you want.
Regarding the documentation: I would say one part talks about the trigger and the other about the start of the event sub-process. So a timer can't trigger the event sub-process, but the start event of the event sub-process can be a timer.