BPMN - How to model an optional task

After Task 1 is completed, we need to spawn an optional task based on a condition. The completion of the process does not depend on the completion of this optional task.
What is the correct way to design this model?

The desired behaviour can be modeled like this:
After Task 1 completes, Task 2 is triggered; if the optional condition is true, the optional task is triggered as well.
The instance is terminated after Task 2 is finished. If the optional task is still active at that point, it is terminated as well.

You should use a conditional marker on the sequence flow leading to the optional task.
An exclusive gateway in your diagram will always execute the mandatory Task 2; the optional task will always be ignored, even when the condition for its execution is true.
A parallel gateway cannot be used either, because the merging gateway would wait for the optional task to complete.

Are conditional markers valid BPMN 2.0? I have never seen them before; they remind me of good old UML.
I think this should be solved using an XOR gateway.

Using non-interrupting (message/signal/escalation) boundary events will help in your scenario.
Alternatively, you could use an event subprocess in this process.
Let me know if you understand how to use it; otherwise, I will draw an example for you.
UPDATE
NOTE:
1. I am only using bpmn.io to draw the example instead of Camunda. However, this is basic BPMN, and I assume Camunda supports this type of model as well. I am only familiar with jBPM.
EXPLANATION:
Basically, you don't really have to use a message event. It can be a signal or escalation event, depending on your scenario. Theoretically, a message event is used when an incoming message triggers other activities, and it is the most common of these event types. One thing you must consider, though, is whether the event is interrupting or not. In your case it must not interrupt, therefore I used a non-interrupting message event.
An interrupting message event aborts Task 1 as soon as the event is triggered, while a non-interrupting one only adds an additional task/event without aborting Task 1.
Hope this example helps.

Related

Nsb: Custom behavior after every handler

We want to log every occurrence of a handler running to completion and we're wondering what's the cleanest way to do it.
More specifically, when a Handler completes, we want to write some basic information, like the type of the message that was processed, etc., to a DB.
One way to do it is by creating and sending a new message (publishing an event) at the end of each handler.
But we're wondering if there is another way to do this without "polluting" the message handlers with those extra lines of code :) For example, if after a Handler runs to completion, another method defined elsewhere would pick up execution and handle the logic of writing to the database.
Hope I made myself clear enough. Thanks
You could use the auditing pipeline and forward the audit messages to your audit queue and handle a copy of all messages there...
Here is some more info: https://docs.particular.net/nservicebus/operations/auditing?version=core_7.2
Does that make sense?

AXON framework synchronous response

I am new to the Axon framework and am using it for our development. We have a requirement where a command (command side) is created for persisting data; for the same, an event is triggered which is consumed on the query side. Now we need to have a response back to the command side from the query side which says whether the record was persisted into the database successfully (a custom success message) or, if it failed, the reason for the failure (a custom exception message as the response). Kindly help if there is any way to achieve such a scenario.
Here the command side and query side are two different microservices and we are using RabbitMQ for the event-driven technique.
Thanks in advance
I think that what you are asking is if there is a way for the command and event to be processed in a single transaction?
If you use a subscribing event processor, running in the same JVM, the event is processed synchronously and the whole transaction is rolled back in case of an exception in an event handler. This is not the case here, because you have loosely coupled separate services, which is good.
It's best practice for the aggregate with the command handler to have all the information available to decide whether or not the command can successfully be processed, and when an event is applied, this is a signal that it has happened, and the other services (the query side in this case) have to be informed. It's not good practice for a query module to overrule this ("you say it happened, I say it didn't"). If there is an error in the query side, you fix it, and replay the event.
If it really is an error in the event handler that the whole system must know about, that is really a separate event. You can apply such an event directly on the event bus and notify the whole system. Something like this:
@Autowired
private EventBus eventBus;
(...)
// Publish a dedicated failure event directly on the event bus to notify the rest of the system.
CatastrophicFailureEvent failureEvent = new CatastrophicFailureEvent("OH NO!");
eventBus.publish(GenericEventMessage.asEventMessage(failureEvent));
I think you might need to reconsider your architecture. Keep in mind that events should encapsulate the irreversible state changes of your system. These state changes should not be questioned after they have happened. Your query side should only need to care about projecting these valid state changes that your command side has decided on.
If you need to check whether a user already exists, you need to do this on the command side, in your aggregate. The aggregate can keep a list of all the existing usernames and throw an exception if an invalid command is given. The command response (tip: the sendAndWait() method on the CommandGateway returns a response) can then be used to inform your user about the success or failure of the action.
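As a rough illustration of that tip (the CreateUserCommand and the controller method here are made up for this example, not taken from the question):
@Autowired
private CommandGateway commandGateway;

public String createUser(String userId, String userName) {
    try {
        // Blocks until the command handler (the aggregate) has accepted or rejected the command.
        commandGateway.sendAndWait(new CreateUserCommand(userId, userName));
        return "User created successfully";
    } catch (Exception e) {
        // For example, the aggregate threw an exception because the username already exists.
        return "Could not create user: " + e.getMessage();
    }
}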
The following flow might solve your problem, but keep in mind that the user will get a callback on the success of the action even though the query side might not have processed its result yet. This part is eventually consistent.
Command Side:
Request from the frontend is handled by a Controller class, which creates a corresponding command.
The above command is invoked and handled by a command handler, which creates the corresponding event or throws an exception if the user already exists.
The invoker of the command is informed about the success of the command, or the exception is handled and the error shown to the user.
The above event is published through the RabbitMQ event bus if the command was successful.
Query side:
The event published in step 4 is consumed by the event handler on the query side. No checks or validations should be necessary, since they were already handled on the command side.
@Mzzl
Series of activities
Command Side:
1. Request from the frontend is handled by a Controller class, which creates a corresponding command.
2. The above command is invoked and handled by a command handler, which in turn creates the corresponding event.
3. The above event is then published through the RabbitMQ event bus.
Query Side:
4. The event published in step 3 is consumed by the event handler on the query side.
5. The event handler has the logic to perform the DB transaction (let's assume it adds a user). Once the user is added, a success message or a failure message (let's assume the user is already in the DB, so a duplicate entry could not be created) should flow from the query side back to the command side and eventually back to the UI as a response.
I'm not sure I've fully understood your issue (especially the microservice part :)),
but if your problem is related to having the query side up to date after the command execution, then you can have a look at this project.
In this example, you can see that he uses a SubscriptionQueryResult in conjunction with a QueryUpdateEmitter (see here).
Basically, you subscribe to query-side changes before the command is issued, and you block after the command execution until the query side sends a notification that it is up to date.
This way you can avoid the eventual consistency issue.
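As a minimal sketch of that pattern (FindUserQuery, UserView and CreateUserCommand are hypothetical classes, not taken from the linked project):
// 1. Subscribe to query-side updates BEFORE issuing the command.
SubscriptionQueryResult<UserView, UserView> subscription = queryGateway.subscriptionQuery(
        new FindUserQuery(userId),
        ResponseTypes.instanceOf(UserView.class),   // type of the initial result
        ResponseTypes.instanceOf(UserView.class));  // type of the updates
try {
    // 2. Issue the command; sendAndWait surfaces command-side exceptions here.
    commandGateway.sendAndWait(new CreateUserCommand(userId, userName));
    // 3. Block until the query side emits the updated view (or time out).
    UserView upToDateView = subscription.updates().blockFirst(Duration.ofSeconds(5));
} finally {
    subscription.close();
}
On the query side, the event handler would update its database and then call the QueryUpdateEmitter, for instance queryUpdateEmitter.emit(FindUserQuery.class, q -> q.getUserId().equals(userId), updatedView), which is what releases the blocked subscription above.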

Understanding Eventual Consistency, BacklogItem and Tasks example from Vaughn Vernon

I'm struggling to understand how to implement Eventual Consistency with the BacklogItem and Task example exposed by Vaughn Vernon. The statement I've understood so far is (considering the case where he splits BacklogItem and Task into separate aggregate roots):
A BacklogItem can contain one or more Tasks. When the remaining hours of all the tasks of a BacklogItem are 0, the status of the BacklogItem should change to "DONE".
I'm aware about the rule that says that you should not update two aggregate roots in the same transaction, and that you should accomplish that with eventual consistency.
Once a Domain Service updates the amount of hours of a Task, a TaskRemainingHoursUpdated event should be published to a DomainEventPublisher which lives in the same thread as the executing code. And here is where I'm at a loss, with the following questions:
I suppose that there should be a subscriber (also living in the same thread, I guess) that reacts to TaskRemainingHoursUpdated events. At which point in your Desktop/Web application do you perform this subscription to the Bus? At the very initialization of your app? In the application code? Is there any reasoning behind placing domain subscribers in a specific place?
Should that subscriber (in the same thread) call a BacklogItem repository and perform the update? (But that would be a violation of the rule of not updating two aggregates in the same transaction, since this would happen synchronously, right?)
If you want to achieve eventual consistency to fulfil the previously mentioned rule, do I really need a Message Broker like RabbitMQ even though both BacklogItem and Task live inside the same Bounded Context?
If I use this message broker, should I have a background thread or something that just consumes events from a RabbitMQ queue and then dispatches the event to update the product?
I'd appreciate it if someone could shed some light on this, since it is quite complex to picture in its entirety.
So to start with, you need to recognize that, if the BacklogItem is the authority for whether or not it is "Done", then it needs to have all of the information to compute that for itself.
So somewhere within the BacklogItem is data that is tracking which Tasks it knows about, and the known state of those tasks. In other words, the BacklogItem has a stale copy of information about the task.
That's the "eventually consistent" bit; we're trying to arrange the system so that the cached copy of the data in the BacklogItem boundary includes the new changes to the task state.
That in turn means we need to send a command to the BacklogItem advising it of the changes to the task.
From the point of view of the backlog item, we don't really care where the command comes from. We could, for example, make it a manual process "After you complete the task, click this button here to inform the backlog item".
But for the sanity of our users, we're more likely to arrange an event handler to be running: when you see the output from the task, forward it to the corresponding backlog item.
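For example, such a handler could be a thin translator from the Task event to an update on the BacklogItem. The repository and method names below are illustrative, and the handler runs in its own transaction, separate from the Task update:
public class TaskRemainingHoursUpdatedSubscriber {

    private final BacklogItemRepository backlogItemRepository;

    public TaskRemainingHoursUpdatedSubscriber(BacklogItemRepository backlogItemRepository) {
        this.backlogItemRepository = backlogItemRepository;
    }

    public void handle(TaskRemainingHoursUpdated event) {
        // Load the owning aggregate and let it update its own copy of the task hours.
        BacklogItem backlogItem = backlogItemRepository.findById(event.backlogItemId());
        backlogItem.taskHoursRemainingChanged(event.taskId(), event.remainingHours());
        backlogItemRepository.save(backlogItem);
    }
}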
At which point in your Desktop/Web application do you perform this subscription to the Bus? At the very initialization of your app?
That seems pretty reasonable.
Should that subscriber (in the same thread) call a BacklogItem repository and perform the update? (But that would be a violation of the rule of not updating two aggregates in the same transaction, since this would happen synchronously, right?)
Same thread and same transaction are not necessarily coincident. It can all be coordinated in the same thread; but it probably makes more sense to let the consequences happen in the background. At their core, events and commands are just messages - write the message, put it into an inbox, and let the next thread worry about processing.
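A tiny sketch of that "put it into an inbox" idea, using an in-memory queue purely for illustration (a real system would rather use a persistent store or a broker; the class names reuse the hypothetical subscriber from above):
public class BacklogItemInbox {

    private final BlockingQueue<TaskRemainingHoursUpdated> inbox = new LinkedBlockingQueue<>();
    private final TaskRemainingHoursUpdatedSubscriber subscriber;

    public BacklogItemInbox(TaskRemainingHoursUpdatedSubscriber subscriber) {
        this.subscriber = subscriber;
        Thread worker = new Thread(this::drain, "backlog-item-inbox");
        worker.setDaemon(true);
        worker.start();
    }

    // Called inside the Task transaction: just write the message, nothing else happens here.
    public void enqueue(TaskRemainingHoursUpdated event) {
        inbox.offer(event);
    }

    // Background thread: processes each message later, in its own transaction.
    private void drain() {
        try {
            while (true) {
                subscriber.handle(inbox.take());
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}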
If you want to achieve eventual consistency to fulfil the previously mentioned rule, do I really need a Message Broker like RabbitMQ even though both BacklogItem and Task live inside the same Bounded Context?
No; the mechanics of the plumbing matter not at all.

Mule filter in a sub-flow is not stopping flow processing, continuing with main flow

I used a Mule filter in a sub-flow, but it is not stopping the complete process when the condition is false. It exits the sub-flow and continues with the main flow. Actually, it should exit the main flow as well. Please suggest how I can stop the complete process when the filter condition is false.
If you put a filter in a sub-flow, it should stop processing the entire flow and control will never come back to the main flow. Please find a sample to check this.
The flow does not know you have put a filter into a sub-flow. It does not know the details of what the sub-flow does; rather, it sees it as a single step. The sub-flow is called, it executes, it returns, and it does not return any information saying you have included a filter and want the main flow to stop processing.
What you seem to be trying to do is set a signal in the sub-flow that tells the main flow whether it should continue processing. This can be done, but not automatically through a filter mechanism. You could, for instance, use the filter in the sub-flow to set a value with session scope, which the main flow would then look at to choose a processing route and accomplish the flow pattern you seem to be after.
This is not the most elegant approach, and there are many alternatives, but it is the first approach that comes to mind that closely approximates what you describe.

Two "start" needed in the same lane in BPMN 1.2

I know in BPMN there is just a "start event" for each pool. In my case I have a pool that can begin when a message is caught or because the actor decides to start it on his own.
How can I model that? I'm not sure I can use an event-based exclusive XOR.
Maybe a complex gateway?
As stated in many best practice how-tos, it is NOT RECOMMENDED to use multiple start events in a pool. BPMN specification 1.2 contains this note too:
9.3.2.
...
It is RECOMMENDED that this feature be used sparingly and that the modeler be aware that other readers of the Diagram may have difficulty understanding the intent of the Diagram.
...
On the other hand, the common rule for the case with an omitted start event is:
If the Start Event is not used, then all Flow Objects that do not have an incoming Sequence Flow SHALL be instantiated when the Process is instantiated.
I assume this is fair enough for the case of a manual process start too. Even if the process has only a message start event, it will be started correctly, because a Message Start Event is a regular flow object with no incoming sequence flow and thus complies with the above rule.
However, if you want to be 100% sure the process will go the way you want, then the Event-Based Exclusive Gateway (available since version 1.1) is your choice. Placing it before multiple different start events will make the process choose one of them to start with.
Further explanation can be found in this blog.
Unlimited process instances
If you don't mind that during the execution of your process the pool could be used multiple times (e.g. once started by a message and three times by an actor), then you can simply use multiple start events (the BPMN 1.2 PDF spec, 9.3.2, page 37, allows this):
Single instance
If you can only allow a single run of the pool, you might have to instantiate it manually at the start of your execution and then decide whether to use it and when. Here is an example of how this can be done:
The Event-Based Gateway (Spec 9.5.2.4) will "decide" what to do with your pool:
If the Actor decides to start, or a message comes from the main pool, some actions will take place;
If the process is "sure" that the additional pool will not be required, a signal is cast to terminate its instance.