Mule filter in a sub-flow is not stopping flow processing, continuing with main flow

I used a Mule filter in a sub-flow, but it is not stopping the complete process when the condition is false. It exits the sub-flow and continues with the main flow. It should exit from the main flow as well. Please suggest how I can stop the complete process when the filter condition is false.

If you put a filter in a sub-flow, it should stop processing the entire flow, and control should never come back to the main flow. Please find a sample to check this.

The main flow does not know you have put a filter into the sub-flow. It knows nothing about what the sub-flow does internally; it sees it as a single step. The sub-flow is called, it executes, it returns, and it passes back no information saying you included a filter and want the main flow to stop processing.
What you seem to be trying to do is set a signal in the sub-flow that tells the main flow whether it should continue processing. This can be done, but not automatically through the filter mechanism. You could, for instance, use the condition in the sub-flow to set a session-scoped variable, which the main flow would then inspect to choose a processing route that accomplishes the pattern you are after.
This is not the most elegant approach, and there are alternatives, but it is the first one that comes to mind that closely approximates what you describe.
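A minimal sketch of that idea in Mule 3 XML, assuming an illustrative session variable named validPayload and hypothetical flow names and condition:

    <sub-flow name="validationSubFlow">
        <!-- instead of a bare filter, record the outcome in a session variable -->
        <set-session-variable variableName="validPayload"
                              value="#[payload != null]"/>
    </sub-flow>

    <flow name="mainFlow">
        <flow-ref name="validationSubFlow"/>
        <choice>
            <when expression="#[sessionVars.validPayload]">
                <!-- normal processing continues here -->
                <logger level="INFO" message="Condition passed, continuing"/>
            </when>
            <otherwise>
                <!-- condition failed: log and let the flow end here -->
                <logger level="WARN" message="Condition failed, stopping"/>
            </otherwise>
        </choice>
    </flow>

Because the choice router is in the main flow, the 'stop' decision now happens at the level that actually owns the rest of the processing.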

Related

NSB: Custom behavior after every handler

We want to log every occurrence of a handler running to completion and we're wondering what's the cleanest way to do it.
More specifically, when a handler completes, we want to write some basic information, like the type of the message that was processed, to a DB.
One way to do it is by creating and sending a new message (publishing an event) at the end of each handler.
But we're wondering if there is another way to do this without "polluting" the message handlers with those extra lines of code :) For example, after a handler runs to completion, another method defined elsewhere would pick up execution and handle the logic of writing to the database.
Hope I made myself clear enough. Thanks
You could use the auditing pipeline to forward a copy of all processed messages to your audit queue and handle them there...
Here is some more info: https://docs.particular.net/nservicebus/operations/auditing?version=core_7.2
Does that make sense?
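For reference, a minimal sketch of enabling the audit pipeline in C# (endpoint and queue names are illustrative):

    // Forward a copy of every successfully processed message to the "audit" queue.
    var endpointConfiguration = new EndpointConfiguration("Sales");
    endpointConfiguration.AuditProcessedMessagesTo("audit");

    // A separate endpoint (or ServiceControl) can then consume the audit queue
    // and persist the message type, timestamps, etc. to the database, keeping
    // the business handlers free of logging code.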

BPMN - How to model an optional task

After Task 1 is completed, we need to spawn an optional task, based on a condition. The process's completion does not depend on the completion of this optional task.
What is the correct way to design this model?
The desired behaviour can be modeled like this:
After Task 1 completes, Task 2 is triggered; if the optional condition is true, the optional task is triggered as well.
The process instance is terminated after Task 2 is finished. If the optional task is still active at that point, it is terminated too.
You should use a conditional marker for the optional flow.
An exclusive gateway in your diagram will always execute the mandatory Task 2; the optional task will always be ignored, even when the condition for its execution is true.
A parallel gateway cannot be used either, since it would wait for the optional task to complete before the merge succeeds.
Are conditional markers valid BPMN 2.0? I haven't seen them before, though they remind me of good old UML.
I think this should be solved using an XOR gateway.
Using non-interrupting (message/signal/escalation) events will help in your scenario.
Alternatively, use an event sub-process in this process.
Let me know if you understand how to use it. Otherwise, I will draw an example for you.
UPDATE
NOTE:
1. I am only using bpmn.io to draw the example instead of Camunda. However, this is basic BPMN, and I assume Camunda must support this type of model. I am only familiar with jBPM.
EXPLANATION:
Basically, you don't really have to use a message event; it can be a signal or escalation event, depending on your scenario. Theoretically, a message event is used when an incoming message triggers further activities, and it is the most common of these event types. One thing you must consider, though, is whether the event is interrupting or not. In your case it must not interrupt, so I used a non-interrupting message event.
An interrupting message event would abort Task 1 as soon as the event is triggered, while a non-interrupting one only adds the extra task/event without aborting Task 1.
Hope this example helps.
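For reference, a minimal sketch of the non-interrupting variant in plain BPMN 2.0 XML (all IDs and names are illustrative); the key attribute is cancelActivity="false" on the boundary event:

    <process id="optionalTaskProcess">
        <userTask id="Task1" name="Task 1"/>
        <!-- non-interrupting: Task 1 keeps running when the event fires -->
        <boundaryEvent id="optionalTrigger" attachedToRef="Task1"
                       cancelActivity="false">
            <messageEventDefinition messageRef="conditionMet"/>
        </boundaryEvent>
        <userTask id="OptionalTask" name="Optional Task"/>
        <userTask id="Task2" name="Task 2"/>
        <!-- terminate end event aborts the optional task if still active -->
        <endEvent id="done">
            <terminateEventDefinition/>
        </endEvent>
        <sequenceFlow id="f1" sourceRef="Task1" targetRef="Task2"/>
        <sequenceFlow id="f2" sourceRef="Task2" targetRef="done"/>
        <sequenceFlow id="f3" sourceRef="optionalTrigger" targetRef="OptionalTask"/>
    </process>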

How to make a Saga handler Reentrant

I have a task that can be started by the user, that could take hours to run, and where there's a reasonable chance that the user will start the task multiple times during a run.
I've broken the processing of the task up into smaller batches, but given the way the data looks it's very difficult to tell what still needs to be processed. I batch it using messages that each process a bite-sized chunk of the data.
I have thought of using a saga to control access to starting this process, with a saga property called Processing that I set at the start of the handler and unset at the end. The handler does some work and sends the messages that process the data. I check the value at the start of the handler, and if it's already set, I just return.
I'm using Azure storage for saga persistence, if it makes a difference for the next bit. I'm also using NSB 6.
I have a few questions though:
Is this the correct approach to re-entrancy with NSB?
When is a change to Saga data persisted? (and is it different depending on the transport?)
Following on from the above, if I set a Saga value in a handler, wait a while and then reset it to its original value will it change the persistent storage at all?
This seems to be cross-posted in the Particular Software Google group:
https://groups.google.com/forum/#!topic/particularsoftware/p-qD5merxZQ
Sagas are very often used for such patterns. The saga instance tracks progress and guards that the (sub)tasks aren't invoked multiple times, but it could also take action if the expected task(s) didn't complete or took too long.
The saga instance data is stored after the message has been processed, not at the moment you update any of the saga data properties, so the logic you described would not work.
The correct way is to have a saga that orchestrates your process and regular handlers that do the actual work.
In the saga handler method that starts the saga, check whether the saga already existed or already has the 'busy' status; if it does not, send a message to do the work. This guards that the task is only initiated once, and after that the saga is stored.
The handler can now do the actual task; when it completes, it can 'Reply' back to the saga.
When the saga receives the reply, it can start any follow-up task or raise an event, and it can also complete.
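A minimal sketch of that orchestration in NServiceBus 6 style C# (the message types and the Started property are illustrative):

    using System.Threading.Tasks;
    using NServiceBus;

    public class StartTask : ICommand { public string TaskId { get; set; } }
    public class DoWork : ICommand { public string TaskId { get; set; } }
    public class WorkCompleted : IMessage { public string TaskId { get; set; } }

    public class TaskSagaData : ContainSagaData
    {
        public string TaskId { get; set; }
        public bool Started { get; set; } // the 'busy' status
    }

    public class TaskSaga : Saga<TaskSagaData>,
        IAmStartedByMessages<StartTask>,
        IHandleMessages<WorkCompleted>
    {
        protected override void ConfigureHowToFindSaga(SagaPropertyMapper<TaskSagaData> mapper)
        {
            mapper.ConfigureMapping<StartTask>(m => m.TaskId).ToSaga(s => s.TaskId);
        }

        public async Task Handle(StartTask message, IMessageHandlerContext context)
        {
            if (Data.Started)
            {
                return; // duplicate start: the task is already in progress
            }
            Data.Started = true; // persisted once this handler completes
            await context.Send(new DoWork { TaskId = message.TaskId });
        }

        public Task Handle(WorkCompleted message, IMessageHandlerContext context)
        {
            MarkAsComplete(); // the work is done, the saga can go away
            return Task.CompletedTask;
        }
    }

The worker handler would process DoWork and Reply with WorkCompleted when it finishes.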
Optimistic concurrency control and batched sends
If two messages are received that create/update the same saga instance, only the first writer wins; the other will fail because of optimistic concurrency control.
However, if these messages are processed sequentially rather than in parallel, optimistic concurrency will not catch the duplicate, and both will be processed unless the saga checks whether the saga instance is already initialized.
The following sample demonstrates this: https://github.com/ramonsmits/docs.particular.net/tree/azure-storage-saga-optimistic-concurrency-control/samples/azure/storage-persistence/ASP_1
The client sends two identical message bodies. The saga is launched, and only one message succeeds, due to optimistic concurrency control.
Due to retries, the second copy will eventually be processed too, but the saga checks the saga data for a field that it knows would normally be initialized by a message that 'starts' the saga. If that field is already initialized, it assumes the message was already processed and just returns.
It also demonstrates batched sends: outgoing messages are not actually dispatched until all handlers/sagas have completed.
Saga design
The following video might help you with designing your sagas and understand the various patterns:
Integration Patterns with NServiceBus: https://www.youtube.com/watch?v=BK8JPp8prXc
Keep in mind that Azure Storage isn't transactional and does not provide locking; it is only atomic. Any work you do within a handler or saga can potentially be invoked more than once, so if you use non-transactional resources, make sure that logic is idempotent.
So, after a lot of testing,
I don't believe that this is the right approach.
As Archer says, you can manipulate the saga data properties as much as you like; they are only saved at the end of the handler.
So if the saga receives two simultaneous messages, the check for Processing will pass both times and I'll have two processes running (in my case, processing the same data twice).
The saga-within-a-saga approach faces a similar problem too.
What I believe will work (and has worked during my PoC testing) is using a database unique index to help out. I'm using Entity Framework and Azure SQL, so database access is not contained within the handler's transaction (this is the important difference between the database and the saga data). The database also operates across all instances of the endpoint, and generally this seems like a good solution.
The table that I'm using has a column for each part of the saga 'id', and there is a unique index across them.
At the beginning of the handler I retrieve a row from the table. If there is a row, the handler returns (in my case this is okay; in others you could throw an exception to get the handler to run again). The first thing the handler then does (before any work, although I'm not 100% sure that it matters) is write a row to the table. If the write fails (probably because the unique constraint was violated), the exception puts the message back on the queue. It doesn't really matter why the database write fails, as NSB will handle it.
Then the handler does the work, and finally it removes the row.
Of course, there is a chance that something goes wrong while the work is being processed, so I'm also using a timestamp and another process to reset the row if it has been busy for too long (still need to define 'too long', though :) ).
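For reference, a hedged T-SQL sketch of such a lock table (column names are illustrative; use whichever columns make up your saga 'id'):

    -- One row per running process; the unique constraint provides the guard.
    CREATE TABLE ProcessLock (
        TaskId       nvarchar(100) NOT NULL, -- illustrative saga-id column(s)
        StartedAtUtc datetime2     NOT NULL, -- lets a watchdog spot stuck rows
        CONSTRAINT UX_ProcessLock_TaskId UNIQUE (TaskId)
    );

    -- Handler protocol:
    --   1. INSERT a row; a duplicate violates UX_ProcessLock_TaskId, the
    --      exception bubbles up, and NSB puts the message back on the queue.
    --   2. Do the work.
    --   3. DELETE the row.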
Maybe this can help someone with a similar problem.

Paging in Mule ESB

We have a Mule flow which processes a bunch of records. We want to implement paging because one of the steps in the process is calling an external system which can only take a set amount of records at a time.
We have attempted to solve this by adding a choice router to the flow that checks whether there are more records to process and, if so, calls the same flow again (a self-reference), but this caused stack-overflow errors.
We have also tried using the until-successful scope but we need errors to break out of the loop and be caught by the exception strategy.
Thanks
Mule has built-in support for processing messages in batches:
http://www.mulesoft.org/documentation/display/current/Batch+Processing
It is the best option for your requirement.
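A skeleton of what that could look like in Mule 3 batch XML (the job name, step name, and commit size are illustrative); the batch:commit block is what lets you push records to the external system in fixed-size groups:

    <batch:job name="recordsBatchJob">
        <batch:input>
            <!-- load the records to process, e.g. via an inbound endpoint -->
        </batch:input>
        <batch:process-records>
            <batch:step name="callExternalSystem">
                <!-- accumulate records and send them in groups of 100 -->
                <batch:commit size="100">
                    <!-- call the external system with the aggregated records -->
                </batch:commit>
            </batch:step>
        </batch:process-records>
        <batch:on-complete>
            <logger level="INFO" message="Batch job finished"/>
        </batch:on-complete>
    </batch:job>

Failed records are tracked by the batch engine rather than growing the call stack, which avoids the recursion problem of the self-referencing flow.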

How to figure out if mule flow message processing is in progress

I have a requirement where I need to make sure only one message is being processed at a time by a Mule flow. The flow is triggered by a Quartz scheduler, which reads one file from an FTP server each time.
My proposed solution is to keep a global variable FLOW_STATUS, which is set to RUNNING when a message is received and reset to STOPPED once the processing of the message is done.
Any message fed to the flow checks this variable and aborts if FLOW_STATUS is RUNNING.
This setup seems to be working, but I was wondering if there is a better way to do it.
Are there any best practices around this, or any built-in Mule helper functions to achieve the same, instead of relying on global variables?
A simpler solution would be to set the maxActiveThreads for the flow to 1. In Mule, each message processed gets its own thread, so setting maxActiveThreads to 1 effectively makes your flow single-threaded. Other pending requests will wait in the receiver threads. You will need to make sure your receiver thread pool is large enough to accommodate all of the potentially waiting threads; that may mean throttling back your Quartz scheduler to allow time to process the files so the receiver thread pool doesn't fill up. For more information on thread pools and how to tune performance, here is a good link: http://www.mulesoft.org/documentation/display/current/Tuning+Performance
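A sketch of how that could look in Mule 3 XML (the strategy and flow names are illustrative; the relevant knob is the single-thread processing strategy):

    <!-- one active thread: messages are processed strictly one at a time -->
    <queued-asynchronous-processing-strategy name="singleThread" maxThreads="1"/>

    <flow name="ftpPollingFlow" processingStrategy="singleThread">
        <!-- Quartz/FTP inbound endpoint and the processing steps go here -->
    </flow>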