How to run sequence mediators in parallel - wso2-esb

I'm trying to create a sequence that calls two other sequences using the sequence mediator on WSO2 ESB 4.0.3.
My sequence configuration is as below:
When I try to run it, I have a problem: when the first sequence fails, the second sequence can't run.
I want the second sequence to run on its own even when the first sequence fails.
Please help me fix this.

This can be done using the Clone mediator in WSO2 ESB. The clone mediator configuration below splits a message into two messages, which are sent to two sequences (test1 and test2). The two messages are then processed by the two sequences in parallel.
<clone>
    <target sequence="test2"/>
    <target sequence="test1"/>
</clone>
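For context, a minimal Synapse configuration around the clone mediator might look like the sketch below (the sequence bodies and log properties are illustrative placeholders, not from the original answer). Each target sequence is defined on its own, so a fault in one branch does not prevent the other from running:

```xml
<sequence name="test1">
    <log level="custom">
        <property name="branch" value="test1"/>
    </log>
    <!-- first processing path -->
</sequence>

<sequence name="test2">
    <log level="custom">
        <property name="branch" value="test2"/>
    </log>
    <!-- second processing path -->
</sequence>

<sequence name="main">
    <clone>
        <target sequence="test2"/>
        <target sequence="test1"/>
    </clone>
</sequence>
```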

I used to use <clone> (as CHarithaM answered), but this will kill your transactions (e.g. JMS).
So I prefer to use a JMS topic with two consumers. Each consumer then has its own messageContext and runs completely independently, in a proper way.

Related

Are StackExchange.Redis fire-and-forget calls guaranteed to be delivered in order?

If I make multiple StackExchange.Redis calls from a single thread using fire-and-forget, are they guaranteed to be delivered in order?
Use case: I am adding an item to a Redis stream and then using pub/sub to trigger another service to process that stream. To avoid a race condition, I need to make sure that the item is added to the stream before the pub/sub message is delivered.
While most StackExchange.Redis APIs are thread-safe, the order of delivery of commands sent through SE.Redis can't be guaranteed out-of-the-box in your scenario for several reasons:
your topology could have multiple nodes, and each command in your sequence could be delivered to a different node, either because of a topology change or because of your own preferences (CommandFlags.Prefer* / CommandFlags.Demand*);
your thread could host multiple tasks whose continuations do not respect the intended delivery order;
being fire-and-forget, a failure in the delivery of the first command would not stop sending the subsequent ones;
I need to make sure that the item is added to the stream before the pub/sub message is delivered.
I suggest using a Lua script to solve this, which would execute your commands within the same atomic unit and against the same node:
redis.call('XADD', 'foo', '*', 'bar', 'baz')
redis.call('PUBLISH', 'foo-added', '')
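As a minimal Python sketch (not from the original answer), the script could be invoked through a redis-py-style client; the key, channel, and field names are the placeholders from the script above, and the `eval` signature assumed here matches redis-py's `Redis.eval(script, numkeys, *keys_and_args)`:

```python
# Lua script executed atomically on a single Redis node: the XADD is
# guaranteed to complete before the PUBLISH is issued.
ADD_THEN_PUBLISH = """
redis.call('XADD', KEYS[1], '*', ARGV[1], ARGV[2])
redis.call('PUBLISH', KEYS[2], '')
"""

def add_then_publish(client, stream, channel, field, value):
    """Run the script via EVAL on a redis-py-style client.

    `client` is any object exposing eval(script, numkeys, *keys_and_args);
    the two KEYS are the stream name and the pub/sub channel.
    """
    return client.eval(ADD_THEN_PUBLISH, 2, stream, channel, field, value)
```

Because both commands run inside one script, no other client can observe the published message before the stream entry exists.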

Multiple mule-flow instances

I have a Mule app with a single mainFlow. The flow receives a JMS message from MQ via quartz:inbound-endpoint, processes it and puts it in another queue. Messages are independent. Is it possible to run multiple instances of my flow for concurrent message processing?
Yes. As long as your flow is neither transactional nor two-way (request-response), you can set up various asynchronous processing strategies.
You can configure this either visually in Studio, in the flow's properties, or by manually adding the attribute processingStrategy="...yourStrategy" to the flow's XML declaration.
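For example, switching the flow to Mule 3's queued-asynchronous strategy looks like the following sketch (the flow name is a placeholder):

```xml
<flow name="mainFlow" processingStrategy="queued-asynchronous">
    <!-- inbound endpoint and message processors as before -->
</flow>
```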
For an in-depth guide to what everything means and does, see https://docs.mulesoft.com/mule-user-guide/v/3.8/flow-processing-strategies

AMQP/RabbitMQ - How to avoid race conditions

I have the following architecture:
There are a fixed number of input sources. Each input source is equivalent.
The AMQP broker. I am using RabbitMQ in my case.
Currently, there are 2 consumers. Again, each consumer is equivalent.
The input sources are sending commands to be processed. These commands are forwarded by the broker and picked up by one of the two consumers.
I need the following behaviour:
If one input source sends multiple commands, all commands must be processed sequentially. That is, in the example of 2 commands, it is not allowed that consumer 1 is processing command 1 while consumer 2 is processing command 2 at the same time.
However, two commands originating from two different input sources can be processed simultaneously.
Is it possible to enforce this behaviour with AMQP/RabbitMQ?
You can cover your scenario by using one consumer per queue; each queue then processes its messages sequentially.
Another way is to use only one queue and call envelope.getExchange() to determine the source, or tag your messages using the AMQP.BasicProperties properties.
This way you can, for example, consume the messages in multiple threads and assign one thread to each tag.
To guarantee the sequence you may need to aggregate the messages: batch the commands from one source into a single message before publishing to the queue, so each message in the queue contains one or more commands that are executed together by the consumer.
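The per-source routing idea can be sketched in Python (the queue-name scheme and function are illustrative assumptions, not RabbitMQ APIs): each source ID is deterministically hashed to one of a fixed set of queues, so all commands from one source land on the same queue and are consumed sequentially by that queue's single consumer, while different sources can map to different queues and be processed in parallel.

```python
import hashlib

def queue_for_source(source_id: str, num_queues: int) -> str:
    """Deterministically map a source to one of num_queues queues.

    hashlib is used rather than the built-in hash(), because Python's
    string hash is randomized per process and would not give a stable
    mapping across publishers.
    """
    digest = hashlib.sha256(source_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:4], "big") % num_queues
    return f"commands.{index}"
```

A publisher would use the returned name as the routing key, with exactly one consumer bound to each `commands.N` queue.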

Composite Source with WMQ Node in Mule ESB

By using a composite source in Mule ESB, it is possible to get input from different queues at a time. Is there any method to get to know the input WMQ node name? e.g.
I have 2 queues (ABC & XYZ) from which input can be obtained and further transformation applied. Is there any way to know the queue from which the message was received?
There is no need to use composite-source if you need to behave differently based on the queue name.
What I would recommend is to design your flows according to your needs:
flow(queueA)->flowVars.queueName="queueA"->flow-ref(realwork)
flow(queueB)->flowVars.queueName="queueB"->flow-ref(realwork)
flow(realwork)->dotherealworkhere
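In Mule 3 XML, the pseudocode above might look like the following sketch (the endpoint type, flow names, and queue names are illustrative; the question's WMQ connector would replace the generic JMS endpoint):

```xml
<flow name="queueAFlow">
    <jms:inbound-endpoint queue="queueA"/>
    <set-variable variableName="queueName" value="queueA"/>
    <flow-ref name="realwork"/>
</flow>

<flow name="queueBFlow">
    <jms:inbound-endpoint queue="queueB"/>
    <set-variable variableName="queueName" value="queueB"/>
    <flow-ref name="realwork"/>
</flow>

<flow name="realwork">
    <!-- shared processing; #[flowVars.queueName] tells you the source queue -->
</flow>
```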

Paging in Mule ESB

We have a Mule flow which processes a bunch of records. We want to implement paging because one of the steps in the process is calling an external system which can only take a set amount of records at a time.
We have attempted to solve this by adding a choice router in the flow that checks whether there are more records to process and, if so, calls the same flow again (a self-reference), but this caused stack overflow errors.
We have also tried the until-successful scope, but we need errors to break out of the loop and be caught by the exception strategy.
Thanks.
Mule can process messages in batches:
http://www.mulesoft.org/documentation/display/current/Batch+Processing
It is the best option for your requirement.
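The batch module is Mule-specific, but the underlying paging idea is simple to sketch in Python (function names and the page size are illustrative): split the record set into fixed-size pages and let a failure propagate out of the loop, mirroring how an error should escape to the exception strategy rather than being swallowed by a retry loop.

```python
def pages(records, page_size):
    """Yield successive fixed-size pages from a list of records."""
    for start in range(0, len(records), page_size):
        yield records[start:start + page_size]

def process_all(records, page_size, send_page):
    """Send each page to the external system.

    An exception raised by send_page propagates immediately, stopping
    the loop so the caller's error handling can take over.
    """
    sent = 0
    for page in pages(records, page_size):
        send_page(page)   # may raise; remaining pages are not sent
        sent += len(page)
    return sent
```

This avoids the self-referencing-flow recursion (and its stack overflows) because paging is an iterative loop, not a recursive call.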