MuleSoft flow processing strategies - mule

Could you please explain the Queued-Asynchronous Flow Processing Strategy with an example? I found many documents with explanations, but none with an example that shows how it processes messages or how to create a flow that uses it.
I found one link that explains the synchronous and non-blocking processing strategies:
https://www.ricston.com/blog/synchronous-non-blocking-processing-strategies/

The Queued-Asynchronous Flow Processing Strategy works by having a thread pool for the message source of the flow (for example a JMS inbound transport), a thread pool for flow execution, and a queue of Mule events between the two pools. When a JMS message arrives to start the flow, it is handled by a thread of the transport, which places it on the internal queue as a Mule event, to be picked up by one of the flow threads to execute the rest of the flow. The source threads are then free to keep listening for new messages; that is the 'asynchronous' part. They don't wait for the flow to finish processing.
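For illustration, a minimal Mule 3 flow using this strategy might look like the sketch below; the broker URL, queue name, and component class are made-up placeholders, not taken from any real project:

```xml
<jms:activemq-connector name="jmsConnector" brokerURL="tcp://localhost:61616"/>

<flow name="orderFlow" processingStrategy="queued-asynchronous">
    <!-- a receiver thread from the JMS connector's pool picks up the message -->
    <jms:inbound-endpoint queue="orders" connector-ref="jmsConnector"
        exchange-pattern="one-way"/>
    <!-- from here on, a thread from the flow's own pool does the work,
         taken off the internal SEDA queue, so the receiver thread is
         already free to listen for the next message -->
    <logger level="INFO" message="Processing #[message.payload]"/>
    <component class="com.example.OrderProcessor"/>
</flow>
```

Note that with a one-way, non-transactional inbound endpoint the `processingStrategy` attribute is redundant, since queued-asynchronous is the default in that case; it is spelled out here only to make the example explicit.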

Related

Consume message from queue after service completes processing of previous message

I am doing a POC with RabbitMQ and have a question about how to listen to queues conditionally!
We consume messages from a queue, and each consumed message triggers an upload process whose duration depends on the file size. When the files are large, the external service we invoke sometimes runs out of memory if new messages are consumed while uploads for previous messages are still in progress.
That said, we would like to consume the next message from the queue only once the current/previous message has been processed completely. I am new to JMS and wondering how to do this.
My current thought is that the flow will manually pull the next message from the queue when it completes processing the previous one, since the flow knows when it has finished. But if the listener is only invoked manually from the flow, how will it pull the very first message?
The JMS spec says that message consumers work sequentially:
The session used to create the message consumer serializes the
execution of all message listeners registered with the session
If you create a MessageListener and use that with your consumer, the JMS spec states the listener's onMessage will be called serially, one message at a time; the next call is made only after the previous one returns. So in effect each message waits until the previous one has completed.
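Since the poster is actually on RabbitMQ with Spring, a sketch of the same serial behaviour in Spring AMQP would be to limit the listener container to a single consumer with a prefetch of 1; the bean names and queue name here are assumptions:

```xml
<!-- one consumer thread, and at most one unacknowledged message at a time,
     so the next message is delivered only after the previous one finishes -->
<rabbit:listener-container connection-factory="connectionFactory"
    concurrency="1" prefetch="1" acknowledge="auto">
    <rabbit:listener ref="uploadHandler" method="handle" queue-names="uploads"/>
</rabbit:listener-container>
```

The trade-off is obvious: throughput drops to one upload at a time, which is exactly what protects the memory-constrained external service.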

Producer, consumer, and handler with messaging queue

I am involved in a project which consists of apps below:
Producer application: receives messages from clients via ASP.NET web api, and enqueues messages into a message queue.
Consumer application: dequeues messages from the message queue above, and sends messages to Handler application below.
Handler application: receives messages from Consumer application, and sends the message to external application, if that failed, sends them to dead queue.
The problem is that:
Consumer dequeues messages off the queue and sends them to Handler. Then Consumer is blocked (via background threads using async) waiting for Handler's processing. That is, Consumer performs an RPC call to the Handler app.
If Handler either successfully sends the messages to external app, or if that failed, successfully enqueues them to a dead queue, Consumer commits the dequeuing. (removes message off the queue)
If either of both (external app or dead queue) above failed, consumer rollbacks the dequeuing (puts message back to queue)
My questions are:
What are the pros and cons of using the Handler app, compared with the Consumer performing the Handler's logic in addition to its current logic?
Is it better to remove the Handler application and integrate the Handler's logic into the Consumer application? Then the Consumer talks to the external application directly and handles the dead queue. One fewer application to maintain.
Let's be perfectly clear: in the abstract sense, you have two entities - a producer and a consumer. The producer sends the original message, and the consumer processes it. There is no need to muddy the water by adding details about "handler" as it is a logical part of the consuming process.
It seems then that your real question (and also mine) is "what value does consumer (your definition) add?" Keep in mind that none are "talking" directly to one another - they are communicating via a message queue. In that regard, if it is easier to have the ultimate processing piece dequeue the message directly, rather than having some intermediate pipe, then do that.

How to hold a Mule process until the JMS consumer completes processing

I have JMS in my Mule flow, where a producer reads records from a cache and puts them in a queue, and a consumer consumes the messages and does further processing. The following is the flow, for understanding.
Service 1 (Read data from file) -> Service 2 (put each line in cache)
-> JMS Service 3 (Producer Read data from cache line by line and put in queue) and Consumer read from queue -> Service 4
In the above flow, everything from the JMS component onward is asynchronous. Hence, as soon as the producer puts all records in the queue, a response goes back to the client saying the process is completed, while the consumer may still be consuming messages.
I want to hold the producer from sending back the response until the consumer has consumed all the messages.
Any idea how to achieve this?
Since async processing takes a copy of the message and processes it independently, the producer may be putting messages into the queue faster than the consumer is able to consume them.
One way I can think of to hold back the process of putting messages into the queue is by putting a sleep() before it.
You can use a Groovy component with sleep() in it to hold the flow or slow down the process.
For example, if you put the following:
<scripting:component doc:name="Groovy">
    <scripting:script engine="Groovy"><![CDATA[
        sleep(10000);
        return message.payload;]]>
    </scripting:script>
</scripting:component>
before putting the message into the queue, the process will slow down and hold the flow for 10000 ms, giving the consumer on the other side time to actually consume it.
Polling for completion status as described above may work OK, but there's still a risk that some transactions are not completed within the wait time, or that you keep waiting long after all messages have been processed.
Depending on the end goal of this exercise, you could perhaps leverage Mule batch, which already implements the splitting of the inbound request into individual messages, processing the messages in one or multiple consumer threads, keeping track of the chunks processed and remaining, and reporting the results / executing final steps once all data is processed.
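A rough sketch of such a job with the Mule 3 (EE) batch module follows; the endpoint path, job name, and uploader component are hypothetical:

```xml
<batch:job name="uploadJob">
    <batch:input>
        <!-- message source; batch turns a collection payload into records -->
        <vm:inbound-endpoint path="uploads" exchange-pattern="one-way"/>
    </batch:input>
    <batch:process-records>
        <batch:step name="uploadStep">
            <!-- hypothetical component doing the per-record upload -->
            <component class="com.example.Uploader"/>
        </batch:step>
    </batch:process-records>
    <batch:on-complete>
        <!-- runs exactly once, after every record has been processed;
             the payload here is the BatchJobResult -->
        <logger level="INFO"
            message="Done: #[payload.successfulRecords] ok, #[payload.failedRecords] failed"/>
    </batch:on-complete>
</batch:job>
```

The on-complete phase is the natural place to notify the client, since by then all consumer work has finished.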
If you can't use batch and need to reassemble the processed messages into a single list or map, you may be able to get the Collection Aggregator to do the job of tracking the messages by correlation ID and setting the timeouts.
The crude DIY way to implement it is to build some sort of dispatcher logic for the JMS publishing component. It will submit all messages to JMS then wait for each consumer / worker thread to respond back (via a separate JMS queue) with completion message with the same correlation ID. The dispatcher will then track all submitted / processed messages in the in-memory or persistent storage and respond back once the last message in the batch has been acknowledged, or by pre-defined timeout. Which is very close to what Mule batch already does.
Cheers!
Dima
You can set the exchange-pattern to request-response so that the flow will wait for a response from JMS.
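For example (the queue name and connector reference are assumptions), the publishing endpoint could be declared as:

```xml
<!-- the flow thread blocks here until a reply arrives on a temporary
     reply queue, instead of returning as soon as the message is queued -->
<jms:outbound-endpoint queue="records" exchange-pattern="request-response"
    connector-ref="jmsConnector"/>
```

Note that this only unblocks if the consuming side actually produces a reply message, so the consumer flow must return something when it finishes.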

Mule Queued-Asynchronous Flow Queue vs. VM Queue

While studying and using Mule, I couldn't figure out if there's a difference between a queued-asynchronous flow queue and a VM queue.
My question is, are they the same queues (just with different names along the documentation) or different ones?
In a concrete example:
<flow name="fooFlow" processingStrategy="queued-asynchronous">
    <vm:inbound-endpoint path="foo" exchange-pattern="one-way"/>
    <component class="com.foo.FooComponent"/>
</flow>
Does the VM inbound-endpoint receive messages from one queue, while the flow has another queue to receive the messages from the inbound-endpoint? Or are they the same SEDA queue?
These are two very different concepts: one is the way a flow is processed, and the other is a lightweight queuing mechanism. VM is a transport, and it has persistent queuing capabilities as well as transaction support.
Please see (the last link to understand the flow execution model):
http://www.mulesoft.org/documentation/display/current/Flow+Processing+Strategies
http://www.mulesoft.org/documentation/display/current/VM+Transport+Reference
http://www.mulesoft.org/documentation/display/current/Tuning+Performance
To add some details on the specific example you showed.
You do not need to specify the processing strategy explicitly; Mule chooses the default processing strategy based on the exchange-pattern of the inbound endpoint. So, as you have a non-transactional one-way endpoint, the processing strategy is queued-asynchronous.
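To make the defaults concrete, compare the two flows below (names are illustrative): the first gets the queued-asynchronous strategy, the second the synchronous one, with no processingStrategy attribute needed on either.

```xml
<!-- one-way endpoint: defaults to queued-asynchronous -->
<flow name="asyncFlow">
    <vm:inbound-endpoint path="foo" exchange-pattern="one-way"/>
    <component class="com.foo.FooComponent"/>
</flow>

<!-- request-response endpoint: defaults to synchronous, so the whole
     flow runs on the receiver thread and the caller gets the result -->
<flow name="syncFlow">
    <vm:inbound-endpoint path="bar" exchange-pattern="request-response"/>
    <component class="com.foo.FooComponent"/>
</flow>
```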
Does the VM inbound-endpoint receive messages from one queue, and the
flow has another queue to receive the messages from the
inbound-endpoint? Or are they the same SEDA queue?
To receive messages, Mule will use the thread pool dedicated to the VM connector (receiver threads are tied to the transport). Once the message has been received, it will be processed using a thread from the flow's thread pool. (Would be great if I could be validated or corrected :)
(Most of the information is from the links posted in the earlier answer)

Re-queue AMQP message at tail of queue

I have a project set up using Spring and RabbitMQ. Currently it is possible for my application to receive an AMQP message that cannot be processed until another asynchronous process has completed (legacy and totally detached, I have no control over it). So the result is I may have to wait some amount of time before processing a message, which surfaces as an exception in a transformer.
When the message is NACK'd back to RabbitMQ, it is put back at the head of the queue and re-pulled immediately. If I get unprocessable messages equal to the number of concurrent listeners, my workflow locks up. It spins its wheels waiting for messages to become processable, even though there are valid processable messages waiting behind them in the queue.
Is there a way to reject an AMQP message and have it go back to the tail of the queue instead? From my research, RabbitMQ worked this way at one time, but now I appear to get the head of the queue exclusively.
My config is rather straight forward, but for continuity here it is...
Connection factory is: org.springframework.amqp.rabbit.connection.CachingConnectionFactory
RabbitMQ 3.1.1
Spring Integration: 2.2.0
<si:channel id="channel"/>

<si-amqp:inbound-channel-adapter
    queue-names="commit" channel="channel" connection-factory="amqpConnectionFactory"
    acknowledge-mode="AUTO" concurrent-consumers="${listeners}"
    channel-transacted="true"
    transaction-manager="transactionManager"/>

<si:chain input-channel="channel" output-channel="nullChannel">
    <si:transformer ref="transformer"/>
    <si:service-activator ref="activator"/>
</si:chain>
You are correct that RabbitMQ was changed some time ago. There is nothing in the API to change the behavior.
You can, of course, put an error-channel on the inbound adapter, followed by a transformer (expression="payload.failedMessage"), followed by an outbound adapter configured with an appropriate exchange/routing-key to requeue the message at the back of the queue.
You might want to add some additional logic in the error flow to check the exception type (payload.cause) and decide which action you want.
If the error flow itself throws an exception, the original message will be requeued at the head, as before; if it exits normally, the message will be acked.
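Wiring that up might look roughly like the sketch below. The channel names and the amqpTemplate bean are assumptions; the key idea is that publishing to the default exchange ("") with the queue name as the routing key appends the message to the tail of the same queue.

```xml
<si-amqp:inbound-channel-adapter
    queue-names="commit" channel="channel" connection-factory="amqpConnectionFactory"
    acknowledge-mode="AUTO" concurrent-consumers="${listeners}"
    channel-transacted="true" transaction-manager="transactionManager"
    error-channel="requeueErrors"/>

<!-- extract the original message from the MessagingException payload -->
<si:transformer input-channel="requeueErrors" output-channel="toTail"
    expression="payload.failedMessage"/>

<!-- default exchange routes by queue name, so this re-publishes
     the message at the tail of the 'commit' queue -->
<si-amqp:outbound-channel-adapter channel="toTail"
    exchange-name="" routing-key="commit" amqp-template="amqpTemplate"/>
```

If the error flow completes normally, the original delivery is acked, and only the re-published copy remains, now at the back of the queue.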