How to hold a Mule process until the JMS consumer completes processing - RabbitMQ

I have JMS in my Mule flow, where a producer reads records from a cache and puts them in a queue, and a consumer consumes the messages and does further processing. Here is the flow for understanding:
Service 1 (Read data from file) -> Service 2 (put each line in cache)
-> JMS Service 3 (producer reads data from the cache line by line and puts it in the queue), and the consumer reads from the queue -> Service 4
In the above flow, the JMS component makes the flow asynchronous, so as soon as the producer has put all records in the queue, a response goes back to the client saying the process is complete, even though the consumer may still be consuming messages.
I want the producer side to hold back its response until the consumer has consumed all the messages.
Any idea how to achieve this?

Since the async part takes a copy of the message and processes it on an independent thread, the producer may be putting messages into the queue faster than the consumer is actually able to consume them.
One way I can think of to hold back the process of putting messages into the queue is to put a sleep() before it.
You can use a Groovy component with sleep() in it to hold the flow or slow down the process.
For example, if you put the following:
<scripting:component doc:name="Groovy">
    <scripting:script engine="Groovy"><![CDATA[
        sleep(10000);
        return message.payload;
    ]]></scripting:script>
</scripting:component>
before putting the message into the queue, the process will slow down a bit and hold the flow for 10000 ms, giving the consumer on the other side time to actually consume the message.

Polling or sleeping to wait for completion as described above may work OK, but there's still a risk that some transactions are not completed within the wait time, or that you keep waiting long after all messages have been processed.
Depending on the end goal of this exercise, you could perhaps leverage Mule batch, which already implements the splitting of the inbound request into individual messages, processing the messages in one or multiple consumer threads, keeping track of the chunks processed and remaining, and reporting the results / executing final steps once all data is processed.
If you can't use batch and need to reassemble the processed messages into a single list or map, you may be able to get the Collection Aggregator do the job of tracking the messages by correlation ID and setting the timeouts.
The crude DIY way to implement it is to build some sort of dispatcher logic for the JMS publishing component. It will submit all messages to JMS then wait for each consumer / worker thread to respond back (via a separate JMS queue) with completion message with the same correlation ID. The dispatcher will then track all submitted / processed messages in the in-memory or persistent storage and respond back once the last message in the batch has been acknowledged, or by pre-defined timeout. Which is very close to what Mule batch already does.
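Not a Mule-specific solution, but just to make the dispatcher idea concrete, here is a rough plain-JMS sketch; the queue names, the selector, and the timeout value are only illustrative:
import javax.jms.*;
import java.util.List;
import java.util.UUID;

// Sketch of a dispatcher: publish a batch, then block until every
// completion ack for that batch has come back (or a timeout expires).
public class BatchDispatcher {

    private static final long TIMEOUT_MS = 60_000;   // illustrative timeout

    public boolean dispatch(Connection connection, List<String> records) throws JMSException {
        String batchId = UUID.randomUUID().toString();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // 1. Publish every record, tagging it with the batch correlation id.
        MessageProducer producer = session.createProducer(session.createQueue("work.queue"));
        for (String record : records) {
            TextMessage msg = session.createTextMessage(record);
            msg.setJMSCorrelationID(batchId);
            producer.send(msg);
        }

        // 2. Wait on a separate completion queue until each worker has acked.
        MessageConsumer completions = session.createConsumer(
                session.createQueue("completion.queue"),
                "JMSCorrelationID = '" + batchId + "'");   // selector on our batch only
        int acked = 0;
        long deadline = System.currentTimeMillis() + TIMEOUT_MS;
        while (acked < records.size() && System.currentTimeMillis() < deadline) {
            Message ack = completions.receive(deadline - System.currentTimeMillis());
            if (ack != null) {
                acked++;
            }
        }
        session.close();
        return acked == records.size();   // true = whole batch confirmed
    }
}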
Cheers!
Dima

You can set the exchange-pattern value to request-response so that the flow will wait for a response from JMS.

Related

Mulesoft Flow processing strategies

Could you please explain the Queued-Asynchronous Flow Processing Strategy with an example?
I found many documents with explanations, but no example that shows how it processes messages or how to create a flow for it.
I found one link that explains the synchronous and non-blocking processing strategies:
https://www.ricston.com/blog/synchronous-non-blocking-processing-strategies/
The Queued-Asynchronous Flow Processing Strategy works by having a thread pool for the message source of the flow (for example a JMS inbound transport), a thread pool for flow execution, and a queue of Mule events between the two pools. When a JMS message arrives to start the flow, it is handled by a thread from the source pool, which places it on the internal queue as a Mule event, where it is picked up by one of the flow threads that executes the rest of the flow. The source threads are then free to keep listening for new messages; that is the 'asynchronous' part. They don't wait for the flow to finish processing.
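This is not Mule's actual implementation, just a conceptual sketch of that handoff in plain Java: a "source" pool hands events to an in-memory queue and goes back to listening, while a separate "flow" pool drains the queue (the demo runs until killed):
import java.util.concurrent.*;

// Conceptual illustration of a queued-asynchronous handoff (not Mule internals):
// receiver threads enqueue events and return immediately; flow threads drain
// the queue and run the rest of the "flow".
public class QueuedAsyncDemo {
    public static void main(String[] args) {
        BlockingQueue<String> eventQueue = new LinkedBlockingQueue<>();
        ExecutorService receiverPool = Executors.newFixedThreadPool(2);  // "source" threads
        ExecutorService flowPool = Executors.newFixedThreadPool(4);      // "flow" threads

        // Flow threads: pick events off the queue and process them.
        for (int i = 0; i < 4; i++) {
            flowPool.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        String event = eventQueue.take();
                        System.out.println("flow thread processing " + event);
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }

        // Receiver threads: accept a "message" and return as soon as it is queued.
        for (int i = 0; i < 10; i++) {
            final int n = i;
            receiverPool.submit(() -> eventQueue.offer("message-" + n));
        }
    }
}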

RabbitMQ pause a queue

I am using a RabbitMQ Server (v3.8.9) with Java clients.
Use case is:
Our Backend creates messages for different clients. We send them out to their respective Endpoints.
1 Producer -> Outbound Queue -> 1 Consumer
The producer creates messages for n clients
Which the consumer should send out to the clients' endpoints
Messages must be kept in the correct order regarding each client
This works fine as long as all clients are up and running. Problem: if one client becomes unavailable, we need a bulletproof retry mechanism for that.
Say:
Wait 1 Minute and try again
All following messages must NOT be delivered before the first failed one and kept in the correct order
If a retry works, then ALL other messages should be sent to the client immediately
As you can see, it is not a solution to just "suspend" the consumer, because it should still deliver messages to the other (alive) clients. Due to application limitations and a dynamic number of clients, we cannot spawn one consumer per client queue.
My best approach right now is to dynamically create one queue per client, which are then routed to a single outbound queue. If one msg to one client cannot be delivered by the consumer, I would like to "pause" the clients queue for x minutes. An API call like "queue_pause('client_q1', '5 Minutes')" would help. But even then I have to deal with the other, already routed messages to that particular client and keep them in the correct order...
Any better ideas?
I think the key here is that a single consumer script can consume from multiple queues. So if I'm understanding correctly, you could model this as:
Each client has its own queue. These could be created by the consumer script when it starts up, or by a back-end process when a new client is created.
The consumer script subscribes to each queue separately
When a message is received, the consumer tries to send it immediately to the client; if it succeeds, it is manually acknowledged with basic.ack, and the consumer is ready to send the next message to that client.
When a message cannot be delivered to the client, it is requeued (basic.nack or basic.reject with requeue=1), retaining its position in the client's queue.
The consumer then needs to pause consuming from that particular queue. Depending on how it's written, that could be as simple as a sleep in that particular thread, but if that's not practical, you can effectively "pause" the subscription to the queue:
Cancel the subscription to that queue, leaving other subscriptions intact
Store the queue name and the retry time in an appropriate variable
If the consumer script is implemented with an event/polling loop, check the list of "paused" subscriptions each time around that loop; if the retry time has been reached, re-subscribe.
Alternatively, if the library / framework supports it, register a delayed event that will fire at the appropriate time and re-subscribe the queue. The exact mechanics of this depend on the technologies you're using.
All the other subscriptions will continue, so messages to other clients will be delivered. The queue with no subscribers will retain the messages for the offline client in order until the consumer script starts consuming them again.
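For illustration only, here is a rough sketch of that pause/resume logic with the RabbitMQ Java client; sendToClient() and the one-minute back-off are placeholders for your real delivery code and retry policy:
import com.rabbitmq.client.*;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: one consumer process subscribed to one queue per client. On a
// delivery failure the message is requeued, the subscription for that queue
// is cancelled ("paused"), and a timer re-subscribes it later.
public class PerClientConsumer {

    private final ScheduledExecutorService retryTimer = Executors.newSingleThreadScheduledExecutor();

    public void subscribe(Channel channel, String clientQueue) throws Exception {
        channel.basicQos(1);   // prefetch 1 per consumer helps preserve per-client ordering
        channel.basicConsume(clientQueue, false, (consumerTag, delivery) -> {
            long tag = delivery.getEnvelope().getDeliveryTag();
            if (sendToClient(clientQueue, delivery.getBody())) {   // stand-in for the real call
                channel.basicAck(tag, false);
            } else {
                channel.basicNack(tag, false, true);               // requeue, keeps queue order
                channel.basicCancel(consumerTag);                  // "pause" only this client's queue
                retryTimer.schedule(() -> {
                    try {
                        subscribe(channel, clientQueue);           // resume after the back-off
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }, 1, TimeUnit.MINUTES);
            }
        }, consumerTag -> { /* cancelled by the broker; nothing to do in this sketch */ });
    }

    private boolean sendToClient(String clientQueue, byte[] body) {
        // hypothetical delivery to the client's endpoint; returns false on failure
        return true;
    }
}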

Consume a message from the queue only after the service has completed processing the previous message

I am doing a POC with RabbitMQ and have a question about how to listen to queues conditionally.
We are consuming messages from a queue, and once consumed, each message goes into an upload process that takes longer depending on the file size. Because the files are large, the external service we invoke sometimes runs out of memory when new messages are consumed while the upload process for previous messages is still running.
That said, we would like to consume the next message from the queue only once the current/previous message has been processed completely. I am new to JMS and wondering how to do it.
My current thought is that the code will manually pull the next message from the queue when it completes processing of the previous one, since the code knows when processing is done. But if that listener is only invoked manually from the code, how will it pull the very first message?
The JMS spec says that message consumers work sequentially:
The session used to create the message consumer serializes the execution of all message listeners registered with the session
If you create a MessageListener and use it with your consumer, the JMS spec states the listener's onMessage will be called sequentially: the next message is not delivered until the listener has finished processing the previous one. So in effect each message waits until the previous one has completed.
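A minimal sketch of that pattern (the connection factory, queue name, and upload() are placeholders):
import javax.jms.*;

// Minimal sketch: a single session + consumer with a MessageListener.
// The session serializes delivery, so onMessage is not called for the
// next message until it has returned for the current one.
public class SequentialUploadConsumer {

    public static void start(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("upload.queue"));

        consumer.setMessageListener(message -> {
            try {
                upload(message);            // long-running upload; the next message waits for this
                message.acknowledge();      // ack only after the upload has finished
            } catch (Exception e) {
                // recovery / redelivery handling would go here
            }
        });
        connection.start();                 // begin delivery
    }

    private static void upload(Message message) {
        // stand-in for the actual file upload
    }
}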

Mule ESB: How to achieve a typical retry mechanism in Mule ESB

I need to implement retry logic. An inbound endpoint pushes messages to a REST (outbound) endpoint. If the REST service is unavailable, I need to retry once and then put the message in a queue. But subsequent messages should not retry at all; they should go directly into the queue until the REST service is available again.
Once the service is available, I need to push all the messages from the queue to the REST service (in order) via a batch job.
Questions:
How do I know the service is unavailable for my second message? If I use Until Successful, every message retries before going into the queue. The problem is that the 2nd message shouldn't retry.
For the batch, I thought of using Poll, but how do I tell Poll when the service becomes available so it can begin the batch process? (Poll is more about configuring timings to run the batch.)
The other tricky part is that ordering has to be preserved: once the service is available, the queued messages (i.e. the batch) have to go to the REST service first, and only then the real-time ones. I'm not sure whether that is feasible.
A quick response to help implement this logic would be much appreciated.
Using Mule: 3.5.1
You could try something like the below, using flow controls (a rough sketch of the same logic in plain Java follows this list):
Process a message; on an exception or bad response code, set a variable/property like serviceAvailable=false.
Subsequent message processing first checks the serviceAvailable property. If it is false, enqueue the message into a DB table with status=new/unprocessed.
Create a flow/scheduler to process the messages from the DB sequentially; it does not check the serviceAvailable property and calls the REST service directly.
If the service throws an exception, the message is not stored in the DB again; if it processes successfully, set serviceAvailable=true and dequeue the message (or update its status). Add another property, e.g. moreDBMsg, set to true while there are still messages in the DB table.
New messages should not be processed/consumed until moreDBMsg=false.
Once moreDBMsg=false and serviceAvailable=true, start processing the new messages again.
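Ignoring the Mule wiring, the decision logic above might look roughly like this in plain Java; the in-memory deque stands in for the DB table and callRest() for the outbound REST service:
import java.util.ArrayDeque;
import java.util.Deque;

// Rough sketch of the gating logic; all names here are placeholders.
public class RetryGate {

    private volatile boolean serviceAvailable = true;
    private final Deque<String> parked = new ArrayDeque<>();   // stand-in for the DB table

    // Called for each incoming real-time message.
    public synchronized void onMessage(String payload) {
        if (!serviceAvailable || !parked.isEmpty()) {
            parked.addLast(payload);            // park it behind older messages to keep order
            return;
        }
        callRest(payload);
    }

    // Called by a scheduler: drain parked messages in arrival order first.
    public synchronized void drainParked() {
        while (!parked.isEmpty()) {
            String payload = parked.peekFirst();
            try {
                callRest(payload);
                parked.removeFirst();
                serviceAvailable = true;        // a success flips the flag back
            } catch (RuntimeException unavailable) {
                serviceAvailable = false;       // leave the message parked, retry on the next run
                return;
            }
        }
    }

    private void callRest(String payload) { /* HTTP call to the outbound REST service */ }
}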
For the timeout I would still look at the response code and catch time-outs to determine whether the call was successful or requires a retry. In practice you normally use multi-threading, so you have multiple calls in parallel, or one call simply starts before the other ends.
That is quite normal.
But you can simply retry queued calls that time out, and after x time-outs you "skip" or defer the retry.
But all of this can be done using actual Mule flow components, like either:
MEL http://www.mulesoft.org/documentation/display/current/Mule+Expression+Language+Reference
Or flow controls: http://www.mulesoft.org/documentation/display/current/Choice+Flow+Control+Reference
Or for example you reference a Spring Bean and do it in native Java code.
One possibility for the queue would be to persist it in a database. Mule has database connector that has a "poll" feature, see: http://www.mulesoft.org/documentation/display/current/JDBC+Transport+Reference#JDBCTransportReference-PollingTransport

Can any of my consumers take messages from the queue?

I am developing an app and I am using ActiveMQ. Is there any way to have one producer always send messages to one broker, while on the opposite side there are 3 consumers? Each consumer listens to the broker and can take any message from the queue. Is this possible?
I am using ActiveMQ for writing my app's logs to a DB. As you know, writing logs to a DB is a time-consuming process, which is why the consumer is much slower than the producer. For example, I send 100,000 messages (huge objects). The producer finishes sending the messages in 20 minutes, but by the time it has finished, the consumer has only processed 4,000 of them.
Yes, what you are describing is possible. In fact, you can have any number of consumers listening on a single queue. The messages are dispatched in a round-robin fashion between consumers.
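For example, a minimal sketch with the ActiveMQ Java client; the broker URL and queue name are placeholders:
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch: three consumers on the same queue; the broker dispatches
// messages between them round-robin.
public class ThreeConsumers {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();

        for (int i = 1; i <= 3; i++) {
            final int id = i;
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("log.queue"));
            consumer.setMessageListener(message ->
                    System.out.println("consumer " + id + " got a log entry"));
        }
        connection.start();   // begin delivery once all three consumers are registered
    }
}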
What you should be aware of is that ActiveMQ performs much better sending small messages than large ones. If you need to send very large payloads (e.g. 100mb), you are far better off saving the message to a location that is accessible by both the producer and consumers (e.g. a network file system), and sending the location of the message instead. The consumer can then use that to read the message manually. This way you get a relatively small amount of traffic through the message broker.
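A rough sketch of that "send a reference, not the payload" idea; the shared path is only an example and error handling is omitted:
import javax.jms.*;
import java.nio.file.*;

// Claim-check style sketch: write the large payload to shared storage and
// send only its location through the broker.
public class ClaimCheck {

    // Producer side: store the payload, publish its location.
    public static void send(Session session, MessageProducer producer, byte[] largePayload) throws Exception {
        Path stored = Files.write(
                Paths.get("/shared/payloads/" + System.nanoTime() + ".bin"), largePayload);
        producer.send(session.createTextMessage(stored.toString()));
    }

    // Consumer side: read the location from the message, load the payload manually.
    public static byte[] receive(TextMessage message) throws Exception {
        return Files.readAllBytes(Paths.get(message.getText()));
    }
}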