Mule - How to stop the JMS input endpoint - ActiveMQ

There is a situation where I need a suspend-like behavior on my ActiveMQ JMS inbound endpoint when my output endpoint is down, so that I don't process messages from the queue. Once the output endpoint is up, I want to resume fetching from the queue. Is this possible in Mule? If yes, how?

You should be able to do something like this:
// Look up the connector by name in the Mule registry and stop it,
// so the inbound endpoint stops consuming from the queue.
Connector connector = muleRegistry.lookupConnector(connectorName);
connector.stop();
However, this is not really a straightforward solution if you care about message loss. It takes some time for the connector to shut down, and the very message that triggered the "stop" might be lost if you don't handle that case with care.
There is another option. If your other resource is down for only a short time, you might want to just use JMS transactions and roll back the message to the queue (which is what happens when the output endpoint fails); the transaction will then be retried over and over.
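To make the rollback idea concrete, here is a rough sketch in plain JMS (outside of Mule, which would normally configure the transaction on the endpoint instead); the broker URL, queue name and forwardToOutputEndpoint call are hypothetical:

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

// A transacted session: a rollback puts the message back on the queue and the broker redelivers it.
ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
Connection connection = factory.createConnection();
connection.start();
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
MessageConsumer consumer = session.createConsumer(session.createQueue("inbound.queue"));
Message message = consumer.receive(5000);
try {
    forwardToOutputEndpoint(message); // hypothetical call to the downstream endpoint
    session.commit();                 // success: the message is removed from the queue
} catch (Exception e) {
    session.rollback();               // failure: the message goes back and will be redelivered
}

With a redelivery policy configured on the broker you can also cap or delay these retries instead of looping forever.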

Related

What is the best approach for dealing with RabbitMQ DLQ messages in Spring AMQP

I am using Spring AMQP to listen to a RabbitMQ queue. While listening, depending on business logic, my service can throw a RuntimeException, in which case the message is retried several times. After the maximum number of retries, the message ends up in the DLQ. I am wondering what the best approach is for dealing with these messages in the DLQ. I read in blogs that I can use a parking-lot queue, but even in that case, how do I monitor the queue and notify people about dead-letter messages?
P.S. Sorry for my English. Hope that I was able to explain my problem :)
You can use the RabbitMQ REST API (via the Hop client) to get the status of the DLQ.
You can also use Spring's RabbitAdmin.getQueueProperties("my.dlq") to get the message count.
https://docs.spring.io/spring-amqp/docs/current/reference/html/#broker-configuration
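For illustration, a minimal sketch of the RabbitAdmin approach (the connection details, the queue name "my.dlq" and the notifyOperators hook are assumptions):

import java.util.Properties;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;

// Check the DLQ depth so an alert can be raised when dead letters accumulate.
CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
RabbitAdmin admin = new RabbitAdmin(connectionFactory);
Properties props = admin.getQueueProperties("my.dlq");   // returns null if the queue does not exist
if (props != null) {
    int messageCount = ((Number) props.get(RabbitAdmin.QUEUE_MESSAGE_COUNT)).intValue();
    if (messageCount > 0) {
        notifyOperators(messageCount);                    // hypothetical alerting hook
    }
}

Running something like this from a scheduled task gives you basic monitoring and alerting on the DLQ.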
Other options include adding another listener on the DLQ and running it periodically to either send messages back to the original queue or route them to a parking-lot queue if they fail too many times.
There is an example of that in the Spring Cloud Stream documentation.
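As a rough sketch of that listener idea (the Spring Cloud Stream example runs the listener periodically; this simplified version is always on, and the queue names, retry limit and use of the x-death header count are all assumptions):

import java.util.List;
import java.util.Map;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class DlqReprocessor {

    private static final int MAX_REQUEUES = 3;   // assumed limit before parking

    private final RabbitTemplate rabbitTemplate;

    public DlqReprocessor(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Consume from the DLQ and decide whether to retry the message or park it.
    @RabbitListener(queues = "my.dlq")
    public void reprocess(Message failed) {
        @SuppressWarnings("unchecked")
        List<Map<String, ?>> xDeath = (List<Map<String, ?>>)
                failed.getMessageProperties().getHeaders().get("x-death");
        long deaths = (xDeath == null || xDeath.isEmpty()) ? 0 : (Long) xDeath.get(0).get("count");
        if (deaths > MAX_REQUEUES) {
            rabbitTemplate.send("", "my.parking-lot", failed);   // give up: park it for human inspection
        } else {
            rabbitTemplate.send("", "my.queue", failed);         // send it back to the original queue
        }
    }
}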

Auto-process with DLQ configuration in Spring XD

1) I want to configure a DLQ for my stream:
stream create --name httptest7 --definition "http | http-client --url='''http://localhost:8080/mock-sentmessage/customers/send-email''' --httpMethod=GET | log"
stream deploy httptest7 --properties module.*.consumer.autoBindDLQ=true
2) I have set autoBindDLQ=true.
One doubt: if Spring XD fails to process my messages and posts them to the DLQ, will they be automatically moved back to my original queue for retry, or should I write a processor to move the DLQ messages back to my original queue?
3) Now I bring down my web service http://localhost:8080/mock-sentmessage/customers/send-email and I can see messages filling up my DLQ.
4) When I bring my service back up, I expected the messages in the DLQ to be retried, but they are not retried from the DLQ. Is there any configuration I need to set for this?
As per the documentation:
There is no automated mechanism provided to move dead lettered messages back to the bus queue.
I am not sure what your question is, or even if you have one; you seem to have answered your own question by quoting the documentation:
There is no automated mechanism provided to move dead lettered messages back to the bus queue.
So, no; there is no "setting" you can change.
There are a couple of things you can do: write your own code to move the messages from the DLQ back to the main queue; it would take just a few lines of Java using Spring AMQP, or any language of your choice.
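For illustration, "a few lines of Java" might look roughly like this (the bus queue names follow the usual xdbus.<stream>.<index> convention but are assumptions; check the actual names in your broker):

import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

// Drain the DLQ and re-publish each message to the original bus queue.
CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
RabbitTemplate template = new RabbitTemplate(connectionFactory);
Message message;
while ((message = template.receive("xdbus.httptest7.0.dlq")) != null) {
    template.send("", "xdbus.httptest7.0", message);   // "" = default exchange, routes by queue name
}
connectionFactory.destroy();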
You can also use the technique described here whereby you set a message TTL on the DLQ, and configure it to route back to the main queue when the TTL expires.
Just so you know, you can also use the Shovel plugin in RabbitMQ to move messages from the DLQ back to the bus queue.

Implementing the reliability pattern in CloudHub with VM queues

I have more or less implemented the Reliability Pattern in my Mule application using persistent VM queues in CloudHub, as documented here. While everything works fine, it has left me with a number of questions about actually ensuring reliable delivery of my messages. To illustrate the points below, assume I have an http-request component within my "application logic flow" (see the diagram on the link above) that is throwing an exception because the endpoint is down, and I want to ensure that the in-flight message will eventually get delivered to the endpoint:
As detailed on the link above, I have observed that when the exception is thrown within my "application logic flow", and I have made the flow transactional, the message is put back on the VM queue. However, all that happens is that the message is repeatedly taken off the queue, processed by the flow, and the exception thrown again, ad infinitum. There appears to be no way of configuring any sort of retry delay or maximum number of retries on VM queues, as is possible, for example, with ActiveMQ. The best workaround I have come up with is to surround the http-request message processor with the until-successful scope, but I'd rather have these sorts of things apply to my whole flow (without having to wrap the whole flow in until-successful). Is this sort of thing possible using only VM queues and CloudHub?
I have configured my until-successful to place the message on another VM queue which I want to use as a dead-letter queue. Again, this works fine, and I can log in to CloudHub and see the messages populated on my DLQ, but there appears to be no way of moving messages from this queue back into the flow when the endpoint comes back up. All you seem to be able to do in CloudHub is clear your queue. Again, is this possible using VM queues and CloudHub only (i.e. no other queueing tool)?
VM queues are very basic, whether you use them in CloudHub or not.
VM queues have no capacity for delaying redelivery (like exponential back-offs). Use JMS queues if you need such features.
You need to create a flow for processing the DLQ, for example one that regularly consumes the queue via the requester module and re-injects the messages into the main queue. Again, with JMS, you would have better control.
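As a rough illustration of that DLQ-draining flow, here is a sketch using the Mule 3 client API from a Java component (rather than the requester module mentioned above); the vm://dlq and vm://input endpoint names are assumptions, and message properties are not carried over in this simplified form:

import org.mule.api.MuleContext;
import org.mule.api.MuleMessage;
import org.mule.api.client.MuleClient;

public class DlqDrainer {

    private final MuleContext muleContext;

    public DlqDrainer(MuleContext muleContext) {
        this.muleContext = muleContext;
    }

    // Called from a poll-triggered flow: move whatever is on the DLQ back onto the main VM queue.
    public void drain() throws Exception {
        MuleClient client = muleContext.getClient();
        MuleMessage message;
        while ((message = client.request("vm://dlq", 1000)) != null) {
            client.dispatch("vm://input", message.getPayload(), null);
        }
    }
}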
As an alternative to JMS, you could consider hosted queues like CloudAMQP, Iron.io or AWS SQS. You would lose transaction support on the inbound endpoint but would gain better control over the (re)delivery behaviour.

Mule ESB: How to achieve a typical retry mechanism in Mule ESB

I need to implement retry logic. An inbound endpoint pushes messages to a REST (outbound) endpoint. If the REST service is unavailable, I need to retry once and then put the message in a queue. But subsequent messages should not retry at all; they have to go directly into the queue until the REST service is available again.
Once the service is available, I need to push all the messages from the queue to the REST service (in order) via a batch job.
Questions:
How do I know the service is unavailable for my second message? If I use until-successful, every message does its own retry before going to the queue; the problem is that the second message shouldn't retry at all.
For the batch, I thought of using a poll, but how do I tell the poll that the service has become available so it can begin the batch process? (Poll is more about configuring timings for running a batch.)
The other tricky part is that ordering has to be preserved: once the service is available, the queued messages (i.e. the batch) have to be pushed to the REST service first, and only then the real-time messages. I am not sure whether this is feasible.
A quick response on how to implement this logic would be very helpful.
Using Mule: 3.5.1
You could try something like the below, using flow controls (a rough Java sketch of the same gating logic follows this list):
Process a message; if there is an exception or a bad response code, set a variable/property like serviceAvailable=false.
Subsequent message processing will first check the serviceAvailable property; if it is false, enqueue the messages in a DB table with status=new/unprocessed.
Create a flow/scheduler to process the messages from the DB sequentially; it does not check the serviceAvailable property and calls the REST service directly.
If the service throws an exception, it will not store the messages in the DB again; if it processes them successfully, set serviceAvailable=true and dequeue the messages or change their status. Add another property, e.g. moreDBMsg, and set it to true while there are more messages in the DB table.
New messages should not be processed/consumed until moreDBMsg=false.
Once moreDBMsg=false and serviceAvailable=true, start processing the messages from the queue.
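A rough plain-Java sketch of that gating logic (in Mule this would be a choice router plus flow variables and a DB table; the in-memory backlog and the callRestService stub below are stand-ins, not real components):

import java.util.ArrayDeque;
import java.util.Deque;

public class GatedRestDispatcher {

    // Stand-in for the serviceAvailable flow variable described above.
    private volatile boolean serviceAvailable = true;

    // Stand-in for the DB table of unprocessed messages (status=new).
    private final Deque<String> backlog = new ArrayDeque<>();

    public void onMessage(String message) {
        if (!serviceAvailable || !backlog.isEmpty()) {
            backlog.addLast(message);     // park it without retrying
            return;
        }
        try {
            callRestService(message);
        } catch (Exception e) {
            serviceAvailable = false;     // flip the flag so later messages skip the retry
            backlog.addLast(message);
        }
    }

    // Run from a scheduler: drain the backlog in order, then re-open the gate.
    public void drainBacklog() {
        while (!backlog.isEmpty()) {
            try {
                callRestService(backlog.peekFirst());
                backlog.pollFirst();
            } catch (Exception e) {
                return;                   // still down; try again on the next run
            }
        }
        serviceAvailable = true;
    }

    private void callRestService(String message) {
        throw new UnsupportedOperationException("wire in the real REST call here");
    }
}

Because onMessage parks new messages while the backlog is non-empty, the queued batch is always delivered before real-time traffic, which preserves ordering.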
For the timeout, I would still look at the response code and catch timeouts to determine whether the call was successful or requires a retry. In practice you normally do multithreading anyway, so you have multiple calls in parallel, or one call simply starts before another ends. That is quite normal.
But you can simply retry queued calls that time out, and after x timeouts you "skip" or defer the retry.
All of this can be done using actual Mule flow components, for example:
MEL http://www.mulesoft.org/documentation/display/current/Mule+Expression+Language+Reference
Or flow controls: http://www.mulesoft.org/documentation/display/current/Choice+Flow+Control+Reference
Or for example you reference a Spring Bean and do it in native Java code.
One possibility for the queue would be to persist it in a database. Mule has a database connector with a "poll" feature, see: http://www.mulesoft.org/documentation/display/current/JDBC+Transport+Reference#JDBCTransportReference-PollingTransport

Re-queue AMQP message at tail of queue

I have a project set up using Spring and RabbitMQ. Currently it is possible for my application to receive an AMQP message that cannot be processed until another asynchronous process has completed (legacy and totally detached; I have no control over it). So the result is that I may have to wait some amount of time before processing a message, which currently surfaces as an exception in a transformer.
When the message is NACK'd back to RabbitMQ, it is put back at the head of the queue and re-pulled immediately. If I get as many unprocessable messages as there are concurrent listeners, my workflow locks up: it spins its wheels waiting for messages to become processable, even though there are valid, processable messages waiting behind them in the queue.
Is there a way to reject an AMQP message and have it go back to the tail of the queue instead? From my research, RabbitMQ worked this way at one time, but now I appear to get the head of the queue exclusively.
My config is rather straightforward, but for completeness here it is:
Connection factory is: org.springframework.amqp.rabbit.connection.CachingConnectionFactory
RabbitMQ 3.1.1
Spring Integration: 2.2.0
<si:channel id="channel"/>

<si-amqp:inbound-channel-adapter
    queue-names="commit" channel="channel" connection-factory="amqpConnectionFactory"
    acknowledge-mode="AUTO" concurrent-consumers="${listeners}"
    channel-transacted="true"
    transaction-manager="transactionManager"/>

<si:chain input-channel="channel" output-channel="nullChannel">
    <si:transformer ref="transformer"/>
    <si:service-activator ref="activator"/>
</si:chain>
You are correct that RabbitMQ was changed some time ago. There is nothing in the API to change the behavior.
You can, of course, put an error-channel on the inbound adapter, followed by a transformer (expression="payload.failedMessage"), followed by an outbound adapter configured with an appropriate exchange/routing-key to requeue the message at the back of the queue.
You might want to add some additional logic in the error flow to check the exception type (payload.cause) and decide which action you want.
If the error flow itself throws an exception, the original message will be requeued at the head, as before; if it exits normally, the message will be acked.
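For illustration, the error flow described above can also be collapsed into a single POJO wired as a service-activator on the error-channel (the class, method and queue name are assumptions; note this simplified version re-publishes only the payload and drops the original headers):

import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.integration.MessagingException;   // package as of Spring Integration 2.2.x

public class RequeueAtTailHandler {

    private final RabbitTemplate rabbitTemplate;

    public RequeueAtTailHandler(RabbitTemplate rabbitTemplate) {
        this.rabbitTemplate = rabbitTemplate;
    }

    // Invoked with the error-channel payload; extract the failed message and re-publish it.
    public void requeue(MessagingException error) {
        Object failedPayload = error.getFailedMessage().getPayload();
        // "" = default exchange; routing key "commit" appends the message to the tail of that queue.
        rabbitTemplate.convertAndSend("", "commit", failedPayload);
    }
}

Because the handler exits normally, the original delivery is acked and the re-published copy waits at the back of the queue.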