Time to live for a Mule message whilst in a VM queue

Is it possible for a Mule message to expire (i.e. the container will discard the message) after a configured amount of time (like the JMS TTL property)?
If there is please can you point me to the documentation or example?
Can we use the attribute queueTimeout (see http://www.mulesoft.org/documentation/display/current/VM+Transport+Reference) to achieve this?
Cheers

No, the queueTimeout attribute does not control the TTL for messages on the queue. It is used when performing blocking operations on the queue (like dispatching a message or polling for a message).
This feature is not built into the VM transport. You might be able to achieve the same effect by setting a message property with a timestamp before publishing to the VM queue, and then filtering on the message age in the consuming flow.
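If it helps, here is a minimal sketch of that idea in plain Java. The Message record, the timestamp field and the 60-second TTL are illustrative rather than any Mule API; in a real flow the same check would be expressed as a filter on the consuming side.

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the workaround: the producer stamps each message with its creation time,
// and the consumer discards anything older than a chosen TTL. Not a Mule API.
public class AgeFilterSketch {

    static final Duration TTL = Duration.ofSeconds(60); // illustrative TTL

    record Message(String payload, Instant createdAt) {}

    // Returns true while the message is still within its time to live.
    static boolean isFresh(Message message) {
        return Duration.between(message.createdAt(), Instant.now()).compareTo(TTL) <= 0;
    }

    public static void main(String[] args) {
        Message fresh = new Message("process me", Instant.now());
        Message stale = new Message("drop me", Instant.now().minus(Duration.ofMinutes(5)));

        System.out.println(isFresh(fresh)); // true  -> hand off to the flow
        System.out.println(isFresh(stale)); // false -> discard, as a TTL would
    }
}
```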

Related

How to handle stuck RabbitMQ Dynamic Shovel messages

We are currently using RabbitMQ Dynamic Shovels to forward messages to Azure Event Hub. Recently we set up a new queue to be forwarded to Event Hub. Some messages in this queue are over 1 MB in size, which is the limit for messages on Event Hub. Because of this limit the messages bounce back and are sent again a few times each second. This creates a lot of network traffic, which can be an issue.
Is there any way to send messages that bounce back to a DLX (dead letter exchange) or to a different queue? We have looked for some Dynamic Shovel options but could not find any that would be of any use.
Thank you Jesse Squire. Posting your suggestion as an answer to help other community members.
Generally, for cases when your payload is (or may be) larger than the allowable size, we recommend considering the claim check pattern where you store your payload in some other durable store (such as Blob storage) and then publish the event with a body that points to that resource.
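As a rough illustration of the claim check pattern with the Azure Java SDKs (azure-storage-blob and azure-messaging-eventhubs); the connection strings, container name and hub name below are placeholders, and the consumer would later fetch the payload using the blob URL carried in the event.

```java
import com.azure.core.util.BinaryData;
import com.azure.messaging.eventhubs.EventData;
import com.azure.messaging.eventhubs.EventHubClientBuilder;
import com.azure.messaging.eventhubs.EventHubProducerClient;
import com.azure.storage.blob.BlobClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.UUID;

// Claim-check sketch: the oversized payload goes to Blob storage, and only a small
// reference event is published to Event Hubs.
public class ClaimCheckPublisher {

    public static void main(String[] args) {
        byte[] largePayload = "...well over 1 MB in real life...".getBytes(StandardCharsets.UTF_8);

        // 1. Park the payload in a durable store (Blob storage here).
        String blobName = UUID.randomUUID() + ".payload";
        BlobClient blob = new BlobServiceClientBuilder()
                .connectionString(System.getenv("STORAGE_CONNECTION_STRING"))
                .buildClient()
                .getBlobContainerClient("claim-checks")
                .getBlobClient(blobName);
        blob.upload(BinaryData.fromBytes(largePayload), true);

        // 2. Publish a small "claim check" event that points at the stored payload.
        String claimCheck = "{\"blobUrl\":\"" + blob.getBlobUrl() + "\"}";
        EventHubProducerClient producer = new EventHubClientBuilder()
                .connectionString(System.getenv("EVENTHUB_CONNECTION_STRING"), "my-hub")
                .buildProducerClient();
        producer.send(Collections.singletonList(new EventData(claimCheck)));
        producer.close();
    }
}
```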
You can refer to Dead-lettering dead-lettered messages in RabbitMQ.
You can also open an issue on GitHub: rabbitmq-server

Message not showing up in queue(RabbitMQ)

I am using RabbitMQ to queue up all the messages and send the messages as SMS to the respective consumers. I am using a direct exchange and I have correctly created a binding to a queue with a routing key. The problem is, when I try to publish a message, I get some activity in the Message rates chart, but the message doesn't show up in the queue.
Could certainly use some help here. I am sure the binding is done correctly.
Am I missing some other configuration?
I would recommend using a specific exchange rather than publishing a message without specifying one. I had the same issue; when I published to amq.direct or amq.fanout it worked as I wanted.
If your configuration is correct, and you also have an active consumer that listens to that queue, I don't think anything is wrong. Don't those metrics show that the message was published and then delivered and acknowledged by the consumer? So of course you won't see any queued messages, since each one is consumed as soon as it is published.
It looks like the message is delivered to a consumer (as you can see in the chart). Remove the consumer and try to publish the message again, and you will see that it ends up in the queue instead.
In my case I was creating a custom queue, so I had to provide the custom queue ID as the routing key.
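To make the "publish to a specific exchange with a matching binding" advice concrete, here is a minimal example with the RabbitMQ Java client; the exchange, queue and routing-key names are made up. With no consumer attached the message stays visible in the queue; with an active consumer it is delivered and acknowledged immediately, which is the behaviour described in the answers above.

```java
import com.rabbitmq.client.BuiltinExchangeType;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.nio.charset.StandardCharsets;

// Publishes an SMS message to a named direct exchange with an explicit routing key
// that matches the queue binding.
public class SmsPublisher {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {

            channel.exchangeDeclare("sms.direct", BuiltinExchangeType.DIRECT, true);
            channel.queueDeclare("sms.outbound", true, false, false, null);
            channel.queueBind("sms.outbound", "sms.direct", "sms");

            // Publish to the named exchange, not the default ("") exchange.
            channel.basicPublish("sms.direct", "sms",
                    null, "Your OTP is 123456".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```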

Implementing the reliability pattern in CloudHub with VM queues

I have more-or-less implemented the Reliability Pattern in my Mule application using persistent VM queues on CloudHub, as documented here. While everything works fine, it has left me with a number of questions about actually ensuring reliable delivery of my messages. To illustrate the points below, assume I have an http-request component within my "application logic flow" (see the diagram on the link above) that is throwing an exception because the endpoint is down, and I want to ensure that the in-flight message will eventually get delivered to the endpoint:
As detailed on the link above, I have observed that when the exception is thrown within my "application logic flow", and I have made the flow transactional, the message is put back on the VM queue. However, all that happens is that the message is then repeatedly taken off the queue, processed by the flow, and the exception is thrown again, ad infinitum. There appears to be no way of configuring any sort of retry delay or maximum number of retries on VM queues, as is possible, for example, with ActiveMQ. The best workaround I have come up with is to surround the http-request message processor with the until-successful scope, but I'd rather have these sorts of things apply to my whole flow (without having to wrap the whole flow in until-successful). Is this sort of thing possible using only VM queues and CloudHub?
I have configured my until-successful scope to place the message on another VM queue which I want to use as a dead-letter queue. Again, this works fine, and I can log in to CloudHub and see the messages populated on my DLQ, but there appears to be no way of moving messages from this queue back into the flow when the endpoint comes back up. All you can seemingly do in CloudHub is clear the queue. Again, is this possible using VM queues and CloudHub only (i.e. no other queueing tool)?
VM queues are very basic, whether you use them in CloudHub or not.
VM queues have no capacity for delaying redelivery (like exponential back-offs). Use JMS queues if you need such features.
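To give an idea of what such a feature looks like on a JMS broker, here is an illustrative client-side redelivery policy for ActiveMQ; the broker URL and the numbers are examples. Any consumer created from this factory inherits the policy, and once the retry cap is reached ActiveMQ moves the message to its dead-letter queue.

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

// Sketch of the kind of redelivery control JMS brokers offer and VM queues do not:
// a delay, exponential back-off and a retry cap, configured on the connection factory.
public class RedeliveryConfig {

    public static ActiveMQConnectionFactory connectionFactory() {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");

        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        policy.setInitialRedeliveryDelay(1_000);   // wait 1 s before the first retry
        policy.setUseExponentialBackOff(true);
        policy.setBackOffMultiplier(2.0);          // 1 s, 2 s, 4 s, ...
        policy.setMaximumRedeliveryDelay(60_000);  // cap individual delays at 60 s
        policy.setMaximumRedeliveries(5);          // then the message goes to the DLQ

        return factory;
    }
}
```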
You need to create a flow for processing the DLQ, for example one that regularly consumes the queue via the requester module and re-injects the messages into the main queue. Again, with JMS, you would have better control.
As an alternative to JMS, you could consider hosted queues like CloudAMQP, Iron.io or AWS SQS. You would lose transaction support on the inbound endpoint but would gain better control over the (re)delivery behaviour.

Service Bus for Windows Server - Deferred messages with TTL behavior

When using deferred messages, the time-to-live is being ignored. Is it possible to have TTL behaviour on deferred messages and have them sent to the dead-letter queue?
If so, how do I achieve this?
For those looking into the same scenario: using TTL on deferred messages is not possible as of this writing.

Multiple servers to interact with a RabbitMQ queue

I'm working for a company where we're considering Mule ESB. We would need to set up Mule in a clustered configuration to get what Mule coins a Mule High Availability (HA) Cluster.
Now, we need to persist incoming messages to a queue in case of a power outage or disk failure. As far as I understand, one option is to go with the default Mule object store, which "persists" messages to a shared memory grid. However, my first thought here is that this can't be any good if a power outage takes the entire cluster out of action.
Our other option is to use a separate queue product such as RabbitMQ or ActiveMQ. However, do these integrate well with an HA cluster? Is there any mechanism in these products which ensures that the same message won't be picked up by two machines at the same time?
Consider this scenario (based on the observer pattern):
1. Mule receives a message, puts it on a queue and responds with an OK to the client which delivered the message.
2. Mule picks up the message from the queue and attempts to deliver it to a subscriber.
3. The subscriber accepts the message, and Mule removes it from the queue.
What happens if another Mule instance in the HA cluster attempts to pick up the message between steps 2 and 3 above? Is there a mechanism where Mule can mark a message as picked up from the queue for an attempted delivery and then, if the delivery fails, put it back on the queue as "not delivered"?
Both RabbitMQ and ActiveMQ will give you the once-and-only-once functionality I think you are looking for.
Both platforms ensure that each message in a queue is received by only one subscriber.
In ActiveMQ, to return a message to a queue in the event of a failure, you can use explicit message acknowledgement or JMS transactions. Here's a quick overview.
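A small sketch of the transactional variant against ActiveMQ, assuming an example queue name, broker URL and a hypothetical delivery step: the message is only removed from the queue when the session commits, and a rollback makes it available for redelivery.

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

// Transacted JMS consumer: the receive and the commit/rollback form one unit of work.
public class TransactedConsumer {

    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();

        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(session.createQueue("outbound"));

        Message message = consumer.receive(5_000);
        if (message != null) {
            try {
                deliverToSubscriber(message);  // hypothetical delivery step
                session.commit();              // message is now removed from the queue
            } catch (Exception e) {
                session.rollback();            // message goes back and can be redelivered
            }
        }
        connection.close();
    }

    private static void deliverToSubscriber(Message message) {
        // placeholder for the actual delivery to the downstream subscriber
    }
}
```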
In RabbitMQ, you do it using acknowledgements.
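The equivalent idea with the RabbitMQ Java client, again with example names and a hypothetical delivery step: consume with autoAck disabled, basicAck once the downstream delivery succeeds, and basicNack with requeue on failure so the message returns to the queue. While the message is unacked, the broker will not hand it to another consumer.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

// Manual-acknowledgement consumer: the message is only removed from the queue after
// basicAck; basicNack with requeue=true puts it back for another attempt.
public class AckingConsumer {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        DeliverCallback onDelivery = (consumerTag, delivery) -> {
            long tag = delivery.getEnvelope().getDeliveryTag();
            try {
                deliverToSubscriber(delivery.getBody());   // hypothetical delivery step
                channel.basicAck(tag, false);              // step 3: remove from queue
            } catch (Exception e) {
                channel.basicNack(tag, false, true);       // failed: requeue the message
            }
        };

        // autoAck=false: the broker keeps the message as "unacked" while we work on it.
        channel.basicConsume("outbound", false, onDelivery, consumerTag -> { });
    }

    private static void deliverToSubscriber(byte[] body) {
        // placeholder for the actual delivery to the downstream subscriber
    }
}
```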
Also, you might want to consider reliability for your message broker. Both ActiveMQ and RabbitMQ offer highly available broker configuration options.