How to set a dispatch rate on an ActiveMQ queue

Given a queue in ActiveMQ with 50+ consumers, is there a way to dispatch at most one event per second to each consumer? This is to control a flood of dispatched events.
The event producers are outside my application, so I need to handle the controlled dispatch on the consumer side.
I have the JMS prefetch policy configured as low as 5.
I do not want hundreds of messages dispatched to consumers within a span of a few seconds; I want a steady flow instead.
How do I configure the queue consumers to dispatch in a controlled flow?

As far as I know, there is no way to throttle the consumers directly.
What you can do is limit the flow into the consumer queue using the built-in Camel routes. Maybe you can find a way to use this feature for your case:
Copy examples/camel.xml to your conf folder.
Edit the connection factory in camel.xml; in a default setup, change the broker URI to vm://localhost?create=false.
Include camel.xml in your activemq.xml: <include resource="camel.xml"/>
Edit the route in camel.xml to something like this (1 msg/1000 ms):
<route>
    <description>Throttler 1 msg/s</description>
    <from uri="activemq:msgs.in"/>
    <throttle timePeriodMillis="1000" asyncDelayed="true">
        <constant>1</constant>
        <to uri="activemq:msgs.out"/>
    </throttle>
</route>
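If you cannot add Camel routes to the broker, a crude alternative is to pace the consumer itself between receives. Below is a minimal sketch of such a pacer in plain Java; the JMS `receive()` call it would wrap is omitted, and the one-permit-per-interval policy mirrors the 1 msg/s route above. This is an illustration, not a substitute for broker-side flow control.

```java
// Fixed-rate pacer: allows at most one permit per interval.
// A consumer would call acquire() before each JMS receive().
public class RatePacer {
    private final long intervalMillis;
    private long nextAllowed = 0;

    public RatePacer(long intervalMillis) {
        this.intervalMillis = intervalMillis;
    }

    // Returns how long the caller must wait (in ms) before proceeding,
    // and reserves the next slot. 0 means proceed immediately.
    public synchronized long reserve(long nowMillis) {
        long wait = Math.max(0, nextAllowed - nowMillis);
        nextAllowed = Math.max(nowMillis, nextAllowed) + intervalMillis;
        return wait;
    }

    public void acquire() throws InterruptedException {
        long wait = reserve(System.currentTimeMillis());
        if (wait > 0) Thread.sleep(wait);
    }

    public static void main(String[] args) {
        RatePacer pacer = new RatePacer(1000);
        System.out.println(pacer.reserve(0));   // first call proceeds immediately
        System.out.println(pacer.reserve(0));   // second call must wait a full interval
    }
}
```

With a prefetch of 5 this only smooths what each consumer pulls; the Camel route above is still the cleaner way to shape the overall flow.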

Related

What is the best approach for dealing with RabbitMQ DLQ messages in Spring AMQP

I am using Spring AMQP to listen to a RabbitMQ queue. While processing messages from the queue, depending on business logic, my service can throw a RuntimeException, in which case the message is retried several times. After the maximum number of retries, the message ends up in the DLQ. I am wondering what the best approach is to deal with these messages in the DLQ. I read in blogs that I can use a parking-lot queue, but in that case, how do I monitor the queue and notify people about dead-lettered messages?
P.S. Sorry for my English. Hope that I was able to explain my problem :)
You can use the RabbitMQ REST API (via the Hop client) to get the status of the DLQ.
You can also use Spring's RabbitAdmin.getQueueProperties("my.dlq") to get the message count.
https://docs.spring.io/spring-amqp/docs/current/reference/html/#broker-configuration
Other options include adding another listener on the DLQ and running it periodically to either send messages back to the original queue, or send them to a parking-lot queue if they fail too many times.
There's an example of that in the Spring Cloud Stream documentation.
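One way to structure that periodic DLQ listener is to decide, per message, between requeueing and parking based on how many times RabbitMQ has already dead-lettered it (the count is available in the `x-death` header). Below is a minimal sketch of just the decision logic in plain Java; the header extraction and the `maxRequeues` limit are assumptions left to the application.

```java
// Decide what to do with a message taken off the DLQ, based on how
// many times it has already been dead-lettered. The death count would
// come from the "x-death" header that RabbitMQ maintains; maxRequeues
// is an application-chosen limit (an assumption, not a broker setting).
class DlqDecision {
    enum Action { REQUEUE_TO_ORIGINAL, SEND_TO_PARKING_LOT }

    static Action decide(long deathCount, long maxRequeues) {
        return deathCount < maxRequeues
                ? Action.REQUEUE_TO_ORIGINAL
                : Action.SEND_TO_PARKING_LOT;
    }

    public static void main(String[] args) {
        // A message dead-lettered twice, with a limit of three retries,
        // still goes back to the original queue.
        System.out.println(decide(2, 3));
    }
}
```

The notification side can then be as simple as alerting whenever the parking-lot queue's message count (from RabbitAdmin or the REST API) is non-zero.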

Auto-process with DLQ configuration in Spring XD

1) I want to configure a DLQ for my stream:
stream create --name httptest7 --definition "http | http-client --url='''http://localhost:8080/mock-sentmessage/customers/send-email''' --httpMethod=GET | log"
stream deploy httptest7 --properties module.*.consumer.autoBindDLQ=true
2) I have set autoBindDLQ=true. One doubt: if Spring XD fails to process my messages and posts them to the DLQ, will they be automatically moved back to my original queue for retry, or should I write a processor to move the DLQ messages back to the original queue?
3) When I bring down my web service http://localhost:8080/mock-sentmessage/customers/send-email, I can see messages filling up my DLQ.
4) When I bring my service back up, I expected the messages in the DLQ to be retried, but they are not retried. Is there any configuration I need to set?
As per the documentation:
There is no automated mechanism provided to move dead lettered messages back to the bus queue.
I am not sure what your question is, or even if you have one; you seem to have answered your own question by quoting the documentation:
There is no automated mechanism provided to move dead lettered messages back to the bus queue.
So, no; there is no "setting" you can change.
There are a couple of things you can do. One is to write your own code to move the messages back to the main queue from the DLQ; it would take just a few lines of Java using Spring AMQP, or any language of your choice.
You can also use the technique described here whereby you set a message TTL on the DLQ, and configure it to route back to the main queue when the TTL expires.
Just so you know, you can also use the Shovel plugin in RabbitMQ to move messages from the DLQ back to the bus queue.
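The "few lines of Java" mentioned above might look like the loop below. This is a hedged sketch with in-memory deques standing in for the broker queues so it stays self-contained; in a real application the poll and add calls would become Spring AMQP `RabbitTemplate.receive(...)` and `RabbitTemplate.send(...)` against the actual DLQ and main queue.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Drain the DLQ back into the main queue. The Deques stand in for the
// broker queues so the sketch is runnable on its own; replace them with
// RabbitTemplate receive/send calls against the real queues.
class DlqDrainer {
    static int drain(Deque<String> dlq, Deque<String> mainQueue) {
        int moved = 0;
        String msg;
        while ((msg = dlq.pollFirst()) != null) {  // null means the queue is empty
            mainQueue.addLast(msg);
            moved++;
        }
        return moved;
    }

    public static void main(String[] args) {
        Deque<String> dlq = new ArrayDeque<>();
        dlq.addLast("order-1");
        dlq.addLast("order-2");
        Deque<String> mainQueue = new ArrayDeque<>();
        System.out.println(drain(dlq, mainQueue) + " messages moved");
    }
}
```

You would typically run such a drain on demand (or on a schedule) once the downstream service is healthy again, rather than continuously.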

Mule Queued-Asynchronous Flow Queue vs. VM Queue

While studying and using Mule, I couldn't figure out if there's a difference between a queued-asynchronous flow queue and a VM queue.
My question is, are they the same queues (just with different names along the documentation) or different ones?
In a concrete example:
<flow name="fooFlow" processingStrategy="queued-asynchronous">
    <vm:inbound-endpoint path="foo" exchange-pattern="one-way"/>
    <component class="com.foo.FooComponent"/>
</flow>
Does the VM inbound-endpoint receive messages from one queue, while the flow has another queue to receive the messages from the inbound-endpoint? Or are they the same SEDA queue?
These are two very different concepts: one is part of how a flow is processed, while the other is a lightweight queuing mechanism. VM is a transport, and it has persistent queuing capabilities as well as transaction support.
Please see the following (the last link explains the flow execution model):
http://www.mulesoft.org/documentation/display/current/Flow+Processing+Strategies
http://www.mulesoft.org/documentation/display/current/VM+Transport+Reference
http://www.mulesoft.org/documentation/display/current/Tuning+Performance
To add some details on the specific example you showed:
You do not need to specify the processing strategy explicitly; Mule chooses the default processing strategy based on the exchange pattern of the inbound endpoint. Since you have a non-transactional one-way endpoint, the processing strategy is queued-asynchronous.
"Does the VM inbound-endpoint receive messages from one queue, and the flow has another queue to receive the messages from the inbound-endpoint? Or are they the same SEDA queue?"
To receive messages, Mule will use the thread pool dedicated to the VM connector (receiver threads are tied to the transport). Once the message has been received, it will be processed using a thread from the flow's thread pool. (It would be great if someone could validate or correct this. :)
(Most of the information is from the links posted in the earlier answer)

Implementing the reliability pattern in CloudHub with VM queues

I have more-or-less implemented the Reliability Pattern in my Mule application using persistent VM queues in CloudHub, as documented here. While everything works fine, it has left me with a number of questions about actually ensuring reliable delivery of my messages. To illustrate the points below, assume I have an http-request component within my "application logic flow" (see the diagram at the link above) that is throwing an exception because the endpoint is down, and I want to ensure that the in-flight message will eventually get delivered to the endpoint:
As detailed at the link above, I have observed that when the exception is thrown within my "application logic flow" and I have made the flow transactional, the message is put back on the VM queue. However, all that happens is that the message is then repeatedly taken off the queue, processed by the flow, and the exception is thrown again, ad infinitum. There appears to be no way of configuring any sort of retry delay or maximum number of retries on VM queues, as is possible, for example, with ActiveMQ. The best workaround I have come up with is to surround the http-request message processor with the until-successful scope, but I'd rather have these sorts of things apply to my whole flow (without having to wrap the whole flow in until-successful). Is this sort of thing possible using only VM queues and CloudHub?
I have configured my until-successful to place the message on another VM queue which I want to use as a dead-letter-queue. Again, this works fine, and I can login to CloudHub and see the messages populated on my DLQ - but then it appears to offer no way of moving messages from this queue back into the flow when the endpoint comes back up. All it seems you can do in CloudHub is clear your queue. Again, is this possible using VM queues and CloudHub only (i.e. no other queueing tool)?
VM queues are very basic, whether you use them in CloudHub or not.
VM queues have no capacity for delaying redelivery (like exponential back-offs). Use JMS queues if you need such features.
You need to create a flow for processing the DLQ, for example one that regularly consumes the queue via the requester module and re-injects the messages into the main queue. Again, with JMS, you would have better control.
Alternatively to JMS, you could consider hosted queues like CloudAMQP, Iron.io or AWS SQS. You would lose transaction support on the inbound endpoint but would gain better control on the (re)delivery behaviour.
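For the DLQ-processing flow suggested above, a sketch in Mule 3 XML might look like the following, assuming the community Mule Requester module is on the classpath; the queue names and polling frequency are illustrative, not taken from the original setup.

```xml
<flow name="dlqReprocessorFlow">
    <!-- Poll on a schedule rather than consuming the DLQ continuously -->
    <poll frequency="60000">
        <!-- Pull one message off the DLQ, if any -->
        <mulerequester:request resource="vm://dead.letter.queue"/>
    </poll>
    <choice>
        <when expression="#[payload != null]">
            <!-- Re-inject into the main queue so the normal flow retries it -->
            <vm:outbound-endpoint path="main.queue"/>
        </when>
        <otherwise>
            <logger level="DEBUG" message="DLQ empty"/>
        </otherwise>
    </choice>
</flow>
```

You could also gate the re-injection on a health check of the downstream endpoint, so messages are only replayed once it is back up.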

ActiveMQ + Stomp: Multi-subscriber queue

I'm interacting with ActiveMQ via STOMP. I have one application which publishes messages and a second application which subscribes and processes the messages.
If I am writing messages to a queue I can be certain that, if I have two consumers, each message will only be processed once (because when a message is completed it is removed from the queue) - but is this functionality available from a topic?
For example; I have a third application which is a logger. I want the logger to receive each message the publisher emits, but I also want exactly one of two (or three or four etc…) of the processors to receive the message too.
Is this possible?
EDIT
It occurs to me that a good way of doing this would be to have a topic which the publisher writes to, and a queue which the processors listen to, with something pushing every message from the topic onto the queue. Can ActiveMQ do this internally?
You can do this internally in ActiveMQ using mirrored queues, and you can also use virtual topics for other advanced routing semantics. If you want the option of other EIP-style messaging patterns, I'd recommend looking into Apache Camel, which provides a whole host of EIP pattern functionality.
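With virtual topics, the wiring is mostly a destination-naming convention that works out of the box on a default ActiveMQ broker. A sketch of the STOMP destinations for the scenario described (names are illustrative): the publisher and the logger use the topic, while all processor instances share the derived queue, so each message is handled by exactly one of them.

```text
SEND
destination:/topic/VirtualTopic.Events          publisher writes here

SUBSCRIBE
destination:/topic/VirtualTopic.Events          logger sees every message

SUBSCRIBE
destination:/queue/Consumer.Processors.VirtualTopic.Events
                                                all processors share this queue;
                                                each message goes to exactly one
```

Each distinct `Consumer.<name>.` prefix gets its own copy of the stream, so you can add further consumer groups later without touching the publisher.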