When dequeuing from RabbitMQ, it seems that the entire queue of messages is taken into memory and then processed; a minimal example is the tutorial at https://www.rabbitmq.com/tutorials/tutorial-two-dotnet.html.
Given, for example, 1000 messages, they are all delivered together via the Received event and the event handler processes all 1000 messages. Instead, I would like to take 100 messages at a time in order to divide the message consumption among 2 or more servers using a load balancer such as HAProxy.
Is this possible, or is the only option to implement a manual round robin to the other servers in the event handler?
You have to use the prefetch count to decide how many messages you want to download to the client.
instead, I would like to take 100 messages at a time in order to divide the message consumption among 2 or more servers.
This is what the "Fair dispatch" section of the tutorial explains: in your case, set the prefetch to 100 instead of 1 and attach more consumers to the same queue.
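To see why this works, here is a toy, broker-free simulation of prefetch-limited dispatch (the queue, consumer names, and round-robin policy are simplifications for illustration, not real RabbitMQ internals): with a prefetch of 100 and 2 consumers, each consumer holds at most 100 unacknowledged messages while the remainder stays on the broker.

```python
from collections import deque

def dispatch_round(queue, unacked, prefetch):
    """One delivery pass: hand a queued message to each consumer whose
    unacknowledged count is still below the prefetch limit."""
    for pending in unacked.values():
        if queue and len(pending) < prefetch:
            pending.append(queue.popleft())

# 1000 queued messages, 2 consumers, prefetch = 100
queue = deque(range(1000))
unacked = {"consumer-a": [], "consumer-b": []}
while queue and any(len(p) < 100 for p in unacked.values()):
    dispatch_round(queue, unacked, 100)

# Each consumer now holds 100 unacked messages; 800 remain on the broker.
print(len(unacked["consumer-a"]), len(unacked["consumer-b"]), len(queue))  # -> 100 100 800
```

As each consumer acknowledges messages, the broker tops it back up to the prefetch limit, so the work spreads across however many consumers are attached.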
Related
I have around 300 different consumers / 300 message types / 300 queues, with wildly varied functionality behind them.
Taking the extremes:
Is it best to build 1 Windows service (easier to deploy) with 300 consumers listening?
Or 300 Windows services (easier to split between devs), each with 1 independent consumer, but impossible for support to maintain?
Update: from 1 to 300 queues
RabbitMQ can support hundreds of queues simultaneously, and each queue should be responsible for one specific type of message, e.g. a response status, an online order, or stack trace information for further processing by some other unit of work. These three are not the same, and if you are keeping them all in one queue, please segregate them into different queues.
If you keep all the data in one queue, it will also affect your application's performance: each queue works in sequential order, and since you have 300 consumers waiting for 300 types of messages, almost all of them could be sitting in a waiting state. A single queue also forces a complex decision-making algorithm, if you are using one to route each message to the correct consumer.
What could also go wrong with a single queue is that it becomes a bottleneck: if that queue fails, it can obstruct the functioning of the whole application, because every consumer listens to it. With different queues, the rest of the system can still process messages if one particular queue has an issue.
Instead of going for 1 consumer per service, check whether the services have anything in common and whether each service can host more than one consumer once you increase the number of queues from 1 to many.
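A middle ground between the two extremes is a small number of services, each hosting several related consumers. As a hypothetical sketch (handler names are invented, and stdlib in-process queues stand in for broker queues), one process can run a dedicated consumer thread per queue, dispatching by message type:

```python
import queue
import threading

# Hypothetical handlers, one per message type (stand-ins for real business logic).
HANDLERS = {
    "order": lambda body: ("order-processed", body),
    "status": lambda body: ("status-logged", body),
}

def consume(name, source, results):
    """Drain one queue with its dedicated handler; None is a shutdown sentinel."""
    while True:
        body = source.get()
        if body is None:
            return
        results.append(HANDLERS[name](body))

# One process hosting one consumer thread per queue.
sources = {name: queue.Queue() for name in HANDLERS}
results = []
threads = [
    threading.Thread(target=consume, args=(name, q, results))
    for name, q in sources.items()
]
for t in threads:
    t.start()

sources["order"].put("o-1")
sources["status"].put("s-1")
for q in sources.values():
    q.put(None)  # shut everything down
for t in threads:
    t.join()
```

Grouping consumers this way keeps deployment manageable (a handful of services, not 300) while still letting teams own disjoint sets of handlers.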
In my app, we are using a Camel route to read messages from a RabbitMQ queue.
The configuration looks like this:
from("rabbitmq:myexchange?routingKey=mykey&queue=q")
The producer can send 50k messages within a few minutes, and each message can take 1 second or more to process.
What I can see is that ALL the messages are consumed very fast, but processing them can take many hours. Many hours of processing is expected, but does that mean that the 50k messages are stored in memory? If so, I would like to disable this behavior, because I don't want to lose messages when the process goes down... Actually, we are losing most of the messages even when the process stays up, which is even worse. It looks like the connector is not designed to handle so many messages at once, but I cannot say whether that is because of the connector itself or because we did not configure it properly.
I tried with the autoAck option:
from("rabbitmq:myexchange?routingKey=mykey&queue=q&autoAck=false")
This way the messages are rolled back when something goes wrong, but keeping 50k messages unacknowledged at the same time does not seem to be a good idea anyway...
There are a couple of things that I would like to share.
AutoAck - Yes, in the case where you want to process the message after receiving it, you should set AutoAck to false and explicitly acknowledge the message once it is processed.
Setting Consumer Prefetch - You need to fine-tune the prefetch size. The prefetch size is the maximum number of messages which RabbitMQ will present to the consumer in one go, i.e. at most, your total unacknowledged message count will equal the prefetch size. Depending on your system: if every message is critical, you can set the prefetch size to 1; if you have a multi-threaded model for processing messages, you can set the prefetch size to match the number of threads, where each thread processes one message; and so on.
In a way, it acts like a buffer architecturally. If your process goes down while processing those messages, any message that was unacked before the process went down will still be in the queue, and the consumer will get it again for processing.
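Putting both suggestions together for the route above, a configuration sketch, assuming the camel-rabbitmq component's prefetchEnabled/prefetchCount options are available in your Camel version (check the component documentation for the exact option names):

```java
from("rabbitmq:myexchange?routingKey=mykey&queue=q"
    + "&autoAck=false"
    + "&prefetchEnabled=true&prefetchCount=10")
```

With a bounded prefetch, the broker only hands the route a small window of messages at a time, so the remaining messages stay safely on the queue instead of in the consumer's memory.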
How can I configure a RabbitMQ queue to consume 20 messages per second? If I have multiple queues, is it possible to do that for each queue?
such as:
q1 -> 20 messages per second
q2 -> 15 messages per second
When using message-driven consumers, you would have to do the throttling in the listener itself - add Thread.sleep() - or add an advice to the listener container's advice chain to separate the logic from your business code.
Generally, when wanting to control the rate of consumption, it might be easier to use a RabbitTemplate.receive() operation (or RabbitTemplate.execute() with channel.basicGet() if you want to defer the acknowledgment until the message is processed).
You can't configure a queue in RabbitMQ to serve a limited number of messages per second; you must do it programmatically.
An ugly technique is to use a single listener for that queue (one that consumes one message at a time) and add a Thread.sleep(100L) at the beginning of the method for 10 msg/s, or a Thread.sleep(66L) for 15 msg/s (more generally, sleep for 1000/nMsgPerSecond milliseconds). This guarantees, more or less, a lower bound on the time spent in that method.
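The same idea can be sketched as a generic wrapper (Python here purely to illustrate the arithmetic; the listener itself would be Java): after each message, sleep off whatever remains of its 1000/n ms time slot, so the handler completes at most n calls per second.

```python
import time

def throttled(handler, per_second):
    """Wrap a handler so it completes at most `per_second` calls per second."""
    interval = 1.0 / per_second
    def wrapper(message):
        start = time.monotonic()
        result = handler(message)
        # Sleep off whatever is left of this message's time slot.
        remaining = interval - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
        return result
    return wrapper

# 20 msg/s => a 50 ms slot per message (the handler here is a placeholder).
handle = throttled(lambda m: m.upper(), per_second=20)
```

Sleeping for the *remainder* of the slot, rather than a fixed 50 ms, avoids slowing down further when the handler itself is already slow.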
We have a Java application that gets messages from rabbitmq using Spring AMQP.
For some of the queues, the number of consumers does not increase, resulting in a slower message delivery rate.
For example, even though the maximum number of consumers is set to 50, the consumer count remained at 6 most of the time for a load of 9000 messages.
However, this is not the case with other queues, i.e. the consumer count reached 35 for other queues.
We are using SimpleMessageListenerContainer's setMaxConcurrentConsumers API for setting max consumers.
Can someone please help me to understand this?
Configuration:
number of concurrent consumers: 4
number of max concurrent consumers: 50
When asking questions like this, you must always show configuration. Edit your question with complete details.
It depends on your configuration. By default, a new consumer is only added once every 10 seconds, and only if an existing consumer receives 10 messages without any gaps.
If that still doesn't answer your question, turn on DEBUG logging. If you can't figure it out from that, post the log (covering at least startConsumerMinInterval milliseconds) somewhere like Pastebin or Dropbox.
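For reference, the scaling knobs mentioned above live on SimpleMessageListenerContainer. A configuration sketch, with illustrative values and a connectionFactory assumed to be defined elsewhere:

```java
SimpleMessageListenerContainer container =
        new SimpleMessageListenerContainer(connectionFactory);
container.setQueueNames("myQueue");
container.setConcurrentConsumers(4);
container.setMaxConcurrentConsumers(50);
// Allow a new consumer as often as every 5 s instead of the 10 s default...
container.setStartConsumerMinInterval(5000);
// ...and after 5 consecutive messages without a gap instead of 10.
container.setConsecutiveActiveTrigger(5);
```

Lowering those two thresholds makes the container scale up more aggressively under bursty load, at the cost of churning consumers when traffic is uneven.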
I am trying to do some stress testing on AMQ 5.5.1.
I have created a queue and am using a JMeter point-to-point sampler to send JMS requests to the queue. Kindly note I haven't configured any consumer, so messages just stack up and are actually stored in the KahaDB store.
I notice that if I use 200 users in the thread group, it creates exactly 400 threads on ActiveMQ, which I can see via JConsole.
JMeter steadily (actually quite fast) keeps pushing messages to the queue; I can see the queue size gradually increasing, so it doesn't all happen in one go.
I have set ProducerFlowControl to false and am using the default hybrid store cursor (though I don't have a consumer ready at the moment).
I am also using persistent delivery.
My questions are:
What is restricting JMeter from pushing all 200 messages in one go? Is it ActiveMQ, or do I need to configure something in JMeter to be able to send 200 at once? I did notice that as soon as I start the test, 400 threads are created on ActiveMQ straight away, which makes me think it establishes the connections for all 200 users at one go, but pushes the messages in batches rather than all together.
Why are there 2 threads per user on ActiveMQ, and why do all the threads remain active until all messages have been pushed? Ideally, once a user has pushed its messages one by one and received the acknowledgements back, its threads should have died out. But all 200 × 2 threads die at the same time, when all messages have finally been pushed.
Any help is appreciated.