I was trying to load-test RabbitMQ messaging to see to what extent it can take messages into a queue and transfer them to a target machine over a shovel.
Steps I followed:
The producer has 20 threads. Each thread sends messages to a dedicated queue (say ProducerQueue1 through ProducerQueue20). Each message is 51 MB. The messages are sent at random intervals, using java.util.Random to pick a delay of 1-50 seconds.
After each message is sent at its random second, there is a sleep of 2 minutes, so each producer thread sleeps for 2 minutes after every send.
The messages are sent in an infinite while loop.
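A minimal sketch of one such producer thread, with details not given above (connection settings, exact loop structure) assumed:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    import java.util.Random;

    public class ProducerThread implements Runnable {
        private final String queueName; // e.g. "ProducerQueue1" .. "ProducerQueue20"
        private final byte[] payload = new byte[51 * 1024 * 1024]; // ~51 MB body

        public ProducerThread(String queueName) {
            this.queueName = queueName;
        }

        @Override
        public void run() {
            try {
                Connection connection = new ConnectionFactory().newConnection();
                Channel channel = connection.createChannel();
                channel.queueDeclare(queueName, true, false, false, null);
                Random random = new Random();
                while (true) {
                    // wait a random 1-50 seconds, publish, then sleep 2 minutes
                    Thread.sleep((random.nextInt(50) + 1) * 1000L);
                    channel.basicPublish("", queueName, null, payload);
                    Thread.sleep(2 * 60 * 1000L);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }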
There are shovels from each dedicated producer queue to the corresponding dedicated queue on the consumer side (say ConsumerQueue1 through ConsumerQueue20).
The link speed is 100 Mbps.
Issue observed:
Initially the messages are transferred with no issues, but after some time the network on the consumer side gets choked.
The reason for the choking is that, after a certain period of time, if the random seconds of even 4 or 5 of the 20 threads coincide, the consumer receives close to 250 MB of messages in one shot. Since the network speed is 100 Mbps, as mentioned above, the network gets choked.
Because of this, the shovels cannot exchange the heartbeats needed to stay in the "running" state, which moves them from "running" to "terminated". The shovels then try to re-establish the connection depending on the "reconnect delay".
With the shovels broken, messages start accumulating in the queues on the producer side.
My Question:
The consumer's RabbitMQ memory keeps increasing as the queues accumulate more messages, and it crosses the watermark, so the purpose of the watermark is not served. I have 16 GB of RAM and have set the watermark to 40% (i.e. 6.4 GB), but the memory still shoots up to 10 GB, doesn't recover, and the producer system hangs.
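For reference, the watermark described would typically be set in rabbitmq.config (classic Erlang-term format); this is a sketch, not the actual file used here:

    %% rabbitmq.config: block publishing connections once the node
    %% uses 40% of RAM. Note the watermark only blocks publishers;
    %% memory already allocated (e.g. large in-flight messages) can
    %% still carry the total above the threshold.
    [
      {rabbit, [
        {vm_memory_high_watermark, 0.4}
      ]}
    ].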
Can anyone please answer my question, and also tell me whether there can be any other reason for the network choking I described above?
Thanks in advance.
I have a publisher that sends messages to a consumer that moves a motor.
The motor has a work queue which I cannot access, and it works slower than the rate of the incoming messages, so I'm trying to control the traffic on the consumer.
To keep updated and relevant data coming to the motor without the queue filling up and creating a traffic jam, I set the RabbitMQ queue size limit to 5 and basicQos to 1.
The idea is that the RabbitMQ queue will drop the oldest messages when it fills up, so only the newest commands remain in the queue.
Also, by setting basicQos to 1 I ensure that the consumer doesn't grab all the messages from the queue and bombard the motor at once, which is exactly what I'm trying to avoid, since I can't do anything once a command has been sent to the motor.
This way the consumer takes messages from the queue one by one, while new messages replace the old ones on the queue.
Practically this moves the bottleneck to the RabbitMQ queue instead of the motor's queue.
I also cannot check the motor's work queue, so all traffic control must be done on the consumer.
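A minimal sketch of the setup described, with the queue name and connection details assumed:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    import java.util.HashMap;
    import java.util.Map;

    public class MotorConsumerSetup {
        public static Channel setup() throws Exception {
            Connection connection = new ConnectionFactory().newConnection();
            Channel channel = connection.createChannel();

            // Cap the queue at 5 messages; on overflow RabbitMQ drops
            // from the head, i.e. the oldest commands, by default.
            Map<String, Object> args = new HashMap<String, Object>();
            args.put("x-max-length", 5);
            channel.queueDeclare("motor-commands", true, false, false, args);

            // Deliver at most one unacknowledged message at a time,
            // so the consumer never floods the motor.
            channel.basicQos(1);
            return channel;
        }
    }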
I added a messageId and tested, and found that many messages are still coming and going long after the publisher has been shut down.
I'm expecting around 5 messages after shutdown, since that's the size of the queue, but I'm getting hundreds.
I also added a few seconds of sleep inside the callback to make sure it isn't the motor's queue that's acting up, but I'm still getting many messages after shutdown, and I can see in the logs that the callback is called every time, so the consumer is definitely still getting messages from somewhere.
Please help.
Thanks.
Moving the acknowledgment to the end of the callback solved the problem.
I'm guessing that with basicQos set to 1, the consumer did execute the callback for each message one after another, but in the background it kept grabbing messages from the queue.
So even when the publisher was shut down, the consumer still held messages it had already taken from the queue, and those were the messages I saw being executed.
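A sketch of the fix, with the acknowledgment moved to the end of the callback (class and method names are illustrative; the key points are autoAck=false and basicAck as the last step):

    import java.io.IOException;

    import com.rabbitmq.client.AMQP;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.DefaultConsumer;
    import com.rabbitmq.client.Envelope;

    public class MotorConsumer extends DefaultConsumer {
        public MotorConsumer(Channel channel) {
            super(channel);
        }

        @Override
        public void handleDelivery(String consumerTag, Envelope envelope,
                                   AMQP.BasicProperties properties, byte[] body)
                throws IOException {
            sendToMotor(body); // hypothetical: forward the command to the motor

            // Ack only after the work is done. With basicQos(1) and
            // autoAck=false, the broker will not deliver the next
            // message until this one is acknowledged.
            getChannel().basicAck(envelope.getDeliveryTag(), false);
        }

        private void sendToMotor(byte[] command) {
            // ... drive the motor ...
        }
    }

    // registration: channel.basicConsume("motor-commands", false, new MotorConsumer(channel));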
I have the following use case that I'm trying to set up in RabbitMQ:
Normally process A should handle all messages sent to queue A.
However, if process A goes down (is no longer consuming from queue A), then process B should handle the messages until process A comes back up.
At first it looks like consumer priorities might be the solution: https://www.rabbitmq.com/consumer-priority.html. However, that will send messages to process B whenever process A is merely busy working on other messages; I only want them sent to process B when process A is down.
A second option might be dead-lettering: https://www.rabbitmq.com/dlx.html. If process A is not reading from queue A, the messages will eventually time out and move to an exchange that forwards them to a queue that process B reads. However, that option requires waiting for the messages to time out, which is not ideal. Also, a message could time out even while process A is still working, which is not ideal either.
Any ideas how RabbitMQ could be configured for the use case described above? Thanks
Based on your answers to my questions, I would probably use a consumer priority so that process A handles the maximum number of messages, along with a high prefetch count (if possible, and you must ensure your process can handle such a high number).
Process B would then handle the messages that process A cannot take due to the high load, or all of the messages when process A is not available. It is probably acceptable that under high load some messages are handled with a higher delay. Do not forget to set a low prefetch count for process B.
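A sketch of that arrangement in the Java client, with the queue name, priorities, and prefetch values assumed (x-priority is the consumer-priority argument from the docs linked above):

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Consumer;

    public class FailoverConsumers {
        // Process A: high priority and a high prefetch, so it takes
        // as much as it can handle.
        static void consumeAsProcessA(Channel channel, Consumer handler) throws IOException {
            channel.basicQos(500); // assumed; size to what process A can really handle
            Map<String, Object> args = new HashMap<String, Object>();
            args.put("x-priority", 10);
            channel.basicConsume("queueA", false, "", false, false, args, handler);
        }

        // Process B: lower priority and a low prefetch; it only sees
        // messages when process A is saturated or disconnected.
        static void consumeAsProcessB(Channel channel, Consumer handler) throws IOException {
            channel.basicQos(1);
            Map<String, Object> args = new HashMap<String, Object>();
            args.put("x-priority", 1);
            channel.basicConsume("queueA", false, "", false, false, args, handler);
        }
    }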
Hope this helps.
I'm using the Java Client 3.5.6 for RabbitMQ.
My use case is this:
I have 10-15 channels consuming from one queue (mostly on the same connection; one connection per channel makes no difference).
I consume without autoAck. Every channel has a prefetch/QoS size of 5000. So let's assume I have 30 channels, which means I can have 150,000 unacknowledged messages in flight.
Every full minute, I compute some things and, when successful, use basicAck to acknowledge those messages.
However, the management web interface shows that in this phase 0 messages are delivered, which is not realistic unless they are somehow "blocked".
I'm using this queue on a 3-node cluster as an HA queue with a TTL of 1800 seconds. The nodes are connected via an internal LAN and the machines are really powerful, with plenty of RAM.
My Question:
Why does this basicAck operation block other operations, like publishing or delivering new messages?
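For reference, a sketch of the batched-ack pattern described above (the queue wiring and the processing step are assumed); with multiple=true, a single basicAck acknowledges every delivery on the channel up to the given tag:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.QueueingConsumer;

    public class BatchAckWorker {
        static void drainAndAck(Channel channel, QueueingConsumer consumer) throws Exception {
            long lastTag = -1;
            long deadline = System.currentTimeMillis() + 60 * 1000L; // ack once per minute
            while (System.currentTimeMillis() < deadline) {
                QueueingConsumer.Delivery delivery = consumer.nextDelivery(1000); // 1 s poll
                if (delivery == null) continue;
                compute(delivery.getBody()); // hypothetical per-message work
                lastTag = delivery.getEnvelope().getDeliveryTag();
            }
            if (lastTag >= 0) {
                // multiple=true acks every outstanding delivery up to
                // lastTag in one frame, instead of thousands of single acks.
                channel.basicAck(lastTag, true);
            }
        }

        static void compute(byte[] body) { /* ... */ }
    }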
I am a newbie to RabbitMQ, hence I need guidance on a basic question:
Does RabbitMQ send messages to the consumer as they arrive?
OR
Does RabbitMQ send messages to the consumer as they become available?
At message consumption endpoint, I am using com.rabbitmq.client.QueueingConsumer.
Looking at the client source code, I could figure out that:
QueueingConsumer keeps listening on the socket for any messages the broker sends to it.
Any message that is received is parsed and stored as a Delivery in a LinkedBlockingQueue encapsulated inside the QueueingConsumer.
This implies that even if the message-processing endpoint is busy, messages will still be pushed to the QueueingConsumer.
Is this understanding right?
TL;DR: you poll messages from RabbitMQ until the prefetch count is exceeded, at which point you block and only receive heartbeat frames until the fetched messages are acked. So you can poll, but you will only get new messages if the number of unacked messages is less than the prefetch count. New messages are put on the QueueingConsumer, and in theory you should never have much more than the prefetch count in the QueueingConsumer's internal queue.
Details:
Low-level-wise (I'm probably going to get some of this wrong), RabbitMQ itself doesn't actually push messages: the client has to continuously read the connection for frames based on the AMQP protocol. It's hard to classify this as push or pull, but just know the client has to continuously read the connection, and because the Java client is sadly blocking I/O, that read is a blocking/polling operation. The blocking/polling is governed by the AMQP heartbeat frames, regular frames, and the socket timeout configuration.
What happens in the Java RabbitMQ client is that there is a thread for each channel (or maybe it's per connection), and that thread loops gathering frames from RabbitMQ, which eventually become commands that are put on a blocking queue (I believe it's like a SynchronousQueue, aka a handoff queue, but Rabbit has its own special one).
The QueueingConsumer is a higher-level API that pulls commands off that handoff queue, because commands left on the handoff queue block the channel's frame-gathering loop, which can be bad: it can time out the connection. The QueueingConsumer also allows work to be done on a separate thread instead of in the same thread as the frame-gathering loop mentioned earlier.
Now, if you look at most Consumer implementations, you will probably notice that they almost always use unbounded blocking queues. I'm not entirely sure why the bound of these queues can't be a multiple of the prefetch, but if it is less than the prefetch it will certainly cause problems with the connection timing out.
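To make that concrete, a minimal QueueingConsumer loop (queue name and prefetch value are assumptions): the frame-reading thread fills the consumer's internal queue, and nextDelivery just takes from it.

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.QueueingConsumer;

    public class ConsumeLoop {
        public static void main(String[] args) throws Exception {
            Connection connection = new ConnectionFactory().newConnection();
            Channel channel = connection.createChannel();

            channel.basicQos(100); // at most 100 unacked deliveries buffered client-side
            QueueingConsumer consumer = new QueueingConsumer(channel);
            channel.basicConsume("work-queue", false, consumer);

            while (true) {
                // blocks on the consumer's internal LinkedBlockingQueue,
                // which the frame-reading thread fills as deliveries arrive
                QueueingConsumer.Delivery delivery = consumer.nextDelivery();
                handle(delivery.getBody());
                // acking frees prefetch credit, so the broker pushes more
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            }
        }

        static void handle(byte[] body) { /* ... */ }
    }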
I think the best answer is the product's own documentation, as RabbitMQ has both push and pull mechanisms defined as part of the protocol. Have a look: https://www.rabbitmq.com/tutorials/amqp-concepts.html
RabbitMQ mainly uses the push mechanism. Polling consumes server bandwidth and has time gaps between polls, so it cannot achieve low latency. RabbitMQ pushes messages to the client as soon as consumers are available for the queue, so the connection is long-running; readFrame in RabbitMQ is basically waiting for incoming frames.
I'm using RabbitMQ to handle app logs (on a Windows Server 2008 install). Apps send messages to the exchange, and a dedicated queue gets the messages forwarded to it. A Windows service then connects to that queue, pulls messages off, and persists them to a DB. I also have n clients connecting to the exchange in real time to latch onto the stream, so there are n connections at a time. It is possible that some of these clients never Close() their connections in code; many clients have long-running connections.
As messages are pulled off the queue, they are auto-acked, so I don't have any unacknowledged messages on the queue. However, I'm seeing Rabbit's memory grow over time. It starts at 32K or so when first turned on, then creeps up until it exceeds the threshold and blocks incoming connections.
I have both .NET and Java clients, but both auto-ack.
Reading the docs, I didn't see any description of how Rabbit uses memory, i.e. I don't understand why memory would bloat over time. The messages are getting pulled off and acked, which to me suggests that Rabbit shouldn't be holding on to them any more and could free the associated memory, giving a stable memory-usage profile.
I don't see how fiddling with the memory dial in Rabbit would help either: usage just creeps upwards over time, so eventually I'll exceed whatever I set.
My guess is that there is something I'm doing wrong in my clients that is causing the memory to grow over time, but I can't think of what that would be.
Why does Rabbit's memory usage creep up when no messages are kept on any queues?
What coding practices could cause the RabbitMQ server to retain (and grow) memory?
Is it possible that you have other queues bound to the exchange? Check the Rabbit admin page under exchanges, click on your exchange, and check for queues bound to it. It may be that one of your clients, when declaring the exchange, is inadvertently binding an unnamed (server-generated, randomly named) queue to the exchange, and messages are piling up in there.
The other thing to check is the QoS settings: if you leave QoS at the default (unlimited), Rabbit will send out messages immediately to any client regardless of how many messages that client is already holding. This results in a lot of bookkeeping on the server, like which client has which message, and a large buffer on the client.
Make sure to set your QoS prefetch limit to something much more reasonable, say 100. That way, if you have 1M messages and only one client with a prefetch of 100, Rabbit will send only 100 to the client, keep the other 999,900 on disk on the server, and use far less memory.
This was a big cause of memory bloat in my application, and now that I've addressed prefetch, everything is fine.
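For the Java side, that is a single call before registering the consumer (a sketch; the queue name and the value 100 are just the suggestion above):

    import java.io.IOException;

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Consumer;

    public class PrefetchSetup {
        static void startConsumer(Channel channel, Consumer consumer) throws IOException {
            // Limit this channel to 100 unacknowledged messages in flight.
            // Prefetch 0 (the default) means unlimited: Rabbit pushes every
            // message straight out and must book-keep each outstanding one.
            // Note that prefetch only applies to manual-ack consumers.
            channel.basicQos(100);
            channel.basicConsume("log-queue", false, consumer); // "log-queue" assumed
        }
    }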