Configuring Spring Integration QueueChannel for RabbitMQ

I am currently using the Spring AMQP API to connect to RabbitMQ. In my consumer code I read messages asynchronously and make bulk inserts into Elasticsearch. With ack = AUTO I get a moderate speed of 400-500 msg/sec (reading from the queue). With ack = NONE the reading speed increases hugely, to 5000-6000 msg/sec.
The configuration is as follows:
Linux machine with 32GB RAM
JVM arguments:
-server
-Xms1g -Xmx1g -Xss384k -XX:PermSize=256m -XX:MaxPermSize=256m
The problem is that with ack = NONE, although I get good speed, the JVM runs OutOfMemory after some time, and I can see a lot of GC happening in this scenario.
I am planning to use the QueueChannel from Spring Integration, where I can bound the channel by limiting how many messages it can hold.
How can I implement this with RabbitMQ? Also, is there any other way to achieve a good reading speed, around 4000-5000 msg/sec, without crashing the JVM?

Try using ack=AUTO, but set the prefetch and txSize to, say, 1000; that way an ack is only sent every 1000 messages (you can tune this value as needed).
Using ackmode=NONE causes messages to accumulate in an unbounded blocking queue within the listener container, so it will only work if the listener can keep up (the OOM means yours can't).
We have an open JIRA issue to resolve this for ack=NONE, but the above technique is usually sufficient.
You can also improve throughput using the container concurrency settings, as in the sketch below.
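A minimal sketch of that container setup using Spring AMQP 1.x-era APIs (the queue name, host, and listener body are placeholders, not taken from the question):

import org.springframework.amqp.core.AcknowledgeMode;
import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageListener;
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

public class BatchedAckConsumer {
    public static void main(String[] args) {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
        container.setQueueNames("es.ingest");               // hypothetical queue name
        container.setAcknowledgeMode(AcknowledgeMode.AUTO); // the container acks for you
        container.setPrefetchCount(1000);    // broker delivers up to 1000 unacked messages
        container.setTxSize(1000);           // one ack per 1000 messages, not per message
        container.setConcurrentConsumers(4); // container concurrency; tune to your load
        container.setMessageListener(new MessageListener() {
            public void onMessage(Message message) {
                // hand off to your Elasticsearch bulk indexer here
            }
        });
        container.start();
    }
}

Unlike ack=NONE, this bounds the in-flight backlog at roughly prefetchCount x concurrentConsumers messages per container, which is what keeps the heap from filling up.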

Related

Flow control limiting message rate on a single queue

I have an exchange and only one queue bound to it. When the message publishing rate goes over some cap, RabbitMQ automatically throttles the incoming message rate.
On further investigation I found this happens due to the "flow control" throttling mechanism built into RabbitMQ: https://www.rabbitmq.com/blog/2014/04/14/finding-bottlenecks-with-rabbitmq-3-3/
As per this document, I have connections and channels in flow control, not the queue, which means the limit is CPU-bound or disk-bound.
My messages are not persistent, so I don't have a disk limitation. On searching, I found documents stating that a queue is limited to a single CPU: https://groups.google.com/forum/#!msg/rabbitmq-users/wzHMV7F0ugU/zhW_9b8ACQAJ
What does this mean? Does the RabbitMQ queue process use only 1 CPU even when multiple cores are available on the machine? What is the CPU limitation with respect to queue flow control?
A queue is handled by one and only one CPU core, which means you have to design your message flow through Rabbit with multiple queues in order to remain scalable.
If you are on one queue only, you will be limited to a maximum message rate no matter whether you have 1 or more cores.
https://www.rabbitmq.com/queues.html#runtime-characteristics
If you have a specific need to build an architecture with only one logical queue, which is explicitly not recommended, or if you have a queue with really high traffic, you can check out sharded queues here: Github Sharded queues Plugin
It's a plugin (take it with caution and test everything before going to production, especially failure and replication) that splits a logical queue name into multiple queues.
If you are running a benchmark on RabbitMQ, remember to produce and consume on a number of queues greater than the number of CPU cores present on the server, as in the sketch below.
Other benchmarking tips: try producing only, consuming only, and both at the same time, with different persistence settings (persistence, message size, lazy queues, ...) and ack settings.
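A hedged sketch of that benchmark shape with the plain RabbitMQ Java client (queue names, message count and size are made up; the point is only that publishes spread over more queues than cores):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class MultiQueuePublish {
    public static void main(String[] args) throws Exception {
        int queues = Runtime.getRuntime().availableProcessors() + 2; // more queues than cores
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection conn = factory.newConnection();
        Channel ch = conn.createChannel();
        for (int q = 0; q < queues; q++) {
            // non-durable queues, matching the non-persistent setup in the question
            ch.queueDeclare("bench." + q, false, false, false, null);
        }
        byte[] body = new byte[1024];
        for (int i = 0; i < 1000000; i++) {
            // round-robin across queues so no single queue (one core) becomes the bottleneck
            ch.basicPublish("", "bench." + (i % queues), null, body);
        }
        conn.close();
    }
}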

RabbitMQ delayed exchange plugin loads and resources

We are using RabbitMQ (3.6.6) to send analyses (millions of them) to different analyzers. These are very quick, and we were planning to use the delayed-message plugin to schedule monitoring checks over the analyzed elements.
We were thinking about the rabbitmq-delayed-exchange-plugin; we have already made some tests and we need some clarification.
Currently:
We are scheduling millions of messages
Delays range from a few minutes to 24 hours
As previously said, these are tests, so we are using a machine with one core and 4G of RAM, which also has other apps running on it.
What happened with the high memory watermark set at 2.0G:
RabbitMQ eventually (after a day or so) starts consuming 100% of its single core and stops responding to the management interface and rabbitmqctl. This goes on for at least 18 hours (we always end up killing it, deleting the delayed-message Mnesia file on disk - about 100-200 MB - and restarting).
What happened with the high memory watermark set at 3.6G:
RabbitMQ was killed by the kernel because of high memory usage (4 GB of hardware) about a week after working like this.
The Mnesia file for the delayed exchange is about 1.5G.
RabbitMQ cannot start anymore, giving the trace below (we assume that, because of the KILL, messages in the delay store somehow ended up corrupted):
{could_not_start,rabbit,
rabbitmq-server[12889]: {{case_clause,{timeout,['rabbit_delayed_messagerabbit@rabbitNode']}},
rabbitmq-server[12889]: [{rabbit_boot_steps,'-run_step/2-lc$^1/1-1-',1,
And right now we are asking ourselves: are we in over our heads using the delayed exchange plugin for this volume of information? If we are, then end of problem, rethink and restart; but if not, what would be an appropriate hardware and/or configuration setup?
The RabbitMQ delayed exchange plugin is not designed to store millions of messages.
This is also documented on the plugin page:
Current design of this plugin doesn't really fit scenarios with a high
number of delayed messages (e.g. 100s of thousands or millions). See
issue #72 for details.
Read also here: https://github.com/rabbitmq/rabbitmq-delayed-message-exchange/issues/72
This plugin is often used as if RabbitMQ was a database. It is not.
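If you do rethink, one widely used substitute at this volume is the per-queue TTL plus dead-letter-exchange pattern: messages wait in an ordinary queue until their TTL expires, then RabbitMQ dead-letters them to the exchange your consumers actually read from. A sketch with the Java client; all names are invented, and note that per-queue TTL expires messages in FIFO order, so you need one wait queue per delay tier:

import java.util.HashMap;
import java.util.Map;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class DelayViaDlx {
    public static void main(String[] args) throws Exception {
        Connection conn = new ConnectionFactory().newConnection();
        Channel ch = conn.createChannel();

        ch.exchangeDeclare("monitor.ready", "direct", true); // consumers bind here
        ch.queueDeclare("monitor.work", true, false, false, null);
        ch.queueBind("monitor.work", "monitor.ready", "check");

        // wait queue for the 10-minute tier: expired messages are dead-lettered
        Map<String, Object> waitArgs = new HashMap<String, Object>();
        waitArgs.put("x-message-ttl", 10 * 60 * 1000);
        waitArgs.put("x-dead-letter-exchange", "monitor.ready");
        waitArgs.put("x-dead-letter-routing-key", "check");
        ch.queueDeclare("wait.10m", true, false, false, waitArgs);

        // publish into the wait queue; it shows up on monitor.work 10 minutes later
        ch.basicPublish("", "wait.10m", null, "analysis-id-123".getBytes("UTF-8"));
        conn.close();
    }
}

These are plain durable queues backed by the regular message store, which is built for large backlogs in a way the plugin's Mnesia-backed store is not (per issue #72 above).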

ActiveMQ performance for producing persistent text messages

As advised on the webpage activemq-performance-module-users-manual, I've tried (on an Intel i7 laptop with Windows 7 and an SSD drive) the performance of producing persistent messages on an ActiveMQ queue:
mvn activemq-perf:producer -Dproducer.destName=queue://TEST.FOO -Dproducer.deliveryMode=persistent
against the default installation of ActiveMQ 5.12.1.
The performance I got was around 300-400 messages per second.
On the page activemq-performance I have read much higher numbers:
When running the server on one box and a single producer and consumer thread in separate VMs on the other box, using a single topic we got around 21-22,000 messages/second using 1-2K messages.
On the other hand, when the messages are not persistent, the producer's performance grows to 49000 messages per second:
-Dproducer.deliveryMode=nonpersistent
When the messages are sent asynchronously:
-Dproducer.deliveryMode=persistent -Dfactory.useAsyncSend=true
I get around 23000 messages sent per second.
From what I see here, stackoverflow-activemq-persistent-performance-on-different-operatiing-systems, it makes a difference which operating system ActiveMQ runs on.
Can somebody give me some tips for getting better performance when writing persistent ActiveMQ messages?
Performance of sending persistent messages is all about disk-based I/O, as the message must be written to disk before the broker signals the client that the send has completed. The faster the disk, the better your throughput will be, all else being equal.
To work around some of this, you can send persistent messages in transactional batches, so that each send completes immediately and the synchronization point is reduced to the transaction boundary.
Depending on the size of the text messages, you can also gain some performance by using compression; this can be turned on via an option on the ActiveMQConnectionFactory.
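A sketch of both ideas with plain JMS against ActiveMQ (broker URL, batch size and message count are arbitrary):

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BatchedPersistentProducer {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        factory.setUseCompression(true); // compress message bodies
        Connection connection = factory.createConnection();
        connection.start();
        // transacted session: the disk sync happens once per commit, not once per send
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageProducer producer = session.createProducer(session.createQueue("TEST.FOO"));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        for (int i = 1; i <= 100000; i++) {
            producer.send(session.createTextMessage("message-" + i));
            if (i % 1000 == 0) {
                session.commit(); // one synchronization point per 1000 messages
            }
        }
        session.commit(); // flush any remaining messages in the final partial batch
        connection.close();
    }
}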

RabbitMQ clustering

I have created a RabbitMQ cluster on a single Windows machine with an HA policy of "all", two DISC nodes, two RAM nodes, and one STATS node. I then ran PerfTest (the RabbitMQ client test utility), and the result was disappointing: around 5000 msg/sec. When I ran the same test against a single RabbitMQ node it gave a good result, i.e. 25000 msg/sec. I can't see what is going wrong; the result should be impressive when running within a cluster, but it is the opposite. Has anyone encountered the same, or does anyone know the reason behind it?
Thanks
A RabbitMQ cluster with mirrored queues won't go faster than a single node. Why? Clustering is there to improve reliability and fault tolerance, not to improve throughput.
What's the reason for this? When you enable mirrored queues, RabbitMQ needs to coordinate state between nodes: it has to coordinate publishes, consumers and acks so as not to deliver the same message more than once, or to more than one consumer. All this coordination affects performance, but that's the tradeoff with this kind of replication.
If you need decentralised replication, you could use the Federation Plugin instead.
The throughput rate depends on a couple of factors. In our perf tests of RabbitMQ in a cluster, we observed that the rate varied depending on whether the RabbitMQ nodes were DISC or RAM, but a big chunk of the variation appeared when running the cluster with mirrored queues versus without. With mirroring enabled we saw a rate of 3500 msg/sec, while without it we saw 5000 msg/sec. Also, what is your message size when you run PerfTest?
As is typical with RabbitMQ, it really depends. Here are a few ways that I have found to improve performance with RabbitMQ clustering:
Push messages only to a set of appropriately sized RAM nodes, using a load balancer
Keep the message size very small
Do not use amqp transactions or Publisher Confirms
Only use HA mirrored queues for the small set of queues whose data you absolutely must preserve
Set a TTL on all messages or queues using a policy (see the example below)
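For the last point, a TTL policy can be applied from the command line; something like this (the policy name, pattern and the 60-second TTL are arbitrary):

rabbitmqctl set_policy expire-all ".*" '{"message-ttl":60000}' --apply-to queues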
Just to add on to the above comments, putting these here as FYI:
http://www.rabbitmq.com/blog/2012/05/11/some-queuing-theory-throughput-latency-and-bandwidth/
http://www.rabbitmq.com/blog/2012/04/25/rabbitmq-performance-measurements-part-2/
The problem is that you are running a cluster on the same machine with the same resources.
The purpose of a Rabbit cluster is to scale out, not to scale in.
In other words, it is there to give you more network connections, more disk throughput and, of course, more CPU power to handle more messages.
When you add nodes on a single machine you don't scale your resources, and on top of that you add the overhead of running a cluster (as stated above).

how to recover from message store exhaustion?

When an ActiveMQ broker gets flooded with messages or the consumer fails, it stops accepting messages once certain (configurable) limits are reached. In broker networks this effect can take down the whole cluster.
I'm currently using the default configuration for memory limits and see the following behavior:
the consumer fails or becomes very slow (a known problem)
broker A (the one the consumer connects to) fills up and stops accepting messages
all the other brokers fill up and stop accepting messages
the cluster is basically down
If the consumer comes back online now, it tries to reconnect to one of the cluster nodes, but the nodes will not accept the connection, because doing so would create advisory messages that can't be handled while the broker is already full.
How do I have to configure the memory limits so that my production destinations are limited and blocked, but the broker can still accept advisories, so my consumer can recover?
You should be able to use producerFlowControl to slow producers down so they don't overwhelm your broker. That said, it is enabled by default, so you are likely using it already...
I would try something like this (assuming an 8GB box or so):
use the failover transport everywhere (broker/client connections)
increase JVM heap to 4 GB
increase systemUsage limits substantially (memoryUsage 3gb, storeUsage/tempUsage = 10 gb)
enable producer flow control on both topics and queues
set the per-destination memory limit to 2GB divided by the total number of topics + queues
in other words, the per-destination limits should in total be substantially less than the memoryUsage limit
exclude the Advisory topics from the producer flow control (they might be already)
This should limit the producers and leave resources for your system to function, recover, and accept consumer connections; a configuration sketch follows below.
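A hedged activemq.xml sketch of those limits (the element names are standard ActiveMQ configuration; the concrete numbers just mirror the list above, and the advisory entry should be checked against your broker version):

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
  <systemUsage>
    <systemUsage>
      <memoryUsage><memoryUsage limit="3 gb"/></memoryUsage>
      <storeUsage><storeUsage limit="10 gb"/></storeUsage>
      <tempUsage><tempUsage limit="10 gb"/></tempUsage>
    </systemUsage>
  </systemUsage>
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- advisories excluded from flow control so consumers can reconnect -->
        <policyEntry topic="ActiveMQ.Advisory.>" producerFlowControl="false"/>
        <!-- per-destination limits; keep the sum well under the 3 gb memoryUsage -->
        <policyEntry queue=">" producerFlowControl="true" memoryLimit="10 mb"/>
        <policyEntry topic=">" producerFlowControl="true" memoryLimit="10 mb"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
</broker>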