I am unable to understand how the two attributes 'memoryLimit' and 'maxPageSize' differ.
As per documentation:
'maxPageSize' = 'maximum number of persistent messages to page from store at a time'
'memoryLimit' - corresponds to the amount of memory that's assigned to the in-memory store for the queue
Here is a sample configuration for a queue:
<policyEntry queue="Consumer.normal.queue" producerFlowControl="true" memoryLimit="3200" maxPageSize="4"
maxBrowsePageSize="1000" prioritizedMessages="true" useCache="false" expireMessagesPeriod="0" queuePrefetch="1"/>
What I have observed is that if maxPageSize = 1 and memoryLimit = "3200", then I can see 2 messages loaded into memory and they can be browsed via a JMS client (the rest of the messages stay stored in KahaDB);
however, if maxPageSize = 4 and memoryLimit = "3200", then I can see 4 messages loaded into memory and they can be browsed via a JMS client.
So are the two values meant to serve the same purpose?
AND
does it mean that whichever of these two attributes allows the greater number of messages will be used by ActiveMQ?
maxPageSize determines how many messages ActiveMQ loads from the store (in your case, KahaDB) to hand to consumers. The memoryLimit indicates how much memory to allocate to keep messages in memory.
In short, keep (message size x maxPageSize) <= memoryLimit so that you do not hit producer flow control.
You want your page size to be much higher than 1 or 2 for ActiveMQ to perform well (start with 200 to 1000). Numbers that low will have higher latency.
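As a rough illustration of that relationship (all the numbers below are assumptions, not taken from the configuration above):

    // Back-of-the-envelope sizing; avgMessageBytes and maxPageSize are assumed values.
    public class PageSizingSketch {
        public static void main(String[] args) {
            long avgMessageBytes = 4 * 1024;                 // assume ~4 KB per message
            int maxPageSize = 200;                           // a reasonable starting page size
            long pagedBytes = avgMessageBytes * maxPageSize; // ~800 KB paged into memory at once
            // Keep memoryLimit comfortably above pagedBytes so paging alone never
            // pushes the destination into producer flow control.
            long memoryLimit = 4 * pagedBytes;               // e.g. ~3.2 MB
            System.out.println("pagedBytes=" + pagedBytes + ", suggested memoryLimit=" + memoryLimit);
        }
    }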
Note: Priority is an anti-pattern in distributed messaging at significant load (over 1M messages per day). It works well in a local embedded broker within your Java VM process. ActiveMQ disables it by default.
To enable priority support, update the queue's <policyEntry> inside <destinationPolicy> and add the attribute prioritizedMessages="true" (as in the sample configuration above).
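Priorities also have to be set by the producer; a minimal JMS sketch, assuming a local broker and the queue from the sample configuration:

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    // Minimal sketch: send a high-priority message. Broker URL, queue name,
    // and the priority value are assumptions for illustration.
    public class PrioritySendSketch {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("Consumer.normal.queue");
            MessageProducer producer = session.createProducer(queue);
            TextMessage message = session.createTextMessage("urgent work item");
            // Priority range is 0-9 (9 = highest); the broker only orders by it
            // when prioritizedMessages="true" is set on the destination's policyEntry.
            producer.send(message, DeliveryMode.PERSISTENT, 9, 0);
            connection.close();
        }
    }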
Related
I'm using the Java Client 3.5.6 for RabbitMQ.
My use case is this:
I have 10-15 Channels to one queue (mostly the same connection, one connection per channel makes no difference).
I get them without autoAck. Every channel has a prefetch/QoS size of 5000. So let's just assume I have 30 channels, so I can get 150000 messages.
Every full minute, I compute some things and, when successful, I use basicAck to acknowledge these messages.
However, the management web interface shows that during that phase 0 messages are delivered, which is not realistic unless those are somehow "blocked".
I'm using this queue on a 3-node cluster as an HA queue with a TTL of 1800 seconds. The nodes are connected via internal LAN and the machines are really powerful with plenty of RAM.
My Question:
Why does this basicAck operation block the rest of the operations like publishing or delivering new messages?
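For context, here is a minimal sketch of the setup described above using the RabbitMQ Java client; the host, queue name, and the batched-ack detail (multiple = true) are assumptions on my part:

    import com.rabbitmq.client.*;
    import java.util.concurrent.atomic.AtomicLong;

    // One channel with a prefetch of 5000 and a periodic batched ack.
    public class BatchAckSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            Connection connection = factory.newConnection();
            final Channel channel = connection.createChannel();
            channel.basicQos(5000);                 // per-channel prefetch (QoS)
            final AtomicLong lastTag = new AtomicLong();

            channel.basicConsume("work.queue", false, new DefaultConsumer(channel) {
                @Override
                public void handleDelivery(String consumerTag, Envelope envelope,
                                           AMQP.BasicProperties properties, byte[] body) {
                    lastTag.set(envelope.getDeliveryTag());  // remember the newest delivery
                    // ... buffer the message for the once-a-minute computation ...
                }
            });

            // Once a minute (scheduling omitted): ack everything up to the last tag
            // in one call instead of thousands of individual basicAck calls.
            channel.basicAck(lastTag.get(), true);           // multiple = true
        }
    }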
What is the maximum size that a message can be when publishing to a RabbitMQ queue (pub/sub model)?
I can't see any explicit limits in the docs but I assume there are some guidelines.
Thanks in advance.
I was doing a comparison between Amazon Queue Service and RabbitMQ and other streaming/messaging platforms like Kinesis and Kafka. Amazon Queue Service only supports messages from 2^10 bytes (1 kilobyte) up to 2^18 bytes (256 kilobytes), and Kinesis has size limits too. (I don't know why.)
Anyway, in theory AMQP would handle 2^64 bytes. So, even for a huge message, RabbitMQ might work on a single broker, definitely taking minutes/hours to persist, but it might not in a cluster of brokers. If the message transfer time between nodes (60 seconds?) exceeds the heartbeat time between nodes, it will cause the cluster to disconnect and lose the message.
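If you do push large messages, the client-negotiated heartbeat is one knob worth checking; a minimal sketch with the Java client (host and interval are assumptions, and note that the inter-node timing in a cluster is configured separately on the broker):

    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    // Sketch: raise the client-requested heartbeat so long transfers are less
    // likely to be mistaken for a dead peer.
    public class HeartbeatSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            factory.setRequestedHeartbeat(60);   // seconds; 0 disables heartbeats
            Connection connection = factory.newConnection();
            // ... publish/consume as usual ...
            connection.close();
        }
    }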
This thread is useful -> Can RabbitMQ handle big messages?
References
http://grokbase.com/t/rabbitmq/rabbitmq-discuss/127wsy1h92/limiting-the-size-of-a-message
http://comments.gmane.org/gmane.comp.networking.rabbitmq.general/14665
http://rabbitmq.1065348.n5.nabble.com/Max-messages-allowed-in-a-queue-in-RabbitMQ-td26063.html
https://www.rabbitmq.com/heartbeats.html
Can anyone please tell me the difference between maxPageSize and prefetch limit?
maxPageSize: according to the Apache website, the "maximum number of persistent messages to page from store at a time". This applies when messages are stored in a persistent store (like KahaDB). The broker holds references to the messages stored in the database, and maxPageSize limits the number of message references it can hold. These references exist for faster access to the stored messages (like an index in a database, which increases performance).
Prefetch limit: relates to the number of messages sent to the consumer in advance to improve performance. If you set the prefetch limit to 0, the consumer keeps polling the queue for messages; if you set it to 100, ActiveMQ sends up to 100 messages in advance (prefetched) to the consumer to process, which removes the extra round-trips the consumer would otherwise make to check the queue for messages.
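The prefetch limit is set on the client side; a minimal ActiveMQ sketch using a destination option (the broker URL and queue name are assumptions):

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    // Sketch: cap the consumer prefetch via a destination option.
    public class PrefetchSketch {
        public static void main(String[] args) throws JMSException {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            // consumer.prefetchSize=100 tells the broker to push at most 100
            // unacknowledged messages to this consumer ahead of processing.
            Queue queue = session.createQueue("Consumer.normal.queue?consumer.prefetchSize=100");
            MessageConsumer consumer = session.createConsumer(queue);
            Message message = consumer.receive(1000);   // prefetched messages arrive without polling
            connection.close();
        }
    }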
I'm using RabbitMQ to handle app logs (Windows Server 2008 install). Apps send messages to the exchange. I have a dedicated queue that gets messages forwarded to it. I then have a Windows service connecting to that queue, pulling messages off, and persisting them to the DB. I have n clients connecting to the exchange in real time to latch on to the stream, so there are n connections at a time. It is possible that some of these clients may not Close() their connections in code. Many clients have long-running connections.
As messages are pulled off the queue, they are auto-ack'ed, so I don't have any unacknowledged messages on the queue. However, I'm seeing the memory of Rabbit grow over time. It starts at 32K or so when first turned on then creeps up until it exceeds the threshold and blocks incoming connections.
I have both .NET and Java clients--but both are auto-ack.
Reading the docs, I didn't see any description of how Rabbit uses memory--i.e. I don't understand why memory would be bloating over time. The messages are getting pulled off and ack'ed, which seems to me would mean that Rabbit isn't holding on to them any more and can free the associated memory, giving a stable memory usage profile.
I don't see how fiddling with the memory dial in Rabbit would help either--usage just creeps upwards over time: eventually I'll exceed it.
My guess is that there is something I'm doing wrong with my clients that is causing the memory to grow over time, but I can't think of why that would be.
Why does Rabbit memory usage creep up when no messages are kept on any queues?
What coding practices could cause the RabbitMQ server to retain (and grow) memory?
Is it possible that you have other queues bound to the exchange? Check the Rabbit admin page under exchanges, click on your exchange, and check for queues bound to it. It may be that one of your clients, when declaring the exchange, is inadvertently binding an unnamed (server-generated, randomly named) queue to the exchange, and messages are piling up in there.
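For illustration, this is the sort of client code that creates such a queue without anyone naming it (host and the exchange name "app.logs" are placeholders, not from the question):

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    // Sketch of the accidental-binding pattern: an exclusive, server-named queue
    // bound to the exchange quietly accumulates a copy of every message until the
    // client's connection dies.
    public class AccidentalBindingSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();
            String q = channel.queueDeclare().getQueue();  // auto-named, exclusive, auto-delete queue
            channel.queueBind(q, "app.logs", "");          // now every published message piles up here
        }
    }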
The other thing to check is the QoS settings - if you leave QoS set at the default (infinite) then Rabbit will send out messages immediately to any client regardless of how many messages they are already holding. This results in a lot of book-keeping, like which client has which message on the server, and a large buffer on the client.
Make sure to set your QoS pre-fetch limit to something much more reasonable, like say 100. That way, if you have 1M messages and only 1 client with prefetch of 100, Rabbit will send only 100 to the client and keep the other 999900 on disk on the server, and not use nearly as much memory.
This was a big cause of memory bloat in my application, and now that I've addressed prefetch, everything is fine.
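A minimal sketch of that prefetch fix with the Java client, switching to explicit acks so the QoS limit actually applies (host and queue name are placeholders):

    import com.rabbitmq.client.*;

    // Sketch: cap unacknowledged messages pushed to this consumer at 100 so the
    // broker doesn't buffer huge per-client state.
    public class PrefetchFixSketch {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            Connection connection = factory.newConnection();
            final Channel channel = connection.createChannel();
            channel.basicQos(100);   // QoS prefetch: at most 100 unacked messages in flight
            channel.basicConsume("app.logs.queue", false, new DefaultConsumer(channel) {
                @Override
                public void handleDelivery(String consumerTag, Envelope envelope,
                                           AMQP.BasicProperties properties,
                                           byte[] body) throws java.io.IOException {
                    // ... persist the message to the database ...
                    channel.basicAck(envelope.getDeliveryTag(), false);   // explicit ack per message
                }
            });
        }
    }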
In my application, a specific service has a fixed capacity (e.g. 100 transactions at a time). Requests to the service arrive in real time as well as from batch jobs (queues). The real-time requests don't have a uniform distribution. I need a way to make sure that real-time jobs are processed before the batch jobs, and also that at no time do I exceed the threshold of the service.
Please evaluate the following approach.
Have 2 queues: A for real-time requests and B for batch jobs. Have a thread pool of size = 100 (the service threshold) and let the thread pool first try to pick messages from A, if any, else pick from B.
My application runs on WebLogic. I want to make use of MDBs instead of the thread pool, but there is no way to make an MDB listen to multiple queues.
Within JMS you can set a message priority, which should be respected where possible. This may be something simple to try.
Another option could be to set a JMS property on the message with the client and use a Message Selector on the MDB. You could set MY_MESSAGE_TYPE=batch/rt and then have multiple MDB's deployed that are listening to the same queue but can be assigned to different work managers. Keep in mind that Work Manager != Thread Pool. You can also set a Request Class to ensure that if the batch pool is in use that the RT pool will not be starved for threads/CPU.
With this design I believe that if you have two MDB's, one with a message selector, messages that meet the selector criteria should be delivered to the MDB with that selector (RT) before an MDB with no selectors (BATCH). This would be a fairly simple POC to do - set up a client that sends messages to the queue, some of which have the JMS property set to RT and others that do not have it set.
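A sketch of that POC, assuming a MY_MESSAGE_TYPE string property and standard EJB 3 annotations (the queue mapping and work manager assignment would go in weblogic-ejb-jar.xml):

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // Client side (sketch): stamp each message before sending.
    //   TextMessage msg = session.createTextMessage(payload);
    //   msg.setStringProperty("MY_MESSAGE_TYPE", "rt");   // or "batch"
    //   producer.send(msg);

    // MDB that only receives real-time messages; a second MDB with no selector
    // (or with MY_MESSAGE_TYPE = 'batch') picks up the rest from the same queue.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType",
                                  propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "messageSelector",
                                  propertyValue = "MY_MESSAGE_TYPE = 'rt'")
    })
    public class RealTimeMDB implements MessageListener {
        @Override
        public void onMessage(Message message) {
            // ... invoke the capacity-limited service; this MDB can be tied to its
            // own work manager with a request class so batch work can't starve it ...
        }
    }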
WebLogic 10.0 reference (which is still applicable): http://docs.oracle.com/cd/E11035_01/wls100/config_wls/self_tuned.html