How to reduce the frequency of ActiveMQ KeepAlive messages - activemq

We have configured HTTP transport for ActiveMQ. However, we are noticing that there are thousands of KeepAlive messages. I understand that KeepAlive messages are used to control how "dead" connections are detected and purged by the Inactivity Monitor: http://activemq.apache.org/activemq-inactivitymonitor.html
<org.apache.activemq.command.KeepAliveInfo>
<commandId>0</commandId>
<responseRequired>false</responseRequired>
</org.apache.activemq.command.KeepAliveInfo>
From the documentation, it seems that the Inactivity Monitor can be turned off, but what I am trying to figure out is whether there is a setting to reduce the amount of chatter on the line without eliminating it completely. I am OK with one message per second, for example, but we are getting thousands.

A transport connector has a parameter, "wireFormat.maxInactivityDuration", that determines the maximum inactivity duration. To reduce the frequency of keepalives, increase this value. The default value is 30000 ms (30 seconds).
If the default value of 30 seconds is in effect and you are getting thousands of keepalives per second, I would expect you have tens of thousands of connections.
Here is an example of specifying this parameter:
<transportConnectors>
<transportConnector name="openwire"
uri="tcp://0.0.0.0:61616?wireFormat.maxInactivityDuration=30000&wireFormat.maxInactivityDurationInitalDelay=10000"/>
</transportConnectors>
If the other end of the connection specifies a shorter duration than your end, the shorter duration will be used by both ends of the connection. There doesn't appear to be a setting for a "minimum inactivity duration", so you will have to live with that if a client chooses a very short duration.
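For completeness, the same wireFormat options can also be passed on the connection URI used by a JMS client, which is how a client ends up negotiating its own duration. Here is a minimal sketch; the broker host and the 5-minute value are placeholders, not values from the question:
// Hypothetical client-side example; broker host and duration are assumptions.
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class KeepAliveClient {
    public static void main(String[] args) throws Exception {
        // The client asks for a 5-minute inactivity window; the broker and client
        // will end up using the shorter of the two configured values.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://broker-host:61616?wireFormat.maxInactivityDuration=300000");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers and consumers as usual ...
        connection.close();
    }
}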

Related

activemq memory limit configuration Vs maxPageSize

I am unable to understand how the two attributes 'memoryLimit' and 'maxPageSize' differ.
As per documentation:
'maxPageSize' = 'maximum number of persistent messages to page from store at a time'
'memoryLimit' - corresponds to the amount of memory that's assigned to the in-memory store for the queue
Here is a sample configuration for a queue :
<policyEntry queue="Consumer.normal.queue" producerFlowControl="true" memoryLimit="3200" maxPageSize="4"
maxBrowsePageSize="1000" prioritizedMessages="true" useCache="false" expireMessagesPeriod="0" queuePrefetch="1">
What I have observed is that if maxPageSize = 1 and memoryLimit = "3200", then I can see 2 messages loaded into memory and they can be browsed via a JMS client (the rest of the messages stay in KahaDB);
however, if maxPageSize = 4 and memoryLimit = "3200", then I can see 4 messages loaded into memory and they can be browsed via a JMS client.
So are the two values meant to serve the same purpose?
AND
does it mean that whichever of these two attributes allows the greater number of messages is the one ActiveMQ uses?
maxPageSize determines how many messages ActiveMQ loads from the store (in your case, KahaDB) to hand to consumers. The memoryLimit indicates how much memory to allocate to keep messages in memory.
In short, keep (message size x maxPageSize) <= memoryLimit so that you do not hit producer flow control.
You want your page size to be much higher than 1 or 2 for ActiveMQ to perform well (200 to 1000 is a reasonable starting point). Numbers that low will add latency.
Note: Priority is an anti-pattern in distributed messaging at significant load (over 1M messages per day). It works well in a local embedded broker within your Java VM process. ActiveMQ disables it by default.
To enable priority support, add the attribute prioritizedMessages="true" to the <policyEntry queue=".."> element inside <destinationPolicy>, as in the sample configuration above.
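If it helps to verify what has actually been paged into memory, here is a minimal JMS browse sketch along the lines of the browsing described in the question; the broker URL and queue name are assumptions:
// Hypothetical browse example; broker URL and queue name are placeholders.
import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BrowsePagedMessages {
    public static void main(String[] args) throws Exception {
        Connection connection = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("Consumer.normal.queue");
        // In the setup described above (useCache="false"), the browser only showed
        // the messages that had been paged into memory, which is why maxPageSize
        // affects how many messages appear here.
        QueueBrowser browser = session.createBrowser(queue);
        Enumeration<?> messages = browser.getEnumeration();
        int count = 0;
        while (messages.hasMoreElements()) {
            messages.nextElement();
            count++;
        }
        System.out.println("Browsed " + count + " messages");
        browser.close();
        connection.close();
    }
}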

rabbitmq performance check

I was trying to perform a load test on RabbitMQ messaging to see to what extent it can take messages into a queue and transfer them to the target machine over a shovel.
Steps I followed:
The producer has 20 threads. Each thread sends messages to a dedicated queue (say ProducerQueue1 to ProducerQueue20). Each message is 51 MB in size. Messages are sent at random intervals of 1-50 seconds, generated with java.util.Random.
After each message is sent at its random second, there is a sleep of 2 minutes, so each producer thread sleeps for 2 minutes after every send.
The messages are sent in an infinite while loop (a rough sketch of one producer thread follows these steps).
There are shovels from each dedicated producer queue to the dedicated consumer-side queues (say ConsumerQueue1 to ConsumerQueue20).
The link speed is 100 Mbps.
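As referenced above, here is a rough sketch of what one such producer thread looks like; the host, queue name and payload are placeholders, not the exact test code:
// Hypothetical sketch of one producer thread; host, queue name and payload size are assumptions.
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.Random;

public class ProducerThread implements Runnable {
    private final String queueName;

    public ProducerThread(String queueName) {
        this.queueName = queueName;
    }

    public void run() {
        try {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("producer-host");
            Connection connection = factory.newConnection();
            Channel channel = connection.createChannel();
            channel.queueDeclare(queueName, true, false, false, null);
            byte[] payload = new byte[51 * 1024 * 1024]; // ~51 MB message body
            Random random = new Random();
            while (true) {
                // Wait a random 1-50 seconds before sending, then sleep 2 minutes.
                Thread.sleep((random.nextInt(50) + 1) * 1000L);
                channel.basicPublish("", queueName, null, payload);
                Thread.sleep(2 * 60 * 1000L);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}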
Issue observed:
Initially the messages are transferred with no issues, but after some time the network at the consumer side gets choked.
The reason for the choking is that, after a certain period of time, if the random send times of even 4 or 5 of the 20 threads coincide, the consumer receives close to 250 MB in one shot. Since the network speed is 100 Mbps, as mentioned above, the network gets choked.
Because of this, the shovels are not able to exchange heartbeats to stay in the "running" state, which causes them to move from "running" to "terminated". The shovels then try to re-establish the connection depending on the "reconnect delay".
Due to the break in the shovels on the producer side, the queues at the producer start accumulating messages.
My Question:
The consumer's RabbitMQ memory keeps increasing as the queues accumulate more messages. The memory crosses the high watermark, so the purpose of the watermark is not served. I have 16 GB of RAM and have set the watermark to 40% (i.e. 6.4 GB), but the memory still shoots up to 10 GB, does not recover, and the producer system hangs.
Can anyone please answer my question, and also tell me whether there can be any other reason for the network choking I mentioned above?
Thanks in advance.

What is a reasonable value for heartbeat in RabbitMQ?

RabbitMQ allows you to "heartbeat" a connection, i.e. from time to time the client and the server check (using empty messages) that the other party is still there and available. So far, so good.
Unfortunately, I was not able to find a place in the documentation where a suggestion is made what a reasonable value for this is. I know that you need to specify the heartbeat in seconds, but what is a real-world best practice value?
Obviously, it should not be too often (traffic), but also not too rare (proxies, …). Any suggestions?
Is 15 seconds fine? 30? 60? …?
This answer is for RabbitMQ < 3.5.5; for newer versions see the answer from #bmaupin.
It depends on your application's needs. Out of the box it is about 10 minutes (580 seconds) for RabbitMQ. If two heartbeats in a row are missed (roughly 20 minutes of inactivity), the connection will be closed immediately, without any connection.close method or any error being sent from the broker side.
The case for using heartbeats is firewalls that close connections that have been inactive for a long time, or other network settings that don't allow long-lived idle connections.
In fact, a heartbeat is not a must; from the RabbitMQ config doc:
heartbeat
Value representing the heartbeat delay, in seconds, that the server sends in the connection.tune frame. If set to 0, heartbeats are disabled. Clients might not follow the server suggestion, see the AMQP reference for more details. Disabling heartbeats might improve performance in situations with a great number of connections, but might lead to connections dropping in the presence of network devices that close inactive connections.
Default: 580
Note that having too short a heartbeat interval may result in significant network overhead. Keep in mind that heartbeat frames are only sent when there is no other activity on the connection for a heartbeat interval.
The RabbitMQ documentation now provides a recommended heartbeat timeout value between 5 and 20 seconds:
Setting heartbeat timeout value too low can lead to false positives (peer being considered unavailable while it really isn't the case) due to transient network congestion, short-lived server flow control, and so on. This should be taken into consideration when picking a timeout value.
Several years worth of feedback from the users and client library maintainers suggest that values lower than 5 seconds are fairly likely to cause false positives, and values of 1 second or lower are very likely to do so. Values within the 5 to 20 seconds range are optimal for most environments.
Source: https://www.rabbitmq.com/heartbeats.html#false-positives
In addition, as of RabbitMQ 3.5.5 the default heartbeat timeout value is 60 seconds (https://www.rabbitmq.com/heartbeats.html#heartbeats-timeout)
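For example, with the RabbitMQ Java client you can request a heartbeat in that recommended range when creating the connection; the host name and the 15-second value here are just illustrative:
// Illustrative only: the host name and the 15-second value are assumptions.
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class HeartbeatExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq-host");
        // Requested heartbeat in seconds; the client and server negotiate the final value.
        factory.setRequestedHeartbeat(15);
        Connection connection = factory.newConnection();
        // ... use the connection ...
        connection.close();
    }
}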

RabbitMQ consumer crash and consumer-count

If a consumer of a RabbitMQ crashes, with no graceful disconnection, will a subsequent declare-ok request fired several milliseconds later report a diminished consumer-count? Or is there an amount of time that needs to pass before the reported number will change?
declare-ok counts all known consumers regardless of their actual state.
In fact, for some time after a connection goes dead it is still marked as alive (the exact time depends on OS settings, whether you use heartbeats, and whether there is any network activity on that connection). In the RabbitMQ management panel you may still see the connection and its channels, with consumer tags listed, for some time after the connection has died.
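A quick way to check the reported count from the Java client is a passive declare; the queue name "my-queue" is a placeholder. The value it returns reflects the broker's current view, including consumers whose dead connections have not yet been detected:
// Illustrative check; the queue name "my-queue" is an assumption.
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ConsumerCountCheck {
    public static void main(String[] args) throws Exception {
        Connection connection = new ConnectionFactory().newConnection();
        Channel channel = connection.createChannel();
        // A passive declare does not modify the queue; it just returns its current stats.
        AMQP.Queue.DeclareOk ok = channel.queueDeclarePassive("my-queue");
        System.out.println("consumer-count: " + ok.getConsumerCount());
        connection.close();
    }
}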

Will the maximum limit of the configuration property MaxReceivedMessageSize in WCF affect service performance?

I'm getting the following communication exception when my WCF service makes a call to another WCF service:
"The maximum message size quota for incoming messages (65536) has been exceeded. To increase the quota, use the MaxReceivedMessageSize property on the appropriate binding element."
I resolved this by increasing the size as below:
maxReceivedMessageSize="50000000"
But here I want to know whether there are any side effects of increasing the message size to such a big value.
Yes - it might. The reason WCF keeps this limit low (64K) by default is this: imagine your server is busy responding to requests, say dozens or hundreds, and they all require the maximum message size.
Potentially, your server could have to allocate dozens or hundreds of message buffers at the same time - if you have 100 users and each requests 64K, that's 6.4 MByte - but if you have 200 users and each requests 5 MB - that's a gigabyte of RAM in the server - just for the message buffers, for one service.
So yes - putting a limit on the max message size does make sense and it helps manage your server's memory consumption (and thus performance). If you open it up too wide, an attacker might just do such an attack - flooding your server with bogus requests, each allocating as much memory as they can get, ultimately bringing your server down (Denial of Service attacks like that are quite common).
You need to increase the quota according to what the web service actually requires. The one side effect I can think of with very large values is that memory usage will increase, but you can safely raise it to a suitable value without any other adverse effects.