ActiveMQ inactivity timeout - activemq

I am using ActiveMQ version 5.10.0 with default configuration.
The documentation on ActiveMQ transport protocols says that wireFormat.maxInactivityDuration defaults to 30000 and that transport.useKeepAlive is enabled by default.
Does that mean that with the default configuration an inactivity timeout will never occur, since keep-alive messages are enabled and sent by default?
I have tried leaving my queues idle for a day and I did not see any inactivity timeout logs.
But the ActiveMQ page also says:
" Using the default values; if no data has been written or read from the connection for 30 seconds, the InactivityMonitor kicks in. The InactivityMonitor throws an InactivityIOException and shuts down the transport associated with the connection."
http://activemq.apache.org/activemq-inactivitymonitor.html

The inactivity timeout occurs when the connection is broken or the broker is experiencing issues such that it cannot respond to the ping request that the client sends it. The timeout does not relate to message inactivity or the like, but to ping/pong-type heartbeats between client and broker. So long as the broker is healthy and sending the requested responses, the client will not terminate the connection even if no messages happen to be flowing across it.
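For reference, here is a minimal Java sketch showing how those transport options can be spelled out explicitly on the client's broker URI; the values shown are just the documented defaults, and the localhost address is an assumption:

```java
import javax.jms.Connection;

import org.apache.activemq.ActiveMQConnectionFactory;

public class InactivityMonitorExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker address; the options below simply make the defaults
        // explicit: a 30s inactivity window and keep-alive pings enabled.
        String uri = "tcp://localhost:61616"
                + "?wireFormat.maxInactivityDuration=30000"
                + "&transport.useKeepAlive=true";

        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(uri);
        Connection connection = factory.createConnection();
        connection.start();

        // As long as the broker keeps answering the keep-alive pings, this connection
        // stays open indefinitely even with no messages flowing. Setting
        // wireFormat.maxInactivityDuration=0 disables the InactivityMonitor entirely.
        connection.close();
    }
}
```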

Related

RabbitMQ connection with heartbeat

When I create a connection between my web application (as a client) and RabbitMQ (as a server) and set a heartbeat interval, which side sends these heartbeat requests and which one responds to them?
In some documentation I saw information suggesting that the client is the one sending these heartbeat frames.
But when I dumped my local traffic I saw something interesting about the heartbeat packets: it seems that the RabbitMQ server is sending heartbeat packets too.
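What the traffic dump shows is consistent with how AMQP 0-9-1 heartbeats work: the interval is negotiated at connection time, and after that both peers send heartbeat frames. A minimal Java-client sketch to inspect the negotiated value (the host and interval are assumptions):

```java
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class HeartbeatNegotiationExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");          // hypothetical broker address
        factory.setRequestedHeartbeat(30);     // client requests a 30-second heartbeat

        try (Connection connection = factory.newConnection()) {
            // The client's requested value and the server's configured value are
            // negotiated during connection setup; once agreed, both client and server
            // send heartbeat frames, which is why the dump shows traffic both ways.
            System.out.println("Negotiated heartbeat: " + connection.getHeartbeat() + "s");
        }
    }
}
```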

Why does MongooseIM close the websocket connection after 60 seconds?

I am using MongooseIM as a chat server and connecting to it over websocket using xmpp.js inside a react-native application. The server forcefully closes the connection after 60s of inactivity. I want to know:
1. Is this the default config?
2. Should/can I change it?
3. Should I set up a ping mechanism so that my client sends pings every 60s to avoid the disconnect?
WebSocket connections have the default inactivity timeout set to infinity. Your configuration most probably contains "{timeout, 60000}" in the "mod_websockets" section. In order to keep idle connections connected to the server you can send WebSocket ping frames from time to time.
More info about the "mod_websockets" configuration is here: https://mongooseim.readthedocs.io/en/latest/advanced-configuration/Listener-modules/#http-based-services-bosh-websocket-rest-ejabberd_cowboy
You can even configure the server to send WebSocket ping frames by specifying the option {ping_rate, ValueInMilliSeconds}.
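For illustration, a minimal client-side keep-alive sketch. The question's client is xmpp.js, but the idea is the same in any language; it is shown here in Java with java.net.http, and the endpoint URL is an assumption:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.nio.ByteBuffer;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class WebSocketKeepAlive {
    public static void main(String[] args) {
        // Hypothetical MongooseIM websocket endpoint; adjust host/port/path to your listener.
        URI endpoint = URI.create("ws://localhost:5280/ws-xmpp");

        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(endpoint, new WebSocket.Listener() { })
                .join();

        // Send a WebSocket ping frame every 30 seconds so the connection is never idle
        // for the full 60 seconds configured via {timeout, 60000} in mod_websockets.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
                () -> ws.sendPing(ByteBuffer.allocate(0)),
                30, 30, TimeUnit.SECONDS);
    }
}
```

Alternatively, setting {ping_rate, 30000} on the server side achieves the same effect without any client changes.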

RabbitMQ durable queue losing messages over STOMP

I have a webpage connecting to a RabbitMQ broker using JavaScript/WebSockets exposed by a Spring app deployed in Tomcat. Messages are produced at 1 per second by an external application and are rendered on the webpage. The JavaScript subscription is durable.
The issue I'm experiencing is that when the network connection is broken on the JavaScript client for a period of time (say 60 seconds), the first ~24 seconds of messages are missing. I've looked through the logs of the app deployed in Tomcat and the missing messages seem to be those up until the following log statement:
org.springframework.messaging.simp.stomp.StompBrokerRelayMessageHandler - DEBUG - TCP connection to broker closed in session 14
I think this is the point at which the endpoint realises the JavaScript client is disconnected and decides to close the connection to the broker, resulting in future messages queueing up.
My question is how can I ensure that the messages between the time the network is severed and the time the endpoint realises the client is disconnected are not lost? Should the endpoint put the messages back on the queue somehow? Maybe there's a way to make it transactional?
Thanks in advance.
The RabbitMQ team monitors this mailing list and only sometimes answers questions on StackOverflow.
Your Tomcat application should not acknowledge messages from RabbitMQ until it confirms that your Javascript client has received them. This way, any messages that aren't ack-ed by the JS client won't be ack-ed by Tomcat, and RabbitMQ will re-deliver them.
I don't know how your JS app and Tomcat interact, but you may have to implement your own ack process there.
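To make the "don't ack until the browser has it" idea concrete, here is a hedged sketch using the plain RabbitMQ Java client with manual acknowledgements. The queue name, host, and forwardToBrowser helper are all hypothetical; in the actual Spring/STOMP relay setup the equivalent would be client-mode STOMP acknowledgements, but the principle is the same:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class ManualAckRelay {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                                // hypothetical broker address

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare("updates", true, false, false, null);  // durable queue (name assumed)

        DeliverCallback onDeliver = (consumerTag, delivery) -> {
            boolean delivered = forwardToBrowser(delivery.getBody());  // your own confirmation step
            if (delivered) {
                // Ack only after the browser has confirmed receipt ...
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            } else {
                // ... otherwise requeue so RabbitMQ redelivers it later.
                channel.basicNack(delivery.getEnvelope().getDeliveryTag(), false, true);
            }
        };

        // autoAck=false: RabbitMQ keeps the message until we explicitly ack it.
        channel.basicConsume("updates", false, onDeliver, consumerTag -> { });
    }

    // Placeholder for whatever confirmation mechanism exists between Tomcat and the JS client.
    private static boolean forwardToBrowser(byte[] body) {
        return false;
    }
}
```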

RabbitMQ: Server heartbeat must fail 3 times before connection drop?

We have an HA RabbitMQ cluster (v3.2.x) with two nodes that sits behind a load balancer. Our clients are configured to use a 300s heartbeat. Everything works as expected most of the time.
However, if the client's connection drops (say the client's NIC is disconnected), we have noticed (via tcpdump/Wireshark) that the RabbitMQ node will attempt 3 heartbeat messages (in our case nearly 15 minutes) before it closes the connection. Why? Why not close it after one failure?
Is there some means to change this behaviour on the RabbitMQ server? Or do we have to shorten our heartbeat to something much smaller, like 5s or 10s, in order to get the connection to close sooner? Thoughts?
Related issue...
Looking at the tcpdump (captured on the load balancer), I wonder why the LB doesn't close the connection when it doesn't receive a TCP ACK from the dead client in response to the proxied RabbitMQ heartbeat request. In fact, the LB will attempt to send the request several times (never receiving a response, of course). Wouldn't it make sense for the LB to assume the connection has been dropped and close the entire session (including the connection to the RabbitMQ node)?
It appears as though RabbitMQ is configured to tolerate two missed heartbeats before it terminates the connection. However, it waits until the next heartbeat would need to be sent before it drops the connection, which is what gives the appearance of requiring 3 missed heartbeats:
Heartbeat1 (no response) -> wait -> Heartbeat2 (no response) -> wait -> Heartbeat3 -> terminate
There is a slight bug in RabbitMQ (it sends a third heartbeat but then immediately terminates the connection), but it isn't really affecting anything.
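Given that behaviour, the practical lever is the heartbeat interval itself: roughly two missed intervals plus the wait before the next send is what turns a 300s heartbeat into a ~15-minute detection window. A small Java-client sketch (host and value are assumptions):

```java
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ShortHeartbeatExample {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");        // hypothetical broker address

        // With a 300s heartbeat, tolerating two missed intervals before the third send
        // means a dead peer lingers for ~15 minutes; at 10s the same logic detects it
        // in well under a minute, at the cost of more heartbeat traffic.
        factory.setRequestedHeartbeat(10);

        try (Connection connection = factory.newConnection()) {
            System.out.println("Negotiated heartbeat: " + connection.getHeartbeat() + "s");
        }
    }
}
```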

FIN pkt with HTTP connection after some time

I opened 2 TCP connections:
1. A normal connection (while implementing an echo server and client), and
2. An HTTP connection.
The HTTP connection was opened with a (modified) curl utility against Apache running as the server, where curl does not send the GET request for some time after the connection is established.
For the normal connection, after the connection is established the server waits for a request from the client.
But strangely, as observed, for the HTTP connection, if the GET request does not arrive from the client for some time after the connection is established, the server sends a FIN packet to the client and closes its side of the connection.
Is it mandatory for an HTTP client to send the GET request immediately after the initial connection?
Apache has a directive called Timeout.
Its manual page (Apache Core - Timeout Directive) states:
The TimeOut directive defines the length of time Apache will wait for I/O in various circumstances:
- When reading data from the client, the length of time to wait for a TCP packet to arrive if the read buffer is empty.
- When writing data to the client, the length of time to wait for an acknowledgement of a packet if the send buffer is full.
- In mod_cgi, the length of time to wait for output from a CGI script.
- In mod_ext_filter, the length of time to wait for output from a filtering process.
- In mod_proxy, the default timeout value if ProxyTimeout is not configured.
I think you fell into case NUMBER ONE
EDIT
I was looking through the W3C HTTP document and found no reference to timeouts,
but in chapter 8 (Connections) I found:
8.1.4 Practical Considerations
Servers will usually have some time-out value beyond which they will no longer maintain an inactive connection. (...) The use of persistent connections places no requirements on the length (or existence) of this time-out for either the client or the server.
That sounds to me like "every server or client is free to choose its own behaviour regarding inactive connection timeouts".
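To see the described behaviour directly, here is a small Java sketch that opens a connection to the web server, deliberately sends nothing, and then checks whether the server has already sent its FIN. The localhost:80 target and the 70-second idle period (chosen to exceed a 60-second Timeout) are assumptions:

```java
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class IdleHttpConnectionDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical target: an Apache httpd on localhost:80 whose Timeout is 60s.
        try (Socket socket = new Socket("localhost", 80)) {
            socket.setSoTimeout(5_000);          // don't block forever on read()
            System.out.println("Connected; deliberately sending no request line");

            Thread.sleep(70_000);                // idle past the server's Timeout

            InputStream in = socket.getInputStream();
            try {
                int b = in.read();
                // -1 means the server sent FIN and closed its half of the connection.
                System.out.println(b == -1 ? "EOF: server closed the idle connection"
                                           : "Unexpected data: " + b);
            } catch (SocketTimeoutException e) {
                System.out.println("Server is still waiting for the request");
            }
        }
    }
}
```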