Why does MongooseIM close the WebSocket connection after 60 seconds? - react-native

I am using MongooseIM as a chat server and connecting to it over WebSocket using xmpp.js inside a React Native application. The server forcefully closes the connection after 60s of inactivity. I want to know:
Is this the default config?
Should/Can I change it?
Should I set up a ping mechanism so that my client sends a ping every 60s to avoid the disconnect?

WebSocket connections have the inactivity timeout set to infinity by default. Your configuration most probably contains "{timeout, 60000}" in the "mod_websockets" section. To keep idle connections open, you can send WebSocket ping frames from the client from time to time.
More info about the "mod_websockets" configuration is here: https://mongooseim.readthedocs.io/en/latest/advanced-configuration/Listener-modules/#http-based-services-bosh-websocket-rest-ejabberd_cowboy
You can even configure the server to send WebSocket ping frames itself by specifying the option {ping_rate, ValueInMilliSeconds}.
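
If you cannot change the server side, the client can keep the connection alive itself. Below is a minimal sketch using xmpp.js and XEP-0199 pings, assuming a 30 s interval (anything comfortably below the configured 60 s timeout should do); the service URL, domain and credentials are placeholders. Even if the server answers the IQ with an error rather than a pong, the outgoing frames alone should be enough to stop the socket from looking idle.

    import { client, xml } from "@xmpp/client";

    // Placeholder connection details - replace with your own.
    const xmpp = client({
      service: "wss://chat.example.com:5285/ws-xmpp",
      domain: "example.com",
      username: "alice",
      password: "secret",
    });

    let keepalive: ReturnType<typeof setInterval> | undefined;

    xmpp.on("online", () => {
      // Send an XEP-0199 ping every 30 s so the WebSocket is never
      // idle long enough to hit the 60 s mod_websockets timeout.
      keepalive = setInterval(() => {
        xmpp
          .send(xml("iq", { type: "get", id: `ping-${Date.now()}` },
            xml("ping", { xmlns: "urn:xmpp:ping" })))
          .catch(console.error);
      }, 30_000);
    });

    xmpp.on("offline", () => {
      if (keepalive !== undefined) clearInterval(keepalive);
    });

    xmpp.start().catch(console.error);

The server-side {ping_rate, ...} option mentioned above achieves the same effect without any client changes, which is usually the simpler fix if you control the MongooseIM configuration.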

Related

Moleculer JS "Redis-pub client is disconnected" every 10 minutes

My application (Node.js) is using Moleculer for microservices and Redis as the transporter. However, I find that the application logs Redis-pub client is disconnected every 10 minutes, then reconnects with the log Redis-pub client is connected after a few seconds. This is a problem because if a client sends a Moleculer action during this time, it will fail.
Any idea what is causing this? Let me know if more information is needed.
Azure Cache for Redis currently has a 10-minute idle timeout for connections, so the idle timeout setting in your client application should be less than 10 minutes. Most common client libraries have a configuration setting that allows client libraries to send Redis PING commands to a Redis server automatically and periodically. However, when using client libraries without this type of setting, customer applications themselves are responsible for keeping the connection alive.
More info: https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/cache-best-practices-connection#idle-timeout
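
As a sketch of the "keep the connection alive yourself" option, the snippet below uses ioredis (the client library Moleculer's Redis transporter is built on) to send a PING on a fixed schedule. The host, credentials and the 60 s interval are assumptions; the only requirement is that the interval stays well under Azure's 10-minute idle timeout. Note that Moleculer creates its transporter connections internally, so this shows the pattern on a standalone connection rather than a drop-in fix for the broker.

    import Redis from "ioredis";

    // Placeholder connection details - replace with your Azure Cache for Redis host.
    const redis = new Redis({
      host: "my-cache.redis.cache.windows.net",
      port: 6380,
      password: "access-key",
      tls: {},
    });

    // PING every 60 s so the connection never sits idle for the
    // full 10-minute Azure idle timeout.
    const keepalive = setInterval(() => {
      redis.ping().catch((err) => console.error("keepalive ping failed", err));
    }, 60_000);

    // Stop pinging once the connection is gone for good.
    redis.on("end", () => clearInterval(keepalive));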

ActiveMQ inactivity timeout

I am using ActiveMQ version 5.10.0 with default configuration.
The documentation on ActiveMQ transport protocols says that by default wireFormat.maxInactivityDuration is 30000 and transport.useKeepAlive is enabled.
Does that mean that with the default configuration an inactivity timeout will never occur, since keepAlive messages are enabled and sent by default?
I have tried leaving my queues idle for a day and I did not see any inactivity timeout logs.
But the ActiveMQ page also says:
" Using the default values; if no data has been written or read from the connection for 30 seconds, the InactivityMonitor kicks in. The InactivityMonitor throws an InactivityIOException and shuts down the transport associated with the connection."
http://activemq.apache.org/activemq-inactivitymonitor.html
The inactivity timeout occurs when the connection is broken or the broker is experiencing issues such that it cannot respond to the ping request that the client sends it. The timeout does not relate to message inactivity or the like, but to ping/pong-style heartbeats between client and broker. As long as the broker is healthy and sending the requested responses, the client will not terminate the connection even if no messages happen to be flowing across it.
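
To make the distinction concrete, here is a rough sketch of the kind of check an inactivity monitor performs (an illustration of the pattern, not ActiveMQ's actual implementation; the names and callback wiring are assumptions): keepalives go out on a schedule, and the transport is only shut down if nothing at all, not even a keepalive response, has been read within maxInactivityDuration.

    // Sketch of an inactivity monitor; values mirror the defaults in the question.
    const maxInactivityDuration = 30_000; // like wireFormat.maxInactivityDuration
    const keepAliveInterval = maxInactivityDuration / 3;

    let lastReadTime = Date.now();

    // The transport would call this whenever ANY data arrives from the
    // broker - messages or keepalive responses alike.
    export function onDataRead(): void {
      lastReadTime = Date.now();
    }

    function closeTransport(reason: string): void {
      // Stand-in for throwing InactivityIOException and shutting down the transport.
      console.error("closing transport:", reason);
    }

    // Keep writing keepalives so the broker's own monitor sees traffic from us.
    setInterval(() => console.log("-> keepalive"), keepAliveInterval);

    // Message inactivity is irrelevant: only the absence of *any* reads
    // (including ping responses) for the full window triggers the shutdown.
    setInterval(() => {
      if (Date.now() - lastReadTime > maxInactivityDuration) {
        closeTransport("no data read within maxInactivityDuration");
      }
    }, 1_000);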

RabbitMQ: Server heartbeat must fail 3 times before connection drop?

We have an HA RabbitMQ cluster (v3.2.x) with two nodes that sits behind a load balancer. Our clients are configured to use a 300s heartbeat. Everything works as expected most of the time.
However, if the client's connection drops (say the client's NIC is disconnected), we have noticed (via tcpdump/Wireshark) that the RabbitMQ node will attempt 3 heartbeat messages (in our case nearly 15 mins) before it closes the connection. Why? Why not close it after one failure?
Is there some means to change this behavior on the RabbitMQ server? Or do we have to shorten our heartbeat to something much smaller, like 5s or 10s, in order to get the connection to close sooner? Thoughts?
Related issue...
Looking at the TCPDump (captured on load-balancer), I wonder why the LB doesn't close the connection when it doesn't receive the TCP-ACK from the dead client in response to the proxied RabbitMQ server heartbeat request? In fact, the LB will attempt to send the request several times (never receiving a response, of course). Wouldn't it make sense for the LB to make the assumption the connection has been dropped and close the entire session (including the connection to RabbitMQ node)?
It appears as though RabbitMQ is configured to tolerate two missed heartbeats before it terminates the connection. However, it waits until the next heartbeat would need to be sent before it drops the connection, which is what gives it the appearance of requiring 3 missed heartbeats.
Heartbeat 1 (no response) -> wait -> Heartbeat 2 (no response) -> wait -> Heartbeat 3 -> terminate
There is a slight bug in RabbitMQ (it sends a 3rd heartbeat but immediately terminates the connection), but it isn't really affecting anything.
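
A sketch of the counting logic described above (an illustration, not RabbitMQ source; the numbers come from the 300 s heartbeat in the question): the server only gives up after two consecutive heartbeat intervals with no traffic from the peer, and the check runs just as the next heartbeat is about to go out, which is why the capture shows a third heartbeat right before the connection is dropped, roughly 15 minutes after the last traffic.

    // Illustration of "tolerate two missed heartbeats before dropping".
    const heartbeatInterval = 300_000; // 300 s, as configured in the question
    const allowedMissed = 2;

    let missed = 0;

    // The server would call this whenever any frame arrives from the client.
    export function onFrameFromPeer(): void {
      missed = 0;
    }

    const timer = setInterval(() => {
      if (missed >= allowedMissed) {
        // Third interval with no traffic: ~15 minutes after the last frame
        // with a 300 s heartbeat. RabbitMQ still emits one more heartbeat
        // here before it closes, per the note above.
        console.error("no traffic from peer for 2 heartbeat intervals, closing connection");
        clearInterval(timer);
        return;
      }
      console.log("-> heartbeat"); // send our own heartbeat frame
      missed += 1;                 // counts as missed until the peer sends something
    }, heartbeatInterval);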

FIN packet on HTTP connection after some time

I opened 2 TCP connections:
1. A normal connection (while implementing an echo server/client), and
2. An HTTP connection
I opened the HTTP connection with a (modified) curl utility while running Apache as the server, where curl does not send the GET request for some time after connection establishment.
For the normal connection, after connection establishment the server waits for a request from the client.
But strangely, for the HTTP connection, if the GET request does not arrive from the client for some time after connection establishment, the server sends a FIN packet to the client and closes its part of the connection.
Is it mandatory for an HTTP client to send the GET request immediately after the initial connection?
Apache has a directive called Timeout.
Its manual page (Apache Core - Timeout Directive) states:
The TimeOut directive defines the length of time Apache will wait for I/O in various circumstances:
When reading data from the client, the length of time to wait for a TCP packet to arrive if the read buffer is empty.
When writing data to the client, the length of time to wait for an acknowledgement of a packet if the send buffer is full.
In mod_cgi, the length of time to wait for output from a CGI script.
In mod_ext_filter, the length of time to wait for output from a filtering process.
In mod_proxy, the default timeout value if ProxyTimeout is not configured.
I think you fell into case NUMBER ONE
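
You can reproduce this without the modified curl. The sketch below (Node's net module) opens a TCP connection to an Apache server and deliberately sends nothing; the "end" event fires when the server's FIN arrives, which should be roughly the configured Timeout after connecting (the exact delay depends on how the server is configured). Host and port are placeholders.

    import * as net from "net";

    // Placeholder target - point this at the Apache server in question.
    const socket = net.connect({ host: "127.0.0.1", port: 80 });
    const start = Date.now();

    socket.on("connect", () => {
      // Send nothing, just like the modified curl.
      console.log("connected, holding the connection open without a request");
    });

    // "end" fires when the server sends its FIN.
    socket.on("end", () => {
      console.log(`server closed its side after ${(Date.now() - start) / 1000}s`);
    });

    socket.on("error", (err) => console.error(err));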
EDIT
I was looking through the W3C HTTP document and found no reference to timeouts.
But in chapter 8 (Connections) I found:
8.1.4 Practical Considerations
Servers will usually have some time-out value beyond which they will no longer maintain an inactive connection. (...) The use of persistent connections places no requirements on the length (or existence) of this time-out for either the client or the server.
That sounds to me like "every server or client is free to choose its own behaviour for inactive connection timeouts".

What is the retry interval for Apache once it has hit its max connections?

If Apache reaches its max number of connections, or its max clients, how long will it wait before trying again to serve any of the connections that couldn't connect? Or will it just drop the connection the first time and never try again? If it does wait, is there a setting for that?
Apache will simply queue connections into its ListenBackLog. Then once a current connection is done, Apache will attempt to serve the oldest queued connection.
http://httpd.apache.org/docs/2.2/mod/mpm_common.html#listenbacklog
I think the TimeOut directive does affect what happens with a connection in the ListenBackLog queue.
http://httpd.apache.org/docs/2.2/mod/core.html#timeout
If it has been longer than the timeout limit since a connection was queued, that connection is dropped.
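
For comparison, the "pending connections wait in a queue until the server can accept them" idea is the standard listen backlog, and other servers expose the same knob; the sketch below shows it in Node's net module purely as an analogy to Apache's ListenBackLog (the port and backlog size are arbitrary).

    import * as net from "net";

    const server = net.createServer((socket) => {
      socket.end("handled\n");
    });

    // The third argument to listen() is the backlog: connections that arrive
    // faster than the server accepts them wait in this OS-level queue, much
    // as Apache queues them up to ListenBackLog.
    server.listen(8080, "0.0.0.0", 511, () => {
      console.log("listening with a backlog of 511");
    });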