RabbitMQ connection with heartbeat

When I create a connection between my web application (as a client) and RabbitMQ (as a server) and set a heartbeat interval, which side sends the heartbeat frames and which side responds to them?
In some documentation I have seen notes saying that the client is the one that sends these heartbeat frames.
But when I dumped my local traffic I saw something interesting about the heartbeat packets: it seems that the RabbitMQ server is sending heartbeat frames too.
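For reference, a minimal sketch with the RabbitMQ Java client showing where the heartbeat interval is requested (the host name and interval are placeholders). The value is negotiated between client and broker, and in AMQP 0-9-1 both peers then send heartbeat frames when their half of the connection is idle, which is consistent with the server-side heartbeats visible in a traffic dump:

```java
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class HeartbeatDemo {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");          // placeholder broker host
        // Request a 60s heartbeat; client and broker negotiate the final value.
        // Once negotiated, both peers send heartbeat frames whenever their
        // outgoing side of the connection has been idle for the interval.
        factory.setRequestedHeartbeat(60);

        try (Connection connection = factory.newConnection()) {
            System.out.println("Negotiated heartbeat: "
                    + connection.getHeartbeat() + "s");
        }
    }
}
```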

Related

CometD Failover Ability - VM Switch During Restart

I have a chat implementation working with CometD.
On the front end I have a client with clientId=123 that is talking to VirtualMachine-1.
The long-polling connection between VirtualMachine-1 and the client is tied to that clientId. When the connection is established during the handshake, VirtualMachine-1 registers the clientId 123 as its own and accepts its data.
If VM-1 is restarted or fails, the long-polling connection between the client and VM-1 is dropped (since VirtualMachine-1 is dead, the heartbeats fail and the connection is considered disconnected).
In that case, the CometD load balancer will re-route the client's communication to a new VirtualMachine-2. However, since VirtualMachine-2 does not know that clientId, it is not able to understand the "123" coming from the client.
My question is: what is the CometD behavior in this case? How does it re-route the traffic from VM-1 to the new VM-2 and successfully go through the handshake process?
When a CometD client is redirected to the second server by the load balancer, the second server does not know about this client.
The client will send a /meta/connect message with clientId=123, and the second server will reply with a 402::unknown_session and advice: {reconnect: "handshake"}.
When receiving the advice to re-handshake, the client will send a /meta/handshake message and will get a new clientId=456 from the second server.
Upon handshake, a well-written CometD application will subscribe (even for dynamic subscriptions) to all the channels it needs, and will eventually be restored to function as before, almost transparently.
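As a rough illustration of that pattern, here is a sketch using the CometD Java client (3.x-style transport API; the URL and channel name are placeholders, not taken from the question): a /meta/handshake listener re-subscribes on every successful handshake, including the re-handshake triggered by 402::unknown_session after a failover, so the new server and new clientId end up with the same subscriptions.

```java
import java.util.HashMap;
import java.util.Map;
import org.cometd.bayeux.Channel;
import org.cometd.bayeux.client.ClientSessionChannel;
import org.cometd.client.BayeuxClient;
import org.cometd.client.transport.LongPollingTransport;
import org.eclipse.jetty.client.HttpClient;

public class ChatClient {
    public static void main(String[] args) throws Exception {
        HttpClient httpClient = new HttpClient();
        httpClient.start();

        Map<String, Object> options = new HashMap<>();
        BayeuxClient client = new BayeuxClient(
                "http://example.com/cometd",                  // placeholder URL
                new LongPollingTransport(options, httpClient));

        // Re-subscribe on every successful handshake, including the
        // re-handshake performed after the 402::unknown_session reply.
        client.getChannel(Channel.META_HANDSHAKE).addListener(
                (ClientSessionChannel.MessageListener) (channel, message) -> {
                    if (message.isSuccessful()) {
                        client.getChannel("/chat/room1").subscribe(   // placeholder channel
                                (ch, msg) -> System.out.println("Got: " + msg.getData()));
                    }
                });

        client.handshake();
    }
}
```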
Messages published to the client during the switch from one server to the other are completely lost: CometD does not implement any persistence feature.
However, persisting messages until the client has acknowledged them is possible: CometD offers a number of listeners that are invoked by the CometD implementation, and through these listeners an application can persist messages (or other information) into its own choice of persistent (and possibly distributed) store: Redis, an RDBMS, etc.
CometD handles the reconnection transparently for you; it just takes a few messages between the client and the new server.
You may also want to read about CometD's in-memory clustering features.

RabbitMQ durable queue losing messages over STOMP

I have a web page connecting to a RabbitMQ broker using JavaScript/WebSockets exposed by a Spring app deployed in Tomcat. Messages are produced at one per second by an external application and are rendered on the web page. The JavaScript subscription is durable.
The issue I'm experiencing is that when the network connection on the JavaScript client is broken for a period of time (say 60 seconds), the first ~24 seconds of messages are missing. I've looked through the logs of the app deployed in Tomcat, and the missing messages seem to be those up until the following log statement:
org.springframework.messaging.simp.stomp.StompBrokerRelayMessageHandler - DEBUG - TCP connection to broker closed in session 14
I think this is the point at which the endpoint realises the JavaScript client is disconnected and decides to close the connection to the broker, resulting in future messages queueing up.
My question is: how can I ensure that the messages sent between the time the network is severed and the time the endpoint realises the client is disconnected are not lost? Should the endpoint put the messages back on the queue somehow? Maybe there's a way to make it transactional?
Thanks in advance.
Your Tomcat application should not acknowledge messages from RabbitMQ until it confirms that your JavaScript client has received them. This way, any messages that aren't acked by the JS client won't be acked by Tomcat, and RabbitMQ will re-deliver them.
I don't know how your JS app and Tomcat interact, but you may have to implement your own ack process there.
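Spring's STOMP broker relay doesn't expose per-message acks in an obvious way in this setup, so here is a sketch of the principle using the plain RabbitMQ Java client instead: consume with manual acknowledgements and ack only after the downstream client has confirmed receipt. The queue name and the forwardToBrowser(...) call are hypothetical placeholders for whatever delivery-and-confirmation mechanism the application actually uses.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class RelayConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                     // placeholder host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        boolean autoAck = false;                          // manual acknowledgements
        DeliverCallback deliverCallback = (consumerTag, delivery) -> {
            long tag = delivery.getEnvelope().getDeliveryTag();
            try {
                // Hypothetical call that blocks until the JS client confirms receipt.
                forwardToBrowser(delivery.getBody());
                channel.basicAck(tag, false);             // safe to remove from the queue
            } catch (Exception e) {
                // Not confirmed: requeue so RabbitMQ re-delivers it later.
                channel.basicNack(tag, false, true);
            }
        };
        channel.basicConsume("chat-updates", autoAck, deliverCallback, consumerTag -> { });
    }

    private static void forwardToBrowser(byte[] body) {
        // placeholder for the real delivery-and-confirmation logic
    }
}
```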

ActiveMQ inactivity timeout

I am using ActiveMQ version 5.10.0 with default configuration.
The documentation on ActiveMQ transport protocols says that by default wireFormat.maxInactivityDuration is 30000 and transport.useKeepAlive is enabled.
Does that mean that, with the default configuration, an inactivity timeout will never occur, since keep-alive messages are enabled and sent by default?
I have tried leaving my queues idle for a day and I did not see any inactivity timeout logs.
But the ActiveMQ page also says:
" Using the default values; if no data has been written or read from the connection for 30 seconds, the InactivityMonitor kicks in. The InactivityMonitor throws an InactivityIOException and shuts down the transport associated with the connection."
http://activemq.apache.org/activemq-inactivitymonitor.html
The inactivity timeout occurs when the connection is broken or the broker is experiencing issues such that it cannot respond to the ping request the client sends it. The timeout does not relate to message inactivity or the like, but to the ping/pong-type heartbeats between client and broker. As long as the broker is healthy and sending the requested responses, the client will not terminate the connection, even if no messages happen to be flowing across it.
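As a small sketch of how a client can make the relevant setting explicit (broker URL is a placeholder; the option name is the one from the ActiveMQ documentation cited above):

```java
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class InactivityDemo {
    public static void main(String[] args) throws Exception {
        // wireFormat.maxInactivityDuration is the window (ms) the InactivityMonitor
        // allows without any traffic before it gives up on the connection; the
        // keep-alive pings normally keep an idle-but-healthy connection inside it.
        String url = "tcp://localhost:61616"                       // placeholder broker
                + "?wireFormat.maxInactivityDuration=30000";       // default value, shown explicitly

        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(url);
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions/consumers; the connection stays up while the broker
        // keeps answering the keep-alive pings, even with no messages flowing.
    }
}
```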

RabbitMQ: Server heartbeat must fail 3 times before connection drop?

We have an HA RabbitMQ cluster (v3.2.x) with two nodes that sits behind a load balancer. Our clients are configured to use a 300s heartbeat. Everything works as expected most of the time.
However, if the client's connection drops (say the client's NIC is disconnected), we have noticed (via tcpdump/Wireshark) that the RabbitMQ node will attempt 3 heartbeat messages (in our case nearly 15 minutes) before it closes the connection. Why? Why not close it after one failure?
Is there some means of changing this behavior on the RabbitMQ server? Or do we have to shorten our heartbeat to something much smaller, like 5s or 10s, in order to get the connection to close sooner?
Related issue...
Looking at the TCP dump (captured on the load balancer), I wonder why the LB doesn't close the connection when it doesn't receive a TCP ACK from the dead client in response to the proxied RabbitMQ server heartbeat request. In fact, the LB will attempt to send the request several times (never receiving a response, of course). Wouldn't it make sense for the LB to assume the connection has been dropped and close the entire session (including the connection to the RabbitMQ node)?
It appears as though RabbitMQ is configured to tolerate two missed heartbeats before it terminates the connection. However, it waits until the next heartbeat would need to be sent before it drops the connection, which is what gives it the appearance of requiring 3 missed heartbeats:
Heartbeat 1 (no response) → wait → Heartbeat 2 (no response) → wait → Heartbeat 3 → terminate
There is a slight bug here (it sends a 3rd heartbeat but immediately terminates the connection), but it doesn't really affect anything.
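To make the arithmetic explicit (a sketch, assuming the behaviour described above of two tolerated misses plus the final heartbeat): detection of a silently dead peer takes roughly three intervals, so if shortening the client-requested heartbeat is acceptable, the window shrinks proportionally.

```java
import com.rabbitmq.client.ConnectionFactory;

public class HeartbeatTuning {
    public static void main(String[] args) {
        // Assuming detection takes ~3 x interval, as described above:
        //   300s interval -> ~900s (~15 minutes), matching the capture above
        //    10s interval -> ~30s
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");          // placeholder host
        factory.setRequestedHeartbeat(10);     // shorter interval => faster dead-peer detection
    }
}
```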

recv() fails on UDP

I’m writing a simple client-server app which for the time being will be for my own personal use. I’m using Winsock for the net communication. I have not done any networking for the last 10 years, so I am quite rusty. I’d like to use as little external code as possible, so I have written a home-made server discovery mechanism, as follows.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket. The server recv()s the broadcast and then sendto()s the ‘name’ of its listening socket to the client’s discovery socket. The client then uses this information to connect to the server (on a different socket). This mechanism should allow the server to bind its listening socket to the first port it can within the dynamic port range (49152-65535), and allow the clients to discover where the server is and which port it is listening on.
The server part works fine: the server receives the broadcast messages and successfully sends its response.
On the client side the firewall log shows that the server’s response arrives to the machine and that it is addressed to the correct port (to the client’s discovery socket).
But the message never makes it to the client app. I’ve tried doing a recv() in blocking and non-blocking mode, and there is never any data available. ioctlsocket() always shows no data is available, even though I know the packet got to the machine.
The server succeeds in recv()ing the broadcast data, but the client fails to recv() the server’s response, which is addressed to its discovery socket.
I know the question is vague: what gotchas should I watch for in this scenario? Why would recv() fail to get a packet which has actually arrived at the machine? The sockets are UDP, so the fact that they are not connected is irrelevant. Or is it?
Many thanks in advance.
The client broadcasts a message containing the ‘name’ of a client UDP socket bound to an arbitrary port, which I will call the client’s discovery socket.
The message doesn't need to contain anything. Just broadcast an empty message from the 'discovery socket'. recvfrom() will tell the server where it came from, and it can just reply directly.
The server recv() the broadcast and then sendto() the client discovery socket the ‘name’ of its listening socket.
Fair enough, although actually the server could just broadcast its own TCP listening port every 5 seconds or whatever.
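The question uses Winsock, but the shape of the simplified exchange can be sketched in Java for brevity (ports are arbitrary placeholders): the client broadcasts an empty datagram from its discovery socket, the server learns the sender's address and port from the received packet, and replies directly with its listening port.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class Discovery {
    // Server side: wait for any broadcast, reply straight to its source.
    static void serverLoop(int discoveryPort, int tcpListenPort) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(discoveryPort)) {
            byte[] buf = new byte[64];
            while (true) {
                DatagramPacket request = new DatagramPacket(buf, buf.length);
                socket.receive(request);        // like recvfrom(): exposes sender address + port
                byte[] reply = String.valueOf(tcpListenPort).getBytes(StandardCharsets.UTF_8);
                socket.send(new DatagramPacket(reply, reply.length,
                        request.getAddress(), request.getPort()));
            }
        }
    }

    // Client side: broadcast an empty datagram, then read the reply on the
    // very same socket, so the reply port can never be mixed up.
    static String discover(int discoveryPort) throws Exception {
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.setBroadcast(true);
            socket.setSoTimeout(3000);          // don't block forever if no server answers
            byte[] empty = new byte[0];
            socket.send(new DatagramPacket(empty, 0,
                    InetAddress.getByName("255.255.255.255"), discoveryPort));
            byte[] buf = new byte[64];
            DatagramPacket reply = new DatagramPacket(buf, buf.length);
            socket.receive(reply);
            return reply.getAddress().getHostAddress() + ":"
                    + new String(reply.getData(), 0, reply.getLength(), StandardCharsets.UTF_8);
        }
    }
}
```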
On the client side the firewall log shows that the server’s response arrives to the machine and that it is addressed to the correct port (to the client’s discovery socket). But the message never makes it to the client app
If it got to the host it must get to the application. You must have got the ports mixed up somehow. Simplify it as above and retry.
Well, it was one of those stupid situations: Windows Firewall was active, in addition to the other firewall, and was silently dropping the packets. Deactivating it solved the problem.
But I still don’t understand how it worked, as it was allowing the server to receive packets sent through broadcasting. And when I was at my wits’ end and set the server to answer back through a broadcast, THOSE packets got dropped.
Two days of frustration. I hope someone profits from my experience.