I am using the ActiveMQ Artemis Broker and publishing to it through a client application.
Behavior observed:
When my client uses IPv4, a TLS handshake is established and data is published as expected; no problems.
When my client uses IPv6, I see frequent re-connections between the client and the server (broker), and no data is published.
Details:
When using IPv6 the client completes a TCP 3-way handshake and attempts to send data. It also receives a Server Hello and sends application data.
But then the connection terminates and the client reconnects. This loop keeps repeating.
The client library, network infrastructure, and broker are identical for IPv4 and IPv6.
The client logs say:
Idle network reply timeout.
The broker logs show an incoming connection request and a CONNACK for it from the broker, e.g.:
MQTT(): IN << CONNECT protocol=(MQTT, 4), hasPassword=false, isCleanSession=false, keepAliveTimeSeconds=60, clientIdentifier=b_001, hasUserName=false, isWillFlag=false
MQTT(): OUT >> CONNACK connectReturnCode=0, sessionPresent=true
What Wireshark (tcpdump) shows:
Before every re-connection (i.e. before each new 3-way handshake) I see this:
Id  Src                 Dest
1   Broker (App Data)   Client
2   Broker (App Data)   Client
3   Client (ACK)        Broker
4   Client (ACK)        Broker
5   Broker (FIN, ACK)   Client
6   Client (FIN, ACK)   Broker
7   Broker (ACK)        Client
8   Client (SYN)        Broker
9   Broker (SYN, ACK)   Client
10  Client (ACK)        Broker
Then the TLS handshake (Client Hello, Server Hello, Change Cipher Spec) follows, and the above repeats.
Based on packets 5, 6, and 7, I have concluded that the connection is being terminated by the broker (server). The client acknowledges the termination and then attempts to reconnect, in an infinite loop of reconnecting and publishing.
This is my first time doing network-level analysis, and my first time using Wireshark, so I'm not sure if my analysis is right.
I have also hit a wall: I don't know why the re-connection occurs only when the device uses IPv6. I also don't see any RST that would indicate termination of the connection.
The broker is also sending a CONNACK (per the broker logs), but still no data is sent, just reconnection attempts; I'm not sure why.
Also, I see a few:
Out-of-Order TCP (when src is broker)
Spurious Retransmission
DUP ACK (src is client)
Not sure if this is important.
Any pointers on what is going on?
The issue was caused by a load balancer (LB) setting: its default connection timeout of 30 seconds was lower than the connection timeout set by the client (the 60-second keep-alive visible in the CONNECT log above), so the LB closed the connection first.
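For anyone hitting the same wall: the fix amounts to making the client's keep-alive interval shorter than the LB's idle timeout (or raising the LB timeout). A minimal sketch, assuming an Eclipse Paho MQTT client; the broker URL is a placeholder and the client id is the one from the broker log:

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

    public class KeepAliveBelowLbTimeout {
        public static void main(String[] args) throws Exception {
            MqttClient client = new MqttClient("ssl://broker.example.com:8883", "b_001");
            MqttConnectOptions options = new MqttConnectOptions();
            // The keep-alive ping must fire before the LB's 30 s idle timeout,
            // otherwise the LB closes the idle connection with a FIN.
            options.setKeepAliveInterval(20); // seconds; was 60 in the CONNECT log
            options.setCleanSession(false);   // matches isCleanSession=false in the log
            client.connect(options);
        }
    }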
Related
We have one client and one server, with our application running as a bridge between them.
Our module is responsible for forwarding the traffic between client and server.
During the SSL handshake I am trying to intercept the Client Hello and respond to the client with a Server Hello.
We are able to intercept it and send the Server Hello to the client, but the SSL handshake fails.
I captured the packets during the SSL handshake.
I can see the Server Hello reach the client interface, but the client machine keeps retransmitting the Client Hello again and again. Could anyone help with what went wrong, i.e. why the client is not processing the Server Hello?
Edit:
I think, based on the answer below, that the answer is "client and server basically only communicate on one port, 3478 (or equivalent)".
RFC 5766: Issue when both devices support TURN
==========================
I have been reading several sources on TURN, including the RFC.
I get the whole premise:
Client creates an allocation on the TURN server
Client sends data to the peer through TURN, which relays it via the relayed transport address
Same the other way around: peer --> server --> client
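To make step 1 concrete, here is a minimal sketch (my own illustration, not taken from any of the sources) of building a TURN Allocate request by hand per RFC 5766; the authentication attributes (USERNAME, REALM, NONCE) a real server would demand are omitted:

    import java.nio.ByteBuffer;
    import java.security.SecureRandom;

    public class AllocateRequest {
        // Builds a 28-byte Allocate request: 20-byte STUN header plus the
        // mandatory REQUESTED-TRANSPORT attribute asking for a UDP relay.
        static byte[] build() {
            ByteBuffer b = ByteBuffer.allocate(28);
            b.putShort((short) 0x0003);   // message type: Allocate request
            b.putShort((short) 8);        // message length: attributes only
            b.putInt(0x2112A442);         // STUN magic cookie (RFC 5389)
            byte[] txId = new byte[12];   // 96-bit transaction id
            new SecureRandom().nextBytes(txId);
            b.put(txId);
            b.putShort((short) 0x0019);   // attribute: REQUESTED-TRANSPORT
            b.putShort((short) 4);        // attribute value length
            b.put((byte) 17);             // protocol 17 = UDP
            b.put(new byte[3]);           // reserved (RFFU) padding
            return b.array();
        }
    }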
Most resources focus on setting up the server and what ports need to be configured.
The point I am unclear on is the client side:
After the allocation is done and the client can start sending data, does it send that data to the relayed transport address that the server allocated? Or does it send it to the standard TURN port, e.g. 3478, with the server taking care of looking up the allocation for this client and sending it through the relayed address to the peer?
Example:
Client address 192.6.12.123:45677 (let's assume it's the NAT)
TURN server listens on 34.45.34.123:3478
TURN server has done an allocation for client on 34.45.34.123:50678
So when the client wants to send application data to a peer, does it send on port 3478 or port 50678?
My assumption (based also on some Wireshark captures I tried) is that the client always sends everything to port 3478 and the server takes care of sending it via the relayed address.
My assumption (based also on some Wireshark captures I tried) is that the client always sends everything to port 3478
The client will pick a random local port (e.g. 45677), but traffic sent from this port goes to the server's port 3478 (or 5349 if using TLS). The server will forward it through its allocated port (50678) to whatever remote port the other client established during ICE negotiation.
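In code, the addressing looks like this minimal sketch (my own illustration; the addresses come from the example above, and the TURN message is reduced to a bare Send indication header):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.security.SecureRandom;

    public class TurnClientAddressing {
        public static void main(String[] args) throws Exception {
            // The OS picks the ephemeral local port (the 45677 in the example).
            DatagramSocket socket = new DatagramSocket();

            // Everything -- Allocate, Send indications, ChannelData -- is sent to
            // the server's listening port 3478, never to the relayed port 50678.
            InetSocketAddress turnServer = new InetSocketAddress("34.45.34.123", 3478);

            // Bare Send indication header (RFC 5766); the XOR-PEER-ADDRESS and
            // DATA attributes that would carry the payload are omitted here.
            ByteBuffer msg = ByteBuffer.allocate(20);
            msg.putShort((short) 0x0016); // message type: Send indication
            msg.putShort((short) 0);      // attribute length (none in this sketch)
            msg.putInt(0x2112A442);       // STUN magic cookie
            byte[] txId = new byte[12];
            new SecureRandom().nextBytes(txId);
            msg.put(txId);

            socket.send(new DatagramPacket(msg.array(), 20, turnServer));
        }
    }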
I have a TLS program and I did some experiments on it.
I start a confidential TLS server session and try to connect to it with a plain Telnet client.
As expected, the handshake failed and the server was available to the next client, but on the Telnet client side I didn't receive any indication that the handshake failed or that the server was accepting other clients.
I can see in Wireshark that even after the handshake failed the Telnet client can send strings; I see [PSH, ACK] from the client answered by [ACK] from the server.
Adding a Wireshark snapshot: Telnet fails the handshake, Telnet keeps sending messages, followed by a successful TLS handshake and more Telnet messages:
Why is the server ACKing the Telnet client if the handshake failed and it is accepting other clients?
As expected, the handshake failed ...
I cannot see a failed TLS handshake in the packet capture, and I'm not sure how you came to this conclusion.
All I can see is that the client on source port 60198 (presumably your telnet) is sending 3 bytes several times and the server just ACKs these, without sending anything back and without closing the connection. Likely the server is still collecting data in the hope that at some point it will add up to a complete TLS record. Only then will it be processed by the TLS stack, which might then realize that something is wrong with the client.
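A minimal sketch of why the server behaves like that, assuming a JSSE-style server (my illustration, not the asker's program; it needs a keystore configured via javax.net.ssl.keyStore to actually serve):

    import javax.net.ssl.SSLServerSocket;
    import javax.net.ssl.SSLServerSocketFactory;
    import javax.net.ssl.SSLSocket;
    import java.io.IOException;

    public class TlsEchoServer {
        public static void main(String[] args) throws Exception {
            SSLServerSocketFactory f =
                    (SSLServerSocketFactory) SSLServerSocketFactory.getDefault();
            try (SSLServerSocket server = (SSLServerSocket) f.createServerSocket(8443)) {
                while (true) {
                    SSLSocket client = (SSLSocket) server.accept();
                    new Thread(() -> handle(client)).start(); // next client is not blocked
                }
            }
        }

        static void handle(SSLSocket s) {
            try (s) {
                // Blocks until a complete TLS record has been read; three stray
                // Telnet bytes just accumulate in the buffer and get ACKed by TCP.
                s.startHandshake();
                s.getOutputStream().write("hello\n".getBytes());
            } catch (IOException e) {
                // Reached only once enough bytes arrive to parse (and reject) a record.
                System.err.println("handshake failed: " + e.getMessage());
            }
        }
    }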
... the server is available to the next client
It is pretty normal for a server to handle multiple clients in parallel. On the contrary, it would be unusual if the server could not do this.
SSL/TLS runs over the TCP layer. Suppose TCP connection is terminated before SSL/TLS session was closed. How would SSL/TLS get to know about this ?
A TLS session is mostly independent from the underlying TCP connections.
For example, you can have multiple TCP connections all using the same TLS session, and these can coexist even in parallel. This is actually used in practice, for example with web browsers. It is even required in some implementations of FTPS, where the control and data connections (different TCP connections) are expected to (re)use the same TLS session. Note that the session does not simply get continued inside another TCP connection - there is still a TLS handshake required to start the "continuation" of a session, but only an abbreviated handshake.
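A small sketch of that reuse in JSSE terms (my illustration; example.com stands in for any resumption-capable server, and with TLS 1.3 the comparison may behave differently because resumption uses session tickets):

    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;
    import java.util.Arrays;

    public class SessionReuseDemo {
        public static void main(String[] args) throws Exception {
            SSLSocketFactory f = SSLContext.getDefault().getSocketFactory();
            // Two separate TCP connections; if the server supports resumption, the
            // second handshake is abbreviated and both report the same session id.
            byte[] first = handshake(f);
            byte[] second = handshake(f);
            System.out.println("session resumed: " + Arrays.equals(first, second));
        }

        static byte[] handshake(SSLSocketFactory f) throws Exception {
            try (SSLSocket s = (SSLSocket) f.createSocket("example.com", 443)) {
                s.startHandshake();
                return s.getSession().getId();
            }
        }
    }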
Similarly, you can have multiple TLS sessions inside a single TCP connection, but only one after the other: initiate one TLS session, shut it down, initiate the next, etc. While this is not commonly used, it is actually not uncommon that the TLS session only starts after some plain data has been transferred (STARTTLS in SMTP, AUTH TLS in FTPS), or that TLS gets shut down and then more data is transferred in the clear (CCC in FTPS).
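The STARTTLS upgrade mentioned above looks roughly like this in JSSE (a hedged sketch; the host, port, and plaintext exchange are placeholders):

    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;
    import java.net.Socket;

    public class StartTlsSketch {
        public static void main(String[] args) throws Exception {
            try (Socket plain = new Socket("mail.example.com", 587)) {
                // ... read the greeting, send EHLO and STARTTLS in plaintext here ...
                SSLSocketFactory f = (SSLSocketFactory) SSLSocketFactory.getDefault();
                // Layer TLS over the existing TCP connection instead of opening a new one.
                SSLSocket tls = (SSLSocket) f.createSocket(
                        plain, "mail.example.com", 587, /* autoClose = */ true);
                tls.startHandshake(); // TLS begins mid-connection, after the plain data
            }
        }
    }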
How would SSL/TLS get to know about this ?
The exact details depend on the TLS stack and the API provided by this stack. But usually, if the underlying TCP connection is closed, this is somehow signaled to the TLS stack. For example, with OpenSSL an SSL_read will return a value less than or equal to 0, and you need to call SSL_get_error to get more details on what happened. And again, a TCP close does not implicitly invalidate the TLS session.
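For comparison, the same pattern in Java's JSSE as a hedged sketch (my own illustration of the behavior the answer describes for OpenSSL):

    import javax.net.ssl.SSLSession;
    import javax.net.ssl.SSLSocket;
    import java.io.IOException;
    import java.io.InputStream;

    public class ReadAfterClose {
        // Reads from an established SSLSocket and maps out the outcomes: EOF or an
        // exception when the TCP connection is gone, while the SSLSession object
        // itself remains valid and can be resumed on a new connection.
        static void drain(SSLSocket s) {
            SSLSession session = s.getSession();
            try (InputStream in = s.getInputStream()) {
                byte[] buf = new byte[4096];
                while (in.read(buf) >= 0) {
                    // consume application data
                }
                System.out.println("orderly close (close_notify / EOF)");
            } catch (IOException e) {
                System.out.println("TCP connection died: " + e);
            }
            System.out.println("session still valid: " + session.isValid());
        }
    }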
SSL/TLS runs over the TCP layer.
Correct.
Suppose TCP connection is terminated before SSL/TLS session was closed.
Then (a) the TCP connection has ended, and (b) the SSL/TLS session persists.
How would SSL/TLS get to know about this?
It doesn't need to know about this. It only needs to know about the end of each TLS connection, which is signalled by the TLS close_notify message (or by the TCP close itself), and about the end of the session, which happens when the session is invalidated. TLS sessions can long outlive TCP connections, and vice versa.
There is also a heartbeat extension (RFC 6520) that SSL/TLS can use to check whether the connection is still alive: a heartbeat request over a closed connection will fail or go unanswered, so the SSL/TLS stack learns that the TCP connection is closed.
I am seeing strange behavior in ActiveMQ with network connectors. Here is the setup (also sketched in code after this list):
Broker A listening for connections via a nio transport connector on 61616
Broker B establishing a duplex connection to broker A
A producer on A sends messages to a known queue, say q1
A consumer on B subscribes to the queue q1
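A minimal sketch of that topology with the embedded broker API (my reconstruction of the setup, not the actual configuration; the addresses are the ones from the netstat output below):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.activemq.network.NetworkConnector;

    public class Brokers {
        public static void main(String[] args) throws Exception {
            BrokerService a = new BrokerService();
            a.setBrokerName("A");
            a.addConnector("nio://0.0.0.0:61616"); // broker A listens via NIO
            a.start();

            BrokerService b = new BrokerService();
            b.setBrokerName("B");
            NetworkConnector nc = b.addNetworkConnector("static:(tcp://192.168.x.x:61616)");
            nc.setDuplex(true); // one duplex connection carries traffic both ways
            b.start();
        }
    }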
I can clearly see that the duplex connection is established, but the consumer on B doesn't receive any messages.
In jconsole I can see that broker A is sending messages, up to the value of the prefetch limit (1000 messages), to the network consumer, which seems fine. The "DispatchedQueue", "DispatchedQueueSize", and more importantly the "MessageCountAwaitingAck" counters all have the same value: they are stuck at 1000.
On the broker B, the queue size is 0.
At the system level, I can clearly see an established connection between broker A and broker B:
# On broker A (192.168.x.x)
$ netstat -t -p -n
tcp 89984 135488 192.168.x.x:61616 172.31.x.x:57270 ESTABLISHED 18591/java
# On broker B (172.31.x.x)
$ netstat -t -p -n
tcp 102604 101144 172.31.x.x:57270 192.168.x.x:61616 ESTABLISHED 32455/java
Weird thing: the Recv-Q and Send-Q on both brokers A and B seem to hold data not read by the other side. They don't increase or decrease; they are just stuck at these values.
The ActiveMQ logs on both sides don't say much, even in TRACE level.
It seems that neither broker A nor broker B is sending acks for the messages to the other side.
How is that possible? What's a potential cause and fix?
EDIT: I should add that I'm using an embedded ActiveMQ 5.13.4 on both sides.
Thanks.