How to use SSH with an unstable internet connection?

Sometimes, I'm forced to use ssh over an unstable internet connection.
ping some.doma.in
PING some.doma.in (x.x.x.x): 56 data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
64 bytes from x.x.x.x: icmp_seq=3 ttl=44 time=668.824 ms
Request timeout for icmp_seq 4
Request timeout for icmp_seq 5
Request timeout for icmp_seq 6
Request timeout for icmp_seq 7
64 bytes from x.x.x.x: icmp_seq=8 ttl=44 time=719.034 ms
Is there a way (or are there tools) to increase the reliability of TCP connections, above all SSH?
I imagine something like an SSH proxy that runs on a machine with a decent connection, receives UDP packets, orders them using a higher-layer protocol, forwards them to the destination server over SSH, and replies to the origin.
Or are there any ssh command-line switches that enable more data redundancy, or anything else to avoid "broken pipes"?
Or maybe a client-server application that uses the BitTorrent network to distribute packets and forwards commands to ssh back and forth (high latency but high reliability).
(I've tried screen and similar tools, but sometimes the connection is just too unreliable for efficient work.)
Cheers and thanks in advance!

After some more research and some luck, I stumbled upon mosh.
http://mosh.mit.edu
It's amazing: a client-server implementation that runs over UDP, plus lots of small touches (like local echo prediction). Everyone should use it.
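If you have to stay on plain ssh, keep-alives won't recover lost packets, but they do help avoid silent "broken pipe" hangs. A minimal client-side sketch; the host name and the exact numbers are placeholders to tune for your link:
# ~/.ssh/config
Host flaky.example.com
    ServerAliveInterval 15
    ServerAliveCountMax 8
    TCPKeepAlive no
ServerAliveInterval/ServerAliveCountMax send probes through the encrypted channel, so the session tolerates roughly 15 s x 8 = 2 minutes of silence before ssh gives up, while TCPKeepAlive no disables the TCP-level probes so brief outages don't tear the connection down at that layer. With mosh installed on both ends, the invocation is simply:
mosh user@flaky.example.com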

Related

MQTT Artemis broker, frequent reconnections when the device is on IPv6

I am using the ActiveMQ Artemis Broker and publishing to it through a client application.
Behavior observed:
When my client is on IPv4, the TLS handshake is established and data is published as expected; no problems.
When my client is on IPv6, I see frequent reconnections between the client and the server (broker), and no data is published.
Details:
When using IPv6, the client completes the three-way handshake and attempts to send data. It also receives a Server Hello and sends application data.
But the connection then terminates and reconnects, and this loop keeps repeating.
The client library, network infrastructure, and broker are all exactly the same on IPv4 and IPv6.
The client logs say:
Idle network reply timeout.
The broker logs show an incoming connection request and also a CONNACK for it from the broker, e.g.:
MQTT(): IN << CONNECT protocol=(MQTT, 4), hasPassword=false, isCleanSession=false, keepAliveTimeSeconds=60, clientIdentifier=b_001, hasUserName=false, isWillFlag=false
MQTT(): OUT >> CONNACK connectReturnCode=0, sessionPresent=true
What Wireshark (tcpdump) shows:
Before every reconnection (i.e., before each new three-way handshake) I see this:
Id  Src                Dest
1   Broker (App Data)  Client
2   Broker (App Data)  Client
3   Client (ACK)       Broker
4   Client (ACK)       Broker
5   Broker (FIN, ACK)  Client
6   Client (FIN, ACK)  Broker
7   Broker (ACK)       Client
8   Client (SYN)       Broker
9   Broker (SYN, ACK)  Client
10  Client (ACK)       Broker
Then the TLS handshake (Client Hello, Server Hello, Change Cipher Spec) follows, and the above repeats.
Based on packets 5, 6, and 7, I have concluded that the connection is being terminated by the broker (server). The client acknowledges the termination and then attempts to reconnect, ending up in an endless loop of reconnecting and trying to publish.
This is my first time doing network-level analysis, and my first time using Wireshark, so I'm not sure my analysis is right.
I've also hit a wall: I don't understand why the reconnection occurs only when the device is on IPv6. I also don't see any RST to indicate termination of the connection.
The broker is also sending a CONNACK (per the broker logs), but still no data is sent, just reconnection attempts; I'm not sure why.
Also, I see a few:
Out-of-order TCP segments (when the source is the broker)
Spurious retransmissions
DUP ACKs (source is the client)
Not sure if this is important.
Any pointers on what is going on?
The issue was caused by a load balancer setting: its default idle connection timeout was 30 seconds, which is shorter than the connection timeout set by the client.
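In other words, the client's keepAliveTimeSeconds=60 allowed up to a minute of silence while the load balancer dropped idle connections after 30 seconds, which is consistent with the FIN arriving from the broker side in the capture. The fix is either to raise the LB idle timeout above the client keep-alive or to lower the keep-alive below the LB timeout. As an illustration only (using the mosquitto command-line client rather than the Artemis client library; host, port, CA file, and topic are placeholders):
# -k 20 sets the MQTT keep-alive to 20 s, comfortably below a 30 s LB idle timeout
mosquitto_pub --cafile ca.pem -h broker.example.com -p 8883 -k 20 -i b_001 -t test/topic -m hello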

TCP windows full - intermittent issues in SSL handshake

I am seeing an intermittent SSL handshake error. Looking at the TCP packets, it seems that when the SSL handshake fails, TCP options are missing. I have attached Wireshark screenshots for the success and failure scenarios. Notice the difference in the [SYN, ACK] packet sent by the server: in the success case it has a larger window size (4380) compared to 512 when it fails, and it also carries additional options such as MSS and SACK_PERM.
Would anyone know why this would happen? It's the same server, but it is advertising different capabilities in the two scenarios. Any info on troubleshooting this issue would help. Thanks!
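One way to gather more evidence is to capture just the handshake packets on both the client and the server, since a device in between (load balancer, firewall) rewriting TCP options is a common cause of this kind of mismatch. A sketch; the interface, port, and file name are placeholders:
# capture every SYN and SYN/ACK on the service port so the advertised
# options (MSS, SACK_PERM, window scale) can be compared across good and bad handshakes
tcpdump -n -i eth0 -w handshakes.pcap 'port 443 and tcp[tcpflags] & tcp-syn != 0'
Comparing a capture taken on the server with one taken on the client shows whether the options already leave the server stripped or are removed somewhere in transit.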

Force a router to keep an idle UDP port open

A client opens a UDP connection to my server. After some time (10 minutes to 24 hours) the server needs to send data back to the client, but it finds that the client's UDP port is closed!
After testing, we found that the client still has the UDP port open, but the router (NAT) closed the mapping, probably due to inactivity.
Is there any way to force the router to keep the UDP port open without sending keep-alive packets (server or client side)?
Is there anything like that in ICMP?
Thank you.
I had the same problem and found this solution, not for the router, but for the server:
Try configuring keep-alives.
How to do it depends on which service/program/OS you are using.
For example, using OpenSSH on the client, you would add/configure these lines in ~/.ssh/config or /etc/ssh/ssh_config:
ServerAliveInterval 30
ServerAliveCountMax 60
On the server (where I made the change), add/configure these lines in /etc/ssh/sshd_config:
ClientAliveInterval 30
ClientAliveCountMax 60
Of course it depends on the operating system, etc., but the idea is to configure the keep-alive directly in the service.
Good luck!
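On the router side of the original question: if the NAT box happens to be a Linux machine you control (using nf_conntrack for NAT), its UDP timeouts can be raised directly instead of sending keep-alives. A sketch; the values (in seconds) are illustrative:
# unreplied/short-lived UDP mappings (default is typically 30 s)
sysctl -w net.netfilter.nf_conntrack_udp_timeout=300
# "assured" UDP streams that have seen traffic in both directions
sysctl -w net.netfilter.nf_conntrack_udp_timeout_stream=7200
For a consumer router you don't control, application-level keep-alives as described above remain the practical option.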

tunneling using SSH

I'm tunneling all of my internet traffic through a remote Debian machine running sshd, but my internet connection becomes very slow (around 5 to 10 kbps!). Could anything in the default configuration be causing this problem?
Thanks in advance,
Tunneling TCP within another TCP stream can sometimes work -- but when things go wrong, they go wrong very quickly.
Consider what happens when the "real world" loses one of your TCP packets: after a certain amount of not getting an ACK packet back in response to new data packets, the sending side realizes a packet has gone missing and re-sends the data.
If that packet happens to be a TCP packet whose payload is another TCP packet, then you have two TCP stacks that are upset about their missing packet. The tunneled TCP layer will re-send packets and the outer TCP layer will also resend packets. This causes a giant pileup of duplicate packets that will eventually be delivered and must be dropped on the floor -- because the outer TCP reliably delivered the packet, eventually.
I believe you would be much better served by a more dedicated tunneling method such as GRE tunnels or IPSec.
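For concreteness, a minimal GRE tunnel between two Linux hosts might look like the sketch below (the public and tunnel addresses are placeholders; note that GRE itself adds no encryption, so combine it with IPsec if confidentiality matters):
# on host A (public IP 203.0.113.1), peering with host B (198.51.100.2)
ip tunnel add gre1 mode gre local 203.0.113.1 remote 198.51.100.2 ttl 255
ip addr add 10.0.0.1/30 dev gre1
ip link set gre1 up
# on host B, mirror the commands with local/remote swapped and address 10.0.0.2/30
Traffic routed via 10.0.0.0/30 then travels over plain IP/GRE rather than nested TCP, so a lost packet is retransmitted by exactly one TCP stack.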
Yes, tunnelling traffic over a TCP connection is not a good idea. See http://sites.inka.de/bigred/devel/tcp-tcp.html

Missing UDP fragments when monitoring traffic with tcpdump

I'm on a local LAN with only 8 connected computers on a Netgear 24-port gigabit switch. Network load is really low, and the send/receive buffers on all involved nodes (running Slackware 11) have been set to 16 MB. I'm also running tcpdump on each node to monitor the traffic.
A sending node sends a 10044-byte UDP datagram which, more often than not (3 out of 4 times), does not arrive at the receiving application. In these cases I notice (using tcpdump) that the first fragments are missing and only the last 3 (all with offsets > 0 and in order) are caught by tcpdump. The fragmented UDP datagram therefore cannot be reassembled and is most likely thrown away.
I find the missing fragments strange, since I have also tried a simple load test bursting out 10000 UDP messages of the same size; the receiving application sends a response, and all tests so far give 100% of the responses back.
Any clues or hints?
Update!
After resuming testing of the above-mentioned software, I found a repeatable way of recreating the error.
Using windump on the sending Windows machine and tcpdump on the receiving machine, after having left the application idle for some time (~5 minutes), I tried sending the UDP message but ended up with only a single fragment caught by windump and tcpdump; the 3 remaining fragments were lost. Sending the same message one more time works fine: both windump and tcpdump catch all 4 fragments and the application on the receiving side gets the message. The pattern is repeatable.
I started searching and found the following information, but to me it is still not a clear answer.
http://www.eggheadcafe.com/software/aspnet/32856705/first-udp-message-to-a-sp.aspx
Re-examining the logs, I now notice the ARP request/reply being sent, which matches one of the ideas given in the link above.
NOTE! I filter windump on the sending side using: "dst host receivernode"
Capture from windump: first (failed) UDP message, which should be 4 fragments long
14:52:45.342266 arp who-has receivernode tell sendernode
14:52:45.342599 IP sendernode > receivernode: udp
Capture from windump: second UDP message, exactly the same contents, all 4 fragments caught
14:52:54.132383 IP sendernode.10104 > receivernode.10113: UDP, length 6019
14:52:54.132397 IP sendernode > receivernode: udp
14:52:54.132406 IP sendernode > receivernode: udp
14:52:54.132414 IP sendernode > receivernode: udp
14:52:54.132422 IP sendernode > receivernode: udp
14:52:56.142421 arp reply sendernode is-at 00:11:11:XX:XX:fd (oui unknown)
Does anyone have a good idea about what's happening? Please elaborate!
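If the ARP timing really is the culprit, two quick checks can confirm it (the interface number, receiver IP, and MAC below are placeholders; substitute the real values):
# 1) include ARP in the sender-side capture so the request/reply timing
#    shows up next to the UDP fragments
windump -n -i 2 "dst host receivernode or arp"
# 2) pin a static ARP entry for the receiver on the sending Windows machine,
#    then retest after an idle period
arp -s 192.168.0.20 aa-bb-cc-dd-ee-ff
If all 4 fragments arrive after pinning the entry, the first-message loss was most likely fragments being dropped while ARP resolution was pending, since most IP stacks queue only a packet or two per unresolved destination.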