How to measure bandwidth of an SSH tunnel?

I'm running an SSH tunnel with OpenSSH on Linux, started from Python's subprocess module.
I want to find out how many bytes were sent and received through that SSH tunnel.
How can I find this out?

ssh(1) provides no mechanism for this. The OS does not provide a mechanism for this either. Packet sniffing with e.g. tcpdump(1) is an option, but it would probably require root privileges, and it would only be approximate if ssh(1) connections are made to the remote peer outside of your application. iptables accounting would give you similar tradeoffs, but with far less overhead than tcpdump(1).
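For example, a minimal sketch of iptables byte accounting, assuming the tunnel uses TCP port 22 and remote.example.com stands in for your peer (rules with no -j target only count traffic, they don't change it):
iptables -I OUTPUT -p tcp -d remote.example.com --dport 22
iptables -I INPUT -p tcp -s remote.example.com --sport 22
# Later, read the packet/byte counters:
iptables -L OUTPUT -v -n -x
iptables -L INPUT -v -n -x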
If you don't mind being very approximate, you could keep track of all the data you send to and read from your subprocess. ssh(1) will compress data before encrypting it (if compression is enabled), so you might over-estimate the amount of data sent, but ssh(1) also has some overhead for re-keying, channel control, message authenticity, and so on, so it might even come close for 'average' data.
Of course, if a router along the way decides to drop every other packet, your TCP stack will send twice the data, maybe more.
Very approximate indeed.

You could measure the raw ssh transfer with something like pv:
ssh user#remote -t "cat /dev/urandom" | pv > /dev/null
ssh user#remote -t "pv > /dev/null" < /dev/urandom
(you could try with /dev/zero - but if you are using ssh compression, you'd get a very unreal transfer rate.)

Related

What is the simplest way to emulate a bidirectional UDP connection between two ports on localhost?

I'm adapting code that used a direct connection between udp://localhost:9080 and udp://localhost:5554 to insert ports 19080 and 15554. On one side, 9080 now talks and listens to 19080 instead of directly to 5554. Similarly, 5554 now talks and listens to 15554. What's missing is a bidirectional connection between 19080 and 15554. All the socat examples I've seen seem to ignore this simplest of cases in favor of specialized ones of limited usefulness.
I previously seemed to have success with:
sudo socat UDP4:localhost:19080 UDP4:localhost:15554 &
but I found that it may have been due to a program bug that bypassed the connection. It no longer works.
I've also been given tentative suggestions to use a pair of more cryptic commands that likewise don't work:
sudo socat UDP4-RECVFROM:19080,fork UDP4-SENDTO:localhost:15554 &
sudo socat UDP4-RECVFROM:15554,fork UDP4-SENDTO:localhost:19080 &
and additionally seem to overcomplicate the manpage statement that "Socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them."
I can see from Wireshark that both sides are correctly using their respective sides of the connection to send UDP packets, but neither side is receiving what the other side has sent, due to the opacity of socat used in either of these ways.
Has anyone implemented this simplest of cases simply, reproducibly, and unambiguously? It was suggested to me as a way around writing my own emulator to pass packets back and forth between the ports, but the time spent getting socat to cooperate could likewise be put to better use.
You use fixed ports, and you do not specify whether one direction initiates the transfers.
Therefore the datagram addresses are preferable. Something like the following command should do the trick:
socat \
UDP-DATAGRAM:localhost:9080,bind=localhost:19080,sourceport=9080 \
UDP-DATAGRAM:localhost:5554,bind=localhost:15554,sourceport=5554
Only the 5-digit port numbers belong in the socat commands. The connections from or to 9988, 9080, and 5554 are direct existing connections. I only need socat for the emulated connections that would exist if an actual appliance existed.
I haven't tested this, but it seems possible that the two 'more cryptic' commands could cause an undesirable loop. Modifying the destination ports as shown below may help achieve your objective, though it may not be viable for your application, since you may need to adjust your receive sockets accordingly.
sudo socat UDP4-RECVFROM:19080,fork UDP4-SENDTO:localhost:5554 &
sudo socat UDP4-RECVFROM:15554,fork UDP4-SENDTO:localhost:9080 &
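As a quick sanity check that a relay like this actually passes traffic, you could, for example, listen on one endpoint with netcat and inject a datagram into the other (a sketch: run it while the real applications are stopped so the ports are free, and note that flag syntax differs between netcat variants):
nc -u -l 5554                        # terminal 1: listen where the relay forwards to
echo "ping" | nc -u localhost 19080  # terminal 2: send into the relay's receiving port
If the relay works, "ping" should appear in terminal 1.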

Can SSH be fault-tolerant, or is there a way to overcome RSTs messing up my TCP connections (some kind of retry pipe at both ends)?

I'm trying to use "scp" to copy TB-sized files, which is fine, until whatever router or other issue throws a tantrum and drops my connections (lost packets or unwanted RSTs or whatever).
# scp user@rmt1:/home/user/*z .
user@rmt1's password:
log_backups_2019_02_09_07h44m14.gz
16% 6552MB 6.3MB/s 1:27:46 ETA
client_loop: send disconnect: Broken pipe
lost connection
It occurs to me that (if ssh doesn't already support this) it should be possible for something at each endpoint, and in between, to simply connect with its peer and, when stuff goes wrong, to transparently handle it (basically, to retry indefinitely and reconnect).
Anyone know the solution?
My "normal" way of tunnelling remote machines into a local connection is using ssh of course, catch-22 - that's the thing that's breaking so I can't do that here...
SSH uses TCP, and TCP is generally designed to be relatively fault-tolerant, with retries for dropped packets, acknowledgements, and other techniques to overcome occasional network problems.
If you're nevertheless seeing dropped connections, then either you are seeing excessive network problems, more than any standard protocol can be expected to handle, or a malicious attacker is intentionally disrupting the connection. Neither is something a reasonable network protocol can overcome, so you're going to have to deal with it. That's true whether you're using SSH or some other protocol.
You could try using SFTP instead of SCP, because SFTP supports resuming interrupted transfers (e.g., put -a), but that's about the best that's going to be possible. You can also try a tool like lftp, which has more scripting possibilities for copying and retrying (e.g., mirror --continue --loop), and it can also use SFTP under the hood.
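For example, a rough sketch of an lftp invocation over SFTP that resumes and keeps retrying (the host, user, and paths are placeholders; check your lftp version's manual for the exact option names):
lftp -e "set net:max-retries 0; mirror --continue --loop /home/user/ .; quit" sftp://user@rmt1
Here net:max-retries 0 tells lftp to retry indefinitely, and mirror --continue --loop re-runs the transfer until nothing is left to copy.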
Your best bet is to find out what the network problem is and get that fixed. mtr may be helpful for finding where your packet loss is.
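For instance, a report-mode run such as the following (the hostname is a placeholder) prints per-hop packet loss over 100 probes:
mtr --report --report-cycles 100 rmt1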

How can I limit the rate of new outgoing ssh connections when using GNU parallel?

Background: The default setting for MaxStartups in OpenSSH is 10:30:60, and most Linux distributions keep this default. That means there can be only 10 ssh connections at a time that are exchanging keys and authenticating before sshd starts dropping 30% of new incoming connections, and at 60 unauthenticated connections, all new connections will be dropped. Once a connection is set up, it doesn't count against this limit. See e.g. this question.
Problem: I'm using GNU parallel to run some heavy data processing on a large number of backend nodes. I need to access those nodes through a single frontend machine, and I'm using ssh's ProxyCommand to set up a tunnel that transparently accesses the backends. However, I'm constantly hitting the maximum unauthenticated connection limit, because parallel spawns more ssh connections than the frontend can authenticate at once.
I've tried to use ControlMaster auto to reuse a single connection to the frontend, but no luck.
Question: How can I limit the rate at which new ssh connections are opened? Could I control how many unauthenticated connections there are open at a given time, and delay new connections until another connection has become authenticated?
I think we need a 'spawn at most this many jobs per second per host' option for GNU Parallel. It would probably make sense to have the default work for hosts with MaxStartups = 10:30:60, fast CPUs, but with 500 ms latency.
Can we discuss it on parallel@gnu.org?
Edit:
--sshdelay was implemented in version 20130122.
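For example, a sketch of using --sshdelay (the host list, input files, and process_chunk command are placeholders for your own setup):
# Wait 0.2 s between starting each ssh connection, so the frontend's sshd
# never sees too many unauthenticated connections at once.
parallel --sshdelay 0.2 --sshloginfile nodes.txt 'process_chunk {}' ::: chunk*.dat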
Using ControlMaster auto still sounds like the way to go. It shouldn't hit MaxStartups, since it keeps a single connection open (and opens sessions on that connection). In what way didn't it work for you?
Other relevant settings that might prevent ControlMaster from working, given your ProxyCommand setup, are ControlPath:
ControlPath %r@%h:%p - name the control socket {user}@{host}:{port}
and ControlPersist:
ControlPersist yes - persist the initial connection (even after it is closed) until told to quit (-O exit)
ControlPersist 1h - persist for 1 hour
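Put together, a ~/.ssh/config sketch along these lines might work (the frontend and backend host names are placeholders for your setup):
# Multiplex everything to the frontend over one authenticated connection.
Host frontend
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 1h
# Each backend connection's ProxyCommand reuses the single master connection to the
# frontend, so new backend connections don't add unauthenticated connections at the
# frontend's sshd (they only authenticate against the backends themselves).
Host backend*
    ProxyCommand ssh -W %h:%p frontend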

UDP Health Check

So we have an application that makes UDP calls and sends packets. However, since no responses are given for UDP calls, how can we ensure that the service is up, the port is open, and things are actually getting stored?
The only thought we have right now is to send in test packets and make sure they are getting saved to the db.
So my overall question is: is there a better, easier way to ensure that our UDP calls are succeeding?
On the listening host, you can validate that the port is open with netstat. For example, if your application uses UDP port 68, you could run:
# Grep for :<port> from netstat output.
$ netstat -lnu | grep :68
udp 0 0 0.0.0.0:68 0.0.0.0:*
You could also send some test data to your application, and then check your database to verify that the fixture data made it in. That doesn't prove it always will, just that the pipeline is working at the time of the test.
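For example, a quick way to fire a test datagram at the service (a sketch: the port and payload are placeholders, and flag syntax differs between netcat variants):
echo -n "healthcheck-$(date +%s)" | nc -u -w1 localhost 68
You can then query the database for that unique payload to confirm it was stored.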
Ultimately, the problem is that UDP packets are best-effort, and not guaranteed. So unless you can configure your logging platform to send some sort of acknowledgment after data is received and/or written, then you can't guarantee anything. The very nature of UDP is that it leaves acknowledgments (if any) to the application layer.
We took a different approach: we are checking to make sure the calls made it to the db. It's easy enough to query a table and ensure records are in there; if none are recent, we know something is wrong. CodeGnome had a good idea, just not the route we went. Thanks!

using "vim" can lead ssh timeout but "top" not

When I use ssh to log in to a remote server and open vim, if I don't type anything the session will time out and I have to log in again.
But if I run a command like top, the session never times out.
What's the reason?
Note that the behavior you're seeing isn't related to vim or to top. Chances are good that some router along the way is culling "dead" TCP sessions. This is often done by a NAT firewall or a stateful firewall to reduce memory pressure and to protect against simple denial-of-service attacks.
Probably the ServerAliveInterval configuration option can keep your idle-looking sessions from being reaped:
ServerAliveInterval
Sets a timeout interval in seconds after which if no
data has been received from the server, ssh(1) will
send a message through the encrypted channel to request
a response from the server. The default is 0,
indicating that these messages will not be sent to the
server, or 300 if the BatchMode option is set. This
option applies to protocol version 2 only.
ProtocolKeepAlives and SetupTimeOut are Debian-specific
compatibility aliases for this option.
Try adding ServerAliveInterval 180 to your ~/.ssh/config file. This sends a keepalive probe every three minutes, which should be more frequent than most firewall idle timeouts.
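For instance, a minimal ~/.ssh/config sketch (applied here to all hosts; ServerAliveCountMax is the standard companion option controlling how many unanswered probes are tolerated before the client gives up):
Host *
    ServerAliveInterval 180
    ServerAliveCountMax 3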
vim will just sit there waiting for input, and (unless you've got a clock or something on the terminal screen) it will also produce no output. If this continues for very long, most firewalls will see the connection as dead and kill it, since there's no activity.
top, by comparison, updates the screen every few seconds, which is seen as activity, so the connection is kept open: there is data flowing over it on a regular basis.
There are options you can add to the SSH server's configuration to send timed "null" packets that keep a connection alive even when no actual user data is going across the link: http://www.howtogeek.com/howto/linux/keep-your-linux-ssh-session-from-disconnecting/
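On the server side, for instance, the relevant sshd_config lines would look something like this (a sketch; restart sshd after editing):
# /etc/ssh/sshd_config
ClientAliveInterval 180
ClientAliveCountMax 3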
Because "top" is always returning data through your SSH console, it will remain active.
"vim" will not because it is static and only transmits data according to your key presses.
The lack of transferred data causes the SSH session to time out