SSH client does not time out when the connection to the server has been disconnected

I think there is a simple answer to this question, but everything I find online is about preventing SSH client connections from timing out.
In this case, the client has established a connection to the server, and remains connected. Then the connection is disrupted, say the ethernet cable is unplugged, or the router is powered off.
When this happens, the client connection is not dropped.
The ssh client connection is part of a script and the line that performs the ssh login looks like this:
ssh -Nn script@example.com
The .ssh/config contains the following parameters:
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 2
When these disconnects occur, I'd like the client ssh connection to time out and allow the script to attempt to reconnect.
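For context, think of the relevant part of the script as a simple retry loop along these lines (a simplified stand-in, not the actual script):

while true; do
    ssh -Nn script@example.com
    # if ssh ever exits (e.g. because it finally times out), wait and reconnect
    sleep 10
done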
Thanks!

I guess I was wrong about this being a simple question, since no one was able to provide an answer.
My further reading and asking led to one reply on the openssh IRC channel, around 2022-06-06. I was advised that the options:
ServerAliveInterval 60
ServerAliveCountMax 2
often don't disconnect the client as one might expect.
The ssh_config man page:
ServerAliveCountMax
    Sets the number of server alive messages (see below) which may be sent without ssh(1) receiving any messages back from the server. If this threshold is reached while server alive messages are being sent, ssh will disconnect from the server, terminating the session...
    The default value is 3. If, for example, ServerAliveInterval (see below) is set to 15 and ServerAliveCountMax is left at the default, if the server becomes unresponsive, ssh will disconnect after approximately 45 seconds.
That seems to pretty conclusively state that disconnecting on lack of server response is the intention of these parameters; with my settings above, that should mean a disconnect after roughly 60 × 2 = 120 seconds. However, in practice this doesn't happen in all cases. Maybe the caveat here is: "while server alive messages are being sent"?
If the application calls for a reliable client disconnect when the server becomes unresponsive, the advice was to implement an external method, separate from the ssh client login script, that monitors server responsiveness, and kills the ssh client process on timeout.
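For illustration, such an external watchdog could look roughly like this (the probe method, host name, intervals, and the pkill pattern are assumptions of mine, not something spelled out in the IRC advice):

#!/bin/sh
# watchdog: probe the server out-of-band and kill the ssh client when it stops answering,
# so the wrapper script notices the exit and can reconnect
HOST=example.com     # placeholder
INTERVAL=60          # seconds between probes
while true; do
    # simple responsiveness check: can we still reach TCP port 22 on the server?
    if ! nc -z -w 5 "$HOST" 22 > /dev/null 2>&1; then
        pkill -f "ssh -Nn script@$HOST"
    fi
    sleep "$INTERVAL"
done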

Related

I cannot instantly reconnect to ssh server after logout

I have an ssh server on an old CentOS 5 installation. I can connect to the server without any problems. However, if I disconnect from the server and want to log in again after exiting the previous session, the server does not respond and I get a "connection timed out" error. After a while (somewhere between 1 and 5 minutes) I can log in normally. If I then exit the session, the same timeout happens again.
From the network where the client resides, I can connect to other ssh servers without any problems, so I don't think this is a firewall issue.
Any suggestions on where I can look for the problem?
I tried to log in with a key instead of a password, and I stopped the fail2ban service on the ssh server. Both without any success.
I solved my problem:
There was an iptables rule limiting connections per IP to one attempt per minute. I have whitelisted my IP and now there is no delay when reconnecting.
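A rule of that kind, together with the whitelist entry, typically looks something like the following (the 192.0.2.10 client address is a placeholder and this is a sketch of the pattern, not the exact rules from my firewall):

# whitelist the client address first so it bypasses the limit
iptables -I INPUT -p tcp --dport 22 -s 192.0.2.10 -j ACCEPT

# typical per-IP limit using the "recent" match:
# drop a new SSH connection if the same source already connected within the last 60 seconds,
# otherwise record the source address and let the connection through
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name SSH --rcheck --seconds 60 -j DROP
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --name SSH --set -j ACCEPT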

What happens under the hood before an ssh connection times out?

Let's look at a scenario. Say I have the domain foo.bar.cc and I'm attempting to connect via ssh:
ssh foo.bar.cc
But, in this scenario, foo.bar.cc:22 requires VPN access, so this DNS entry is not visible to me. Seeing as I'd never be able to connect, the connection eventually times out.
Before the timeout, what is happening under the hood while I am attempting to access the server? What does the connection loop look like during the attempt, which system calls are made, and why? Eventually the ssh client gives up: how does it determine this? Again, which system calls typically come into play?
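To make the question concrete, this is the kind of thing I can observe from the client side (ConnectTimeout and the strace filter are only there for illustration):

# verbose client output shows each step: option parsing, address resolution, the TCP connect
ssh -vvv -o ConnectTimeout=10 foo.bar.cc

# strace shows the underlying socket()/connect() calls and the errors they return
strace -f -e trace=network ssh -o ConnectTimeout=10 foo.bar.cc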

BGP peers established in SDN

I downloaded a program from GitHub for interconnecting a traditional network with SDN. The program establishes iBGP peers. When I run the program, the problem shown below occurs. How can I deal with this?
Since the peer closed the connection, you should check the logs and/or debug on the other side of the connection. The log file will probably explain why it didn't like your connection attempt / why it refused you.
You tagged quagga, so I assume that at least one of the sides is Quagga. It should be simple enough to enable some debugging from the Quagga CLI to see what's going on (sketched below).
Additionally, BGP can send NOTIFICATION messages informing the peer of an error, so the connecting side should be aware of the error. This implies that the TCP connection was established and that the first BGP exchanges (the OPEN messages) happened.
Maybe start with something like tcpdump -vvv -i <interface> -s0 'host 10.10.10.1 and port 179'
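For reference, turning on that Quagga debugging can be done from vtysh roughly like this (a sketch; exact commands may differ between Quagga versions):

# inside vtysh on the Quagga box:
debug bgp events
debug bgp updates
show ip bgp summary
# the reason for the refused session should then show up in bgpd's log
# (commonly /var/log/quagga/bgpd.log, depending on the configured "log file")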

How do I keep my daemon open through ssh tunnel?

I have been working on an HTTP server which accepts connections and then, based on the host name, loads the right project from a .so, generates the page the client is asking for, and sends it back.
Now that I have several working projects, I am interested in making them available to others, but here is my problem:
I am connecting to my dedicated server through ssh, and starting my daemon from there, but after a while, the pages are no longer accessible because my program is no longer running.
I also get kicked by the server after a while. I wonder:
How do I keep my server running? Does the fact that I keep getting kicked out by ssh after a little idle time explain why my daemon is being shut down?
Thanks in advance to whoever can give me some element of an answer.
When your SSH session times out, SIGHUP is sent to the sub-processes forked from the current interactive shell. That's why the processes are terminated (and your server is no longer running).
To avoid an idle SSH connection being kicked by the server, set ServerAliveInterval so the client periodically requests a response from the server (e.g. in ~/.ssh/config):
Host *
    ServerAliveInterval 30
To avoid shell sub-process termination, refer to
https://askubuntu.com/questions/348836/keep-the-running-processes-alive-when-disconneting-the-remote-connection/348921#348921
https://askubuntu.com/questions/349262/run-a-nohup-command-over-ssh-then-disconnect
In short, there are 3 options (a quick sketch follows the note below):
nohup
disown / setsid
start the servers from the CLI in a tmux or screen session on the server
NOTE: If the server instances are already properly daemonized, try looking at monit or supervisord to keep them running ;-D
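A quick sketch of those three options (myhttpd and the log path are placeholders, not the actual binary from the question):

# option 1: nohup ignores the SIGHUP sent when the SSH session ends
nohup ./myhttpd > myhttpd.log 2>&1 &

# option 2: background the job and disown it (or start it with setsid)
./myhttpd > myhttpd.log 2>&1 &
disown
# or: setsid ./myhttpd > myhttpd.log 2>&1 &

# option 3: run it inside a tmux (or screen) session that survives the logout
tmux new-session -d -s httpd './myhttpd'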

SSH local port forwarding on a remote not listening port: the connection succeeds?

I discovered today that if I ssh-forward the local port X to ssh server port Y, and no process is listening on port Y, I can still connect to local port X (I don't get the usual "connection refused" error).
I did my test with 2 different SSH clients on a windows host connecting to a linux server.
After a bit of reflection, I came to the conclusion that from a pure network point of view this is the behaviour I should expect: the SSH client is actually listening on localhost:X, so the connection is possible.
Nevertheless, this leads to a problematic situation in which I have an apparently connected socket that talks to nobody. Even sending data on the socket is a successful operation.
So my question: does the SSH protocol handle this situation in some way, i.e. are there strategies for detecting it? And if yes, can I hope for support for this feature in some SSH clients and APIs (today I'm using ssh.net, which does not seem to offer it)?
If not, how would you proceed to detect the situation? A timeout on the answer?
Thanks for your help,
Alberto.
The only logical behavior would be to close the client connection if the server can't connect to the remote side, but that would not be much better than just a hanging connection.
There can also be situations where the SSH server waits for the remote connection for a minute or two before giving up, so the client's connection will stay open for that period of time anyway.
So there's actually no logical alternative other than a hanging client connection.
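If you do need to detect it, the pragmatic route is the "timeout on answer" mentioned in the question. From a shell it could look roughly like this (host, ports and the 5-second limit are placeholders; it simply treats silence as failure, which only works for services that send something unprompted, like an SSH or SMTP banner):

# forward local port 8080 to port 9999 on the server
ssh -N -L 8080:localhost:9999 user@server &

# connecting to 127.0.0.1:8080 always succeeds because the ssh client listens there,
# so instead wait briefly for any response and treat silence as "nothing answered"
timeout 5 bash -c 'exec 3<>/dev/tcp/127.0.0.1/8080 && head -c 1 <&3 >/dev/null' \
  || echo "no answer from the forwarded service"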