I'm trying to open multiple SSH connections to one host, but after 20 connections I get disconnected for no apparent reason.
Example test:
for i in {1..30}; do ssh user@host & done
The first 20 connections are fine, but the rest get:
kex_exchange_identification: read: Connection reset by peer
And at that point I cannot ssh to that host at all for a few seconds.
I've tried a lot of configuration changes in /etc/ssh/sshd_config, like:
MaxStartups
MaxSessions
ClientAliveCountMax
but nothing helps.
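For what it's worth, this kind of burst limit is what MaxStartups expresses: it takes a start:rate:full triple controlling random early drop of unauthenticated connections. A sketch of a loosened setting in /etc/ssh/sshd_config (the values are illustrative, not a recommendation):

# start:rate:full: begin refusing ~30% of new unauthenticated
# connections once 100 are pending, refuse all of them at 200
MaxStartups 100:30:200
MaxSessions 100

sshd only picks the change up after a restart, e.g. sudo systemctl restart sshd.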
Currently, I have built a small datacenter environment in OTC with Terraform, based on Ubuntu 20.04 images.
The idea is to have a jump host, both during the setup phase and for operational purposes, that allows spontaneous access to service frontends via ssh proxy jumps without permanently routing them to the public net.
Basic setup works fine so far: I can access the jump host with ssh, and from there I can access the internal machines with ssh once I put the private key onto the jump host. So, cloud-wise, the security seems to be fine. The key pair is generated with ed25519, and I use the same key for the jump host and the internal servers (for now).
What I cannot achieve is the proxy jump as a chained command from my outside machine.
On the jump host, I set AllowTcpForwarding to "yes" in /etc/ssh/sshd_config and restarted the ssh and sshd services.
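For reference, the directive in question would look roughly like this (upstream OpenSSH defaults it to yes, but hardened cloud images sometimes ship it as no):

# in /etc/ssh/sshd_config on the jump host
AllowTcpForwarding yes

followed by a daemon restart, e.g. sudo systemctl restart sshd.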
My current local ssh config looks like this:
Host otc
    User ubuntu
    Hostname <FloatingIP-Address>
    Port 22
    StrictHostKeyChecking=no
    UserKnownHostsFile=/dev/null
    IdentityFile=~/.ssh/ssh_access
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlMaster auto
    ControlPersist 10m
Host 10.*
    User ubuntu
    Port 22
    IdentityFile=~/.ssh/ssh_access
    ProxyJump otc
    StrictHostKeyChecking=no
    UserKnownHostsFile=/dev/null
With this, I can use ssh otc to reach the jump host.
What I would expect is that I could use e.g. ssh 10.0.0.56 to reach an internal host without further ado. I should also be able to use commands like ssh -L 8080:10.0.0.56:8080 10.0.0.56 -N to map an internal server's port to a localhost port on my external machine. This is how I have done it successfully in other public-cloud hosting scenarios.
All I get is:
Stdio forwarding request failed: Session open refused by peer
kex_exchange_identification: Connection closed by remote host
The journal on the jump host says:
Jul 30 07:19:04 dev-nc-o-bastion sshd[2176]: refused local port forward: originator 127.0.0.1 port 65535, target 10.0.0.56 port 22
What I checked as well:
ufw is off on the jump host.
I replaced the ProxyJump configuration with ProxyCommand.
So I am at the end of my knowledge. Does anyone have a hint what else could be the reason? Any help welcome!
OK, the cause is found (but not yet fully explained).
My local ssh config was allowing multiplexed connections (ControlMaster auto), which caused the creation of a unix socket file for the ControlPath in ~/.ssh.
I had to log in to the jump host to set AllowTcpForwarding in the first place.
After restarting sshd, I returned to the local machine, and the failure occurred when trying to forward to the remote internal machine.
After deleting the socket file in ~/.ssh, the connection can now be established as needed. Evidently, the persistent master connection was not affected by the restarted daemon on the jump host and simply kept refusing forwards under the old directive.
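Rather than deleting the socket file by hand, the multiplexed master can be inspected and shut down cleanly with ssh's control commands (a sketch, using the otc alias from the config above):

# ask the running control master whether it is still alive
ssh -O check otc
# tell it to exit, so the next connection starts a fresh master
# that picks up the new sshd settings on the jump host
ssh -O exit otc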
This cost me two days. On the bright side, I learned a lot about ssh :o
Let me explain my very strange problem. I have one server (Linux Debian Jessie) which had access to my git repository on gitlab.com.
Two days ago, I tried to pull some modifications on this server with a simple git pull. I received an error message:
ssh: connect to host gitlab.com port 22: Connection timed out
So I have done some tests.
1. TELNET
To understand why, I tried a telnet to port 22: TIMEOUT.
2. IPTABLES
I checked my iptables to be sure the SSH port was allowed. It is. If I try a telnet to another service, for example github.com, it works. So OUTPUT is allowed on this port.
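For reference, a quick way to double-check the OUTPUT chain (a sketch; rule sets vary from machine to machine):

# list the OUTPUT rules numerically, with packet counters
sudo iptables -L OUTPUT -n -v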
3. PING
I suspected an IP resolution problem. I did a ping and obtained this message:
PING 104.210.2.228 (104.210.2.228) 56(84) bytes of data.
--- 104.210.2.228 ping statistics ---
87 packets transmitted, 0 received, 100% packet loss, time 86534ms
4. FAIL2BAN
I use fail2ban, so I checked whether gitlab was in a jail, but it seems not.
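A quick way to verify that (a sketch; the jail names depend on the local configuration):

# list the active jails, then inspect the ssh-related one for banned IPs
sudo fail2ban-client status
sudo fail2ban-client status sshd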
So my problem is that I can't reach gitlab.com.
If I try from my local machine or from another server, I don't have this problem. It works.
I can't reach gitlab.com only from this server, and I don't know why. Maybe someone has an idea; any help would be very precious.
Probably some modification of the firewall caused this. For a quick solution, use the HTTPS protocol instead of SSH. Change the URL in the git config file to HTTPS.
git config --local -e
Change the entry
url = git@gitlab.com:username/repo.git
to
url = https://gitlab.com/username/repo.git
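Alternatively, the same change can be made without opening an editor (a sketch, assuming the remote is named origin):

git remote set-url origin https://gitlab.com/username/repo.git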
You will need to give your username and password to authenticate when making a push or pull, though, as it's HTTPS based.
I have to connect to many server machines by ssh-ing into them.
But if I don't use the terminal for some time, the connections get dropped, and then I have to close my terminal and log in again with ssh.
Are there any plugins that can help me in this case?
I think there is built-in functionality in ssh that solves your problem.
From man ssh_config:
ServerAliveInterval
Sets a timeout interval in seconds after which if no data has been received from the server, ssh(1) will send a message through the encrypted channel to request a response from the server. The default is 0, indicating that these messages will not be sent to the server. This option applies to protocol version 2 only.
By default, keep-alives are disabled, but you can enable them for a single connection by passing the ServerAliveInterval parameter with the -o option:
ssh -oServerAliveInterval=<time in seconds> <rest of your ssh command arguments>
If you would like this configuration for all of your SSH connections, it's easier to put the following in your ~/.ssh/config:
Host *
    ServerAliveInterval <time in seconds>
Furthermore, there is a second parameter affecting the keep-alive behaviour: ServerAliveCountMax (see man ssh_config).
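A concrete sketch combining the two (the values are illustrative): probe every 60 seconds and give up after 3 unanswered probes, so a dead connection is torn down after roughly 180 seconds:

Host *
    # send a keep-alive probe after 60s without server data
    ServerAliveInterval 60
    # disconnect after 3 consecutive unanswered probes
    ServerAliveCountMax 3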
I've found a nice article about the ServerAlive parameters: How to Keep Alive SSH Sessions
I have an SSH tunnel from an EC2 instance (say A) to another with an Elastic IP (say B). It worked perfectly. Then B had a failure, so I had to stop it and start a new instance with the same Elastic IP. Now the exact same SSH tunnel does not work anymore. Yet:
I can still SSH from A to B. So I know my keys are in place
I tried the exact same tunnel from another instance than A, and it works as expected.
So somehow, it is as if A detected a problem when B went down, and it is now blocking the traffic.
Tunnel:
/usr/bin/ssh -o StrictHostKeyChecking=no -i /path_to/id_dsa -f -p 22 -N -L 26:www.foo.com:80 ssh_tunnel@amazon_public_ip
And when I try curl, here is what I get:
curl -v -H "Host: www.foo.com" http://localhost:26/foofoo
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 26 failed: Connection refused
* Failed to connect to localhost port 26: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 26: Connection refused
Am I missing something?
I found the issue. I did not pay attention, but when I was SSH-ing into the instance, I was getting a warning message: WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!. Since the login worked anyway, I thought it was not a problem. It turns out it makes the tunnel fail.
So I just removed the offending RSA key from known_hosts and now it works.
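For reference, the stale entry can be removed without editing known_hosts by hand (a sketch; substitute the hostname or IP under which the old key was recorded):

# drop the old host key so the new instance's key can be accepted
ssh-keygen -R amazon_public_ip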
I'm using SSH to access my university's AFS system. I like to use rmate (remote TextMate), which requires SSH tunneling, so I included this alias in my .bashrc:
alias sshr='ssh -R 52698:localhost:52698 username@corn.myschool.edu'
It has always worked until now.
I had the same problem. To find what is already holding the port open, issue this command on the corn.myschool.edu machine:
sudo netstat -plant | grep 52698
And then kill all of the processes that come up with this (replace xxxx with the process ids)
sudo kill -9 xxxx
(UPDATED: changed the option to be -plant as it is a nice mnemonic)
I had another SSH connection open. I just needed to close that connection before I opened my SSH tunnel.
Further Explanation:
Once one ssh connection has been established, subsequent connections will produce a message:
Warning: remote port forwarding failed for listen port 52698
This message is harmless: the forward can only be set up once, and one forward will work for all ssh connections to the same machine. The original ssh session that opened the forward will stay open when you exit the shell, until all remote editing sessions are finished.
I experienced this problem, but it was while connecting to a server on which I don't have sudo privileges, so the top response suggesting running sudo netstat ... wasn't feasible for me.
I eventually figured out it was because there were still instances of rmate running, so I used ps to list the running processes and then kill -9 pid (where pid is the process ID for rmate).
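Roughly, without sudo (a sketch; assumes pkill is available on the server):

# kill any lingering rmate processes owned by the current user
pkill -9 -u "$USER" rmate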
This solved my problem reported here as well. To avoid this notification, AllowTcpForwarding should be enabled in the SSH config.
In my case, the problem was that the remote system didn't have DNS properly set up and couldn't even resolve its own hostname. Make sure there is a working DNS configuration in /etc/resolv.conf on the remote system.
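A quick sanity check on the remote system (a sketch):

# show the resolver configuration, then confirm the host can resolve its own name
cat /etc/resolv.conf
getent hosts "$(hostname)"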