How can I fix "kex_exchange_identification: read: Connection reset by peer"? - ssh

I want to copy data with scp in a GitLab pipeline, authenticating with a private key stored in the $SSH_PRIVATE_KEY CI variable.
The error is:
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
lost connection
Pipeline log:
$ mkdir -p ~/.ssh
$ echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/id_rsa
$ eval "$(ssh-agent -s)"
Agent pid 22
$ ssh-add ~/.ssh/id_rsa
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
$ ssh-keyscan -H $IP >> ~/.ssh/known_hosts
# x.x.x.x:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.10
# x.x.x.x:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.10
$ scp -rv api.yml root@$IP:/home/services/test/
Executing: program /usr/bin/ssh host x.x.x.x, user root, command scp -v -r -t /home/services/test/
OpenSSH_8.6p1, OpenSSL 1.1.1l 24 Aug 2021
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling
debug1: Connecting to x.x.x.x [x.x.x.x] port 22.
debug1: Connection established.
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa_sk type -1
debug1: identity file /root/.ssh/id_ecdsa_sk-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: identity file /root/.ssh/id_ed25519_sk type -1
debug1: identity file /root/.ssh/id_ed25519_sk-cert type -1
debug1: identity file /root/.ssh/id_xmss type -1
debug1: identity file /root/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.6
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
lost connection

kex_exchange_identification: read: Connection reset by peer
When an SSH client connects to an SSH server, the server starts by sending a version string to the client. The error that you're getting means that the TCP connection from the client to the server was "abnormally closed" while the client was waiting for this data from the server, in other words immediately after the TCP connection was opened.
As a practical matter, it's likely to mean one of two things:
The SSH server process malfunctioned (crashed), or perhaps it detected some serious issue causing it to exit immediately.
Some firewall is interfering with connections to the ssh server.
It looks like the ssh-keyscan program was able to connect to the server and get a version string without an error. So the SSH server process is apparently able to talk to a client without crashing.
You should talk to the administrators of this x.x.x.x host and the network it's attached to, to see if they can identify the problem from their end. It's possible that something (a firewall, or the ssh server process itself) is treating the multiple connections, first from the ssh-keyscan process and then from the scp program, as an intrusion attempt, and is blocking the second connection attempt.
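If you have access to the server, a quick way to check for this kind of blocking (a sketch, assuming an iptables-based firewall and that fail2ban may be present; substitute your runner's actual IP for <client-ip>):

# on the server, as root
iptables -L -n | grep <client-ip>   # any DROP/REJECT rule matching the client?
fail2ban-client status sshd         # if fail2ban is installed: list banned IPs
tail -f /var/log/auth.log           # watch sshd's view of the incoming connection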

I had the same problem. I rebooted the server, then it was all good.

I ran into this issue after I changed my Apple ID password. Running:
git pull origin master
produced:
kex_exchange_identification: read: Connection reset by peer
Connection reset by 20.205.243.166 port 22
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
I updated my Apple ID and restarted my Mac, and it works now.

TL;DR:
Find the server-side process listening on the SSH port and kill it, then restart the ssh service. That should solve the problem.
On the client side:
ssh account@ip -p PORT
kex_exchange_identification: read: Connection reset by peer
I tried it on the server side:
service ssh status
[ ok ] sshd is running.
service ssh restart
[ ok ] Restarting OpenBSD Secure Shell server: sshd.
but the client-side ssh command still failed with the same kex_exchange_identification error.
Then I stop the ssh service on the server side (as root):
service ssh stop
[ ok ] Stopping OpenBSD Secure Shell server: sshd.
A subsequent client-side ssh command still failed with the same kex_exchange_identification error. That's strange: if no process were listening on the port, the error should be Connection refused.
It could be that the server-side process listening on the SSH port was hung, so that even restarting or stopping the service had no effect. Finding that process and killing it may solve the problem.
PORT below is the SSH port defined in the server's /etc/ssh/sshd_config; the default is 22. As root:
netstat -ap | grep PORT
tcp 0 0 0.0.0.0:PORT 0.0.0.0:* LISTEN 8359/sshd
tcp6 0 0 [::]:PORT [::]:* LISTEN 8359/sshd
kill 8359
netstat -ap | grep PORT
no result
service ssh start
[ ok ] Starting OpenBSD Secure Shell server: sshd.
netstat -ap | grep PORT
tcp 0 0 0.0.0.0:PORT 0.0.0.0:* LISTEN 31418/sshd: /usr/sb
tcp6 0 0 [::]:PORT [::]:* LISTEN 31418/sshd: /usr/sb
After this, the client-side ssh command succeeded.

One more thing to check is the routing table. In my case, on Ubuntu 20.04 (Focal Fossa), I got the same error message when connecting to the server using SSH, and recovered by re-adding a local-network routing entry. It had disappeared unexpectedly, leaving only the default route.
route -n
Output:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 enp1s0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 enp1s0   # <= this entry had disappeared
It seemed as if the ACK was being filtered because of the incomplete routing table, even though the initial SYN got through.
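Restoring the missing entry might look like this (a sketch, assuming the 192.168.1.0/24 network and the enp1s0 interface from the output above):

sudo ip route add 192.168.1.0/24 dev enp1s0
# or with the legacy net-tools command:
sudo route add -net 192.168.1.0 netmask 255.255.255.0 dev enp1s0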

Similar to naoki-ogawa, I had a problem with my routing table. In my case, I had an extra route for my local network.
As root:
route
Output:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default RT-AX92U-3E20 0.0.0.0 UG 100 0 0 eno1
link-local 0.0.0.0 255.255.0.0 U 1000 0 0 virbr1
192.168.50.0 RT-AX92U-3E20 255.255.255.0 UG 10 0 0 eno1
192.168.50.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1
I simply removed the gateway on the local network (192.168.50.0):
ip route del 192.168.50.0/24 via 192.168.50.1
The problem was resolved.

For those who came across this page after upgrading a FreeBSD machine to 13.1 and then trying to ssh into it, see Bug 263489, "sshd does not work after reboot to 13.1-RC4".
After the upgrade, the previous sshd daemon (OpenSSH < 8.2) is still running with the new configuration (OpenSSH >= 8.2). The solution is to stop and then start the sshd daemon again. The FreeBSD 13.1 release notes now mention this, and after 13.1 the freebsd-update script will restart the daemon automatically.
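Since the broken daemon can't be reached over SSH, this has to be done from the local console or an out-of-band session (a minimal sketch using FreeBSD's service tool):

service sshd stop
service sshd start
# or equivalently:
service sshd restart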

I had this error today on a Dell laptop running Ubuntu 20.04.5 LTS (Focal Fossa) while trying to SSH into a Raspberry Pi. With both the laptop and the Pi on my home Wi-Fi network, I got the error:
ssh pi@10.0.0.200
Output:
kex_exchange_identification: read: Connection reset by peer
However, when I switched my Ubuntu laptop over to a mobile hotspot, the error disappeared and I was able to SSH without issue.
Update: issue resolved (though the root cause is still unclear). I followed the instructions here to change my DNS servers to 8.8.8.8 and 8.8.4.4, and after about 5 minutes I was able to use SSH from my command-line terminal just fine.
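On an Ubuntu 20.04 desktop, that DNS change can also be made from the command line (a sketch, assuming NetworkManager manages the Wi-Fi connection; "Home-WiFi" is a placeholder for your connection name as listed by nmcli connection show):

nmcli connection modify "Home-WiFi" ipv4.dns "8.8.8.8 8.8.4.4" ipv4.ignore-auto-dns yes
nmcli connection down "Home-WiFi" && nmcli connection up "Home-WiFi"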

I have an Ubuntu 20.04 host and two RHEL 8 VMs running on VMware, and I log in to the two VMs from my Ubuntu terminal. I use both Ethernet and Wi-Fi connections. Every time I tried to log in to a VM after rebooting it, I got the error:
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
Restarting the sshd service did not solve the problem. Sometimes the problem would go away if I physically disconnected and reconnected the Ethernet cable.
Finally I turned off my Wi-Fi connection with:
nmcli conn down <name_of_Wi-Fi_connection>
(or turn it off from the system settings). This gave me a permanent solution.
Both my Ethernet and Wi-Fi connections (static connections) had the same IP address, so I think the VMs were rejecting the two "suspicious" similar connections.

Check whether the OpenSSH server is up and running on the server side, and check the sshd configuration. That worked for me.
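Both checks in a couple of commands (a sketch, assuming a systemd-based server; the unit may be named ssh rather than sshd on Debian/Ubuntu):

systemctl status sshd    # is the daemon actually running?
sshd -t                  # test mode: validates sshd_config, prints nothing if it's OK
journalctl -u sshd -e    # recent sshd log messages, in case it's crashing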

I had the same issue and fixed it by doing the steps below.
Edit the file /etc/hosts.allow, for example with sudo nano /etc/hosts.allow.
At the end of the file, set the value of the ALL rule to ALL, i.e. ALL : ALL. Save the file and try again.
The ALL rule might be set to something more restrictive. For example, if it is set to ALL : 10., then the host only accepts ssh connections from IP addresses starting with 10.; replacing 10. with ALL allows connections from everywhere.
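For illustration, the change in /etc/hosts.allow might look like the snippet below. Note that ALL : ALL trades security for convenience; a narrower rule that only opens sshd to your own subnet is safer:

# before: any service, but only from hosts whose address starts with 10.
ALL : 10.

# after: any service, from anywhere
ALL : ALL

# safer alternative: only sshd, only from one subnet
# sshd : 192.168.1.0/255.255.255.0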

You can try a VPN, or, if you were already using one, try turning it off and connecting again.
If you don't have a budget for a VPN, you can try ProtonVPN, which is free. It worked for me when I faced the same problem.

Related

Why can I not ssh into RHEL 8.2 when sshd is running and port 22 shows it's open?

I installed RHEL 8.2 with a free developer license (on bare hardware). It looks like sshd is installed and running by default, with port 22 already open; I did not have to do anything to install sshd or open the port.
[root@<hostname> etc]# systemctl status sshd
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-08-17 13:35:12 MDT; 1h 7min ago
...
but on Windows 10 Pro (with cygwin ssh client installed),
ssh <user>@<ip-address>
I get this error
ssh: connect to host <ip-address> port 22: Permission denied
On the RHEL 8.2 installation, in a bash terminal, I can ssh locally: ssh <user>@<ip-address> works OK.
Any ideas?
This is what I am getting:
From: 192.168.0.153
To: 192.168.0.106
$ ssh -Tv <user>@<ip-address>
OpenSSH_8.3p1, OpenSSL 1.1.1f 31 Mar 2020
debug1: Connecting to 192.168.0.106 [192.168.0.106] port 22.
debug1: connect to address 192.168.0.106 port 22: Permission denied
ssh: connect to host 192.168.0.106 port 22: Permission denied
but on 192.168.0.106, it is showing sshd running and port 22 open.
On the machine itself, I can ssh ($ ssh <user>@localhost works)
On the server I want to reach, it shows port 22 as open, ssh service enabled (192.168.0.106)
# firewall-cmd --list-all
public (active)
...
interfaces: enp37s0
services: cockpit dhcpv6-client http ssh
ports: 22/tcp
...
First, check the output of ssh -Tv <user>@<ip-address>
It will tell you:
if it can actually contact the server
what local private key it is using
Make sure you have:
generated a public/private key pair in %USERPROFILE%\.ssh, using the OpenSSH ssh-keygen command (ssh-keygen -m PEM -t rsa -P "")
added the content of id_rsa.pub to ~user/.ssh/authorized_keys on the server side.
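One way to install the public key on the server (a sketch; ssh-copy-id ships with most OpenSSH client packages, and the manual pipe works where it doesn't):

ssh-copy-id <user>@<ip-address>
# or manually:
cat ~/.ssh/id_rsa.pub | ssh <user>@<ip-address> 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'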
I had this problem: my virtual machine was set up for a wired connection, and I had to turn on the wired connection in the Red Hat settings (Settings -> Network -> Wired toggle: ON).
Once I turned on the wired connection, I was able to make my ssh connections externally.

SSH connection refused with DNS

I have searched and I know people have asked this before, but I have been through all the settings and double- and triple-checked everything, and I can't get it to work for the life of me. I have not had this problem before with other machines, and I don't know why this isn't working.
*note: numbers have been changed for security reasons
Here is what I have:
Client
Raspberry Pi 3 with IP: 192.168.0.133
manual port in the Raspberry Pi 3 sshd_config file: 1502
Router:
NAT Virtual Server:
External port: 1502
Internal port: 22
IP address: 192.168.0.133
DNS(duckdns.org)
- checked to make sure the public IP address points to the domain: testing2.duckdns.org
ssh command that works on client:
ssh -p 1502 client@192.168.0.133
ssh command that doesn't work
ssh -p 1502 client@testing2.duckdns.org
So I'm not sure where it's going wrong. Here is the output from ssh -v -p 1502 client@testing2.duckdns.org:
OpenSSH_7.4p1, LibreSSL 2.5.0
debug1: Reading configuration data /Users/testing/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Connecting to testing2.duckdns.org [188.45.22.61] port 1502.
debug1: connect to address 188.45.22.61 port 1502: Connection refused
ssh: connect to host testing2.duckdns.org port 1502: Connection refused
Any ideas? I really would appreciate any insights.
EDIT: To add some more clarifying information:
When I run the second command listed above, that is failing:
ssh -p 1502 client@testing2.duckdns.org
This goes out to the DNS name I have set up (testing2.duckdns.org), which points to my network's public-facing IP address. At that point, it hits the NAT routing rule I have set up on my router, which forwards any request on external port 1502 to internal port 22 at the IP address 192.168.0.133.
This is why I don't understand where it is failing, all the rules are there and the route should be open. Is there a setting on the raspberry pi (within the config file) that I'm missing?
So I figured out what was going on, and although I am still not able to connect remotely, I have solved what the original question posed.
The problem was that I had changed the port on the Raspberry Pi (the internal port) to 1502. The router was forwarding correctly (from external 1502 to internal 22), but sshd was actually listening on 1502, so the forwarded connection failed.
This also explains why the local command above worked: locally, sshd really was on port 1502.
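Given that NAT rule, the fix is to put sshd back on the port the router forwards to (a sketch; on Raspberry Pi OS the service is usually named ssh):

# on the Raspberry Pi, in /etc/ssh/sshd_config:
Port 22

# then restart the daemon:
sudo systemctl restart ssh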
I still can't connect remotely because the Raspberry Pi is running a VPN, which causes the SSH request to time out, but that is a separate question.
Thanks for the help everyone!

Why does a Google Cloud Compute Engine instance give ssh connection refused after a restart?

I stopped and restarted an Ubuntu 14.04 Google Cloud Compute Engine instance, and now my ssh connection is refused with:
ssh: connect to host 146.148.114.98 port 22: Connection refused
This already happened once before. I thought there was a problem with the machine, so I deleted and recreated it, and it started working again, but I don't want to be recreating instances every time. The SSH troubleshooting page of Google Cloud is quite messy. My firewall rules seem to be OK. Does anyone have a solution for this?
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
default-allow-http default 0.0.0.0/0 tcp:80 http-server
default-allow-https default 0.0.0.0/0 tcp:443 https-server
default-allow-icmp default 0.0.0.0/0 icmp
default-allow-internal default 10.128.0.0/9 tcp:0-65535,udp:0-65535,icmp
default-allow-rdp default 0.0.0.0/0 tcp:3389
default-allow-ssh default 0.0.0.0/0 tcp:22
This is the output for: ps aux | grep ssh
root 29 0.0 0.4 55184 2860 ? Ss 11:26 0:00 /usr/sbin/sshd -p 22 -o AuthorizedKeysCommand=/google/devshell/authorized_keys.sh -o AuthorizedKeysCommandUser=root
root 183 0.0 0.9 82692 5940 ? Ss 11:26 0:00 sshd: fbeshox [priv]
fbeshox 218 0.0 0.7 82692 4424 ? S 11:26 0:00 sshd: fbeshox@pts/0
fbeshox 522 0.0 0.3 12728 2200 pts/1 S+ 12:12 0:00 grep ssh
Here are the verbose results of the ssh connection attempt:
ssh -i .ssh/keyname username@130.211.53.51 -vvv
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /Users/xxxx/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: /etc/ssh_config line 102: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 130.211.53.51 [130.211.53.51] port 22.
debug1: connect to address 130.211.53.51 port 22: Connection refused
ssh: connect to host 130.211.53.51 port 22: Connection refused
It is possible that sshguard, a security tool installed on Ubuntu by default, is interfering with your connection. Basically, sshguard might have incorrectly decided that your IP address is 'attacking' your instance and blocked that IP.
If you can log in from a different location, such as the Web SSH provided by the Cloud Console, try using sudo iptables -S to see if there are any firewall rules on the instance (distinct from the GCE firewall) created by sshguard. If so, try disabling sshguard or adding your IP address to the whitelist (http://www.sshguard.net/docs/whitelist/).
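If sshguard does turn out to be the culprit, the cleanup might look like this (a sketch; the whitelist path varies by distribution, sshguard may not run as a systemd unit everywhere, and 203.0.113.7 is a placeholder for your client IP):

sudo iptables -S | grep -i sshguard    # inspect the rules sshguard has added
sudo systemctl stop sshguard           # stop it while you investigate
echo "203.0.113.7" | sudo tee -a /etc/sshguard/whitelist
sudo systemctl start sshguard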
I know this is an old question, but today I had a similar issue, and the problem was very simple: the IP of the instance changed after the restart, so I had to update my ssh command accordingly.
This issue often happens after you change your default zone or region.
You must then update the ssh keys in your metadata by running:
sudo gcloud compute config-ssh
You can also see the changes in the web interface under Compute Engine | Metadata | SSH Keys.
Try to SSH into the instance with a different username; Google Compute is a bit shaky at times. Try to SSH into the instance using the VM instance page in Compute Engine. If SSH takes too much time and refuses the connection, then log in with a different username; you can do that using the settings icon in the top-right corner of the SSH window. If none of this goes well, I advise you to create one more instance, as it has also been my experience that Google Compute Engine instances are not stable in terms of SSH accessibility and tend to create problems. It's better to use PuTTY as a client to SSH into Compute Engine than the SSH terminals Google provides. Let me know if that helps you :)
I figured out that my problem arose after detaching a disk from the console while the machine was stopped with that disk still mounted. First unmount the disk, then detach it. I have never seen the problem again.
Also make sure you did not recursively modify permissions for the /etc folder.
For example:
chmod -R 775 /etc
This will prevent you from logging back into the VM, even from the web console and the gcloud CLI.
Instead, modify permissions at a more granular level, e.g. /etc/nginx.
I had the same problem and solved it by first connecting to the VM via the browser SSH (from the "VM instances" web overview), which also failed, and then choosing the offered option of retrying without Cloud Identity-Aware Proxy, which worked.
Afterwards, all SSH connections worked again (both in the browser and from a local shell).

ssh connection refused from mac os x on connecting to a remote computer

I am running on Mac OS X 10.7.4.
I am unable to ssh to a remote computer, but when I do ssh user@localhost, it works fine.
The error displayed is
OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
debug1: Reading configuration data /etc/ssh_config
debug1: Applying options for *
debug1: Connecting to web.iiit.ac.in [14.139.82.8] port 22.
debug1: connect to address 14.139.82.8 port 22: Connection refused
ssh: connect to host web.iiit.ac.in port 22: Connection refused
I have enabled root login in System Preferences and disabled the firewall.
Also, ps aux | grep ssh shows only ssh-agent (/usr/bin/ssh-agent -l) and the grep ssh process itself in the output.
On doing sudo launchctl list|grep ssh it shows:
0 com.openssh.sshd
After this, sudo launchctl start com.openssh.sshd ; sudo launchctl list|grep ssh gives
45973 - com.openssh.sshd
Checking sudo launchctl list | grep ssh again after some time, it once more shows:
- 0 com.openssh.sshd
In System Preferences -> Sharing, enable Remote Login. That will fix it.
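The same setting can be flipped from a terminal (a sketch; systemsetup needs admin rights):

sudo systemsetup -setremotelogin on
sudo systemsetup -getremotelogin    # should now report: Remote Login: On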
Wireless connections have well-known problems with SSH's strict packet-receiving behavior. I have the same problem with a WiMAX connection. It would help if you could establish a VPN or any other kind of tunnel to the server.
See also this thread.
Your remote host probably doesn't have an SSH server running (or, if it does, it's not listening on port 22).
Your tests (ps aux, launchctl, etc.) won't help: the issue is on the remote host, not the local one. Your local SSH setup works, because you can connect to localhost, but the remote host 14.139.82.8 isn't accepting connections on port 22.
When I ran into this problem, I found that OpenSSH was not completely installed. Install it by typing into the terminal: sudo apt-get install openssh-client openssh-server
Also, check your firewall. The default SSH port is 22; open that port.
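For instance, with Ubuntu's default ufw firewall (a sketch; skip this if you use a different firewall front end):

sudo ufw allow 22/tcp
sudo ufw status    # confirm 22/tcp is now allowed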
Clean the known_hosts file and try again. That worked for me.

"ssh example.com" hangs but "ssh example.com bash -i" does not

Every day I encounter a very strange phenomenon.
From my university Internet connection, sshing to my machine ("ssh example.com") works without any problems.
From my home ADSL, with "ssh example.com", my console gets stuck after this message:
debug1: Server accepts key: pkalg ssh-rsa blen 533
debug1: Enabling compression at level 6.
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
Sometimes it might let me in, but in most cases it does not.
The funny thing is that if I execute "ssh example.com bash -i" I get logged in immediately.
I finally found the source of the problem: it has to do with the type-of-service (ToS) field of SSH's TCP packets.
When you ask for a regular ssh terminal, ssh sets the TCP type of service (ToS) to "interactive", and the router in my residence blocks those packets!
Tunneled through netcat, the TCP packets carry no type-of-service directives. Thus, if you tunnel all your ssh traffic through netcat, you reset the ToS of the TCP packets to the default.
In .ssh/config, you have to set:
Host *.example.com
ProxyCommand nc %h %p
So, each time you try to ssh to example.com, netcat will be called and will tunnel the packets.
As of OpenSSH 5.7, you can just add this to your ssh config file (either ~/.ssh/config or /etc/ssh/ssh_config):
Host *
IPQoS 0x00
This is a more-direct way to work around the problem Asterios identified.
I've just had the same problem. Try logging in with a different ssh client for more information. Whereas the Linux command-line client didn't come back with any useful message, PuTTY came back with "server refused to allocate pty". I fixed it with mkdir /dev/pts and mount -a. How it got that mucked up in the first place, I'm less sure.
BTW, bash -l should act like a login shell, so you should be able to prove Peter Westlake's suggestion correct or incorrect in your case fairly easily.
The difference between the two cases is that "bash -i" does not give you a login shell, but plain ssh does. You can "man bash" for the details of what a "login shell" is, but the main thing is that it runs /etc/profile and your .bash_profile. Have a look in those files for anything that might be causing a problem.
Maybe the server is out of ptys.