I want to copy data with scp in a GitLab pipeline, using a private key stored in the SSH_PRIVATE_KEY variable.
The error is:
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
lost connection
Pipeline log:
$ mkdir -p ~/.ssh
$ echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/id_rsa
$ eval "$(ssh-agent -s)"
Agent pid 22
$ ssh-add ~/.ssh/id_rsa
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
$ ssh-keyscan -H $IP >> ~/.ssh/known_hosts
# x.x.x.x:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.10
# x.x.x.x:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.10
$ scp -rv api.yml root@$IP:/home/services/test/
Executing: program /usr/bin/ssh host x.x.x.x, user root, command scp -v -r -t /home/services/test/
OpenSSH_8.6p1, OpenSSL 1.1.1l 24 Aug 2021
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling
debug1: Connecting to x.x.x.x [x.x.x.x] port 22.
debug1: Connection established.
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa_sk type -1
debug1: identity file /root/.ssh/id_ecdsa_sk-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: identity file /root/.ssh/id_ed25519_sk type -1
debug1: identity file /root/.ssh/id_ed25519_sk-cert type -1
debug1: identity file /root/.ssh/id_xmss type -1
debug1: identity file /root/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.6
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
lost connection
kex_exchange_identification: read: Connection reset by peer
When an SSH client connects to an SSH server, the server starts by sending a version string to the client. The error that you're getting means that the TCP connection from the client to the server was "abnormally closed" while the client was waiting for this data from the server, in other words immediately after the TCP connection was opened.
As a practical matter, it's likely to mean one of two things:
The SSH server process malfunctioned (crashed), or perhaps it detected some serious issue causing it to exit immediately.
Some firewall is interfering with connections to the ssh server.
It looks like the ssh-keyscan program was able to connect to the server and get a version string without an error. So the SSH server process is apparently able to talk to a client without crashing.
You should talk to the administrators of this x.x.x.x host and the network it's attached to, to see if they can identify the problem from their end. It's possible that something, either a firewall or the ssh server process itself, is seeing the multiple connections (first from the ssh-keyscan process, then from the scp program) as an intrusion attempt, and is blocking the second connection attempt.
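One quick way to test that theory from the client, assuming netcat (nc) is available in the job image: open two raw TCP connections to port 22 back to back and see whether the second one still receives the server's version banner.
$ nc -w 5 $IP 22   # first connection: should print e.g. SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.10
$ nc -w 5 $IP 22   # immediately again: a reset here suggests rate limiting (e.g. fail2ban or sshd's MaxStartups)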
I had the same problem. I rebooted the server, then it was all good.
I ran into this issue after changing my Apple ID password; I updated my Apple ID on the Mac and restarted it, and it works now.
git pull origin master
Output:
kex_exchange_identification: read: Connection reset by peer
Connection reset by 20.205.243.166 port 22
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
TL;DR:
Find the server-side process listening on the SSH port and kill it, then restart the ssh service. That should solve the problem.
On the client side:
ssh account@ip -p PORT
kex_exchange_identification: read: Connection reset by peer
On the server side, I tried:
service ssh status
[ ok ] sshd is running.
service ssh restart
[ ok ] Restarting OpenBSD Secure Shell server: sshd.
but the client-side ssh command still failed with the same kex_exchange_identification error.
Then I stopped the ssh service on the server side (as root):
service ssh stop
[ ok ] Stopping OpenBSD Secure Shell server: sshd.
The client-side ssh command still failed with the same kex_exchange_identification error. That is strange: if no process were listening on the port, the error should be Connection refused.
So the server-side process listening on the SSH port may be stuck in a state where even restarting or stopping the service has no effect. Finding that process and killing it may solve the problem.
PORT below is the SSH port defined in the server's /etc/ssh/sshd_config; the default is 22. As root:
netstat -ap | grep PORT
tcp 0 0 0.0.0.0:PORT 0.0.0.0:* LISTEN 8359/sshd
tcp6 0 0 [::]:PORT [::]:* LISTEN 8359/sshd
kill 8359
netstat -ap | grep PORT
(no result)
service ssh start
[ ok ] Starting OpenBSD Secure Shell server: sshd.
netstat -ap | grep PORT
tcp 0 0 0.0.0.0:PORT 0.0.0.0:* LISTEN 31418/sshd: /usr/sb
tcp6 0 0 [::]:PORT [::]:* LISTEN 31418/sshd: /usr/sb
After this, the client-side ssh command succeeded.
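On newer distributions where netstat is not installed, ss from iproute2 reports the same information (a sketch, run as root, with 22 substituted for PORT; the output line is illustrative):
$ ss -tlnp | grep ':22'
LISTEN 0 128 0.0.0.0:22 0.0.0.0:* users:(("sshd",pid=31418,fd=3))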
I suggest checking the routing table as one more possibility. In my case, on Ubuntu 20.04 (Focal Fossa), I got the same error message when connecting to the server over SSH, and recovered by re-adding a local-network routing entry. That entry had disappeared unexpectedly, leaving only the default route.
route -n
Output:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 enp1s0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 enp1s0   # <= disappeared
It seemed as if the ACK was being dropped because of the incomplete routing table, although the initial SYN got through.
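For reference, the missing entry can be re-added with either the legacy route tool or iproute2 (a sketch, assuming the subnet and interface from the output above):
# legacy net-tools syntax:
route add -net 192.168.1.0 netmask 255.255.255.0 dev enp1s0
# iproute2 equivalent:
ip route add 192.168.1.0/24 dev enp1s0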
Similar to naoki-ogawa, I had a problem with my routing table. In my case, I had an extra route for my local network.
As root:
route
Output:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default RT-AX92U-3E20 0.0.0.0 UG 100 0 0 eno1
link-local 0.0.0.0 255.255.0.0 U 1000 0 0 virbr1
192.168.50.0 RT-AX92U-3E20 255.255.255.0 UG 10 0 0 eno1
192.168.50.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1
I simply removed the gateway on the local network (192.168.50.0):
ip route del 192.168.50.0/24 via 192.168.50.1
The problem was resolved.
For those who came across this page after upgrading a FreeBSD machine to 13.1 and then trying to ssh into it, see Bug 263489: sshd does not work after reboot to 13.1-RC4.
After the upgrade, the previous sshd daemon (OpenSSH < 8.2) is still running with the new configuration (OpenSSH >= 8.2). The solution is to stop and then restart the sshd daemon. The FreeBSD 13.1 release notes now mention this, and after 13.1 the freebsd-update script will restart the daemon automatically.
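For reference, the restart on FreeBSD is a one-liner, run as root (assuming sshd is enabled in /etc/rc.conf):
service sshd restart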
I had this error today when trying to SSH from my Dell laptop running Ubuntu 20.04.5 LTS (Focal Fossa) into a Raspberry Pi. With both the laptop and the Pi on my home Wi-Fi network, I got the error:
ssh pi@10.0.0.200
Output:
kex_exchange_identification: read: Connection reset by peer
However, when I switched my Ubuntu Laptop over to a mobile hotspot, the error disappeared, and I was able to SSH without issue. Will update this post as soon as I figure out how to resolve the root cause.
Issue resolved (though the full reason is unclear). I followed the instructions here to change my DNS servers to 8.8.8.8 and 8.8.4.4.
After about 5 minutes had elapsed, I was able to use SSH from my command line terminal just fine.
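For reference, on Ubuntu 20.04 the DNS servers can also be changed from the command line with nmcli (a sketch, assuming the NetworkManager connection is named "Wired connection 1"):
$ nmcli con mod "Wired connection 1" ipv4.dns "8.8.8.8 8.8.4.4"
$ nmcli con mod "Wired connection 1" ipv4.ignore-auto-dns yes
$ nmcli con up "Wired connection 1"   # reapply the connection so the new DNS takes effect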
Error
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
I have an Ubuntu 20.04 host and two RHEL 8 VMs running on VMware, and I log into the two VMs from my Ubuntu terminal. The host has both Ethernet and Wi-Fi connections. Every time I try to log into a VM after rebooting it, I get the error above.
Restarting the sshd service did not solve the problem. Sometimes the problem would be resolved if I physically disconnected and reconnected the Ethernet cable.
Finally I turned off my Wi-Fi connection with:
nmcli conn down <name_of_Wi-Fi_connection>
(turning it off from the settings UI works too), and this gave me a permanent solution.
Both my Ethernet and Wi-Fi connections (static connections) had the same IP address, so I think the VMs were rejecting two "suspicious" similar connections.
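A quick way to confirm the duplicate-address theory before tearing a connection down (the interface names and address below are illustrative):
$ ip -4 addr show | grep 'inet '
inet 127.0.0.1/8 scope host lo
inet 192.168.1.50/24 brd 192.168.1.255 scope global eno1
inet 192.168.1.50/24 brd 192.168.1.255 scope global wlp2s0   # same address on two interfaces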
Check whether the OpenSSH server is up and running on the server side, and check the sshd configuration. It worked this way for me.
I had the same issue and fixed it with the steps below.
Edit the file /etc/hosts.allow, e.g. with sudo nano /etc/hosts.allow.
At the end, change the value for the ALL key to ALL, i.e. ALL : ALL. Save the file and try again.
Basically, ALL might be restricted to something else. For example, if the line reads ALL : 10., sshd expects connection requests to come only from IP addresses starting with 10.; by replacing that restriction with ALL : ALL, you allow connections from everywhere (see the sketch below).
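A minimal sketch of the change (the 10. line is illustrative; in hosts.allow syntax a trailing dot matches every address in that prefix):
# /etc/hosts.allow, before:
ALL : 10.
# after:
ALL : ALL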
You can try a VPN, or if you were already using one, try turning it off and connecting again.
If you don't have the budget for a VPN, you can try ProtonVPN, which is free. It worked for me when I faced the same problem.
I installed RHEL 8.2 with a free developer license (on bare hardware). It looks like sshd is installed and running by default, with port 22 already open; I did not have to do anything to install sshd or open the port.
[root@<hostname> etc]# systemctl status sshd
● sshd.service - OpenSSH server daemon
Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2020-08-17 13:35:12 MDT; 1h 7min ago
...
but on Windows 10 Pro (with the Cygwin ssh client installed), running
ssh <user>@<ip-address>
I get this error
ssh: connect to host <ip-address> port 22: Permission denied
On the RHEL 8.2 installation itself, in a Bash terminal, I can successfully ssh locally: ssh <user>@<ip-address> works OK.
Any ideas?
This is what I am getting:
From: 192.168.0.153
To: 192.168.0.106
$ ssh -Tv <user>@<ip-address>
OpenSSH_8.3p1, OpenSSL 1.1.1f 31 Mar 2020
debug1: Connecting to 192.168.0.106 [192.168.0.106] port 22.
debug1: connect to address 192.168.0.106 port 22: Permission denied
ssh: connect to host 192.168.0.106 port 22: Permission denied
but on 192.168.0.106, it is showing sshd running and port 22 open.
On the machine itself, I can ssh ($ ssh <user>@localhost works)
On the server I want to reach, it shows port 22 as open, ssh service enabled (192.168.0.106)
# firewall-cmd --list-all
public (active)
...
interfaces: enp37s0
services: cockpit dhcpv6-client http ssh
ports: 22/tcp
...
First, check the output of ssh -Tv <user>@<ip-address>
It will tell you:
if it can actually contact the server
what local private key it is using
Make sure you have:
generated a public/private key pair in %USERPROFILE%\.ssh with the OpenSSH ssh-keygen command (ssh-keygen -m PEM -t rsa -P "")
added the content of id_rsa.pub to ~user/.ssh/authorized_keys on the server side (a sketch of both steps follows).
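A minimal sketch of those two steps from the client side (assuming ssh-copy-id is available in the Cygwin install; otherwise append the key by hand):
$ ssh-keygen -m PEM -t rsa -P ""
$ ssh-copy-id <user>@<ip-address>
# without ssh-copy-id:
$ cat ~/.ssh/id_rsa.pub | ssh <user>@<ip-address> 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'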
I had this problem. My virtual machine was set up for a wired connection, and I had to turn that connection on in the Red Hat settings: Settings -> Network -> Wired toggle: ON.
Once I turned on the wired connection I was able to make my ssh connections externally.
I have a master and a slave.
I can connect via ssh from master to the slave.
Ansible can't connect from master to the slave.
Question: What am I doing wrong, so that Ansible can't connect, but ssh can?
Successful connection from master to slave via ssh
vagrant@master:~$ ssh slave.local
Enter passphrase for key '/home/vagrant/.ssh/id_rsa':
vagrant@slave.local's password:
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-87-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
17 packages can be updated.
9 updates are security updates.
----------------------------------------------------------------
Ubuntu 16.04.3 LTS built 2017-09-08
----------------------------------------------------------------
Last login: Thu Sep 28 15:20:21 2017 from 10.0.0.10
vagrant@slave:~$
Ansible error: "Permission denied (publickey,password)"
vagrant@master:~$ ansible all -m ping -u vagrant
The authenticity of host 'slave.local (10.0.0.11)' can't be established.
ECDSA key fingerprint is SHA256:tRGlinvTj/c2gpTayZ/mYzyWbs63s+BUX81TdKJ+0jQ.
Are you sure you want to continue connecting (yes/no)? yes
Enter passphrase for key '/home/vagrant/.ssh/id_rsa':
slave.local | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: Warning: Permanently added 'slave.local' (ECDSA) to the list of known hosts.\r\nPermission denied (publickey,password).\r\n",
"unreachable": true
}
This is my hosts file
vagrant@master:~$ cat /etc/ansible/hosts
[web]
slave.local
The solution was to add the private key, in OpenSSH format, to the file /home/vagrant/.ssh/id_rsa.
This is where Ansible looks for the key by default.
I found this out by running Ansible in verbose mode with the -vvvv flag:
ansible all -m ping -u vagrant -vvvv
The verbose output was
10.0.0.11 | UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: OpenSSH_7.2p2 Ubuntu-4ubuntu2.2, OpenSSL 1.0.2g 1 Mar 2016\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/home/vagrant/.ansible/cp/a72f4dc97e\" does not exist\r\ndebug2: resolving \"10.0.0.11\" port 22\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 10.0.0.11 [10.0.0.11] port 22.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: fd 3 clearing O_NONBLOCK\r\ndebug1: Connection established.\r\ndebug3: timeout: 10000 ms remain after connect\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file /home/vagrant/.ssh/id_rsa type -1\r\ndebug1: key_load_public: No such file or directory\r\ndebug1: identity file ...
I enabled Docker on Red Hat 6.5 by following the CentOS 6 guide and created a Red Hat 6.5 base image. The image runs fine in a container, but once I enable sshd in the image, sshd always terminates my SSH client immediately after a successful login.
If I save the image and load it into Docker on Ubuntu 14.04.1, the Red Hat 6.5 sshd works fine, so the image itself seems OK. Conversely, if I save my Ubuntu ssh image and load it onto the Red Hat 6.5 host, the Ubuntu sshd also works fine in a container there. So I really do not understand why my Red Hat 6.5 sshd image does not work in a container on the Red Hat 6.5 host.
My Docker information:
[root@c111bc2n10e1 ~]# docker info
Containers: 4
Images: 32
Storage Driver: devicemapper
Pool Name: docker-8:3-1572873-pool
Data file: /var/lib/docker/devicemapper/devicemapper/data
Metadata file: /var/lib/docker/devicemapper/devicemapper/metadata
Data Space Used: 2501.9 Mb
Data Space Total: 102400.0 Mb
Metadata Space Used: 3.1 Mb
Metadata Space Total: 2048.0 Mb
Execution Driver: native-0.2
Kernel Version: 2.6.32-431.el6.x86_64
Username: apollos
Registry: [https://index.docker.io/v1/]
SSH Server:
debug1: Setting controlling tty using TIOCSCTTY.
debug1: Received SIGCHLD.
debug1: session_by_pid: pid 7
debug1: session_exit_message: session 0 channel 0 pid 7
debug1: session_exit_message: release channel 0
SSH Client:
debug1: PAM: reinitializing credentials
debug1: permanently_set_uid: 0/0
Connection to 9.114.46.152 closed.
I found a workaround, and there are two options:
1) Set UsePAM no in /etc/ssh/sshd_config
or
2) Comment out session required pam_loginuid.so in /etc/pam.d/sshd
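A minimal sketch of option 1, applied inside the container (the sed pattern assumes the stock UsePAM yes line; adjust if your sshd_config differs):
sed -i 's/^UsePAM yes/UsePAM no/' /etc/ssh/sshd_config
service sshd restart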
But I do not understand the root cause. Can anyone help?
When running
ssh -v myuser@xx.xxx.xxx.xx
I connect to the server and the session works normally.
When running
ssh myuser@xx.xxx.xxx.xx
the command returns
ssh: connect to host xx.xxx.xxx.xx port 22: Operation timed out
This behaviour appeared after I ran the following on the server:
ssh-add ~/.ssh/id_rsa
So adding the identity to the agent has messed up ssh... How do I fix this?