I want to copy data with scp in a GitLab pipeline, using a private key stored in a CI variable (SSH_PRIVATE_KEY).
The error is:
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
lost connection
Pipeline log:
$ mkdir -p ~/.ssh
$ echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/id_rsa
$ eval "$(ssh-agent -s)"
Agent pid 22
$ ssh-add ~/.ssh/id_rsa
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
$ ssh-keyscan -H $IP >> ~/.ssh/known_hosts
# x.x.x.x:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.10
# x.x.x.x:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.10
$ scp -rv api.yml root@$IP:/home/services/test/
Executing: program /usr/bin/ssh host x.x.x.x, user root, command scp -v -r -t /home/services/test/
OpenSSH_8.6p1, OpenSSL 1.1.1l 24 Aug 2021
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling
debug1: Connecting to x.x.x.x [x.x.x.x] port 22.
debug1: Connection established.
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa_sk type -1
debug1: identity file /root/.ssh/id_ecdsa_sk-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: identity file /root/.ssh/id_ed25519_sk type -1
debug1: identity file /root/.ssh/id_ed25519_sk-cert type -1
debug1: identity file /root/.ssh/id_xmss type -1
debug1: identity file /root/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.6
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
lost connection
kex_exchange_identification: read: Connection reset by peer
When an SSH client connects to an SSH server, the server starts by sending a version string to the client. The error that you're getting means that the TCP connection from the client to the server was "abnormally closed" while the client was waiting for this data from the server, in other words immediately after the TCP connection was opened.
As a practical matter, it's likely to mean one of two things:
The SSH server process malfunctioned (crashed), or perhaps it detected some serious issue causing it to exit immediately.
Some firewall is interfering with connections to the ssh server.
It looks like the ssh-keyscan program was able to connect to the server and get a version string without an error. So the SSH server process is apparently able to talk to a client without crashing.
You should talk to the administrators of the x.x.x.x host and the network it's attached to, to see if they can identify the problem from their end. It's possible that something (a firewall, or the SSH server process itself) is treating the multiple connections, first from ssh-keyscan and then from scp, as an intrusion attempt, and is blocking the second connection attempt.
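If a connection-rate limit like that (for example fail2ban, or sshd's MaxStartups throttling) turns out to be the cause, one workaround is to drop the extra probe connection entirely by pre-populating known_hosts from a CI variable instead of running ssh-keyscan inside the job. A minimal sketch; SSH_KNOWN_HOSTS is a hypothetical variable you would create yourself and fill with the server's host key line:
mkdir -p ~/.ssh
echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts   # host key recorded once, outside the job
chmod 644 ~/.ssh/known_hosts
scp -r api.yml root@$IP:/home/services/test/   # only a single connection is made now
With this, the job opens one SSH connection instead of two, and the host key is still verified.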
I had the same problem. I rebooted the server, then it was all good.
I ran into this issue after I changed my Apple ID password, so I updated the Apple ID credentials on my Mac and restarted it. It works now.
git pull origin master
Output:
kex_exchange_identification: read: Connection reset by peer
Connection reset by 20.205.243.166 port 22
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
TL;DR:
Find the server-side process listening on the SSH port, kill it, and then restart the ssh service. That should solve the problem.
On the client side:
ssh account@ip -p PORT
kex_exchange_identification: read: Connection reset by peer
On the server side, I tried:
service ssh status
[ ok ] sshd is running.
service ssh restart
[ ok ] Restarting OpenBSD Secure Shell server: sshd.
but the client-side ssh command still fails with the same kex_exchange_identification error.
Then I stopped the ssh service on the server side (as root):
service ssh stop
[ ok ] Stopping OpenBSD Secure Shell server: sshd.
And the client-side ssh command still fails with the same kex_exchange_identification error. That's strange; if no process were listening on the port, the error should be Connection refused.
It could be that the server-side process listening on the SSH port is hung, so even restarting or stopping the service doesn't help. Finding that process and killing it may solve the problem.
PORT below is the SSH port defined in the server's /etc/ssh/sshd_config; the default is 22. As root:
netstat -ap | grep PORT
tcp 0 0 0.0.0.0:PORT 0.0.0.0:* LISTEN 8359/sshd
tcp6 0 0 [::]:PORT [::]:* LISTEN 8359/sshd
kill 8359
netstat -ap | grep PORT
no result
service ssh start
[ ok ] Starting OpenBSD Secure Shell server: sshd.
netstat -ap | grep PORT
tcp 0 0 0.0.0.0:PORT 0.0.0.0:* LISTEN 31418/sshd: /usr/sb
tcp6 0 0 [::]:PORT [::]:* LISTEN 31418/sshd: /usr/sb
After that, the client-side ssh command succeeds.
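On servers where netstat is not installed, ss gives the same information; a rough equivalent of the steps above, assuming the default port 22 (replace the PID with whatever ss reports):
ss -tlnp | grep ':22 '    # find the PID of the process listening on the SSH port
kill <PID>                # the PID shown in the ss output
service ssh start         # bring sshd back up, then re-check with ss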
One possibility I suggest checking is the routing table. In my case, on Ubuntu 20.04 (Focal Fossa), I got the same error message when connecting to the server over SSH, and I recovered by re-adding a local network routing entry. It had disappeared unexpectedly, leaving only the default route.
route -n
Output:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 enp1s0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 enp1s0   # <= this entry had disappeared
It seemed as if the ACK was being dropped because of the incomplete routing table, even though the initial SYN got through.
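If you find yourself in the same state, the missing on-link route can be re-added by hand; a minimal sketch, assuming the interface and network from the output above (enp1s0, 192.168.1.0/24):
ip route add 192.168.1.0/24 dev enp1s0    # restore the local network route
ip route show                             # the default and the local route should both be listed again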
Similar to naoki-ogawa, I had a problem with my routing table. In my case, I had an extra route for my local network.
As root:
route
Output:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default RT-AX92U-3E20 0.0.0.0 UG 100 0 0 eno1
link-local 0.0.0.0 255.255.0.0 U 1000 0 0 virbr1
192.168.50.0 RT-AX92U-3E20 255.255.255.0 UG 10 0 0 eno1
192.168.50.0 0.0.0.0 255.255.255.0 U 100 0 0 eno1
I simply removed the gateway on the local network (192.168.50.0):
ip route del 192.168.50.0/24 via 192.168.50.1
The problem was resolved.
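To confirm that only the on-link route remains afterwards, a quick check with the same iproute2 tooling:
ip route show | grep 192.168.50.0
# expected: a single entry for dev eno1, with no "via" gateway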
For those who came across this page after upgrading a FreeBSD machine to 13.1 and then trying to ssh into it, see Bug 263489: sshd does not work after reboot to 13.1-RC4.
After the upgrade, the previous sshd daemon (OpenSSH < 8.2) is still running with the new configuration (OpenSSH >= 8.2). The solution is to stop and then restart the sshd daemon. The FreeBSD 13.1 release notes now mention this, and since 13.1 the freebsd-update script restarts the daemon automatically.
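A sketch of that manual fix on the FreeBSD host, run as root from the console or another out-of-band session (since ssh itself is what's broken):
service sshd stop
service sshd start    # starts the freshly installed OpenSSH >= 8.2 binary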
I had this error today when I was using my Dell laptop running Ubuntu 20.04.5 LTS (Focal Fossa) to SSH into a Raspberry Pi. When I was on my home Wi-Fi network and tried to SSH into the Pi (also on my home Wi-Fi network), I got the error:
ssh pi@10.0.0.200
Output:
kex_exchange_identification: read: Connection reset by peer
However, when I switched my Ubuntu laptop over to a mobile hotspot, the error disappeared, and I was able to SSH without issue. I will update this post as soon as I figure out how to resolve the root cause.
Update: issue resolved (though the full reason is unclear). I followed instructions to change my DNS servers to 8.8.8.8 and 8.8.4.4.
After about 5 minutes had elapsed, I was able to use SSH from my command line terminal just fine.
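For reference, one way to make the same DNS change from the command line, assuming the connection is managed by NetworkManager and the profile name is "Wired connection 1" (a hypothetical name; check yours with nmcli connection show):
nmcli connection modify "Wired connection 1" ipv4.dns "8.8.8.8 8.8.4.4"
nmcli connection modify "Wired connection 1" ipv4.ignore-auto-dns yes
nmcli connection up "Wired connection 1"    # reapply the profile so the new DNS servers take effect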
Error:
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
I have an Ubuntu 20.04 host and two RHEL 8 VMs running on VMware, and I log in to the two VMs from my Ubuntu terminal over both Ethernet and Wi-Fi connections. Every time I try to log in to a VM after rebooting it, I get the error above.
Restarting the sshd service did not solve the problem. Sometimes the problem would be resolved if I physically disconnected and reconnected the Ethernet cable.
Finally I turned off my Wi-Fi connection with:
nmcli conn down <name_of_Wi-Fi_connection>
(or turning it off from the settings), and this gave me a permanent solution.
Both my Ethernet and Wi-Fi connections (static connections) had the same IP address, so I think the VMs were rejecting two "suspicious" similar connections.
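A quick way to check for that situation, assuming NetworkManager manages both connections:
nmcli -f NAME,DEVICE,STATE connection show --active    # list the currently active connections
ip -4 addr show | grep 'inet '                         # look for the same address on two interfaces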
Check whether the OpenSSH server is up and running on the server side.
Try checking the sshd configuration. It worked this way for me.
I had the same issue and fixed it with the steps below.
Edit the file /etc/hosts.allow (for example: sudo nano /etc/hosts.allow).
At the end, change the rule so that the value for ALL is ALL, i.e. ALL : ALL. Save the file and try again.
Basically, the value might be set to something more restrictive. If the rule is, say, ALL : 10., sshd only accepts connections from addresses starting with 10., and everything else gets reset. By replacing that value with ALL, you allow connections from everywhere.
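For illustration, the relevant line in /etc/hosts.allow would end up looking like this (the 10. rule is just an example of the kind of restriction that resets other clients):
# /etc/hosts.allow
ALL : ALL     # accept connections from any address
# ALL : 10.   # a prefix rule like this only accepts clients whose address starts with 10.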
You can try a VPN, or if you have been using one, try turning it off and connecting again.
If you don't have the budget for a VPN, you can try ProtonVPN, which is free. It worked for me when I faced the same problem.
I'm trying to use rsync on my dev server to download files to my local machine after checking out a branch on the dev server.
Before using WSL 2, I used to be able to do the following:
Remote server
rsync -ave "ssh -p 22001" --delete --exclude-from ~/rsync_exclude_list.txt ~/as/ alex@localhost:/home/alexmk92/code/project
Local SSH config
Host dev-tunnel
HostName dev.server.co.uk
User as
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h:%p
RemoteForward 22001 localhost:22
Host dev
HostName dev.server.co.uk
User as
RequestTTY yes
RemoteCommand cd as; bash
I can then run these with ssh dev and ssh -fvN dev-tunnel. If, from the remote server, I type ssh -p 22001 alex@localhost, I get:
debug1: remote forward success for: listen 22001, connect localhost:22
debug1: All remote forwarding requests processed
debug1: client_input_channel_open: ctype forwarded-tcpip rchan 2 win 2097152 max 32768
debug1: client_request_forwarded_tcpip: listen localhost port 22001, originator 127.0.0.1 port 34472
debug1: connect_next: host localhost ([127.0.0.1]:22) in progress, fd=5
debug1: channel 1: new [127.0.0.1]
debug1: confirm forwarded-tcpip
debug1: channel 1: connection failed: Connection refused
connect_to localhost port 22: failed.
debug1: channel 1: free: 127.0.0.1, nchannels 2
I'm guessing this is because WSL 2 no longer listens on localhost, and is instead isolated inside its own Hyper-V virtual machine. That probably means Windows is receiving this request on localhost:22 (where no SSH server is running) and then hanging up the connection.
How can I forward the request to my WSL 2 SSH process?
It is possible to add a port mapping to WSL 2 machines using the following PowerShell script:
$port = 3000;
$addr = '0.0.0.0';
# Ask the WSL 2 VM for the IP address of its eth0 interface
$remoteaddr = bash.exe -c "ifconfig eth0 | grep 'inet '"
$found = $remoteaddr -match '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}';
if( $found ) {
  $remoteaddr = $matches[0];
} else {
  echo "Error: ip address of WSL 2 cannot be found";
  exit;
}
# Remove any existing proxy rule for this port, then forward it to the WSL 2 address
Invoke-Expression "netsh interface portproxy delete v4tov4 listenport=$port listenaddress=$addr"
Invoke-Expression "netsh interface portproxy add v4tov4 listenport=$port listenaddress=$addr connectport=$port connectaddress=$remoteaddr"
echo "Success: Port mapping added!";
Of course, you need to change the port and maybe the listen address (the first two lines).
You may also need to run the script as administrator.
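After running it, you can check that the proxy rule is in place from an elevated prompt on the Windows side:
netsh interface portproxy show v4tov4    # should list the port pointing at the WSL 2 address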
I'm using SSH port forwarding to get to a DB behind a firewall. I use the following command (forwards remote 5432 port to local 5430):
ssh -i privatekey -v -N -A \
ec2-user@host -fNT -4 -L \
5430:rds-endpoint.us-west-2.rds.amazonaws.com:5432
This command always returns exit code 0, but in roughly one out of ten cases it doesn't actually open the tunnel, and I get a connection refused error when I try to connect to localhost:5430.
I've checked the debug output and noticed that there's one difference. The unsuccessful runs' debug output ends with this:
debug1: channel 0: new [port listener]
debug1: Requesting no-more-sessions@openssh.com
debug1: forking to background
while the successful runs have 3 more lines after the forking to background line:
debug1: Entering interactive session.
debug1: pledge: network
debug1: client_input_global_request: rtype hostkeys-00@openssh.com want_reply 0
So I assume ssh sometimes fails to enter the interactive session. Is there a way to work around this and make the port forwarding command reliable?
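Since the exit code can't be trusted here, one pragmatic workaround (a sketch, not from this thread; it assumes nc is available and drops the -v/-A flags for brevity) is to test the local port after starting the tunnel and retry if it isn't accepting connections:
for i in 1 2 3; do
    ssh -i privatekey -fNT -4 -o ExitOnForwardFailure=yes \
        -L 5430:rds-endpoint.us-west-2.rds.amazonaws.com:5432 ec2-user@host
    sleep 2                          # give ssh a moment to finish setting up
    if nc -z 127.0.0.1 5430; then    # does anything accept connections on the forwarded port?
        echo "tunnel is up"
        break
    fi
    echo "tunnel did not open, retrying ($i)"
done
This does not explain why the session setup occasionally fails, but it turns the silent failure into one a wrapper script can detect.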
I have downloaded Raspbian Lite and flashed it to an SD card with Etcher.
As per Raspbian's headless SSH tutorial, I created an empty ssh file in /boot:
touch /Volumes/boot/ssh
Then I connected an Ethernet cable from the Pi to an Apple AirPort Extreme.
When the Pi boots, AirPort Utility on my MacBook shows 10.0.1.9 as a new device on the network.
From the MacBook:
$ ssh -vvv pi@10.0.1.9
OpenSSH_7.8p1, LibreSSL 2.6.2
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 48: Applying options for *
debug2: resolve_canonicalize: hostname 10.0.1.9 is address
debug2: ssh_connect_direct
debug1: Connecting to 10.0.1.9 [10.0.1.9] port 22.
debug1: connect to address 10.0.1.9 port 22: Connection refused
ssh: connect to host 10.0.1.9 port 22: Connection refused
I've done this multiple times, recreating the ssh file at each boot and redownloading and reflashing the Raspbian image; it always fails.
I'm 100% certain that 10.0.1.9 is the Pi's local IP address, because I've attempted this around 10 times and it only shows up when the Pi is on and the Ethernet cable is plugged in.
Run these commands to regenerate the host keys needed for SSH remote access:
sudo rm -r /etc/ssh/ssh*key
sudo dpkg-reconfigure openssh-server
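These have to be run on the Pi itself (for example with a keyboard and monitor attached, since SSH is not reachable yet). Afterwards, restart the service so the regenerated host keys are picked up; a sketch, assuming a systemd-based Raspbian image:
sudo systemctl restart ssh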
So I'm on my local machine, SSHing into a Google Compute Engine server.
From this Google Compute Engine server, I'm trying to establish an SSH tunnel to a third-party server ($host) using the following command:
ssh username@$host -L 3306:127.0.0.1:3306 -N
And after hanging for 20-30 seconds, I get:
ssh: connect to host $host port 22: Connection timed out
I can use the exact same command on my local machine to reach the third-party server and it works fine.
I've killed anything using port 3306 on the Google Compute Engine server.
I've opened ports 22 and 3306 on the Google server through the web interface (though I can't tell whether this applies to outbound connections as well).
Not sure where to go from here, any help would be appreciated.
Edit 1: The Google server can successfully ping the third-party server.
Edit 2: I just tried it from the company server, and it doesn't work there either. Both the Google Compute Engine server and the company server are Linux (Debian Wheezy and Ubuntu, respectively), and the local machine is Windows. The fact that I'm SSHing into them shouldn't make a difference, should it?
Edit 3: I changed the default SSH port on the Google server to 22222 and connected to it using that instead. Trying to connect to the third party now with:
sudo ssh -p 22 username@$host -L 3306:127.0.0.1:3306 -N -v -v -v
Debug output is:
OpenSSH_6.6.1, OpenSSL 1.0.1e 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to $host [$host] port 22.
And after that it just hangs.
Debug output on the local machine using the same command is:
OpenSSH_4.6p1, OpenSSL 0.9.8e 23 Feb 2007
debug2: ssh_connect: needpriv 0
debug1: Connecting to $host [$host] port 22.
debug1: Connection established.
*other junk*
It turns out the third-party server had SSH blocked from anywhere outside Australia.
-_-
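For anyone debugging a similar timeout, a quick way to separate a filtered port (firewall, geoblocking) from an SSH-level problem is a raw TCP check from the machine that fails (a sketch, assuming netcat is installed there):
nc -vz -w 5 $host 22    # "succeeded"/"open" means TCP reaches the port; a timeout points at a firewall or geoblock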