WSL2 SSH RemoteForward connect back - ssh

I'm trying to use rsync on my dev server to download files to my local machine after checking out a branch on the dev server.
Before using wsl2, I used to be able to do the following:
Remote server
rsync -ave "ssh -p 22001" --delete --exclude-from ~/rsync_exclude_list.txt ~/as/ alex@localhost:/home/alexmk92/code/project
Local SSH config
Host dev-tunnel
HostName dev.server.co.uk
User as
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h:%p
RemoteForward 22001 localhost:22
Host dev
HostName dev.server.co.uk
User as
RequestTTY yes
RemoteCommand cd as; bash
I can then run these with ssh dev and ssh -fvN dev-tunnel. If, from the remote server, I run ssh -p 22001 alex@localhost, I get:
debug1: remote forward success for: listen 22001, connect localhost:22
debug1: All remote forwarding requests processed
debug1: client_input_channel_open: ctype forwarded-tcpip rchan 2 win 2097152 max 32768
debug1: client_request_forwarded_tcpip: listen localhost port 22001, originator 127.0.0.1 port 34472
debug1: connect_next: host localhost ([127.0.0.1]:22) in progress, fd=5
debug1: channel 1: new [127.0.0.1]
debug1: confirm forwarded-tcpip
debug1: channel 1: connection failed: Connection refused
connect_to localhost port 22: failed.
debug1: channel 1: free: 127.0.0.1, nchannels 2
I'm guessing this is because WSL2 no longer runs on localhost, and is instead isolated inside a Hyper-V VM. That probably means Windows receives this request on localhost:22 (where no SSH server is running) and then hangs up the connection.
How can I forward the request to my WSL2 SSH process?

It is possible to add a port mapping to WSL2 machines, using the following PowerShell script:
$port = 3000;
$addr = '0.0.0.0';
# Ask the WSL 2 VM for its eth0 address (requires net-tools inside WSL).
$remoteaddr = bash.exe -c "ifconfig eth0 | grep 'inet '"
$found = $remoteaddr -match '\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}';
if ($found) {
    $remoteaddr = $matches[0];
} else {
    echo "Error: ip address of WSL 2 cannot be found";
    exit;
}
# Drop any stale rule for this port, then map Windows $addr:$port
# to the same port on the WSL 2 VM.
Invoke-Expression "netsh interface portproxy delete v4tov4 listenport=$port listenaddress=$addr"
Invoke-Expression "netsh interface portproxy add v4tov4 listenport=$port listenaddress=$addr connectport=$port connectaddress=$remoteaddr"
echo "Success: Port mapping added!";
Of course, you need to change the port and maybe the listen address (the first two lines).
You may need to run the script as administrator.
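For the setup in this question, a minimal sketch, assuming sshd listens on port 22 inside WSL 2 and nothing on Windows is already bound to port 22: run the script with
$port = 22;           # proxy Windows port 22 into the WSL 2 sshd
$addr = '127.0.0.1';  # the RemoteForward connect-back arrives on the Windows loopback
With that proxy in place, the existing RemoteForward 22001 localhost:22 line should work unchanged: the connect-back lands on the Windows loopback and netsh relays it to the SSH server inside WSL 2.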

How can I fix "kex_exchange_identification: read: Connection reset by peer"?

I want to copy data with scp in a GitLab pipeline using PRIVATE_KEY.
The error is:
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
lost connection
Pipeline log:
$ mkdir -p ~/.ssh
$ echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/id_rsa
$ eval "$(ssh-agent -s)"
Agent pid 22
$ ssh-add ~/.ssh/id_rsa
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
$ ssh-keyscan -H $IP >> ~/.ssh/known_hosts
# x.x.x.x:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.10
# x.x.x.x:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.10
$ scp -rv api.yml root@$IP:/home/services/test/
Executing: program /usr/bin/ssh host x.x.x.x, user root, command scp -v -r -t /home/services/test/
OpenSSH_8.6p1, OpenSSL 1.1.1l 24 Aug 2021
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling
debug1: Connecting to x.x.x.x [x.x.x.x] port 22.
debug1: Connection established.
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa_sk type -1
debug1: identity file /root/.ssh/id_ecdsa_sk-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: identity file /root/.ssh/id_ed25519_sk type -1
debug1: identity file /root/.ssh/id_ed25519_sk-cert type -1
debug1: identity file /root/.ssh/id_xmss type -1
debug1: identity file /root/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.6
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
lost connection
kex_exchange_identification: read: Connection reset by peer
When an SSH client connects to an SSH server, the server starts by sending a version string to the client. The error that you're getting means that the TCP connection from the client to the server was "abnormally closed" while the client was waiting for this data from the server, in other words immediately after the TCP connection was opened.
As a practical matter, it's likely to mean one of two things:
The SSH server process malfunctioned (crashed), or perhaps it detected some serious issue causing it to exit immediately.
Some firewall is interfering with connections to the ssh server.
It looks like the ssh-keyscan program was able to connect to the server and get a version string without an error. So the SSH server process is apparently able to talk to a client without crashing.
You should talk to the administrators of this x.x.x.x host and the network it's attached to, to see if they can identify the problem from their end. It's possible that something (a firewall, or the ssh server process itself) is treating the multiple connections, first from the ssh-keyscan process and then from the scp program, as an intrusion attempt, and is blocking the second connection attempt.
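A quick way to check whether the server is sending its version string at all is to open a raw TCP connection to port 22 and watch for the banner (assuming nc is available on the client):
$ nc x.x.x.x 22
SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.10
If the banner appears here but scp still fails, that points at something dropping the later connections rather than at sshd itself.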
I had the same problem. I rebooted the server, then it was all good.
I hit this issue after I changed my Apple ID password, so I updated my Apple ID on the Mac and restarted it. It works now.
git pull origin master
Output:
kex_exchange_identification: read: Connection reset by peer
Connection reset by 20.205.243.166 port 22
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
TL;DR:
Find the server-side process listening on the SSH port and kill it, then restart the ssh service. That should solve the problem.
On the client side:
ssh account@ip -p PORT
kex_exchange_identification: read: Connection reset by peer
I tried it on the server side:
service ssh status
[ ok ] sshd is running.
service ssh restart
[ ok ] Restarting OpenBSD Secure Shell server: sshd.
but the client-side ssh command still failed with the same kex_exchange_identification error.
Then I stop the ssh service on the server side (as root):
service ssh stop
[ ok ] Stopping OpenBSD Secure Shell server: sshd.
And the following client-side ssh command still failed with the same kex_exchange_identification error. That's strange; if no process were listening on the port, the error should be Connection refused.
It could be that the process listening on the SSH port on the server side was dead, so that even restarting or stopping the service had no effect. Finding that process and killing it may solve the problem.
PORT below is the SSH port defined in the server's /etc/ssh/sshd_config; the default is 22. As root:
netstat -ap | grep PORT
tcp 0 0 0.0.0.0:PORT 0.0.0.0:* LISTEN 8359/sshd
tcp6 0 0 [::]:PORT [::]:* LISTEN 8359/sshd
kill 8359
netstat -ap | grep PORT
no result
service ssh start
[ ok ] Starting OpenBSD Secure Shell server: sshd.
netstat -ap | grep PORT
tcp 0 0 0.0.0.0:PORT 0.0.0.0:* LISTEN 31418/sshd: /usr/sb
tcp6 0 0 [::]:PORT [::]:* LISTEN 31418/sshd: /usr/sb
After that, the client-side ssh command succeeds.
I suggest checking the routing table as one possibility. In my case, on Ubuntu 20.04 (Focal Fossa), I got the same error message when connecting to the server using SSH, and recovered by adding a local network routing entry. It had disappeared unexpectedly, leaving only the default route.
route -n
Output:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 enp1s0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 enp1s0   # <= disappeared
It seemed as if the ACK was being filtered because of the incomplete routing table, even though the first SYN passed.
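To restore the missing entry, something like the following should work (a sketch based on the table above, assuming the iproute2 tools are installed; enp1s0 is the interface shown in the output):
ip route add 192.168.1.0/24 dev enp1s0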
Similar to naoki-ogawa, I had a problem with my routing table. In my case, I had an extra route for my local network.
As root:
route
Output:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         RT-AX92U-3E20   0.0.0.0         UG    100    0        0 eno1
link-local      0.0.0.0         255.255.0.0     U     1000   0        0 virbr1
192.168.50.0    RT-AX92U-3E20   255.255.255.0   UG    10     0        0 eno1
192.168.50.0    0.0.0.0         255.255.255.0   U     100    0        0 eno1
I simply removed the gateway on the local network (192.168.50.0):
ip route del 192.168.50.0/24 via 192.168.50.1
The problem was resolved.
For those who came across this page after upgrading a FreeBSD machine to 13.1 and then trying to ssh into it, see Bug 263489: sshd does not work after reboot to 13.1-RC4.
After the upgrade, the previous sshd daemon (OpenSSH < 8.2) is still running with the new configuration (OpenSSH >= 8.2). The solution is to stop and then start the sshd daemon again. The FreeBSD 13.1 release notes now mention this, and after 13.1 the freebsd-update script will restart the daemon automatically.
I had this error today when trying to SSH from my Dell laptop running Ubuntu 20.04.5 LTS (Focal Fossa) into a Raspberry Pi on the same home Wi-Fi network:
ssh pi@10.0.0.200
Output:
kex_exchange_identification: read: Connection reset by peer
However, when I switched my Ubuntu laptop over to a mobile hotspot, the error disappeared and I was able to SSH without issue. I will update this post as soon as I figure out the root cause.
Issue resolved (though the root cause remains unclear). I followed instructions to change my DNS servers to 8.8.8.8 and 8.8.4.4.
After about 5 minutes had elapsed, I was able to use SSH from my command line terminal just fine.
Error
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
I have an Ubuntu 20.04 host and two RHEL 8 VMs running on VMware. I log into the two VMs from my Ubuntu terminal, using both Ethernet and Wi-Fi connections. Every time I try to log into a VM after rebooting it, I get the error above.
Restarting the sshd service did not solve the problem. Sometimes the problem would be resolved if I physically disconnected and reconnected the Ethernet cable.
Finally I turned off my Wi-Fi connection with:
nmcli conn down <name_of_Wi-Fi_connection>
or by turning it off in Settings; this gave me a permanent solution.
Both my Ethernet and Wi-Fi connections (static connections) had the same IP address, so I think the VMs were rejecting two "suspicious" similar connections.
Check that the OpenSSH server is up and running on the server side.
Try checking the sshd configuration; that's what worked for me.
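One way to check it, assuming root access on the server: sshd -t does a test parse of the configuration (and sanity-checks the host keys), printing nothing when everything is valid, while sshd -T additionally dumps the effective settings:
sshd -t                      # silent on success; errors name the offending line
sshd -T | grep -i listenaddress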
I had the same issue and fixed it with the steps below.
Edit /etc/hosts.allow (for example with sudo nano /etc/hosts.allow).
At the end of the file, change the client list for the ALL entry to ALL, i.e. ALL : ALL. Save the file and try again.
Basically, ALL might be set to something more restrictive. With an entry like ALL : 10., the TCP wrappers expect connections to come only from IP addresses starting with 10., so ssh connections from anywhere else are dropped. Replacing the 10. with ALL allows connections from everywhere.
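For illustration, a hypothetical before/after of the relevant /etc/hosts.allow entry (in TCP wrappers syntax a trailing dot matches an address prefix):
# before: only hosts whose address starts with 10. may connect
ALL : 10.
# after: hosts anywhere may connect
ALL : ALL
Bear in mind that opening sshd to every source address has security implications; a pattern covering just your client network is safer.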
You can also try a VPN, or, if you have been using one, try turning it off and connecting again.
If you don't have the budget for a VPN, you can try ProtonVPN, which is free. It worked for me when I faced the same problem.

Advanced ssh config file

How can I ssh directly to the remote server? Below is a detailed description.
Local machine ---> Jump1 ----> Jump2 ----> Remote Server
From the local machine there is no direct access to the remote server, and Jump2 cannot be reached directly either.
The remote server can only be accessed from Jump2.
There is no SSH key set up for the remote server; we have to enter the password manually.
From the local machine we access Jump1 by IP on port 2222; from Jump1 we access Jump2 by hostname on the default port 22.
With the ssh config file below we were able to access the Jump2 server without any problem, but my requirement is to access the remote server directly.
Is there any possible way? I don't mind entering the password for the remote server.
Log
ssh -vvv root@<ip address>
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
debug1: Reading configuration data /root/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to ip address [ip address] port 22.
My Config file
Host jump1
Hostname ip.109
Port 2222
User avdy
Host jump2
Hostname ip.138
Port 22
ProxyCommand ssh -W %h:%p jump1
User avdy
Host remote-server
Hostname ip.8
Port 22
ProxyCommand ssh -W %h:%p jump2
User root
Set your ~/.ssh/config:
Host Jump1
User jump1user
Port 2222
Host Jump2
ProxyCommand ssh -W %h:%p Jump1
User jump2user
Host RemoteServer
ProxyCommand ssh -W %h:%p Jump2
User remoteUser
Or with new OpenSSH 7.3:
Host RemoteServer
ProxyJump jump1user@Jump1,jump2user@Jump2
User remoteUser
Then you can connect simply using ssh RemoteServer
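The same chain can also be written ad hoc with the -J flag, the command-line form of ProxyJump (a sketch using the hosts, ports, and users from the question's config):
ssh -J avdy@ip.109:2222,avdy@ip.138 root@ip.8
Password prompts for each hop appear in order, so the missing key for the remote server is not a problem.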

Why does the forwarded SSH port still seem open even when the endpoint is down?

I set up an SSH port-forwarding tunnel to a remote server, RemoteServerSSH, and forward port 55555 to a non-existent device (this is what I am trying to test).
$ hostname
MyMachine
Setting the forwarding tunnel
$ ssh -q -N -p 22 -vvv \
-i ~/.ssh/MyKey \
-o Compression=yes \
-o ServerAliveInterval=3 \
-o ServerAliveCountMax=3 \
-L *:55555:RemoteDownItem:9100 user@RemoteServerSSH
Testing the tunnel
When I telnet to the device directly, I get the correct behavior (not connected). However, when I try to reach it through the tunnel, telnet says it's connected:
$ telnet RemoteDownItem 9100 # Not Connected = OK
$ telnet MyMachine 55555 # Connected! Why? should be same as above
When I measure the telnet connection time, it is instantaneous (1 ms!).
It is the SSH client that answers me; the connection never crosses the ssh tunnel. Why?
Verbose
...
debug1: Local connections to *:55555 forwarded to remote address 10.220.9.183:9100
debug3: channel_setup_fwd_listener: type 2 wildcard 1 addr NULL
debug1: Local forwarding listening on 0.0.0.0 port 55555.
debug2: fd 4 setting O_NONBLOCK
debug3: fd 4 is O_NONBLOCK
debug1: channel 0: new [port listener]
debug3: sock_set_v6only: set socket 5 IPV6_V6ONLY
debug1: Local forwarding listening on :: port 55555.
debug2: fd 5 setting O_NONBLOCK
debug3: fd 5 is O_NONBLOCK
debug1: channel 1: new [port listener]
debug2: fd 3 setting TCP_NODELAY
debug3: packet_set_tos: set IP_TOS 0x10
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: Connection to port 55555 forwarding to 10.220.9.183 port 9100 requested.
debug2: fd 6 setting TCP_NODELAY
debug2: fd 6 setting O_NONBLOCK
debug3: fd 6 is O_NONBLOCK
debug1: channel 2: new [direct-tcpip]
Question
Is there an SSH parameter to forward the telnet connection directly to the endpoint?
Related question: Telnet connect to a non-existing address
Consider how a tunnel works. When you run something like ssh -L *:55555:RemoteDownItem:9100 user@host, the port forward is handled like this:
1. The local ssh instance binds to TCP port 55555 and listens for connections.
2. An "originator" connects to port 55555 on the local system. The local ssh instance accepts the TCP connection.
3. The local ssh instance sends a "direct-tcpip" request through the SSH connection to the remote ssh server.
4. The remote ssh server attempts to connect to host "RemoteDownItem" port 9100.
At step 4, if the ssh server is able to connect to the target of the tunnel, then the ssh client and server will each relay data between the originator and the target through the direct-tcpip channel.
Alternatively, at step 4, the server may not be able to make the TCP connection to the target, or the server may be configured not to permit forward requests. In either case, it will respond to the client with an error (or a message saying the channel is closed).
At this point, the only thing that the local ssh instance can do is to close the TCP connection to the originator. From the perspective of the originator, it successfully connected to a "server" (the ssh client), and then the "server" almost immediately closed the connection.
The OpenSSH software doesn't contain any logic to handle this in a more sophisticated way. And handling it in a more sophisticated way may be difficult. Consider:
The remote SSH server has no idea whether it can connect to "RemoteDownItem" port 9100 until it tries. So it's problematic for ssh to figure out in advance that the port forward won't work.
Even if one attempt to connect to the target fails, the next attempt might succeed. So it's problematic for ssh to assume the port forward won't work, just because one attempt failed.
The remote SSH server could successfully connect to the target, and then the target could immediately close the TCP connection. So the ssh server, ssh client, and originator all have to handle this behavior anyway.
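The behavior is easy to observe from the originator's side. With the tunnel from the question running, a raw TCP client (assuming nc is available) connects instantly and is then dropped:
$ nc -v MyMachine 55555
Connection to MyMachine 55555 port [tcp/*] succeeded!
The connect succeeds because it is accepted by the local ssh client; the connection then closes as soon as the server reports the direct-tcpip open failure.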

Trying to make an SSH tunnel

I configured a bastion server on AWS in my public subnet.
I can ssh directly to the EC2 instance inside the private subnet, using the bastion host.
I can connect to the bastion host and check that port 7474 on the private EC2 instance is open:
nc -v -z -w 5 10.0.3.102 7474; echo $?
Connection to 10.0.3.102 7474 port [tcp/*] succeeded!
0
I want to open an SSH tunnel from localhost (my home machine) to an EC2 instance on the private network:
ssh -v -C -N -L 9000:PRIVATE_MDM:7474 BASTION
But I get:
open failed: administratively prohibited: open failed
Authenticated to 52.32.240.40 ([52.32.240.40]:22).
debug1: Local connections to LOCALHOST:9000 forwarded to remote address PRIVATE_MDM:7474
debug1: Local forwarding listening on ::1 port 9000.
debug1: channel 0: new [port listener]
debug1: Local forwarding listening on 127.0.0.1 port 9000.
debug1: channel 1: new [port listener]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
debug1: Connection to port 9000 forwarding to PRIVATE_MDM port 7474 requested.
debug1: channel 2: new [direct-tcpip]
debug1: Connection to port 9000 forwarding to PRIVATE_MDM port 7474 requested.
debug1: channel 3: new [direct-tcpip]
channel 2: open failed: administratively prohibited: open failed
channel 3: open failed: administratively prohibited: open failed
debug1: channel 2: free: direct-tcpip: listening port 9000 for PRIVATE_MDM port 7474, connect from 127.0.0.1 port 42685 to 127.0.0.1 port 9000, nchannels 4
debug1: channel 3: free: direct-tcpip: listening port 9000 for PRIVATE_MDM port 7474, connect from 127.0.0.1 port 42686 to 127.0.0.1 port 9000, nchannels 3
debug1: Connection to port 9000 forwarding to PRIVATE_MDM port 7474 requested.
debug1: channel 2: new [direct-tcpip]
channel 2: open failed: administratively prohibited: open failed
debug1: channel 2: free: direct-tcpip: listening port 9000 for PRIVATE_MDM port 7474, connect from 127.0.0.1 port 42687 to 127.0.0.1 port 9000, nchannels 3
The BASTION machine forbids port forwarding via the AllowTcpForwarding option. If you want port forwarding to work, you need to enable this option on that machine.
EDIT: Now I see the flaw there. Can you describe what you are trying to achieve? Forwarding an unused local port to an unused remote port does not make sense. You either forward an existing service on the remote side to your local port (local port forwarding, -L), or the other way round, a local service to a remote port (remote port forwarding, -R). Without that, you can't proceed further.
SOLUTION: The difference between the nc and ssh commands in the examples is the use of a direct IP address versus a hostname. The bastion was not able to resolve PRIVATE_MDM, which caused the problem.
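Given that diagnosis, the fix is to forward to the address the bastion can actually reach, i.e. the IP from the nc test above:
ssh -v -C -N -L 9000:10.0.3.102:7474 BASTION
(Alternatively, make PRIVATE_MDM resolvable on the bastion, for example via an entry in its /etc/hosts.)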

Added ListenAddress 443 in remote server's sshd_config. Why can't I ssh -p 443 / 22?

In a moment of weakness I sheepishly followed a tutorial on how to connect to my Amazon EC2 remote server, bypassing a public library's Wi-Fi SSH restriction.
So the first thing I did was add the following (last) line to the /etc/ssh/sshd_config file on my remote Amazon EC2 server:
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0
ListenAddress 443
Then I restarted the ssh server and, in a genius move, logged out of my remote server. So when on my local machine I do this...
$ ssh -i /path/to/key.pem xxx@xx.xx.xxx.xx -p 443 -v
...I get this:
$ ssh -i /path/to/key.pem xxx@xx.xx.xxx.xx -v -p 443
OpenSSH_6.0p1 Debian-4+deb7u2, OpenSSL 1.0.1e 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to xx.xx.xxx.xx [xx.xx.xxx.xx] port 443.
debug1: connect to address xx.xx.xxx.xx port 443: Connection timed out
ssh: connect to host xx.xx.xxx.xx port 443: Connection timed out
If I try to ssh to default's port 22 I get this:
OpenSSH_6.0p1 Debian-4+deb7u2, OpenSSL 1.0.1e 11 Feb 2013
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to xx.xx.xxx.xx [xx.xx.xxx.xx] port 22.
debug1: connect to address xx.xx.xxx.xx port 22: Connection refused
ssh: connect to host xx.xx.xxx.xx port 22: Connection refused
I also added the following rule in my Amazon EC2 'Security Groups'...
Custom TCP port 443
... to no avail.
Did I effectively lock myself out of my remote server? I was following a tutorial on how to tunnel, and then this happened. Shouldn't I have just added to /etc/ssh/sshd_config...
Port 443
...instead of 'ListenAddress 443'?
I have never had problems ssh'ing to my remote server before (which is a Debian Wheezy).
As far as I know I can still detach my volume, re-attach it into a new instance, fix the sshd_config file, etc. I hope there's an alternative to that.
So my question is: is it possible to connect to my remote server given the line 'ListenAddress 443' in sshd_config? If so, how? And perhaps more importantly, why can't I connect on port 22 when I hadn't touched or changed anything in sshd_config besides adding 'ListenAddress 443'?
Thanks in advance!
Edit:
telnet xx.xx.xxx.xx 22
Trying xx.xx.xxx.xx...
telnet: Unable to connect to remote host: Connection refused
You can't connect because of one of three reasons:
sshd on the remote server is down because it can't parse ListenAddress 443.
sshd parsed ListenAddress 443 into an IP address ('443' can be interpreted as an IP address - an IPv4 address is represented at low levels by a 32-bit unsigned integer) but was unable to bind to the IP address represented by '443' and is down.
sshd parsed ListenAddress 443 into an IP address, successfully bound to that IP address, and is now running and listening for incoming connections on "0.0.1.187" or some similar interpretation of '443' as an IP address.
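For reference, the tutorial's intended change is made with the Port directive rather than ListenAddress; a sketch of the relevant /etc/ssh/sshd_config lines, assuming sshd should answer on both ports:
Port 22
Port 443
Port may appear multiple times, while ListenAddress restricts which interface sshd binds to and takes an IP address, optionally with a port (e.g. ListenAddress 0.0.0.0:443).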