"ssh example.com" hangs but "ssh example.com bash -i" does not - authentication

Every day I encounter a very strange phenomenon.
From my university internet connection, sshing to my machine ("ssh example.com") works without any problems.
From my home ADSL, when I run "ssh example.com", my console gets stuck after printing these messages:
debug1: Server accepts key: pkalg ssh-rsa blen 533
debug1: Enabling compression at level 6.
debug1: Authentication succeeded (publickey).
debug1: channel 0: new [client-session]
debug1: Requesting no-more-sessions@openssh.com
debug1: Entering interactive session.
Sometimes it lets me in, but in most cases it does not.
The funny thing is that if I execute "ssh example.com bash -i" I get logged in immediately.

I finally found the source of the problem: it has to do with the type-of-service (ToS) field of SSH's TCP packets.
When you ask for a regular ssh terminal, ssh sets the TCP type of service (ToS) to "interactive", and the router at my residence blocks those packets!
TCP packets tunneled through netcat get no ToS directives, so if you tunnel all your ssh traffic through netcat, the ToS of the TCP packets is reset to the default.
In ~/.ssh/config, you have to set:
Host *.example.com
ProxyCommand nc %h %p
So, each time you try to ssh to example.com, netcat will be called and will tunnel the packets.
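If you want to see the ToS markings for yourself, here is a minimal sketch (assuming a Linux client with tcpdump installed); run it while opening an ssh session, with and without the workaround:
# -v prints the IP ToS byte of each captured packet
sudo tcpdump -v -n 'tcp port 22'
An interactive ssh session is classically marked lowdelay ("tos 0x10"), while the netcat-tunneled traffic should show the default 0x0.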

As of OpenSSH 5.7, you can just add this to your ssh config file (either ~/.ssh/config or /etc/ssh/ssh_config):
Host *
IPQoS 0x00
This is a more direct way to work around the problem Asterios identified.
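For a one-off test, the same option can also be passed on the command line instead of editing any file:
ssh -o IPQoS=0x00 example.com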

I've just had the same problem. Try logging in with a different SSH client for more information. Whereas the Linux command-line client didn't come back with any useful message, PuTTY came back with "server refused to allocate pty". I fixed it with mkdir /dev/pts and mount -a. How it got that mucked up in the first place I'm less sure about.
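Note that mount -a can only mount /dev/pts if /etc/fstab has a devpts entry; a typical line (a sketch, details vary by distribution) looks like:
# pseudo-terminal filesystem, needed for pty allocation
devpts  /dev/pts  devpts  gid=5,mode=620  0  0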
BTW, bash -l should act like a login shell, so you should be able to prove Peter Westlake's suggestion correct or incorrect in your case fairly easily.

The difference between the two cases is that "bash -i" does not give you a login shell but just running ssh does. You can "man bash" for details of what a "login shell" is, but the main thing is that it runs /etc/profile and your .bash_profile. Have a look in those files for anything that might be causing a problem.
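One way to narrow it down, assuming the server still executes commands even when the interactive login hangs, is to trace the login-shell startup files and see where the output stops:
# -l reads /etc/profile and ~/.bash_profile; -x echoes each command as it runs
ssh example.com 'bash -l -x -c true' 2>&1
The last traced commands printed before the hang point at the offending block of the profile.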

Maybe the server is out of ptys.
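On Linux you can check that, for instance, by comparing the ptys in use against the kernel limit (a sketch; these paths are standard on current kernels):
ls /dev/pts | wc -l            # ptys currently allocated
cat /proc/sys/kernel/pty/max   # maximum number allowed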

Related

How can I fix "kex_exchange_identification: read: Connection reset by peer"?

I want to copy data with scp in a GitLab pipeline using PRIVATE_KEY.
The error is:
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
lost connection
Pipeline log:
$ mkdir -p ~/.ssh
$ echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/id_rsa
$ eval "$(ssh-agent -s)"
Agent pid 22
$ ssh-add ~/.ssh/id_rsa
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
$ ssh-keyscan -H $IP >> ~/.ssh/known_hosts
# x.x.x.x:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.10
# x.x.x.x:22 SSH-2.0-OpenSSH_7.2p2 Ubuntu-4ubuntu2.10
$ scp -rv api.yml root@$IP:/home/services/test/
Executing: program /usr/bin/ssh host x.x.x.x, user root, command scp -v -r -t /home/services/test/
OpenSSH_8.6p1, OpenSSL 1.1.1l 24 Aug 2021
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Authenticator provider $SSH_SK_PROVIDER did not resolve; disabling
debug1: Connecting to x.x.x.x [x.x.x.x] port 22.
debug1: Connection established.
debug1: identity file /root/.ssh/id_rsa type -1
debug1: identity file /root/.ssh/id_rsa-cert type -1
debug1: identity file /root/.ssh/id_dsa type -1
debug1: identity file /root/.ssh/id_dsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa type -1
debug1: identity file /root/.ssh/id_ecdsa-cert type -1
debug1: identity file /root/.ssh/id_ecdsa_sk type -1
debug1: identity file /root/.ssh/id_ecdsa_sk-cert type -1
debug1: identity file /root/.ssh/id_ed25519 type -1
debug1: identity file /root/.ssh/id_ed25519-cert type -1
debug1: identity file /root/.ssh/id_ed25519_sk type -1
debug1: identity file /root/.ssh/id_ed25519_sk-cert type -1
debug1: identity file /root/.ssh/id_xmss type -1
debug1: identity file /root/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.6
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
lost connection
kex_exchange_identification: read: Connection reset by peer
When an SSH client connects to an SSH server, the server starts by sending a version string to the client. The error that you're getting means that the TCP connection from the client to the server was "abnormally closed" while the client was waiting for this data from the server, in other words immediately after the TCP connection was opened.
As a practical matter, it's likely to mean one of two things:
The SSH server process malfunctioned (crashed), or perhaps it detected some serious issue causing it to exit immediately.
Some firewall is interfering with connections to the ssh server.
It looks like the ssh-keyscan program was able to connect to the server and get a version string without an error. So the SSH server process is apparently able to talk to a client without crashing.
You should talk to the administrators of the x.x.x.x host and the network it's attached to, to see if they can identify the problem from their end. It's possible that something (a firewall, or the SSH server process itself) is seeing the multiple connections, first from the ssh-keyscan process and then from the scp program, as an intrusion attempt, and is blocking the second connection attempt.
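You can watch the version-string exchange yourself with a quick sketch (assuming netcat is installed on the client):
nc -w 5 x.x.x.x 22
A healthy server prints its banner, something like SSH-2.0-OpenSSH_7.2p2, within a second or two; a reset or silence at this point confirms the problem occurs before key exchange even starts.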
I had the same problem. I rebooted the server, then it was all good.
I ran into this issue after I changed my Apple ID password, so I updated my Apple ID on my Mac and restarted it. It works now.
git pull origin master
Output:
kex_exchange_identification: read: Connection reset by peer
Connection reset by 20.205.243.166 port 22
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
TL;DR:
Find the server-side process listening on the SSH port and kill it, then restart the ssh service. That should solve the problem.
On the client side:
ssh account@ip -pPORT
kex_exchange_identification: read: Connection reset by peer
I tried it on the server side:
service ssh status
[ ok ] sshd is running.
service ssh restart
[ ok ] Restarting OpenBSD Secure Shell server: sshd.
but the client-side ssh command still fails with the same kex_exchange_identification error.
Then I stopped the ssh service on the server side (as root):
service ssh stop
[ ok ] Stopping OpenBSD Secure Shell server: sshd.
And the client-side ssh command still fails with the same kex_exchange_identification error. It's strange; if no process were listening on the port, the error should be Connection refused.
It could be that the server-side process listening on the SSH port was dead in a way that even restarting or stopping the service didn't fix, so finding that process and killing it may solve the problem.
PORT below is the SSH port defined in the server's /etc/ssh/sshd_config; the default is 22. As root:
netstat -ap | grep PORT
tcp 0 0 0.0.0.0:PORT 0.0.0.0:* LISTEN 8359/sshd
tcp6 0 0 [::]:PORT [::]:* LISTEN 8359/sshd
kill 8359
netstat -ap | grep PORT
no result
service ssh start
[ ok ] Starting OpenBSD Secure Shell server: sshd.
netstat -ap | grep PORT
tcp 0 0 0.0.0.0:PORT 0.0.0.0:* LISTEN 31418/sshd: /usr/sb
tcp6 0 0 [::]:PORT [::]:* LISTEN 31418/sshd: /usr/sb
After that, the client-side ssh command succeeds.
One more thing I suggest checking is the routing table. In my case, on Ubuntu 20.04 (Focal Fossa), I got the same error message when connecting to the server over SSH, and recovered by re-adding a local-network routing entry that had disappeared unexpectedly, leaving only the default route.
route -n
Output:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 enp1s0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 enp1s0   # <= disappeared
It seemed as if the ACK was being filtered because of the incomplete routing table, although the first SYN got through.
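For anyone in the same situation, the missing entry can be re-added like this (a sketch using the interface and network from the output above):
# restore the directly connected route that disappeared
sudo ip route add 192.168.1.0/24 dev enp1s0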
Similar to naoki-ogawa, I had a problem with my routing table. In my case, I had an extra route for my local network.
As root:
route
Output:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         RT-AX92U-3E20   0.0.0.0         UG    100    0        0 eno1
link-local      0.0.0.0         255.255.0.0     U     1000   0        0 virbr1
192.168.50.0    RT-AX92U-3E20   255.255.255.0   UG    10     0        0 eno1
192.168.50.0    0.0.0.0         255.255.255.0   U     100    0        0 eno1
I simply removed the gateway on the local network (192.168.50.0):
route del -net 192.168.50.0/24 gw 192.168.50.1
The problem was resolved.
For those who came across this page after upgrading a FreeBSD machine to 13.1 and then trying to ssh into it, see Bug 263489, "sshd does not work after reboot to 13.1-RC4".
After the upgrade, the previous sshd daemon (OpenSSH < 8.2) is still running with new configurations (OpenSSH >= 8.2). The solution is to stop and then restart the sshd daemon. The FreeBSD 13.1 release notes now mention this and after 13.1, the freebsd-update script will automatically restart the daemon.
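In other words, on the upgraded machine, as root, something like:
# stop the stale pre-upgrade daemon and start the new binary
service sshd restart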
I had this error today when using my Dell laptop running Ubuntu 20.04.5 LTS (Focal Fossa) to SSH into a Raspberry Pi. When I was on my home Wi-Fi network and tried to SSH into the Pi (also on my home Wi-Fi network), I got the error:
ssh pi@10.0.0.200
Output:
kex_exchange_identification: read: Connection reset by peer
However, when I switched my Ubuntu laptop over to a mobile hotspot, the error disappeared, and I was able to SSH without issue. I will update this post as soon as I figure out the root cause.
Issue resolved (though the full reason is unclear): I followed the instructions here to change my DNS servers to 8.8.8.8 and 8.8.4.4.
After about 5 minutes had elapsed, I was able to use SSH from my command line terminal just fine.
Error
kex_exchange_identification: read: Connection reset by peer
Connection reset by x.x.x.x port 22
I have an Ubuntu 20.04 host and two RHEL 8 VMs running on VMware. I log in to the two VMs from my Ubuntu terminal. I use Ethernet and Wi-Fi connections. Every time I try to log in to a VM after rebooting it, I get the error above.
Restarting the sshd service did not solve the problem. Sometimes the problem would be resolved if I physically disconnected and reconnected the Ethernet cable.
Finally I turned off my Wi-Fi connection with:
nmcli conn down <name_of_Wi-Fi_connection>
(or turn it off from settings), and this gave me a permanent solution.
Both my Ethernet and Wi-Fi connections (static connections) had the same IP address, so I think the VMs were rejecting two "suspicious" similar connections.
Check whether the OpenSSH server is up and running on the server side.
Try checking the sshd configuration. It worked this way for me.
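A quick way to check the configuration for syntax errors (assuming OpenSSH's sshd):
sudo sshd -t
It prints nothing when /etc/ssh/sshd_config is valid, and an error message otherwise.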
I had the same issue and fixed it with the steps below.
Edit the file /etc/hosts.allow (for example, sudo nano /etc/hosts.allow).
At the end, change the value of the ALL rule to ALL, as in ALL : ALL. Save the file and try again.
Basically, ALL might be restricted to something else; for instance, with ALL : 10.* the host expects ssh connections to come only from IP addresses starting with 10. Replacing that with ALL : ALL allows connections from everywhere.
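For reference, the end of the file would then contain a line like this (note that a match in hosts.allow takes precedence over /etc/hosts.deny):
# /etc/hosts.allow: accept all services from all clients
ALL : ALL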
You can try a VPN, or if you were already using one, try turning it off and connecting again.
If you don't have a budget for a VPN, you can try ProtonVPN, which is free. It worked for me when I faced the same problem.

Can't authenticate via ssh key on new GitLab container

I've recently set up a new GitLab Docker container, and though everything else has been working great, I can't authenticate to it via SSH.
I followed the instructions here to the letter, with no success.
Whatever key type I generate, and regardless of the client (Linux, Windows git-bash), the server instantly rejects the public key and does not prompt for a password.
Debug shows the following:
debug1: Offering public key: /c/Users/[user]/.ssh/id_ed25519 ED25519 SHA256:[SHA-256]
debug3: send packet: type 50
debug2: we sent a publickey packet, wait for reply
debug3: receive packet: type 51
Maybe it's something obvious, but I can't quite figure it out and no troubleshooting step managed to help.
As a side note, the SSH port is non-standard, though I am accessing via the new port. I've also double-checked that SSH is enabled on both the server and the clients.
Any help would be greatly appreciated.
Thanks!
Check first if this is similar to gitlab-org/gitlab-foss issue 18371 "docker omnibus gitlab denying ssh public key"
In my case the problem was a mismatch between the SSH port the Docker container was exposing and the one my server was using.
It helped to expose it on a different port, like 10022, and reconfigure GitLab like this:
gitlab_rails['gitlab_shell_ssh_port'] = 10022
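For completeness, a sketch of the two pieces that have to agree, assuming the omnibus image and host port 10022 (the image name and other flags here are illustrative):
# publish the container's port 22 on host port 10022
docker run -d -p 10022:22 gitlab/gitlab-ce
# in /etc/gitlab/gitlab.rb inside the container, advertise that port in clone URLs
gitlab_rails['gitlab_shell_ssh_port'] = 10022
Run gitlab-ctl reconfigure inside the container after editing gitlab.rb.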
Ideally, you would stop and then restart the SSH daemon (server side, container side as seen in this thread) in debug mode with
/usr/sbin/sshd -d
That would allow you to check:
if the SSH request is received at all
if it is blocked for any reason

SSH via HTTP proxy with password on Windows with mingw64

I use Portable Git x64 on Windows. I run everything through Git Bash. I need to SSH to a server which is reachable only via an HTTP proxy. Authentication to the server is via public key; authentication to the proxy is via password; the usernames are different. My ~/.ssh/config:
Host server
Hostname server_hostname
User server_username
IdentityFile ~/.ssh/id_rsa
ProxyCommand /c/PortableGit/mingw64/bin/connect.exe -H proxy_username@proxy_ip:12345 %h %p
The problem starts when ssh tries to pop up the window where you enter the password for the HTTP proxy. Log from ssh -vvv server:
$ ssh -vvv server
OpenSSH_7.9p1, OpenSSL 1.1.1a 20 Nov 2018
debug1: Reading configuration data /c/Users/username/.ssh/config
debug1: /c/Users/username/.ssh/config line 1: Applying options for server
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: Executing proxy command: exec /c/PortableGit/mingw64/bin/connect.exe -H proxy_username@proxy_ip:12345 server_hostname 22
debug1: identity file /c/Users/username/.ssh/id_rsa type 0
debug1: identity file /c/Users/username/.ssh/id_rsa-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.9
'C:\PortableGit\mingw64\libexec\git-core\git-gui--askpass' is not recognized as an internal or external command,
operable program or batch file.
FATAL: Cannot decide password for proxy authentication.
ssh_exchange_identification: Connection closed by remote host
git-gui--askpass is there, but for some reason it's not picked up by ssh. Running file 'C:\PortableGit\mingw64\libexec\git-core\git-gui--askpass' gives:
$ file 'C:\PortableGit\mingw64\libexec\git-core\git-gui--askpass'
C:\PortableGit\mingw64\libexec\git-core\git-gui--askpass: POSIX shell script, ASCII text executable
Content of the git-gui--askpass is identical to https://github.com/git/git/blob/3bab5d56259722843359702bc27111475437ad2a/git-gui/git-gui--askpass
I tried to run this script from the command line, and it works fine.
Also, I tried to specify another program as SSH_ASKPASS=/mingw64/libexec/git-core/git-askpass.exe (which I assume is a stupid thing to do). This does not work either:
...
fatal: failed to acquire credentials.
I tried to supply a password in ~/.ssh/config as:
ProxyCommand /c/PortableGit/mingw64/bin/connect.exe -H proxy_username:proxy_password@proxy_ip:12345 %h %p
                                                                      ^^^^^^^^^^^^^^
but this is ignored by ssh.
Besides, I tried to connect via MobaXterm, and that works completely fine: I am asked for the proxy password, and after entering it I am connected. Also, after connecting in MobaXterm, I can connect from the command line, since the proxy does not ask for a password again for some time. But for a different reason I cannot use MobaXterm.
Any ideas on how to make it work?
The connect.exe utility honors the HTTP_PROXY_USER and HTTP_PROXY_PASSWORD environment variables; the solution was found in its source code.
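So a workaround (a sketch with placeholder values) is to export those variables in the Git Bash session before connecting:
export HTTP_PROXY_USER=proxy_username
export HTTP_PROXY_PASSWORD=proxy_password   # note: stored as plain text in the session
ssh server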
Try keeping your password in your ~/.ssh/config file and add
unset SSH_ASKPASS
to your .bash_profile.

Unable to connect to GitLab.com for two days (HTTP, SSH...)

Let me explain my very strange problem. I have one server (Debian Jessie) which had access to my Git repository on gitlab.com.
Two days ago, I tried to pull some changes on this server with a simple git pull. I received an error message:
ssh: connect to host gitlab.com port 22: Connection timed out
So I ran some tests.
1. TELNET
To understand why, I tried telnet on port 22 = TIMEOUT
2. IPTABLES
I checked my iptables to be sure that the SSH port was allowed. It is. If I try telnet on port 22 to another host, for example github.com, it works. So OUTPUT is allowed on this port.
3. PING
I suspected an IP resolution problem. I ran a ping and got this message:
PING 104.210.2.228 (104.210.2.228) 56(84) bytes of data.
--- 104.210.2.228 ping statistics ---
87 packets transmitted, 0 received, 100% packet loss, time 86534ms
4. FAIL2BAN
I use fail2ban, so I checked whether gitlab's address was in a jail, but it seems not.
So my problem is that I can't reach gitlab.com
If I try from my local machine or from another server, I don't have this problem. It works.
I can't reach gitlab.com only from this server, and I don't know why. Maybe someone has an idea that could help me?
Probably some firewall modification caused this. For a quick solution, use the HTTPS protocol instead of SSH. Change your URL in the git config file to HTTPS.
git config --local -e
Change the entry
url = git@gitlab.com:username/repo.git
to
url = https://gitlab.com/username/repo.git
You will need to give your username and password to authenticate when making a push or pull, though, as it's HTTP-based.
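Instead of editing the file by hand, the same change can be made with a single command (assuming the remote is named origin):
git remote set-url origin https://gitlab.com/username/repo.git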

Why does a Google Cloud Compute Engine instance give ssh connection refused after restart?

I stopped and restarted an Ubuntu 14.04 Google Cloud Compute Engine instance, and now my SSH connection is refused with:
ssh: connect to host 146.148.114.98 port 22: Connection refused
This already happened once before; I thought there was a problem with the machine, so I deleted and recreated it, and it started working again. I don't want to be recreating instances every time. Google Cloud's SSH troubleshooting page is quite messy. My firewall rules seem to be OK. Does anyone have a solution for this?
NAME NETWORK SRC_RANGES RULES SRC_TAGS TARGET_TAGS
default-allow-http default 0.0.0.0/0 tcp:80 http-server
default-allow-https default 0.0.0.0/0 tcp:443 https-server
default-allow-icmp default 0.0.0.0/0 icmp
default-allow-internal default 10.128.0.0/9 tcp:0-65535,udp:0-65535,icmp
default-allow-rdp default 0.0.0.0/0 tcp:3389
default-allow-ssh default 0.0.0.0/0 tcp:22
This is the output for: ps aux | grep ssh
root 29 0.0 0.4 55184 2860 ? Ss 11:26 0:00 /usr/sbin/sshd -p 22 -o AuthorizedKeysCommand=/google/devshell/authorized_keys.sh -o AuthorizedKeysCommandUser=root
root 183 0.0 0.9 82692 5940 ? Ss 11:26 0:00 sshd: fbeshox [priv]
fbeshox 218 0.0 0.7 82692 4424 ? S 11:26 0:00 sshd: fbeshox@pts/0
fbeshox 522 0.0 0.3 12728 2200 pts/1 S+ 12:12 0:00 grep ssh
Here are the verbose results of the SSH connection attempt:
ssh -i .ssh/keyname username@130.211.53.51 -vvv
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /Users/xxxx/.ssh/config
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: /etc/ssh_config line 102: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to 130.211.53.51 [130.211.53.51] port 22.
debug1: connect to address 130.211.53.51 port 22: Connection refused
ssh: connect to host 130.211.53.51 port 22: Connection refused
It is possible that sshguard, a security tool installed on Ubuntu by default, is interfering with your connection. Basically sshguard might have incorrectly decided that your IP address is 'attacking' your instance and blocked the IP.
If you can log in from a different location, such as the web SSH provided by the Cloud Console, try using sudo iptables -S to see if there are any firewall rules on the instance (distinct from the GCE firewall) created by sshguard. If so, try disabling sshguard or adding your IP address to the whitelist (http://www.sshguard.net/docs/whitelist/).
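A sketch of the whitelist approach (the whitelist path and service name can vary by distribution; 203.0.113.5 stands in for your client IP):
echo "203.0.113.5" | sudo tee -a /etc/sshguard/whitelist
sudo service sshguard restart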
I know this is an old question, but today I had a similar issue, and the problem was very simple: the IP of the instance had changed after the restart, so I had to update my ssh command accordingly.
This issue often happens after you change your default zone or region.
Then you must update the SSH keys in your metadata with:
sudo gcloud compute config-ssh
You can see also the changes in the web interface under Compute Engine | Metadata | SSH Keys.
Try to SSH into the instance with a different username; Google Compute is a bit shaky at times. Try to SSH into the instance using the VM instance page in Compute Engine. If SSH takes too long and refuses the connection, log in with a different username; you can do that using the settings icon in the top right corner of the SSH window. If none of this goes well, I advise you to create one more instance, as it has also been my experience that Google Compute Engine instances are not stable in terms of SSH accessibility and tend to create problems. It's better to use PuTTY as a client to SSH into Compute Engine than the SSH terminals Google provides. Let me know if that helps you :)
I realized that my problem arose after detaching a disk from the console while the machine was stopped with that disk still mounted. First unmount the disk, then detach it. I have never seen the problem again.
Also make sure you did not recursively modify permissions for the /etc folder.
For example:
chmod -R 775 /etc
This will prevent you from logging back into the VM, even from the Web console and gcloud cli.
Instead, modify permissions at a more granular level, e.g. /etc/nginx.
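A sketch of the granular alternative, scoped to a single service's directory instead of all of /etc:
sudo chmod -R 755 /etc/nginx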
I had the same problem and solved it by first connecting to the VM via the browser SSH (from the "VM instances" web overview), which also failed, and then choosing the offered option of retrying without Cloud Identity-Aware Proxy, which worked.
Afterwards, all SSH connections work again (both in the browser and in a local shell).