Why does my SSH connection hang while connecting to GitHub? - ssh

This is what I tried to do, ten times today, without success:
Make a key with ssh-keygen.
Open ~/.ssh/id_rsa.pub with Gedit or Notepad++ and copy the contents.
Go to account settings on github.com.
Go to SSH Keys.
Click on the Add Key button.
Give the key a title.
Paste the key into the key box.
Save the key (enter my GitHub password to verify).
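For the copy step, I can also grab the key straight from the shell so it stays on one line (a small sketch, assuming Windows' clip.exe is reachable from Cygwin):
# Copy the public key to the Windows clipboard as a single line
clip < ~/.ssh/id_rsa.pub
# or just print it and copy it manually, keeping it on one line
cat ~/.ssh/id_rsa.pub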
Then I run '$ ssh -vT git@github.com' in Cygwin, but it always hangs there. Here is the output:
$ ssh -vT git@github.com
OpenSSH_6.0p1, OpenSSL 1.0.1c 10 May 2012
debug1: Reading configuration data /home/eason.wu/.ssh/config
debug1: /home/eason.wu/.ssh/config line 1: Applying options for github.com
debug1: Reading configuration data /etc/ssh_config
debug1: Connecting to github.com [207.97.227.239] port 22.
debug1: Connection established.
debug1: identity file /home/eason.wu/.ssh/id_rsa type 1
debug1: identity file /home/eason.wu/.ssh/id_rsa-cert type -1
Has anyone else met this problem? Any solution will be appreciated.

Make sure you copied the public key as one line, because copying from an editor can sometimes split the content of the key across several lines.
If you still have an issue, check other SSH debug tips at "Unable to Git-push master to Github".
Running ssh -vvvT git@github.com can display more debug information.
The OP Eason Wu comments:
I found the real reason for this problem: it is caused by my network.
Some websites are blocked by my company, and I think that also affects the GitHub service.
After I turned on a VPN connection and retested with ssh -vvvT git@github.com, it passed successfully.
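If a corporate network blocks outbound port 22, another option besides a VPN is GitHub's SSH-over-HTTPS endpoint. A minimal ~/.ssh/config sketch (an addition of mine, not something the OP reported trying; merge it with any existing Host github.com block):
Host github.com
    HostName ssh.github.com
    Port 443
    User git
After that, ssh -vT git@github.com should go out over port 443 instead of port 22.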

For anyone coming here recently looking for a solution: this was happening to me too, but in the debug output (per the instructions above) the connection to GitHub was never established.
My output looked like:
OpenSSH_7.9p1 Ubuntu-10, OpenSSL 1.1.1b 26 Feb 2019
debug1: Reading configuration data /home/preston/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: resolving "github.com" port 22
debug2: ssh_connect_direct
debug1: Connecting to github.com [2607:7700:0:1a:0:1:c01e:ff70] port 22.
I noticed the IPv6 address in the last line and thought that might be the issue, so I found an answer on changing it to use an IPv4 address in the global SSH settings.
Changing to IPv4 worked.
Source: https://stackoverflow.com/a/35113901/3818056
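For reference, the change is roughly this; a minimal sketch assuming OpenSSH's AddressFamily option, placed either in /etc/ssh/ssh_config or per-user in ~/.ssh/config:
# Force the SSH client to use IPv4 for GitHub
Host github.com
    AddressFamily inet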

For me, the issue was that the router I was connected to was using WPA, not WPA2/3. Once I changed to a network that didn't have this issue, my repo was instantly cloned over SSH.

I solved this by adding "github.com" to my router's whitelist. You can also get around it with a VPN, but that requires an extra set of steps to find and set up a VPN.

Related

Bitbucket Pipeline read_passphrase: can't open /dev/tty: No such device or address

I have a staging server and a production server, and I run identical Bitbucket Pipelines, where I send some commands over SSH. Unfortunately, my pipeline for production always fails with:
Host key verification failed.
I've tried everything: folder permissions, recreating the keys; nothing works.
Finally, by adding -v to my ssh call, I think I'm a step closer, but I'm still lost.
On my staging server, I see something like this:
debug1: Host '$STAGING_SERVER' is known and matches the RSA host key.
debug1: Found key in /root/.ssh/known_hosts:4
debug1: ssh_rsa_verify: signature correct
and the rest of the build follows flawlessly.
On my production server, however, I see the following:
debug1: Host '$PRODUCTION_SERVER' is known and matches the RSA host key.
debug1: Found key in /root/.ssh/known_hosts:5
debug1: read_passphrase: can't open /dev/tty: No such device or address
debug1: permanently_drop_suid: 0
ssh_askpass: exec(/usr/bin/ssh-askpass): No such file or directory
Host key verification failed.
So it looks like the key is found on my production server, but for some reason read_passphrase is being called there. I've just created a new id_rsa and id_rsa.pub key pair with no passphrase, so why is my production server calling read_passphrase? My ssh_config and sshd_config on both servers are identical (checked via diff).
Another way of looking at it is that ssh_rsa_verify is called immediately on the staging server, while on the production server read_passphrase is called.
Any help here would be greatly appreciated, this is driving me crazy!
Hallelujah! Solved! 🥳
Hours wasted for the simplest reason...
I noticed in the full output of the ssh -v on production that Bitbucket was printing out something like this:
debug1: Connecting to $PRODUCTION_SERVER [12.345.567.890] port 22.
whereas the staging output was:
debug1: Connecting to $STAGING_SERVER [$STAGING_SERVER] port 22.
Meaning that on staging the resolved IP was the exact value of that repository variable. (Bitbucket masks secret values out of logs, which is why they appear this way.)
I realized I had incorrectly set the repository variable PRODUCTION_SERVER to the alias for the IP address (i.e. myserver.com) when it should have been the IP address exactly. Changing that value in my repository variables to the IP address fixed the issue! Apparently, the alias name isn't an exact enough match for SSH to be satisfied.
I had the same issue. I solved this problem by:
Go to Repository settings.
Go to SSH keys (in the left navigation).
In the Known hosts section, enter your Bastion host's public IP address.
Then click the Fetch button (the sketch below shows roughly what this does).
Rerun your pipeline.
Please check this for reference.
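If you prefer to handle it inside the pipeline rather than through the UI, the Fetch button is roughly equivalent to recording the server's host key with ssh-keyscan. A minimal sketch, assuming the server address is available in a repository variable named $PRODUCTION_SERVER (adjust the name to your setup):
# Record the server's host key so SSH can verify it non-interactively
mkdir -p ~/.ssh
ssh-keyscan -H "$PRODUCTION_SERVER" >> ~/.ssh/known_hosts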

kex_exchange_identification while connecting to local gitlab instance

I've set up a local instance of gitlab with the following configuration:
version: "3"
services:
gitlab:
image: gitlab/gitlab-ce:latest
container_name: gitlab
hostname: 'gitlab.local.com'
restart: always
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'http://gitlab.local.com:4005'
gitlab_rails['gitlab_shell_ssh_port'] = 3005
ports:
- '4005:4005'
- '3005:3005'
volumes:
- '/srv/gitlab/config:/etc/gitlab'
- '/srv/gitlab/logs:/var/log/gitlab'
- '/srv/gitlab/data:/var/opt/gitlab'
Then I added SSH keys according to the GitLab documentation.
Finally, when connecting to the instance via SSH or cloning a repo, I get the following error:
ssh -Tvv git@gitlab.local.com -p 3005
OpenSSH_8.2p1 Ubuntu-4ubuntu0.2, OpenSSL 1.1.1f 31 Mar 2020
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: include /etc/ssh/ssh_config.d/*.conf matched no files
debug1: /etc/ssh/ssh_config line 21: Applying options for *
debug2: resolving "gitlab.local.com" port 3005
debug2: ssh_connect_direct
debug1: Connecting to gitlab.local.com [0.0.0.0] port 3005.
debug1: Connection established.
debug1: identity file /home/rafael/.ssh/id_rsa type 0
debug1: identity file /home/rafael/.ssh/id_rsa-cert type -1
debug1: identity file /home/rafael/.ssh/id_dsa type -1
debug1: identity file /home/rafael/.ssh/id_dsa-cert type -1
debug1: identity file /home/rafael/.ssh/id_ecdsa type -1
debug1: identity file /home/rafael/.ssh/id_ecdsa-cert type -1
debug1: identity file /home/rafael/.ssh/id_ecdsa_sk type -1
debug1: identity file /home/rafael/.ssh/id_ecdsa_sk-cert type -1
debug1: identity file /home/rafael/.ssh/id_ed25519 type 3
debug1: identity file /home/rafael/.ssh/id_ed25519-cert type -1
debug1: identity file /home/rafael/.ssh/id_ed25519_sk type -1
debug1: identity file /home/rafael/.ssh/id_ed25519_sk-cert type -1
debug1: identity file /home/rafael/.ssh/id_xmss type -1
debug1: identity file /home/rafael/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.2p1 Ubuntu-4ubuntu0.2
kex_exchange_identification: Connection closed by remote host
I've disabled ufw, I've reset known_hosts, and I've tried everything I can think of, but found nothing on the internet that helped me.
Why is this error appearing? It's the only "server" I have problems accessing via SSH...
First check that the ssh daemon in your GitLab Docker container actually listens on port 3005 (a custom port).
See for instance gitlab-org/omnibus-gitlab issue 1767:
I have to say that this issue gave me a very hard time trying to figure things out.
It is really counter-intuitive that gitlab_rails['gitlab_shell_ssh_port'] = 30022 only changes the URI displayed on the web page instead of also changing the port sshd serves on the guest machine.
Besides the subjective feelings above, there are also two facts about the way it currently works:
There is no way to change the SSH port gitlab-shell uses in the Docker container.
When using the Docker container's IP address to access the GitLab server, the port always has to be 22 instead of the one used in the URI.
I would argue that the way the original documentation describes it is the way things should work:
gitlab_rails['gitlab_shell_ssh_port'] should also change the port gitlab-shell is served on, on the guest side.
And:
You have to customize the port inside the file /assets/sshd_config via your Dockerfile.
That was mentioned here.
Since I see "Connection established.", it is possible, since those bug reports, that sshd_config is now correctly modified (automatically)
If that is the case, double-check what public key you have registered to your GitLab profile: it should be /home/rafael/.ssh/id_rsa.pub.
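If the container's sshd does turn out to still listen on port 22, a common workaround (a sketch based on the issue discussion above, not verified against this exact image) is to publish the custom host port onto the container's port 22 and keep gitlab_shell_ssh_port purely as the advertised port:
    ports:
      - '4005:4005'
      - '3005:22'   # host port 3005 -> container sshd on port 22 (assumption)
With that mapping, ssh -T git@gitlab.local.com -p 3005 reaches the container's sshd, while the clone URLs in the web UI still advertise port 3005.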

Not able to change sshd_config on Google Coral dev board

So I got my dev board earlier this week and was trying to get started with it. I was able to reflash it, and my Chromebook can see the device when I run "mdt devices", but when I run "mdt shell" I get an error. I tried ssh directly, and the verbose messages are shown below. My Chromebook could not see the device over the USB-C data connection, but I was able to connect to it via the USB-serial connection and use nmtui to connect the dev board to WiFi (the same network the Chromebook is connected to). From what I can read on Stack Overflow and elsewhere, the problem has to do with the sshd config on the board, which needs to either have PAM disabled or password authentication enabled. I was trying to do that, but then I found that I (the user mendel) cannot edit the /etc/ssh/sshd_config file because mendel is not in sudoers, which is weird, because there is a 99-mendel-sudo in runonce.d that does precisely that (see https://coral.googlesource.com/mendel-minimal/+/refs/heads/master/etc/runonce.d/99-mendel-sudo; I verified this file exists on my dev board).
So, does anyone know a workaround for this issue (a root password?)? I've read several people talking about SSH issues, and all the solutions involve editing sshd_config, which makes sense, of course. The only thing is that none of those pages (on Medium, Stack Overflow, GitHub) ever mention that something special is needed first to add mendel to /etc/sudoers. It seems like either I am missing something or something is broken regarding adding mendel to sudoers.
Here is my mendel Linux version:
mendel@tuned-eft:~$ uname -a
Linux tuned-eft 4.14.98-imx #1 SMP PREEMPT Fri Jul 17 01:15:45 UTC 2020 aarch64 GNU/Linux
mendel@tuned-eft:~$ cat /etc/mendel_version
5.0
mendel@tuned-eft:~$
Here are the ssh messages from my Chromebook:
amiarora@penguin:~$ ssh -v amiarora@tuned-eft c i eth i
OpenSSH_7.9p1 Debian-10+deb10u2, OpenSSL 1.1.1d 10 Sep 2019
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to tuned-eft [10.55.1.187] port 22.
debug1: Connection established.
debug1: identity file /home/amiarora/.ssh/id_rsa type -1
debug1: identity file /home/amiarora/.ssh/id_rsa-cert type -1
debug1: identity file /home/amiarora/.ssh/id_dsa type -1
debug1: identity file /home/amiarora/.ssh/id_dsa-cert type -1
debug1: identity file /home/amiarora/.ssh/id_ecdsa type -1
debug1: identity file /home/amiarora/.ssh/id_ecdsa-cert type -1
debug1: identity file /home/amiarora/.ssh/id_ed25519 type -1
debug1: identity file /home/amiarora/.ssh/id_ed25519-cert type -1
debug1: identity file /home/amiarora/.ssh/id_xmss type -1
debug1: identity file /home/amiarora/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.9p1 Debian-10+deb10u2
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.9p1 Debian-10+deb10u2
debug1: match: OpenSSH_7.9p1 Debian-10+deb10u2 pat OpenSSH* compat 0x04000000
debug1: Authenticating to tuned-eft:22 as 'amiarora'
debug1: SSH2_MSG_KEXINIT sent
Connection closed by 10.55.1.187 port 22
amiarora@penguin:~$
Output of the groups command on the dev board.
mendel@tuned-eft:~$ groups
mendel adm sudo audio video plugdev staff games users netdev input render i2c systemd-journal bluetooth apex
mendel@tuned-eft:~$ sudo sudosh
>>> /etc/sudoers: syntax error near line 28 <<<
sudo: parse error in /etc/sudoers near line 28
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
mendel@tuned-eft:~$
Any help would be much appreciated.
One thing that seems really odd to me is that your mendel user doesn't have sudo access, but it really should by default. Without that, there aren't many options for changing sshd_config or the sudoers file. My best suggestion is to go ahead and reflash the board using these instructions:
https://coral.ai/docs/dev-board/reflash/#flash-the-board
Instead of mdt reboot-bootloader, you may have to reboot the board manually, press any key within the first 3 seconds of it booting up to drop into the u-boot prompt, and then type this at the u-boot prompt to get into fastboot mode:
fastboot 0
For reference, this is what my /etc/sudoers looks like:
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
# Host alias specification
# User alias specification
# Cmnd alias specification
# User privilege specification
root ALL=(ALL:ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
# See sudoers(5) for more information on "#include" directives:
#includedir /etc/sudoers.d
mendel ALL=(ALL) NOPASSWD: ALL
I recommend still using key-pair authentication, but use the USB-serial connection to set it up:
Generate a key pair (e.g. with ssh-keygen).
On your Chromebook, run mdt setkey [private key].
Copy the public key to your clipboard.
On your device (via USB serial), edit ~/.ssh/authorized_keys (you will likely need to create both .ssh and authorized_keys) and paste in your public key; a rough sketch of these steps is shown below.
MDT should now work as expected (I like to use mdt set preferred-device [ip addr] so I don't need to add the IP address to commands).
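A minimal sketch of those steps, assuming the usual OpenSSH file layout; the key type, filename, and comment are just placeholders:
# On the Chromebook: generate a key pair and register the private key with MDT
ssh-keygen -t ed25519 -f ~/.ssh/coral_key -C "coral-dev-board"
mdt setkey ~/.ssh/coral_key

# On the dev board (over the USB-serial console): install the matching public key
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo "PASTE_PUBLIC_KEY_HERE" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys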
As for the sudoers question, it's surprising to hear that mendel doesn't have sudo access. Checking on my board:
mendel@elusive-dog:~$ groups
mendel adm sudo audio video plugdev staff games users netdev input render i2c systemd-journal bluetooth apex
Can you verify?

SSH Connection closed by remote host: having the Security Group SSH inbound permission set to a specific IP address

I am trying to connect to an AWS EC2 server from my local system using SSH. It connects to the instance when the security group's inbound permission for SSH is set to allow connections from anywhere, but whenever it is set to a specific IP address, it gives Connection closed by remote host. I'm getting the following error while connecting:
sudo ssh -vvv -i {$pemfile} ubuntu@{domain_name}
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to {HOST_NAME HERE} port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug3: Incorrect RSA1 identifier
debug3: Could not load "{KEY_PATH HERE}" as a RSA1 public key
debug1: identity file {KEY_PATH HERE} type -1
debug1: identity file {KEY_PATH HERE} type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.6.1p1 Ubuntu-2ubuntu2.6
ssh_exchange_identification: Connection closed by remote host
Are you sure you're using the correct IP? You might be using the IP you get from something like http://formyip.com/, while what the EC2 instance sees might be a different one.
Try the following:
Change the setting to allow all IPs.
SSH into your instance.
Check the logs at /var/log/auth.log (not completely sure about the logfile location); see the sketch below.
Identify the IP of your last successful login attempt (which might or might not be the same as the one you got from the website above).
Use the IP from the logs (if different) in the security group settings.
TRY AGAIN :D
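A rough sketch of the IP and log checks, assuming an Ubuntu-style instance where sshd logs to /var/log/auth.log (the log path and the checkip service are assumptions; adjust as needed):
# On your local machine: see which public IP the outside world sees for you
curl https://checkip.amazonaws.com

# On the EC2 instance (while the security group temporarily allows all IPs):
# list the source IPs of the most recent accepted SSH logins
grep 'Accepted' /var/log/auth.log | tail -n 5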
I didn't look into the details, but simply restarting the instance solved my problem.
Open your instance in the AWS web interface and go to the "Connect" tab; it shows the SSH connection information. Make sure you are following it correctly.
My problem was using ec2-user as the username instead of ubuntu@ec2-my-ip.aws.com.

SSH: Connection closed by remote server

I am trying to log in to my remote server over SSH, but whenever I try to log in through the terminal using the ssh command:
ssh root@{ip_address}
I get the error:
Connection closed by {ip_address}
I checked hosts.deny and hosts.allow; there is nothing in those files. I don't understand why it is happening.
It happened when I changed my workstation and my key changed. When I tried to log in over SSH, it asked to add the host key; I entered yes, and then it closed the connection.
Is there any way to connect with SSH again?
Your help is appreciated.
Thank you.
Edit:
Output of ssh -v -v -v -v root@{ip_address} is:
OpenSSH_5.9p1 Debian-5ubuntu1.1, OpenSSL 1.0.1 14 Mar 2012
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Connecting to {ip_address} [{ip_address}] port 22.
debug1: Connection established.
debug3: Incorrect RSA1 identifier
debug3: Could not load "/home/mona/.ssh/id_rsa" as a RSA1 public key
debug1: identity file /home/mona/.ssh/id_rsa type 1
debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
debug1: identity file /home/mona/.ssh/id_rsa-cert type -1
debug1: identity file /home/mona/.ssh/id_dsa type -1
debug1: identity file /home/mona/.ssh/id_dsa-cert type -1
debug1: identity file /home/mona/.ssh/id_ecdsa type -1
debug1: identity file /home/mona/.ssh/id_ecdsa-cert type -1
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1.1
debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH*
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_5.9p1 Debian-5ubuntu1.1
debug2: fd 3 setting O_NONBLOCK
debug3: load_hostkeys: loading entries for host "{ip_address}" from file "/home/mona/.ssh/known_hosts"
debug3: load_hostkeys: loaded 0 keys
debug1: SSH2_MSG_KEXINIT sent
Connection closed by 151.236.220.15
I had the same issue, but a simple remote server reboot helped.
Are you sure your server is permitting root logins via SSH?
If not, I suggest using a different account with sudo privileges instead of enabling root login, especially if the server's SSH port is accessible from the whole internet.
Try sudo ssh root@{ip_address}; it works for me.
I tried to connect with a user which had :/bin/false as its shell in /etc/passwd. After changing it to :/bin/bash, the connection was no longer closed.
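For reference, a minimal sketch of making that change without editing /etc/passwd by hand (the username is a placeholder):
# Give the account a real login shell (run as root or via sudo)
chsh -s /bin/bash someuser
# or, equivalently:
usermod -s /bin/bash someuser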
I had a similar issue that was resolved by lowering the MTU on the client side with the following command:
ip li set mtu 1400 dev eth0
I found this solution in a separate thread on Server Fault.
I was getting the same "Connection closed by {ip_address}" error on one of my SSH connections. I tried all the usual solutions and nothing worked. Finally I found that the ~/.ssh/authorized_keys file on the host was corrupted. Someone had tried to append a key to the file, but they had copied and pasted it with embedded line feeds where each line wrapped at the end. So what should have been one continuous string spanning three lines was actually three separate strings, one per line. Since each embedded line feed fell exactly at the end of a line, the corruption was not apparent just from looking at the file.
I deleted the offending key and added my own. Then everything worked as expected.
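If you suspect the same kind of corruption, a rough way to check is to ask ssh-keygen for a fingerprint of each entry; this is only a sketch and assumes an OpenSSH ssh-keygen that can read a key from /dev/stdin:
# Any line that is not a valid public key (e.g. a wrapped fragment) produces an error
while IFS= read -r line; do
    case "$line" in ''|'#'*) continue ;; esac   # skip blanks and comments
    printf '%s\n' "$line" | ssh-keygen -lf /dev/stdin \
        || echo "Suspect entry: $(printf '%s' "$line" | cut -c1-40)..."
done < ~/.ssh/authorized_keys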
I temporarily disabled my antivirus firewall, and this may have helped a bit.
Now it suddenly says "Shell access is not enabled on yr account! Connection closed."
So I logged into my WHM at server.domain_name:2087, clicked on Modify domain, and enabled Shell Access for the website.
(Or ask your hosting provider to enable SSH for you if you do not have a WHM server.)
Login succeeded; it now says:
Last login: 03:37 from . [user@whm_domain_name ~]$
I myself had the same problem while working with the Cloud9 editor. Mine was caused by high CPU usage. It would work fine again after stopping the Apache connection.
Check the name being used to connect to the FTP site; it's either wrong or multiple names are being sent for authentication.