GitLab CI/CD using SSH / knownhosts error - ssh

I'm trying to use GitLab CI/CD to auto-deploy my code after a push to a specific branch (in my case the 'staging' branch).
After a push to 'staging' I see the following error in the Jobs section of the GitLab UI:
Running with gitlab-runner 15.0.0 (xxxxxx)
on deploy xxxxxx
Preparing the "ssh" executor
00:36
Using SSH executor...
ERROR: Preparation failed: ssh command Connect() error: ssh Dial() error: ssh: handshake failed: knownhosts: key is unknown
I can reach GitLab from my VM, and gitlab-runner registered successfully earlier.
I've also created an SSH key and added it during the gitlab-runner installation steps.

You need to check which SSH URL is used in your case.
Something like git@gitlab.com:me/myProject would look for gitlab.com SSH host key fingerprints in a ~/.ssh/known_hosts file.
Make sure to first add the following to ~/.ssh/known_hosts on the gitlab-runner server:
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
That will skip manual fingerprint confirmation in SSH.
In other words, no more "knownhosts: key is unknown".
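Instead of pasting those lines by hand, you could also fetch them with ssh-keyscan (a sketch; compare the output against the fingerprints GitLab publishes before trusting it):
ssh-keyscan -t rsa,ecdsa,ed25519 gitlab.com >> ~/.ssh/known_hosts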
Note that with GitLab 15.3 (August 2022), you will have an easier time finding those:
New links to SSH fingerprints
Your GitLab SSH fingerprints are now easier to find, thanks to new links on the SSH configuration page and in the documentation.
Thank you Andreas Deicha for your contribution!
See Documentation and Issue.

For people who still encounter this issue: in our case the cause was a mismatch between the host name in the known_hosts file and the one in the runner's config.toml file. They must both be fully qualified or both unqualified.
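For illustration, a minimal sketch of what has to line up (the host name, user and paths below are placeholders, not taken from the question). In the runner's config.toml:
[[runners]]
  executor = "ssh"
  [runners.ssh]
    host = "deploy01.example.com"
    user = "deployer"
    identity_file = "/home/gitlab-runner/.ssh/id_ed25519"
and in ~/.ssh/known_hosts on the runner machine the entry must use the same form of the name:
deploy01.example.com ssh-ed25519 AAAA... (key truncated)
If config.toml said just deploy01 while known_hosts had deploy01.example.com (or vice versa), the lookup fails with the same "knownhosts: key is unknown" error.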

Related

How to clone gitlab repo over tor using ssh?

Error message
After adding the SSH key of a user of a GitLab server and repository hosted over Tor, a test was performed that tried to clone a private repository (to which the testing user is added) over Tor. The cloning was attempted with the command:
torsocks git clone git@some_onion_domain.onion:root/test.git
Which returns the error:
Cloning into 'test'...
1620581859 ERROR torsocks[50856]: Connection refused to Tor SOCKS (in socks5_recv_connect_reply() at socks5.c:543)
ssh: connect to host some_onion_domain.onion port 22: Connection refused
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
GitLab SSH Cloning Verification
However, to verify that SSH access is available to the test user, cloning was verified without Tor using the command:
git clone git@127.0.0.1:root/test.git
Which successfully returned:
Cloning into 'test'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (3/3), done.
Server-side hypothesis
My first guess is that it is a server-side issue that has to do with the lack of HTTPS, given the following setting in the /etc/gitlab/gitlab.rb file:
external_url 'http://127.0.0.1'
However, setting external_url 'https://127.0.0.1' requires an HTTPS certificate, e.g. from Let's Encrypt, which seems not to be available for onion domains.
Client-side hypothesis
My second guess is that it is a client-side issue: some SOCKS setting is incorrect on the side of the test user running the torsocks command, similar to an issue with the SOCKS 5 protocol that seems to be described here.
Question
Hence I would like to ask:
How can I resolve the connect to host some_onion_domain.onion port 22: Connection refused error when users try to clone the repo over tor?
One can set the ssh port of the GitLab instance to 9001, e.g. with:
sudo docker run --detach \
--hostname gitlab.example.com \
--publish 443:443 --publish 80:80 --publish 22:9001 \
--name gitlab \
--restart always \
--volume $GITLAB_HOME/config:/etc/gitlab \
--volume $GITLAB_HOME/logs:/var/log/gitlab \
--volume $GITLAB_HOME/data:/var/opt/gitlab \
gitlab/gitlab-ee:latest
Next, add port 9001 and port 22 to the ssh configuration in /etc/ssh/sshd_config by adding:
Port 9001
Port 22
then restart the ssh service with: systemctl restart ssh.
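To confirm sshd actually picked up both ports after the restart, you could check what it is listening on (a sketch, assuming the iproute2 ss utility is installed):
sudo ss -tlnp | grep sshd
which should show LISTEN entries for both port 22 and port 9001.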
It is essential to add a public SSH key to the GitLab server for each computer you want to clone the repo from, even if you only want to clone a public repository. You can make a new GitLab account for each computer, or add multiple public SSH keys to a single GitLab account. These instructions explain how to do that; tl;dr:
ssh-keygen -t ed25519
<enter>
<enter>
<enter>
systemctl restart ssh
xclip -sel clip < ~/.ssh/id_ed25519.pub
PS: if xclip does not work, one can manually copy the SSH key with: cat ~/.ssh/id_ed25519.pub.
Then open a browser and go to https://gitlab.com/-/profile/keys; for your own Tor GitLab server that would be someoniondomain.onion/-/profile/keys, and paste the key in there.
That is it, now one can clone the repository over tor with:
torify -p 22 git clone ssh://git@someoniondomain.onion:9001/root/public.git
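Alternatively, you can put the user, port and Tor proxy into ~/.ssh/config so plain git commands work without extra flags (a sketch, assuming OpenBSD netcat and Tor's default SOCKS port 9050):
Host someoniondomain.onion
  User git
  Port 9001
  ProxyCommand nc -x 127.0.0.1:9050 %h %p
After that, git clone git@someoniondomain.onion:root/public.git should route over Tor by itself.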
Note
As a side note, in the question I happened to test git clone git@127.0.0.1:root/test.git; however, instead of 127.0.0.1 I should have used either the output of hostname -I or the public IP address of the device that hosts the GitLab server. Furthermore, I should have verified whether the GitLab server was accessible through SSH by testing:
ssh -T git@youronionserver.onion
Which should return Congratulations.... It would not have done so had I tested it, indicating that the problem was in the SSH access to the GitLab server (or the SSH connection to the device). I could have determined whether the SSH problem was with the device or with the GitLab SSH server by testing whether I could log into the device with ssh deviceusername@device_ip; that would have succeeded, indicating the SSH problem was at the GitLab server.

Unable to connect from bitbucket pipelines to shared hosting via ssh

What I need to do is SSH into a public server (which is shared hosting) and run a script that starts the deployment process.
I followed what's written here:
I've created a key pair in Settings > Pipelines > SSH Keys
Then I've added the IP address of the remote server
Then I've appended the public key to the remote server's ~/.ssh/authorized_keys file
When I try to run this pipeline:
image: img-name
pipelines:
  branches:
    staging:
      - step:
          deployment: Staging
          script:
            - ssh remote_username@remote_ip:port ls -l
I have the following error:
Could not resolve hostname remote_ip:port: Name or service not known
Please help!
The SSH command doesn't take the ip:port syntax. You'll need to use a different format:
ssh -p port user@remote_ip "command"
(This assumes that your remote_ip is publicly-accessible, of course.)
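Applied to the pipeline from the question, the step would then look roughly like this (the port 2222 is only an example; use whatever SSH port your host exposes):
image: img-name
pipelines:
  branches:
    staging:
      - step:
          deployment: Staging
          script:
            - ssh -p 2222 remote_username@remote_ip "ls -l"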

How do you find your GitLab host name (to test your SSH key)?

I just created a personal GitLab account and am trying to follow the steps on
https://gitlab.com/help/ssh/README
to deploy my SSH key to GitLab. I've completed up to step 5, and see my SSH key among 'Your SSH keys' in my User Settings -> SSH keys:
I'm trying to now complete the optional 6th step, testing the key:
My GitLab username is khpeek, so I guessed my 'GitLab domain' is gitlab.com/khpeek. However, the test command
ssh -T git@gitlab.com/khpeek
yields an error message:
ssh: Could not resolve hostname gitlab.com/khpeek: Name or service not known
Apparently this is the wrong hostname. What would be the right one?
If you're using GitLab on gitlab.com, then the domain is simply gitlab.com, so you should run ssh -T git@gitlab.com
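If the key is set up correctly, that test greets you by username, roughly like this (khpeek is the username from the question):
ssh -T git@gitlab.com
Welcome to GitLab, @khpeek!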
Go to "Clone with SSH" and check the URL. It should look like this: git@hostname:project.git. The git@hostname part is your host.
Open your GitLab account or repository online (any page will do, just make sure you're logged in). Then check the URL. The domain together with the TLD is your hostname. E.g.
www.gitlab.your.institution.com/...
So in this case gitlab.your.institution.com is your hostname.
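If you already have a local clone that uses the SSH remote, you can also read the host straight from the remote URL instead of the browser (a sketch; the output shown is only an example):
git remote get-url origin
# prints something like git@gitlab.your.institution.com:group/project.git
# the part between git@ and : is the hostname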

glassfish4 create-node-ssh failed due to ssh key exchange not finished

I'm trying to create a node on a remote host (I've already created a domain).
I'm running the command:
asadmin -p <port_number> create-node-ssh --nodehost <remote_hostname> --installdir <glassfish_installed_dir_path> <node_name>
and I get the following error every time:
remote failure: Warning: some parameters appear to be invalid.
SSH node not created. To force creation of the node with these parameters rerun the command using the --force option.
Could not connect to host <hostname> using SSH.
There was a problem while connecting to <hostname>:22
Key exchange was not finished, connection is closed.
Command create-node-ssh failed.
From the error it seems there is some connection problem, but I can SSH to the target server and I'm using the same key pair.
After searching for a solution (link1, link2), I found that being able to log in through SSH without a password could solve this.
But no luck. Now I can SSH to and from the target server without a password as well, but the issue is still there.
What should I check in order to resolve this?
Let me know if I'm missing anything.
Can you try to start the sshd daemon in debug mode on a different port on the remote node host:
sudo sshd -D -d -e -p 23
and try the create-node-ssh command against that SSH port?
asadmin -p <port_number> create-node-ssh --nodehost <remote_hostname> --installdir <glassfish_installed_dir_path> --sshport 23 <node_name>
I had an issue with GlassFish SSH key exchange because newer SSH versions deprecate older algorithms:
Unable to negotiate with X.X.X.X port XXXXX: no matching key exchange method found. Their offer: diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 [preauth]
My solution was to add legacy keys to /etc/ssh/sshd_config:
KexAlgorithms +diffie-hellman-group1-sha1
Ciphers +aes128-cbc
Even if this is not your case, the sshd debug output will surely give you more information.
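After adding those lines, sshd has to re-read its configuration, and you can confirm the server now accepts the legacy algorithms before retrying create-node-ssh (a sketch, assuming an OpenSSH client and a systemd-managed host; the hostname is a placeholder):
sudo sshd -t && sudo systemctl restart sshd    # on the remote node host
ssh -o KexAlgorithms=+diffie-hellman-group1-sha1 -o Ciphers=+aes128-cbc user@remote_hostname echo ok    # from the DAS host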

Issue remoting into a device and doing a simple ping test with Ansible

After following instructions both online and in a couple of books, I am unsure of why this is happening. I have a feeling there is a missing setting, but here is the setup:
I am attempting to use the command:
ansible all -u $USER -m ping -vvvv
Obviously I'm using -vvvv for debugging, but there isn't much output aside from the fact that it says it's attempting to connect. I get the following error:
S4 | FAILED => FAILED: Authentication failed.
S4 stands for switch 4, a Cisco switch I am attempting to automate configuration and show commands on. I know 100% the password I set in the host_vars file is correct, as it works when I use it from a standard SSH client.
Here are my non-default config settings in the ansible.cfg file:
[defaults]
transport=paramiko
hostfile = ./myhosts
host_key_checking=False
timeout = 5
My myhosts file:
[cisco-switches]
S4
And my host_vars file for S4:
ansible_ssh_host: 192.168.1.12
ansible_ssh_pass: password
My current version is 1.9.1, running on a CentOS VM. I do have an ACL applied on the management interface of the switch, but it allows remote connections from this particular IP.
Please advise.
Since you are using Ansible to automate commands on a Cisco switch, I guess you want to perform the SSH connection to the switch without being prompted for a password or being asked to press [Y/N] to confirm the connection.
To do that I recommend configuring the Cisco IOS SSH server on the switch to perform RSA-based user authentication.
First of all you need to generate an RSA key pair on your Linux box:
ssh-keygen -t rsa -b 1024
Note: You can use 2048 instead of 1024, but consider that some IOS versions will accept a maximum of 254 characters for the SSH public key.
At switch side:
conf t
ip ssh pubkey-chain
username test
key-string
Paste the entire public key as it appears in cat ~/.ssh/id_rsa.pub,
including the ssh-rsa prefix and the username@hostname comment.
Please note that some IOS versions will accept
a maximum of 254 characters.
You can paste multiple lines.
exit
exit
If you need the 'test' user to be able to execute privileged IOS commands:
username test privilege 15 secret _TEXT_CLEAR_PASSWORD_
Then test your connection from your Linux box in order to add the switch to the known_hosts file. This will only happen once for each switch/host not found in the known_hosts file:
ssh test@10.0.0.1
The authenticity of host '10.0.0.1 (10.0.0.1)' can't be established.
RSA key fingerprint is xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:d6:4b:d1:67.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.0.0.1' (RSA) to the list of known hosts.
ciscoswitch#
ciscoswitch#exit
Finally, test the connection using Ansible over SSH and the raw module, for example:
ansible inventory -m raw -a "show env all" -u test
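With the switch set up for RSA-based authentication, the host_vars file for S4 can drop the clear-text password and point at the key instead (a sketch using the Ansible 1.9-era variable names from the question; the key path is a placeholder):
ansible_ssh_host: 192.168.1.12
ansible_ssh_user: test
ansible_ssh_private_key_file: ~/.ssh/id_rsa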
I hope you find it useful.