How to execute commands on a private server via SSH from a private GitLab shell runner

Situation:
shell gitlab runner, certificate configured, ssh connected as follows:
ssh-keygen --> id_rsa & id_rsa.pub
ssh-copy-id <user>@<remotehost>
ssh <user>@<remotehost> works as designed
id_rsa -> gitlab cicd variable called 'SSH_PRIVATE_KEY'
gitlab-ci as follows:
before_script:
  - echo "Before script section"
  # Install ssh-agent if not already installed, it is required by Docker.
  # (change apt-get to yum if you use a CentOS-based image)
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  # Run ssh-agent (inside the build environment)
  - eval $(ssh-agent -s)
  # Add the SSH key stored in the SSH_PRIVATE_KEY variable to the agent store
  - ssh-add < ~/.ssh/id_rsa
  - ssh-add -l
build1:
  stage: build
  script:
    - echo "Pulling on Dev\n"
    - ssh -A <user>@<remotehost>
    - hostname
    - ssh-agent bash -c 'hostname'
    - ssh-agent bash -c 'awk "NR==1{print;exit}" /etc/php7/php.ini'
Complication:
When commands are executed via gitlab-ci after the ssh connection, they appear to run on the GitLab machine (PHP is installed on the remote system, not on the GitLab runner).
See gitlab job output below:
...
$ eval $(ssh-agent -s)
Agent pid 1234
$ ssh-add < ~/.ssh/id_rsa
Identity added: /home/gitlab-runner/.ssh/id_rsa (/home/gitlab-runner/.ssh/id_rsa)
$ ssh-add -l
4096 SHA256:<KEY> /home/gitlab-runner/.ssh/id_rsa (RSA)
# same behaviour with ssh -T <user>@<ipaddress> -p <portnumber>
$ ssh -A <user>@<ipaddress> -p <portnumber>
Pseudo-terminal will not be allocated because stdin is not a terminal.
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
$ hostname
gitlab
$ ssh-agent bash -c 'hostname'
gitlab
$ ssh-agent bash -c 'awk "NR==1{print;exit}" /etc/php7/php.ini'
awk: cannot open /etc/php7/php.ini (No such file or directory)
How do I need to configure the system so that the commands are actually run on the remote system?

I'm currently working with a solution which seems a bit too dirty to me.
In the gitlab-ci I'm pulling and running phpunit as follows:
ssh -T <user>@<remotehost> "cd /var/www/projectfolder; git pull https://<gitlabUser>:$GITLAB_TOKEN@<privateGitlab>/<gitRepo>.git;"
ssh -T <user>@<remotehost> "cd /var/www/projectfolder/tests; phpunit;"
i.e., I'm opening a new ssh connection for each command I'd like to run, which doesn't quite seem right to me. Any suggestions are welcome!

Edit: As per your suggestion, a single ssh command:
ssh -T <user>@<remotehost> "cd /var/www/projectfolder; git pull https://<gitlabUser>:$GITLAB_TOKEN@<privateGitlab>/<gitRepo>.git; cd /var/www/projectfolder/tests; phpunit;"
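Another option that keeps everything in one SSH session is to feed the command list through a heredoc. A minimal sketch using the same placeholders as above (the unquoted EOF means $GITLAB_TOKEN is expanded on the runner before the script is sent to the remote host):

- |
  ssh -T <user>@<remotehost> << EOF
  cd /var/www/projectfolder
  git pull https://<gitlabUser>:$GITLAB_TOKEN@<privateGitlab>/<gitRepo>.git
  cd /var/www/projectfolder/tests
  phpunit
  EOF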

Related

Gitlab CI/CD issue with SSH config file

I am trying to deploy my first project to my production server. Here is the script for the deployment stage:
deploy_production:
  stage: deploy
  script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$SSH_PRIVATE_KEY")
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "ssh -p 69" "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - ./vendor/bin/envoy run deploy
  environment:
    name: production
  when: manual
  only:
    - main
When I run the stage, I get this error:
[myServer@xxx.xxx.x.x]: /home/php/.ssh/config: line 1: Bad configuration option: ssh
[myServer@xxx.xxx.x.x]: /home/php/.ssh/config: terminating, 1 bad configuration options
[✗] This task did not complete successfully on one of your servers.
Why is it trying to access the SSH config at this path:
/home/php/.ssh/config
Why is it trying to access the SSH config at this path?
This is related to the account used by gitlab-ci: ssh looks for its settings in $HOME/.ssh, so first display what $HOME is.
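For instance, a trivial check you could add at the top of the job script (a sketch, not part of the original answer):

- echo "HOME=$HOME"
- whoami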
If you look at the official documentation, you will see that an SSH setup relies on the proper rights being associated with the SSH folders/files:
before_script:
  ##
  ## Install ssh-agent if not already installed, it is required by Docker.
  ## (change apt-get to yum if you use an RPM-based image)
  ##
  - 'command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y )'
  ##
  ## Run ssh-agent (inside the build environment)
  ##
  - eval $(ssh-agent -s)
  ##
  ## Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
  ## We're using tr to fix line endings which makes ed25519 keys work
  ## without extra base64 encoding.
  ## https://gitlab.com/gitlab-examples/ssh-private-key/issues/1#note_48526556
  ##
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  ##
  ## Create the SSH directory and give it the right permissions
  ##
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
I mentioned before a chmod 400 my_private_key if you store a key in ~/.ssh.
And to be safe, I would add a chmod 600 ~/.ssh/config.
The point is: if the rights are too open, SSH will refuse to operate.
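As for the reported error itself: the echo writes the literal text ssh -p 69 as the first line of ~/.ssh/config, and "ssh" is not a valid configuration option. A possible fix, assuming port 69 really is the intended SSH port, is to express it as a Port directive instead:

- mkdir -p ~/.ssh && chmod 700 ~/.ssh
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tPort 69\n\tStrictHostKeyChecking no\n" > ~/.ssh/config'
- chmod 600 ~/.ssh/config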

rsync not finding local directory when sending through SSH on pipeline

We are using Bitbucket Pipelines to push to our remote server from the build produced by the pipeline.
This is a snippet of the bitbucket-pipelines.yml file
- pipe: atlassian/ssh-run:0.2.2
  variables:
    SSH_USER: $PRODUCTION_USER
    SERVER: $PRODUCTION_SERVER
    COMMAND: '''rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:home/$PRODUCTION_USER'''
    PORT: '22007'
The connection itself works, and the command is passed through to the server once connected...
INFO: Executing the pipe...
INFO: Using default ssh key
INFO: Executing command on {HOST}
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}'
bash: rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 /opt/atlassian/pipelines/agent/build/ {USER}@{HOST}:home/{USER}: No such file or directory
Connection to {HOST} closed.
I've tried to run the same command locally from the directory on my machine
ssh -A -tt -i /root/.ssh/pipelines_id -o StrictHostKeyChecking=no -p 22007 {USER}@{HOST} 'rsync -zrSlh -e "ssh -p 22007" --stats --max-delete=0 "$PWD" {USER}@{HOST}:/home/{USER}'
but it just duplicates the home directory on the remote.
It looks to me like it's looking for the source directory on the server and not looking at the docker container from bitbucket (or the files on my local machine with pwd).
If I try to run the command without the quotes, it fails because it uses port 22 by default. I've also tried moving the command into a bash script and using MODE: 'Script', which is an acceptable pattern for the plugin, but I can't use my environment variables in the sh file.
If all you want to do is copy the files from the pipeline to the production server, you should use the rsync-deploy pipe instead of ssh-run. Your pipe configuration is going to look pretty much like the following:
script:
  - pipe: atlassian/rsync-deploy:0.3.2
    variables:
      USER: $PRODUCTION_USER
      SERVER: $PRODUCTION_SERVER
      REMOTE_PATH: 'home/$PRODUCTION_USER'
      LOCAL_PATH: 'build'
      SSH_PORT: '22007'
Make sure to configure your SSH keys in pipelines properly (here is a link to our docs for configuring SSH keys https://confluence.atlassian.com/bitbucket/use-ssh-keys-in-bitbucket-pipelines-847452940.html)
I've found another way around this that doesn't need a plugin: instead, I'm running rsync as a script step.
image: atlassian/default-image:latest
- rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER
It seems the -e none is an important addition, as is loading the Atlassian image, since rsync is not found otherwise. I found this info in a post on Atlassian Community.
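Put together as a complete step, this looks roughly like the following (the step name and the excludes file are assumptions; the rsync line is the one above):

image: atlassian/default-image:latest
pipelines:
  default:
    - step:
        name: deploy
        script:
          - rsync -rltDvzCh --max-delete=0 --stats --exclude-from=excludes -e 'ssh -e none -p 22007' $BITBUCKET_CLONE_DIR/ $PRODUCTION_USER@$PRODUCTION_SERVER:/home/$PRODUCTION_USER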
This seems to work pretty well for me
image: node:10.15.3
pipelines:
  default:
    - step:
        name: <project-path>
        script:
          - apt-get update && apt-get install -y rsync
          - ssh-keyscan -H $SSH_HOST >> ~/.ssh/known_hosts
          - cd $BITBUCKET_CLONE_DIR
          - rsync -r -v -e ssh . $SSH_USER@$SSH_HOST:/<project-path>
          - ssh $SSH_USER@$SSH_HOST 'cd <project-path> && npm install'
          - ssh $SSH_USER@$SSH_HOST 'pm2 restart 0'
Note: avoid using sudo commands in pipeline scripts.
Same issue with atlassian/default-image:3:
rsync -azv ./project_path/*
bash: rsync: command not found
Solution:
apt-get update && apt-get install -y rsync

Gitlab CI/CD: Deploy to ubuntu server using ssh keys (using a windows shell runner)

Hello everyone, I need your help please. I'm using GitLab CI/CD and trying to deploy my .jar application to an Ubuntu server. I configured my GitLab project with a Windows runner with shell executor, and I configured key-based access on the runner to avoid being prompted for a password.
The following command runs successfully when I log in to the runner machine and use its PowerShell:
scp -i C:\Users\Administrators\ssh\id_rsa myapp-0.0.1-SNAPSHOT.jar username@myubuntuserver:/
But when I use the above command in my .yml file to copy the .jar to the server, it doesn't give any response until the job fails due to a timeout.
I also tried the solution proposed at https://docs.gitlab.com/ee/ci/ssh_keys/ by setting an SSH_PRIVATE_KEY variable on my project, but I'm unable to adapt the given before_script to my Windows runner.
This is the before_script proposed in the documentation (above link):
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - eval $(ssh-agent -s)
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
When the previous scp command is replaced by this:
ssh -iv C:\Users\Administrators\ssh\id_rsa username@myubuntuserver
I get the following output (screenshot not reproduced here).
Thanks in advance
It works after doing the following steps:
1) Configure the runner (shell executor) on Ubuntu 18.04.
2) Then, from the terminal, log in as the gitlab-runner user: sudo su - gitlab-runner
3) Run ssh-keygen -t rsa
4) Run ssh -i ~/.ssh/id_rsa username@myubuntuserver
5) Run cat ~/.ssh/id_rsa.pub | ssh username@myubuntuserver "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
6) Now you can add the following to your job script (yml file) and it should work:
- scp -i ~/.ssh/id_rsa fileToCopy username@myubuntuserver:/mydirectory
# you can execute multiple commands at a time, for example:
- ssh username@myubuntuserver "mv /mydirectory/myapp-0.0.1-SNAPSHOT.jar /mydirectory/myapp.jar"
Hope it will help.
If ssh -iv C:\Users\Administrators\ssh\id_rsa username@myubuntuserver does not work, that may be because of the C: part, which confuses ssh into thinking C is the name of the server!
A Unix-like path would work:
ssh -iv /C/Users/Administrators/ssh/id_rsa username@myubuntuserver
But, as the OP Medmahmoud comments, this supposes the public key has been published on the server:
Configure the runner on Ubuntu 18.04.
Then from the terminal log in as the gitlab-runner user:
sudo su - gitlab-runner
ssh-keygen -t rsa
ssh -i ~/.ssh/id_rsa username@myubuntuserver
cat ~/.ssh/id_rsa.pub | ssh username@myubuntuserver \
"mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"
Now from your yml file the following should work:
- scp -i ~/.ssh/id_rsa pom.xml username@myubuntuserver:/mydirectory
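One more thing worth checking when a job hangs until it times out like this: the very first connection from the runner can be stuck on the host-key confirmation prompt. With the runner configured on Ubuntu as above, a hedged addition to the job script (the host name is the placeholder from the question) pre-seeds known_hosts so ssh never has to ask:

- mkdir -p ~/.ssh
- ssh-keyscan -H myubuntuserver >> ~/.ssh/known_hosts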

Gitlab CI - SSH Permission denied (publickey,password)

I've been trying to set up CD for my project. My GitLab CI runner and my project will be on the same server. I've followed https://docs.gitlab.com/ee/ci/examples/deployment/composer-npm-deploy.html but I keep getting an SSH Permission denied (publickey,password) error. All my variables, private key, and other settings are configured correctly in the project settings.
I've created my ssh key with the ssh-keygen -t rsa -C "my.email@example.com" -b 4096 command with no passphrase and set my PRODUCTION_PRIVATE_KEY variable to the content of the ~/.ssh/id_rsa file.
This is my gitlab-ci.yml:
stages:
  - deploy
deploy_production:
  stage: deploy
  image: tetraweb/php
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - ssh-add <(echo "$PRODUCTION_PRIVATE_KEY")
    - mkdir -p ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - apt-get install rsync
  script:
    - ssh $PRODUCTION_SERVER_USER@$PRODUCTION_SERVER
    - hostname
  only:
    - master
And this is output from Gitlab CI runner:
Running with gitlab-ci-multi-runner 9.2.0 (adfc387)
on ci-test (1eada8d0)
Using Docker executor with image tetraweb/php ...
Using docker image sha256:17692e06e6d33d8a421441bbe9adfda5b65c94831c6e64d7e69197e0b51833f8 for predefined container...
Pulling docker image tetraweb/php ...
Using docker image tetraweb/php ID=sha256:474f639dc349f36716fb98b193e6bae771f048cecc9320a270123ac2966b98c6 for build container...
Running on runner-1eada8d0-project-3287351-concurrent-0 via lamp-512mb-ams2-01...
Fetching changes...
HEAD is now at dfdb499 Update .gitlab-ci.yml
Checking out dfdb4992 as master...
Skipping Git submodules setup
$ which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )
/usr/bin/ssh-agent
$ eval $(ssh-agent -s)
Agent pid 12
$ ssh-add <(echo "$PRODUCTION_PRIVATE_KEY")
Identity added: /dev/fd/63 (rsa w/o comment)
$ mkdir -p ~/.ssh
$ echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
$ apt-get install rsync
Reading package lists...
Building dependency tree...
Reading state information...
rsync is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
$ ssh $PRODUCTION_SERVER_USER@$PRODUCTION_SERVER
Pseudo-terminal will not be allocated because stdin is not a terminal.
Warning: Permanently added '{MY_SERVER_IP}' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password).
ERROR: Job failed: exit code 1
Thanks in advance.
You need to add the public key to the server so it is recognized as an authentication key. That is, paste the content of the public key corresponding to the private key you are using into ~/.ssh/authorized_keys on the $PRODUCTION_SERVER.
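A minimal sketch of doing that from a machine that can already log in to the server (the variables are the ones used in the question):

ssh-copy-id $PRODUCTION_SERVER_USER@$PRODUCTION_SERVER
# or manually:
cat ~/.ssh/id_rsa.pub | ssh $PRODUCTION_SERVER_USER@$PRODUCTION_SERVER 'cat >> ~/.ssh/authorized_keys'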
This is the script that worked for me:
before_script:
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
  - mkdir -p ~/.ssh
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
  - chmod 700 ~/.ssh/id_rsa
  - eval "$(ssh-agent -s)"
  - ssh-add ~/.ssh/id_rsa
  - ssh-keyscan -t rsa 64.227.1.160 > ~/.ssh/known_hosts
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  - chmod 644 ~/.ssh/known_hosts
And I had to unprotect the variable as well.
The following can be used alternatively:
some_stage:
  script:
    - eval $(ssh-agent -s)
    - cd ~
    - touch id.rsa
    - echo "$SSH_PRIVATE_KEY" > id.rsa
    - chmod 700 id.rsa
    - ssh -o StrictHostKeyChecking=no -i id.rsa $SSH_USER@$SERVER
Something important too...
The permissions of the ~/.ssh/authorized_keys file should be 600.
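For completeness, a quick way to set those permissions on the server (a minimal sketch):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys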
It can also be due to restrictions on which users are allowed to ssh in.
In my case, on the server, I got the following in tail -f /var/log/auth.log:
..
Sep 6 19:25:59 server-name sshd[7943]: User johndoe from WW.XX.YY.ZZ not allowed because none of user's groups are listed in AllowGroups
..
The solution consists of updating the AllowGroups directive in the server's /etc/ssh/sshd_config file:
AllowGroups janesmith johndoe
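After updating the directive, sshd has to re-read its configuration. A minimal sketch, assuming a systemd-based server (the service may be named ssh instead of sshd on Debian-based systems):

sudo sshd -t                   # validate the configuration first
sudo systemctl restart sshd    # then apply it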
In our case, we were clueless until we added the -v flag to the SSH command (we knew the public key setup was OK because we were able to connect to this instance from our laptop using the private key).
We saw this:
debug1: Offering public key: ... RSA SHA256:... agent
debug1: send_pubkey_test: no mutual signature algorithm
We understood the situation thanks to the two links below: our key was generated in RSA format, which is considered legacy on up-to-date OpenSSH versions.
https://confluence.atlassian.com/bitbucketserverkb/ssh-rsa-key-rejected-with-message-no-mutual-signature-algorithm-1026057701.html
https://transang.me/ssh-handshake-is-rejected-with-no-mutual-signature-algorithm-error/
So you have two solutions:
- generate a new key using the ed25519 format and set up the public key on your instance
- use the extra flag below in your ssh command; this should only be a temporary workaround:
ssh -o PubkeyAcceptedKeyTypes=+ssh-rsa -o StrictHostKeyChecking=no your_user@your_instance_url "your command"
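A minimal sketch of the first option (the comment and file path are placeholders):

ssh-keygen -t ed25519 -C "ci@example.com" -f ~/.ssh/id_ed25519
# then publish the public key on the instance, e.g.:
ssh-copy-id -i ~/.ssh/id_ed25519.pub your_user@your_instance_url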
I hope it can help you if you are reading this.
Regards!
Add the public key (corresponding to the private key) to authorized keys.
Just a new line with your pub key:
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
And also add the pub key to the GitLab SSH keys section under Profile > Keys.

run npm install on private gitlab project from gitlab-ci

I have a private (SSH-only) project in GitLab that wasn't published to npm.
What is the right way to install this project from GitLab CI?
I get this error: 'Host key verification failed.' in the gitlab-ci stdout.
If the repo you are trying to access is not the one the deployment runs in...
Add an SSH public key to your deploy Docker container via gitlab-ci.yml like this:
- 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
- mkdir -p ~/.ssh
- eval $(ssh-agent -s)
- '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
- ssh-add <(echo "$SSH_PRIVATE_KEY")
With that, you can put your private SSH key into a variable in the GitLab project configuration.
The public key you place inside the private repo's Deploy Keys (you find this in the repo settings, depending on the GitLab version you have).
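Once the key is loaded and host-key checking is handled, npm can pull the private project over SSH like any other git dependency. A hedged sketch of the package.json entry (host, group, and repo name are placeholders):

"my-private-lib": "git+ssh://git@gitlab.example.com/my-group/my-private-lib.git"

npm install then clones the repo through git, which in turn uses the ssh-agent loaded in before_script.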