gitlab CI/CD how to execute a multiline command as shown - gitlab-ci

I want to execute a command like the one below in GitLab CI/CD:
ssh $DEPLOY_USER@$DEPLOY_HOST <<'ENDSSH'
set -x -o verbose;
execute some command here
set +x
ENDSSH
How do I add such a command to the script section?
deploy-to-stage:
  stage: deploy
  before_script:
    - "which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )"
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
  script:
    - *** HERE I WANT TO RUN THAT COMMAND ***
How can I do that?

You can do it with a YAML literal block scalar (|):
  script:
    - |
      ssh $DEPLOY_USER@$DEPLOY_HOST <<'ENDSSH'
      set -x -o verbose;
      execute some command here
      set +x
      ENDSSH
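A quick note on the quoted delimiter: writing <<'ENDSSH' (with quotes) stops the runner's local shell from expanding variables in the heredoc body, so $VARS are evaluated on the remote host instead. A minimal local sketch of the difference (no ssh involved; GREETING is a placeholder variable):

```shell
#!/bin/bash
# Quoted delimiter ('EOF'): the body is passed through literally,
# so the local shell does NOT expand $GREETING here.
GREETING="hello from the runner"
quoted=$(cat <<'EOF'
$GREETING
EOF
)
# Unquoted delimiter (EOF): the local shell expands $GREETING first.
unquoted=$(cat <<EOF
$GREETING
EOF
)
echo "quoted:   $quoted"
echo "unquoted: $unquoted"
```

Here quoted holds the literal string $GREETING while unquoted holds the expanded value; the 'ENDSSH' form behaves the same way, which is why the commands inside the heredoc reach the remote host unexpanded.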

Related

Gitlab CI - Permission denied

I've been trying to set up the CD for my project in GitLab, but I'm getting the following error in the pipeline:
bash: line 151: /home/gitlab-runner/.ssh/ssh-key.pem: Permission denied
Cleaning up project directory and file based variables
ERROR: Job failed: exit status 1
This is my gitlab-ci.yml:
default:
  image: amazonlinux:latest
deploy-prod:
  only:
    - main
  stage: deploy
  before_script:
    - ls -la
    - pwd
    - 'which ssh-agent || ( yum update -y && yum install openssh-client -y )'
    - eval $(ssh-agent -s)
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - cat $SSH_KEY_EC2
    - echo "$(cat $SSH_KEY_EC2)" >> ~/.ssh/ssh-key.pem
    - chmod 400 ~/.ssh/ssh-key.pem
    - cat ~/.ssh/ssh-key.pem
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - yum update -y
    - apt-get -y install rsync
  script:
    - >-
      ...
Thanks!!!
Considering the official documentation also uses the same chmod values (400 for private keys, 700 for ~/.ssh), first pinpoint exactly where in your script the error occurs:
at the echo >>,
at the chmod 400,
or at the cat?
That way you can start debugging the permission issue and make sure the permissions are properly set.
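As a sketch of that kind of isolation (the path and key material here are placeholders, not the real $SSH_KEY_EC2): replay the write/chmod/cat sequence against a throwaway directory and check each step's effect on its own.

```shell
#!/bin/sh
# Replay the suspect key-writing steps in a temp dir so each
# step (write, chmod, read) can be verified in isolation.
keydir=$(mktemp -d)
printf '%s\n' "dummy key material" > "$keydir/ssh-key.pem"
chmod 400 "$keydir/ssh-key.pem"   # owner read-only, as in the job
ls -l "$keydir/ssh-key.pem"       # mode should show -r--------
cat "$keydir/ssh-key.pem"         # a 400 file is still readable by its owner
```

If every step succeeds in isolation like this, the pipeline's "Permission denied" points at something environment-specific, such as the file being executed rather than read, or an unexpected runner user.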

GitLab pipelines pushes to remote server but cannot ssh

My .gitlab-ci.yml looks like this:
build app:
  stage: build
  only:
    - feature/ci-pipeline-job-v2
  before_script:
    - echo "before script"
    - 'command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y )'
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$GIT_URL" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - echo "HOST *" > ~/.ssh/config
    - echo "StrictHostKeyChecking no" >> ~/.ssh/config
    - git config user.email "user.villiers@main.com"
    - git config user.name "user-main"
    - git remote add acquia $GIT_URL
  script:
    - echo "running the script"
    - git checkout -b feature/ci-pipeline-job-v2
    - git push acquia feature/ci-pipeline-job-v2
  after_script:
    - echo "time to ssh"
    - ssh maindecoupled.dev@maindecoupleddev.ssh.prod.acquia-sites.com "cd /var/www/html && ls -la && composer install && exit"
My pipeline succeeds, but when I look at the job output I see a permission denied error, evidently from the after_script.
The full job log is as follows:
$ echo "before script"
before script
$ command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y )
$ eval $(ssh-agent -s)
Agent pid 12
$ echo "$SSH_PRIVATE_KEY" | ssh-add -
Identity added: (stdin) (userdevilliers@Norton-MacBook-Pro.local)
$ mkdir -p ~/.ssh
$ chmod 700 ~/.ssh
$ echo "$GIT_URL" >> ~/.ssh/known_hosts
$ chmod 644 ~/.ssh/known_hosts
$ echo "HOST *" > ~/.ssh/config
$ echo "StrictHostKeyChecking no" >> ~/.ssh/config
$ git config user.email "user.villiers@main.com"
$ git config user.name "user-main"
$ git remote add acquia $GIT_URL
$ echo "running the script"
running the script
$ git checkout -b feature/ci-pipeline-job-v2
Switched to a new branch 'feature/ci-pipeline-job-v2'
$ git push acquia feature/ci-pipeline-job-v2
Warning: Permanently added 'svn-23449.prod.hosting.acquia.com,22.222.22.222' (RSA) to the list of known hosts.
To svn-23449.prod.hosting.acquia.com:maindecoupled.git
b1f2c6ca..622cab3b feature/ci-pipeline-job-v2 -> feature/ci-pipeline-job-v2
Running after_script
00:02
Running after script...
$ echo "time to ssh"
time to ssh
$ ssh maindecoupled.dev@maindecoupleddev.ssh.prod.acquia-sites.com "cd /var/www/html && ls -la && composer install && exit"
Warning: Permanently added 'maindecoupleddev.ssh.prod.acquia-sites.com,11.11.111.111' (ECDSA) to the list of known hosts.
maindecoupled.dev@maindecoupleddev.ssh.prod.acquia-sites.com: Permission denied (publickey).
Cleaning up project directory and file based variables
00:01
Job succeeded
How am I able to push to the Acquia repo but get a public key error when it's time to ssh?
I'm not sure how to proceed from here.
How can I ssh into the remote server and cd into the intended directories?

This GitLab CI configuration is invalid: jobs stage config should implement a script: or a trigger: keyword

I have been facing the above error with the YAML file below; the pipeline fails validation with this error before it even runs:
stage: deploy
stages:
  - deploy
Deploy: ~
before_script:
  - "command -v ssh-agent >/dev/null || ( apk add --update openssh )"
  - "eval $(ssh-agent -s)"
  - "echo \"$SSH_PRIVATE_KEY\" | tr -d '\\r' | ssh-add -"
  - "mkdir -p ~/.ssh"
  - "chmod 700 ~/.ssh"
  - "ssh-keyscan $EC2_IPADDRESS >> ~/.ssh/known_hosts"
  - "chmod 644 ~/.ssh/known_hosts"
script:
  - "mkdir .public"
  - "cp -r * .public"
  - "mv .public public"
  - "zip -r public.zip public"
  - "scp -o StrictHostKeyChecking=no public.zip ubuntu@3.129.128.56:/var/www/html"
  - "ssh -o StrictHostKeyChecking=no ubuntu@3.129.128.56 \"cd /var/www/html; touch foo.txt; unzip public.zip\""
You can see examples of SSH-based pipelines in this question or in this GitLab thread.
In neither case is there a Deploy: ~ line, so try first without it.
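A sketch of one way the file could be restructured so that the before_script and script keywords live inside an actual job, which is what the validator is asking for (the commands are taken from the question; the job name deploy-job is my own placeholder):

```yaml
stages:
  - deploy

deploy-job:
  stage: deploy
  before_script:
    - command -v ssh-agent >/dev/null || ( apk add --update openssh )
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan $EC2_IPADDRESS >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
    - zip -r public.zip public
    - scp -o StrictHostKeyChecking=no public.zip ubuntu@3.129.128.56:/var/www/html
    - ssh -o StrictHostKeyChecking=no ubuntu@3.129.128.56 "cd /var/www/html; touch foo.txt; unzip public.zip"
```

With a named job carrying script:, the stray top-level stage: deploy and the empty Deploy: ~ entry are no longer needed.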

GitLab CI/CD Script Improvement

Below is my first gitlab-ci.yml script for a static website. It does exactly what I need; it does not require a build process like Angular or React. Does anyone see room for improvement? Any glaring newbie mistakes? Are the exit commands necessary, or will the session automatically log off when the script terminates? Also, is it necessary to remove the deployment keys at the end of each deployment section?
stages:
  - build
  - deploy_staging
  - deploy_production
build:
  image: alpine
  stage: build
  before_script:
    - apk add zip
  script:
    - zip -r website.zip * -x "composer.json" -x "composer.lock" -x "gruntfile.js" -x "package-lock.json" -x "package.json" -x "Read Me" -x "_/" -x "deploy_production.sh" -x "deploy_staging.sh" -x "README.md" -x "Read Me Custom.txt" -x "gitlab-ci.yml"
  artifacts:
    paths:
      - website.zip
deploy_to_staging:
  image: alpine
  stage: deploy_staging
  before_script:
    - apk add unzip openssh-client
    - eval $(ssh-agent -s)
    - echo "$DEPLOYMENT_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan -H "$DEPLOYMENT_SERVER" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - scp website.zip "$DEPLOYMENT_LOGIN":"$DEPLOYMENT_PATH"
    - ssh -p 2222 "$DEPLOYMENT_LOGIN" "
      cd temp;
      rm website.zip;
      cd ../staging;
      bash -O extglob -c 'rm -rf !(website.zip)';
      unzip website.zip;
      "cp website.zip ../../temp/";
      rm website.zip;
      exit; "
    - rm -f ~/.ssh/id_rsa
  only:
    - main
deploy_to_production:
  image: alpine
  stage: deploy_production
  before_script:
    - apk add unzip openssh-client
    - eval $(ssh-agent -s)
    - echo "$DEPLOYMENT_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan -H "$DEPLOYMENT_SERVER" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - ssh -p 2222 "$DEPLOYMENT_LOGIN" "
      cp temp/website.zip portal/;
      cd portal;
      bash -O extglob -c 'rm -rf !(website.zip)';
      unzip website.zip;
      rm website.zip;
      exit; "
    - rm -f ~/.ssh/id_rsa
  when: manual
  only:
    - main
The script looks pretty straightforward, and it does what it should do. But there are some things you should consider.
You rely on the fact that NO other deployment pipeline runs before you execute your live deployment. But theoretically there is a chance that the zip in the temp folder on the server does not come from the same pipeline, e.g. when another pipeline has already executed the staging job. In that case you would deploy the newer package even though you are executing the old pipeline. Hence I recommend uploading again, for safety.
Your script contains some duplication, which leads to errors when you need to adapt that duplicated code. I added an example of inheritance for you.
Use environments. GitLab has a pretty nice feature called environments, where you get an overview of existing environments and of what is deployed to which environment by which pipeline. https://docs.gitlab.com/ee/ci/yaml/#environment
Use resource_group to prevent parallel job executions against the same environment. https://docs.gitlab.com/ee/ci/yaml/#resource_group
Additionally, something to consider at a later stage is adding real releases and tagging to your project - but that is a topic of its own :)
Disclaimer: I am not a pro either, but these are the changes and considerations I would take into account :)
stages:
  - build
  - deploy_staging
  - deploy_production
build:
  image: alpine
  stage: build
  before_script:
    - apk add zip
  script:
    - zip -r website.zip * -x "composer.json" -x "composer.lock" -x "gruntfile.js" -x "package-lock.json" -x "package.json" -x "Read Me" -x "_/" -x "deploy_production.sh" -x "deploy_staging.sh" -x "README.md" -x "Read Me Custom.txt" -x "gitlab-ci.yml"
  artifacts:
    paths:
      - website.zip
.deploy:
  image: alpine
  before_script:
    - apk add unzip openssh-client
    - eval $(ssh-agent -s)
    - echo "$DEPLOYMENT_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan -H "$DEPLOYMENT_SERVER" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - scp website.zip "$DEPLOYMENT_LOGIN":"$DEPLOYMENT_PATH"
    - ssh -p 2222 "$DEPLOYMENT_LOGIN" "
      cd $DEPLOYMENT_PATH;
      bash -O extglob -c 'rm -rf !(website.zip)';
      unzip website.zip;
      rm website.zip;
      exit; "
  after_script:
    - rm -f ~/.ssh/id_rsa
  only:
    - main
deploy_to_staging:
  stage: deploy_staging
  variables:
    DEPLOYMENT_PATH: "../staging"
  extends: .deploy # inheritance to reduce duplicated code
  environment:
    name: staging
  resource_group: staging
deploy_to_production:
  stage: deploy_production
  variables:
    DEPLOYMENT_PATH: "portal"
  extends: .deploy # inheritance to reduce duplicated code
  environment:
    name: production
  resource_group: production
  when: manual

gitlab CI: different SSH keys for different server deployments, how?

I have a gitlab-ci.yml that has this content:
before_script:
  # Setup SSH deploy keys
  - which ssh-agent || ( apt-get install -qq openssh-client )
  - eval $(ssh-agent -s)
  - bash -c 'ssh-add <(echo "$KEY1")'
  - bash -c 'ssh-add <(echo "$KEY2")'
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - ssh-keyscan ec2-xxx-xxx-xxx-xx1.eu-west-2.compute.amazonaws.com >> ~/.ssh/known_hosts
  - ssh-keyscan ec2-xx-xxx-xxx-xx2.eu-west-2.compute.amazonaws.com >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
# more stuff that tests the code
staging:
  type: deploy
  script:
    - shell/deploy_s_staging_ci.sh
  only:
    - tags
    - s_staging
My environment has $KEY1 and $KEY2 stored.
The last line of my local deployment shell script looks like this:
scp -v -i "~/.ssh/example.pem" s_etl_deploy.tgz \
  ec2-user@ec2-xxx-xxx-xxx-xxx.eu-west-2.compute.amazonaws.com:/home/ec2-user/etl/
How do I alter this to pick up a particular SSH key, let's say KEY1, in the GitLab CI Docker runner, and use it in the scp command?
You should probably make it possible to select between the two keys.
Using variables, for example, permits achieving that.
.use_ssh_key:
  before_script:
    # Setup SSH deploy keys
    - which ssh-agent || ( apt-get install -qq openssh-client )
    - eval $(ssh-agent -s)
    - bash -c 'ssh-add <(echo "$KEY_TO_USE")'
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan ec2-xxx-xxx-xxx-xx1.eu-west-2.compute.amazonaws.com >> ~/.ssh/known_hosts
    - ssh-keyscan ec2-xx-xxx-xxx-xx2.eu-west-2.compute.amazonaws.com >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
# more stuff that tests the code
staging:
  extends: .use_ssh_key
  variables:
    KEY_TO_USE: $KEY1
  type: deploy
  script:
    - shell/deploy_s_staging_ci.sh
  only:
    - tags
    - s_staging
Then you can just extend that configuration, and define the KEY_TO_USE depending on your job.
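The ssh-add <(echo "$KEY_TO_USE") line relies on bash process substitution: <(cmd) expands to a /dev/fd-style path whose contents are cmd's stdout, so the key is handed to ssh-add without ever being written to a file on disk. A minimal sketch of the mechanism, with a dummy string standing in for the real key:

```shell
#!/bin/bash
# Process substitution: <(cmd) becomes a readable /dev/fd/N path,
# so a consumer like ssh-add can read the secret without a temp file.
SECRET="dummy-key-material"
received=$(cat <(echo "$SECRET"))   # cat stands in for ssh-add here
echo "$received"
```

Note that process substitution is a bash feature, which is why the job wraps the call in bash -c rather than running it in the runner's default shell.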