GitLab CI/CD Script Improvement - ssh

Below is my first gitlab-ci.yml script for a static website. It does exactly what I need. It does not require a build process the way Angular or React would. Does anyone see room for improvement? Any glaring newbie mistakes? Are the exit commands necessary, or will it automatically log off when the script terminates? Also, is it necessary to remove the deployment keys at the end of each deployment section?
stages:
  - build
  - deploy_staging
  - deploy_production

build:
  image: alpine
  stage: build
  before_script:
    - apk add zip
  script:
    - zip -r website.zip * -x "composer.json" -x "composer.lock" -x "gruntfile.js" -x "package-lock.json" -x "package.json" -x "Read Me" -x "_/" -x "deploy_production.sh" -x "deploy_staging.sh" -x "README.md" -x "Read Me Custom.txt" -x "gitlab-ci.yml"
  artifacts:
    paths:
      - website.zip
deploy_to_staging:
  image: alpine
  stage: deploy_staging
  before_script:
    - apk add unzip openssh-client
    - eval $(ssh-agent -s)
    - echo "$DEPLOYMENT_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan -H "$DEPLOYMENT_SERVER" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - scp website.zip "$DEPLOYMENT_LOGIN":"$DEPLOYMENT_PATH"
    - ssh -p 2222 "$DEPLOYMENT_LOGIN" "
      cd temp;
      rm website.zip;
      cd ../staging;
      bash -O extglob -c 'rm -rf !(website.zip)';
      unzip website.zip;
      cp website.zip ../../temp/;
      rm website.zip;
      exit; "
    - rm -f ~/.ssh/id_rsa
  only:
    - main
deploy_to_production:
  image: alpine
  stage: deploy_production
  before_script:
    - apk add unzip openssh-client
    - eval $(ssh-agent -s)
    - echo "$DEPLOYMENT_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan -H "$DEPLOYMENT_SERVER" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - ssh -p 2222 "$DEPLOYMENT_LOGIN" "
      cp temp/website.zip portal/;
      cd portal;
      bash -O extglob -c 'rm -rf !(website.zip)';
      unzip website.zip;
      rm website.zip;
      exit; "
    - rm -f ~/.ssh/id_rsa
  when: manual
  only:
    - main

The script looks pretty straightforward, and it does what it should do. But there are some things you should consider.

1. You rely on the fact that no other deployment pipeline runs before you execute your live deployment. But theoretically there is a chance that the zip in the temp folder on the server does not come from the same pipeline, e.g. when another pipeline has already executed the staging job. You would then deploy the newer package even though you are executing the old pipeline. Hence I recommend uploading again, for safety.
2. Your script contains some duplication, which leads to errors whenever you need to adapt the duplicated code. I added an example of inheritance for you.
3. Use environments. GitLab has a pretty nice feature called environments, which gives you an overview of the existing environments and of what is deployed to which environment by which pipeline. https://docs.gitlab.com/ee/ci/yaml/#environment
4. Use resource_group to prevent parallel job executions against the same environment. https://docs.gitlab.com/ee/ci/yaml/#resource_group

Additionally, something to consider at a later stage is adding real releases and tagging to your project - but that is a topic of its own (see the sketch after the example below).
Disclaimer: I am not a pro either, but these are the changes and considerations I would take into account :)
stages:
  - build
  - deploy_staging
  - deploy_production

build:
  image: alpine
  stage: build
  before_script:
    - apk add zip
  script:
    - zip -r website.zip * -x "composer.json" -x "composer.lock" -x "gruntfile.js" -x "package-lock.json" -x "package.json" -x "Read Me" -x "_/" -x "deploy_production.sh" -x "deploy_staging.sh" -x "README.md" -x "Read Me Custom.txt" -x "gitlab-ci.yml"
  artifacts:
    paths:
      - website.zip

.deploy:
  image: alpine
  before_script:
    - apk add unzip openssh-client
    - eval $(ssh-agent -s)
    - echo "$DEPLOYMENT_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan -H "$DEPLOYMENT_SERVER" >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - scp website.zip "$DEPLOYMENT_LOGIN":"$DEPLOYMENT_PATH"
    - ssh -p 2222 "$DEPLOYMENT_LOGIN" "
      cd $DEPLOYMENT_PATH;
      bash -O extglob -c 'rm -rf !(website.zip)';
      unzip website.zip;
      rm website.zip;
      exit; "
  after_script:
    - rm -f ~/.ssh/id_rsa
  only:
    - main

deploy_to_staging:
  stage: deploy_staging
  variables:
    DEPLOYMENT_PATH: "../staging"
  extends: .deploy # inheritance to reduce duplicated code
  environment:
    name: staging
  resource_group: staging

deploy_to_production:
  stage: deploy_production
  variables:
    DEPLOYMENT_PATH: "portal"
  extends: .deploy # inheritance to reduce duplicated code
  environment:
    name: production
  resource_group: production
  when: manual
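As for the releases and tagging mentioned above, a minimal sketch of what a tag-triggered release job could look like, using GitLab's release keyword and the release-cli image. It is not part of the pipeline above; the job name, the reused stage, and the tag rule are assumptions:

release_job:
  stage: deploy_production        # or add a dedicated "release" stage to the stages list
  image: registry.gitlab.com/gitlab-org/release-cli:latest
  rules:
    - if: $CI_COMMIT_TAG          # only runs when a tag is pushed (assumes you tag releases manually)
  script:
    - echo "Creating release for $CI_COMMIT_TAG"
  release:
    tag_name: $CI_COMMIT_TAG
    description: "Release $CI_COMMIT_TAG"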

Related

This GitLab CI configuration is invalid: jobs stage config should implement a script: or a trigger: keyword

I have been facing the above error for the YAML file below; the error appears before the pipeline even runs.
stage: deploy
stages:
  - deploy
Deploy: ~
before_script:
  - "command -v ssh-agent >/dev/null || ( apk add --update openssh )"
  - "eval $(ssh-agent -s)"
  - "echo \"$SSH_PRIVATE_KEY\" | tr -d '\\r' | ssh-add -"
  - "mkdir -p ~/.ssh"
  - "chmod 700 ~/.ssh"
  - "ssh-keyscan $EC2_IPADDRESS >> ~/.ssh/known_hosts"
  - "chmod 644 ~/.ssh/known_hosts"
script:
  - "mkdir .public"
  - "cp -r * .public"
  - "mv .public public"
  - "zip -r public.zip public"
  - "scp -o StrictHostKeyChecking=no public.zip ubuntu@3.129.128.56:/var/www/html"
  - "ssh -o StrictHostKeyChecking=no ubuntu@3.129.128.56 \"cd /var/www/html; touch foo.txt; unzip public.zip\""
You can see examples of SSH-based pipelines in this question or in this GitLab thread.
In each case, there is no Deploy: ~, so first try without that line.
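Going one step further, a sketch of a valid layout with everything nested under a single named job (the job name deploy is an assumption, the commands are the ones from the question) could look like this:

stages:
  - deploy

deploy:
  stage: deploy
  before_script:
    - command -v ssh-agent >/dev/null || ( apk add --update openssh )
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan $EC2_IPADDRESS >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - mkdir .public
    - cp -r * .public
    - mv .public public
    - zip -r public.zip public
    - scp -o StrictHostKeyChecking=no public.zip ubuntu@3.129.128.56:/var/www/html
    - ssh -o StrictHostKeyChecking=no ubuntu@3.129.128.56 "cd /var/www/html; touch foo.txt; unzip public.zip"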

gitlab CI/CD how to execute a multiline command as shown

I want to execute a command like the one below in the GitLab CI/CD:
ssh $DEPLOY_USER@$DEPLOY_HOST <<'ENDSSH'
set -x -o verbose;
execute some command here
set +x
ENDSSH
How do I add such commands to the script section of the following job?
deploy-to-stage:
  stage: deploy
  before_script:
    - "which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )"
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN registry.gitlab.com
  script:
    - *** HERE I WANT TO RUN THAT COMMAND ***
How can I do that?
You can do it like this:
script:
  - |
    ssh $DEPLOY_USER@$DEPLOY_HOST <<'ENDSSH'
    set -x -o verbose;
    execute some command here
    set +x
    ENDSSH
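A side note on the quoting, as a general shell detail rather than anything GitLab-specific: with <<'ENDSSH' the body is passed to the remote host verbatim, so $-variables expand there; if you want GitLab CI variables expanded by the runner before the text is sent over SSH, use an unquoted delimiter. A small sketch:

script:
  - |
    # unquoted delimiter: $CI_COMMIT_SHORT_SHA is expanded by the runner's shell,
    # while the escaped \$(hostname) is left alone and runs on the remote host
    ssh $DEPLOY_USER@$DEPLOY_HOST <<ENDSSH
    echo "deploying commit $CI_COMMIT_SHORT_SHA on \$(hostname)"
    ENDSSH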

Why Gitlab CI docker build tagging problem

I have been trying to build my project and deploy it to a remote server using the GitLab CI runner, with this link as a reference:
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-continuous-deployment-pipeline-with-gitlab-ci-cd-on-ubuntu-18-04
After running the pipeline, the publish stage gives an error about the docker tagging:
$ docker build -t $TAG_COMMIT -t $TAG_LATEST .
invalid argument "/patch-9:64a25b49" for "-t, --tag" flag: invalid reference format
I have tried changing the docker build tagging to different formats but still could not figure out why the error occurs.
I have tried changing the tagging to
TAG_LATEST: ${CI_REGISTRY_IMAGE}/${CI_COMMIT_REF_NAME}:latest
TAG_COMMIT:${CI_REGISTRY_IMAGE}/${CI_COMMIT_REF_NAME}:${CI_COMMIT_SHORT_SHA}
but I still get the error:
$ cd $GOPATH/src/$REPO/$NAMESPACE/$PROJECT
$ docker build -t $TAG_COMMIT -t $TAG_LATEST .
invalid argument "/patch-10:fbf4855b" for "-t, --tag" flag: invalid reference format
See 'docker build --help'.
Can anyone help me solve this problem?
My .gitlab-ci.yml file looks like this:
image: golang:1.15.3

variables:
  REPO: github.com
  NAMESPACE: daniel
  PROJECT: danapp
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA

before_script:
  - mkdir -p $GOPATH/src/$REPO/$NAMESPACE/$PROJECT
  - cp -r -v $CI_PROJECT_DIR $GOPATH/src/github.com/daniel
  - cd $GOPATH/src/$REPO/$NAMESPACE/$PROJECT

stages:
  - build
  - publish
  - deploy

compile:
  stage: build
  script:
    - go build -race -ldflags "-extldflags '-static'" -o $CI_PROJECT_DIR/danapp
  artifacts:
    paths:
      - danapp

publish:
  image: docker:latest
  stage: publish
  services:
    - docker:dind
  script:
    - docker build -t $TAG_COMMIT -t $TAG_LATEST .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
    - docker push $TAG_COMMIT
    - docker push $TAG_LATEST

deploy:
  image: alpine:latest
  stage: deploy
  tags:
    - deployment
  before_script:
    - apk update && apk add openssh-client
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
  script:
    - chmod og= $SSH_PRIVATE_KEY
    - apk update && apk add openssh-client
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no admin@192.168.x.x "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no admin@192.168.x.x "docker pull $TAG_COMMIT"
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no admin@192.168.x.x "docker container rm -f danapp || true"
    - ssh -i $SSH_PRIVATE_KEY -o StrictHostKeyChecking=no admin@192.168.x.x "docker run -d -p 20005:20005 --name danapp $TAG_COMMIT"
  environment: stagging
  only:
    - master
I suggest you change your $CI_COMMIT_REF_NAME to $CI_COMMIT_REF_SLUG.
Maybe this solves it.
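A sketch of that suggestion applied to the variables block. It assumes the project's Container Registry is enabled; the leading "/" in the error message hints that $CI_REGISTRY_IMAGE may currently be expanding to an empty string, which is worth checking as well:

variables:
  REPO: github.com
  NAMESPACE: daniel
  PROJECT: danapp
  # CI_COMMIT_REF_SLUG is the ref name lower-cased with unsafe characters (such as "/") replaced,
  # which avoids "invalid reference format" errors for branch names that are not valid image tags
  TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:latest
  TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_SLUG:$CI_COMMIT_SHORT_SHA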

gitlab runner ssh private key 644 file permission error

When running a GitLab CI/CD pipeline, ssh gives a 0644 bad permissions error. The variable is stored as a file-type variable in the Settings > Variables section in GitLab.
The .gitlab-ci.yml file looks like:
stages:
  - deploy

before_script:
  - apt-get update -qq
  - apt-get install -qq git
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

deploy_1:
  stage: deploy
  only:
    - master
  tags:
    - master
  script:
    - ssh -i $SSH_KEY user@ip "mkdir -p runner_test"

deploy_2:
  stage: deploy
  only:
    - master
  tags:
    - master
  script:
    - ssh -i $SSH_KEY user@ip "mkdir -p runner_test"
Error:
$ ssh -i $SSH_KEY host#ip "mkdir -p runner_test"
###########################################################
# WARNING: UNPROTECTED PRIVATE KEY FILE! #
###########################################################
Permissions 0644 for '/home/user/builds/gPnQDT8L/0/username/server.tmp/SSH_KEY' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/home/user/builds/gPnQDT8L/0/username/server.tmp/SSH_KEY": bad permissions
user#ip: Permission denied (publickey).
Cleaning up file based variables
How do I change the private key permissions from 644 to 600 or 400?
You can see the same error in this deploy process for this gitlab-ci.yml
The fixed version of that file:
server:
  stage: deploy
  script:
    - apt-get install -y openssh-client rsync
    - chmod 400 $SSH_KEY
    - scp -o StrictHostKeyChecking=no -P $SSH_PORT -i $SSH_KEY public/server.zip $SSH_URI:modpack/server.zip
A simple chmod 400 $SSH_KEY should be enough.
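Applied to the jobs in the question, that would be a sketch along these lines (file-type variables expand to a path on disk, so chmod works on them; everything else is kept from the question):

deploy_1:
  stage: deploy
  only:
    - master
  tags:
    - master
  script:
    - chmod 400 "$SSH_KEY"   # the file written for a file-type variable is created world-readable, so tighten it before ssh reads the key
    - ssh -i "$SSH_KEY" user@ip "mkdir -p runner_test"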

gitlab CI different SSL keys for different server deployments, how?

I have a gitlab-ci.yml that has this content:
before_script:
  # Setup SSH deploy keys
  - which ssh-agent || ( apt-get install -qq openssh-client )
  - eval $(ssh-agent -s)
  - bash -c 'ssh-add <(echo "$KEY1")'
  - bash -c 'ssh-add <(echo "$KEY2")'
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  - ssh-keyscan ec2-xxx-xxx-xxx-xx1.eu-west-2.compute.amazonaws.com >> ~/.ssh/known_hosts
  - ssh-keyscan ec2-xx-xxx-xxx-xx2.eu-west-2.compute.amazonaws.com >> ~/.ssh/known_hosts
  - chmod 644 ~/.ssh/known_hosts
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

# more stuff that tests the code

staging:
  type: deploy
  script:
    - shell/deploy_s_staging_ci.sh
  only:
    - tags
    - s_staging
My environment has $KEY1 and $KEY2 stored.
The last line of my local deployment shell script is like this:
scp -v -i "~/.ssh/example.pem" s_etl_deploy.tgz \
  ec2-user@ec2-xxx-xxx-xxx-xxx.eu-west-2.compute.amazonaws.com:/home/ec2-user/etl/
How is this altered to pick up a particular SSH key, let's say KEY1, in the GitLab CI docker runner, so that the scp command uses it?
You should probably make it possible to select between both keys.
For example, using variables makes it possible to achieve that goal.
.use_ssh_key:
  before_script:
    # Setup SSH deploy keys
    - which ssh-agent || ( apt-get install -qq openssh-client )
    - eval $(ssh-agent -s)
    - bash -c 'ssh-add <(echo "$KEY_TO_USE")'
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - ssh-keyscan ec2-xxx-xxx-xxx-xx1.eu-west-2.compute.amazonaws.com >> ~/.ssh/known_hosts
    - ssh-keyscan ec2-xx-xxx-xxx-xx2.eu-west-2.compute.amazonaws.com >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

# more stuff that tests the code

staging:
  extends: .use_ssh_key
  variables:
    KEY_TO_USE: $KEY1
  type: deploy
  script:
    - shell/deploy_s_staging_ci.sh
  only:
    - tags
    - s_staging
Then you can just extend that configuration, and define the KEY_TO_USE depending on your job.
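For example, a second deployment job could pick the other key the same way. This is only a sketch; the job name, script name, and branch are hypothetical placeholders:

production:
  extends: .use_ssh_key
  variables:
    KEY_TO_USE: $KEY2   # hypothetical: the second key stored in the CI/CD variables
  type: deploy
  script:
    - shell/deploy_s_production_ci.sh   # hypothetical deploy script for the second server
  only:
    - tags
    - s_production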