Trigger GitLab pipeline based on an external event (Example - A binary image has been uploaded to Nexus) - gitlab-ci

I have a setup where the application code is developed and built on the vendor's premises, and only the binary is delivered, in the form of a Docker image uploaded to a Nexus repository. Now, I need to create a deployment pipeline in GitLab that is triggered as soon as a new image is uploaded to Nexus.
Please suggest any solution.
I have tried to create a parent-child pipeline: the parent checks the HTTP response status from Nexus through a curl request and triggers the child pipeline only when the status is 200. The parent can be configured as a scheduled pipeline that checks image availability at a regular interval.
Parent yml:
stages:
  - build
  - trigger_child

build:
  stage: build
  script:
    - echo "img_status=`curl -s -o /dev/null -w "%{http_code}" https://www.google.com`" >> build.env
  artifacts:
    reports:
      dotenv: build.env

trigger_child:
  stage: trigger_child
  trigger:
    include:
      - local: App/.gitlab-ci.yml
    strategy: depend
  needs:
    - job: build
      artifacts: true
  rules:
    - if: $img_status == "200"
      when: always
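In the build job above, https://www.google.com is only a stand-in for the real check. Against a Nexus-hosted Docker registry, the availability check could look like the following; the host, image name, tag, and credential variables are all placeholders (a sketch, assuming the registry exposes the standard Docker Registry v2 API):

build:
  stage: build
  script:
    # 200 from the v2 manifest endpoint means the tag exists in the registry
    - echo "img_status=`curl -s -o /dev/null -w "%{http_code}" -u "$NEXUS_USER:$NEXUS_PASS" https://nexus.example.com/v2/myapp/manifests/latest`" >> build.env
  artifacts:
    reports:
      dotenv: build.env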
Child yml:
stages:
  - deploy

deploy:
  stage: deploy
  script:
    - echo "This job deploys something new"

Related

Having a script run only when a manually triggered job fails in GitLab

I have the following script that pulls from a remote template. The remote template has the following stages: build, test, code_analysis, compliance, deploy.
The deploy step is manually triggered and executes the AWS CLI to deploy a SAM project.
I want to add an additional step such that when the deploy step fails, it will execute a script to roll back the CloudFormation stack to its last operational state.
I created a "cleanup-cloudformation-stack-failure" job and tried adding "extends: .deploy", but that didn't work.
I then added an additional stage called "cloudformation_stack_rollback" in the serverless-template.yml file and tried to use a mix of rules and when to get it to trigger on failure, but I'm getting errors flagged by GitLab's linter.
Does anyone know what I'm doing wrong?
include:
  - remote: 'https://my-gitlab-server.com/ci-templates/-/raw/master/serverless-template.yml'

deploy-qas:
  extends: .deploy
  variables:
    ....
    PARAMETER_OVERRIDES: "..."
  environment: qas
  only:
    - qas
  tags:
    - serverless

cleanup-cloudformation-stack-failure:
  variables:
    STACK_NAME: $CI_PROJECT_NAME-$CI_ENVIRONMENT_NAME
  stage: cloudformation_stack_rollback
  rules:
    - if: '$CI_JOB_MANUAL == true'
      when: on_failure
  script:
    - aws cloudformation continue-update-rollback --stack-name ${STACK_NAME} --resources-to-skip ${STACK_NAME}
You forgot double quotes around true. However, you can instead use Directed Acyclic Graphs (via needs:) to execute jobs conditionally.
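For reference, the quoted comparison the answer describes would look like this (a minimal sketch of just the fixed rule):

rules:
  - if: '$CI_JOB_MANUAL == "true"'
    when: on_failure

That said, the needs-based approach below avoids the rule entirely: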
include:
  - remote: 'https://my-gitlab-server.com/ci-templates/-/raw/master/serverless-template.yml'

deploy-qas:
  extends: .deploy
  variables:
    ....
    PARAMETER_OVERRIDES: "..."
  environment: qas
  only:
    - qas
  tags:
    - serverless

cleanup-cloudformation-stack-failure:
  needs:
    - deploy-qas
  when: on_failure
  variables:
    STACK_NAME: $CI_PROJECT_NAME-$CI_ENVIRONMENT_NAME
  stage: cloudformation_stack_rollback
  script:
    - aws cloudformation continue-update-rollback --stack-name ${STACK_NAME} --resources-to-skip ${STACK_NAME}

Is there a way to use the GitLab "Merge when pipeline succeeds" together with Review apps (that need an auto-stop job)?

I have a pipeline with review apps. So when the pipeline runs in the context of a Merge Request / Pull Request then I run:
- build and upload a docker image to ECR tagged with $CI_PROJECT_NAME.$CI_MERGE_REQUEST_ID
- deploy a helm chart that configures the app to be visible at $CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID.reviewapps.example.com
I want to delete that docker image tag and kubernetes deployment after the merge request is merged/closed, so I added the following stop review app job:
deploy review app:
  stage: deploy
  image: alpine/helm:3.5.0
  dependencies: []
  script:
    - helm -n "$KUBE_NAMESPACE" upgrade
      --install --wait "$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID" chart
      -f helm-reviewapp-values.yaml
      --set-string "ingress.annotations.external-dns\.alpha\.kubernetes\.io/hostname=$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID.reviewapps.example.com."
      --set-string "ingress.host=$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID.reviewapps.example.com"
      --set-string "image=$AWS_REPOSITORY:$CI_PROJECT_NAME.$CI_MERGE_REQUEST_ID"
      --set "deploymentAnnotations.app\.gitlab\.com/app=${CI_PROJECT_PATH_SLUG}"
      --set "deploymentAnnotations.app\.gitlab\.com/env=${CI_ENVIRONMENT_SLUG}"
      --set "podAnnotations.app\.gitlab\.com/app=${CI_PROJECT_PATH_SLUG}"
      --set "podAnnotations.app\.gitlab\.com/env=${CI_ENVIRONMENT_SLUG}"
  environment:
    name: review/$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID
    url: https://$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID.reviewapps.example.com
    on_stop: stop review app
    auto_stop_in: 1 day
  needs:
    - build docker image review app
  rules:
    - if: $CI_MERGE_REQUEST_ID

stop review app:
  stage: cleanup approval
  script: echo approved
  dependencies: []
  environment:
    name: review/$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID
    action: stop
  needs:
    - deploy review app
  rules:
    - if: $CI_MERGE_REQUEST_ID
      when: manual
  allow_failure: true

uninstall helm chart:
  stage: cleanup
  image: alpine/helm:3.5.0
  dependencies: []
  environment:
    name: review/$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID
    action: stop
  script:
    - helm -n "$KUBE_NAMESPACE" uninstall "$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID"
  needs:
    - stop review app
  rules:
    - if: $CI_MERGE_REQUEST_ID
  allow_failure: true

delete ecr image:
  stage: cleanup
  image: amazon/aws-cli:2.1.19
  dependencies: []
  script:
    - aws ecr batch-delete-image --repository-name XXXX --image-ids "imageTag=$CI_PROJECT_NAME.$CI_MERGE_REQUEST_ID"
  needs:
    - stop review app
  rules:
    - if: $CI_MERGE_REQUEST_ID
  allow_failure: true
As you can see, the stop review app job is:
- referenced by deploy review app under environment:on_stop, making use of the environment auto-stop feature
- marked as when: manual
- made optional with allow_failure: true
The pipeline then looks like this: the stop review app job still "blocks" the pipeline, showing as running until the stop job runs.
This is bothering me because when people click on the Merge when pipeline succeeds nothing will really happen until the environment is manually stopped (by clicking the play button on the stop review app job).
I also tried removing allow_failure from the stop job, but the only difference is that the pipeline gets stuck in the blocked state instead of running.
Is there a way to use the Merge when pipeline succeeds together with Review apps (that need a stop job)?
This is caused by the needs: stop review app in the downstream jobs.
As a workaround you can create a single job that performs all the cleanup, instead of having uninstall helm chart and delete ecr image depend on stop review app via needs:.
You will need to use a docker image for that job that has all the required tools (helm and aws-cli in your case).
With the following .gitlab-ci.yml, the pipeline turns to passed once deploy review app has passed. The single optional stop job stop review app no longer forces the pipeline to remain running or blocked, and the pipeline succeeds without that job ever running:
stages:
  - test
  - package
  - deploy
  - cleanup approval
  - cleanup

build docker image review app:
  stage: package
  script:
    - echo hello
  rules:
    - if: $CI_MERGE_REQUEST_ID

deploy review app:
  stage: deploy
  image: alpine/helm:3.5.0
  dependencies: []
  script:
    - echo hello
  environment:
    name: review/$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID
    url: https://$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID.reviewapps.tdhb2bdev.com
    on_stop: stop review app
    auto_stop_in: 1 day
  needs:
    - build docker image review app
  rules:
    - if: $CI_MERGE_REQUEST_ID

stop review app:
  stage: cleanup approval
  script:
    - echo helm uninstall xxxx
    - echo aws ecr batch-delete-image xxxx
  dependencies: []
  environment:
    name: review/$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID
    action: stop
  needs:
    - deploy review app
  rules:
    - if: $CI_MERGE_REQUEST_ID
      when: manual
  allow_failure: true
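For completeness, a real version of that single cleanup job would run both commands from the original uninstall helm chart and delete ecr image jobs, using an image that ships both tools. A sketch, with a hypothetical image name:

stop review app:
  stage: cleanup approval
  image: my-registry.example.com/helm-aws-cli:latest  # hypothetical image with helm + aws-cli
  dependencies: []
  environment:
    name: review/$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID
    action: stop
  script:
    - helm -n "$KUBE_NAMESPACE" uninstall "$CI_PROJECT_NAME-$CI_MERGE_REQUEST_ID"
    - aws ecr batch-delete-image --repository-name XXXX --image-ids "imageTag=$CI_PROJECT_NAME.$CI_MERGE_REQUEST_ID"
  needs:
    - deploy review app
  rules:
    - if: $CI_MERGE_REQUEST_ID
      when: manual
  allow_failure: true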

Gitlab run pipeline job only when previous job ran

I'm trying to create a pipeline with a production and a development deployment. In both environments the application should be built with Docker, but only when something changed in the corresponding directory.
For example:
- When something changed in the frontend directory, the frontend should be built and deployed
- When something changed in the backend directory, the backend should be built and deployed
At first I didn't have the needs: keyword. The pipeline always executed deploy_backend and deploy_frontend, even when the build jobs were not executed.
Now I've added the needs: keyword, but GitLab says yaml invalid when there was only a change in one directory. When there is a change in both directories, the pipeline works fine. When there is, for example, a change to the README.md outside the two directories, it says yaml invalid as well.
Does anyone know how I can create a pipeline that only runs when there is a change in a specified directory, and only runs the corresponding deploy job when the build job has run?
gitlab-ci.yml:
stages:
  - build
  - deploy

build_frontend:
  stage: build
  only:
    refs:
      - master
      - development
    changes:
      - frontend/*
  script:
    - cd frontend
    - docker build -t frontend .

build_backend:
  stage: build
  only:
    refs:
      - master
      - development
    changes:
      - backend/*
  script:
    - cd backend
    - docker build -t backend .

deploy_frontend_dev:
  stage: deploy
  only:
    refs:
      - development
  script:
    - "echo deploy frontend"
  needs: ["build_frontend"]

deploy_backend_dev:
  stage: deploy
  only:
    refs:
      - development
      - pipeline
  script:
    - "echo deploy backend"
  needs: ["build_backend"]
The problem here is that your deploy jobs require the previous build jobs to actually exist.
However, because of the only.changes rule, those build jobs only exist if something actually changed within their directories.
So when only something in the frontend folder changed, the build_backend job is not generated at all. The deploy_backend_dev job still is, though, and it then misses its dependency.
A quick fix would be to add the only.changes configuration to the deployment jobs as well, like this:
deploy_frontend_dev:
  stage: deploy
  only:
    refs:
      - development
    changes:
      - frontend/*
  script:
    - "echo deploy frontend"
  needs: ["build_frontend"]

deploy_backend_dev:
  stage: deploy
  only:
    refs:
      - development
      - pipeline
    changes:
      - backend/*
  script:
    - "echo deploy backend"
  needs: ["build_backend"]
This way, both jobs will only be created if the dependent build job is created as well, and the YAML will not be invalid.
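As a side note (not part of the original answer), on GitLab versions that support rules:, the same pairing can be expressed with rules:if plus rules:changes. A minimal sketch for the frontend pair, keeping the job names from the question:

deploy_frontend_dev:
  stage: deploy
  rules:
    # Run only on the development branch, and only when frontend files changed;
    # frontend/**/* also matches nested subdirectories, which frontend/* does not
    - if: '$CI_COMMIT_BRANCH == "development"'
      changes:
        - frontend/**/*
  script:
    - "echo deploy frontend"
  needs: ["build_frontend"]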

GitlabCI pipeline run only with code from master

I need to run the pipeline every time there is a commit on a non-master branch. The pipeline starts, but the code is from master. I need the code from the changed branch.
The pipeline is like this:
variables:
  IMAGE_TAG: ${CI_PIPELINE_IID}
  BASE_NAME: ${CI_COMMIT_REF_NAME}

stages:
  - validate
  - build

check_image:
  stage: validate
  tags:
    - runner
  script:
    - cd ~/path/${BASE_NAME}-base && packer validate ${BASE_NAME}-base.json
  except: ['master']

create_image:
  stage: build
  tags:
    - runner
  script:
    - cd ~/path/${BASE_NAME}-base && packer build -force ${BASE_NAME}-base.json
  except: ['master']
Never mind, I figured it out. I was running gitlab-runner under a custom user, so the environment was already set. I just had to add a before_script to check out the desired branch.
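A minimal sketch of what that before_script could look like, assuming the job operates on a pre-existing clone in the custom user's home directory (the repository path is hypothetical):

check_image:
  stage: validate
  before_script:
    # Put the pre-existing clone on the branch this pipeline was triggered for
    - cd ~/path/repo && git fetch origin && git checkout ${CI_COMMIT_REF_NAME}
  script:
    - cd ~/path/${BASE_NAME}-base && packer validate ${BASE_NAME}-base.json
  except: ['master']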

Gitlab after push tag

Is it possible to configure the project's gitlab-ci.yml in such a way that, after a tag has been pushed, it runs several commands? If so, how? I would also like to define the variables that I want to use in these commands.
My gitlab-ci.yml looks like:
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - composer install --no-ansi
    - vendor/bin/phar-composer build
  artifacts:
    paths:
      - example.phar
  tags:
    - php:7.0

deploy:
  stage: deploy
  only:
    - tags
  dependencies:
    - build
  script:
    - cp example.phar /opt/example/
  tags:
    - php:7.0
It's about running example.phar bin/console command1 $VARIABLE1 $VARIABLE2 $VARIABLE3 $VARIABLE4.
Please help me because I am not completely familiar with these matters.
You can trigger a job when a tag is pushed using the only parameter:
build:
  stage: build
  image: alpine:3.6
  script:
    - echo "A tag has been pushed!"
  only:
    - tags
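To cover the variables part of the question as well: a sketch (not from the original answer) of a tag-only job that defines job-level variables and runs the command the asker mentioned; the variable values are placeholders, and secrets would normally be defined under Settings > CI/CD > Variables instead:

run_command:
  stage: deploy
  variables:
    VARIABLE1: "value1"  # placeholder values
    VARIABLE2: "value2"
    VARIABLE3: "value3"
    VARIABLE4: "value4"
  only:
    - tags
  dependencies:
    - build
  script:
    - php example.phar bin/console command1 $VARIABLE1 $VARIABLE2 $VARIABLE3 $VARIABLE4
  tags:
    - php:7.0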