GitLab CI job rule evaluation

I am trying to skip a GitLab CI job based on the result of the previous job; however, the job never runs at all. I have the impression that rules are evaluated at the beginning of the pipeline, not at the beginning of the job. Is there any way to make this work?
cache:
  paths:
    - .images

stages:
  - prepare
  - build

dirs:
  stage: prepare
  image:
    name: docker.image.me/run:latest
  script:
    - rm -rf .images/*
    - '[ $(($RANDOM % 2)) -eq 1 ] && touch .images/DESKTOP'
desktop:
  stage: build
  needs: ["dirs"]
  image:
    name: docker.image.me/run:latest
  rules:
    - exists:
        - .images/DESKTOP
      when: always
  script:
    - echo "Why is this never launched?"

Dynamically created jobs could be a solution (https://docs.gitlab.com/ee/ci/parent_child_pipelines.html#dynamic-child-pipelines).
In the "script" section of your "dirs" job, generate a YAML file containing your "desktop" job if ".images/DESKTOP" was created; otherwise, generate an empty YAML file.
The generated YAML file can then be triggered as a child pipeline by a separate job after the "dirs" job.
I use jsonnet (https://jsonnet.org/) to create dynamic child pipelines.
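A minimal sketch of that approach, reusing the stages from the question. The child.yml file name, the noop placeholder job, and the trigger-desktop job are illustrative, not part of the original answer:

dirs:
  stage: prepare
  image:
    name: docker.image.me/run:latest
  script:
    - rm -rf .images/*
    # '|| true' keeps the job green when the file is not created
    - '[ $(($RANDOM % 2)) -eq 1 ] && touch .images/DESKTOP || true'
    # Generate the child pipeline config at runtime, when a file check
    # can actually see what the script produced.
    - |
      if [ -f .images/DESKTOP ]; then
        printf 'desktop:\n  script:\n    - echo "Now it runs"\n' > child.yml
      else
        printf 'noop:\n  script:\n    - echo "Nothing to do"\n' > child.yml
      fi
  artifacts:
    paths:
      - child.yml

trigger-desktop:
  stage: build
  needs: ["dirs"]
  trigger:
    include:
      - artifact: child.yml
        job: dirs

The trigger:include:artifact syntax takes the child pipeline definition from the artifact produced by the dirs job, so the decision happens at runtime instead of at pipeline creation.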

Rule evaluation happens at the beginning of a GitLab pipeline, not at the start of each job.
Quoting from the GitLab docs (https://docs.gitlab.com/ee/ci/yaml/#rules):
Rules are evaluated when the pipeline is created, and evaluated in order until the first match. When a match is found, the job is either included or excluded from the pipeline, depending on the configuration.
The other problem here is the usage of the exists keyword.
Quoting from the GitLab docs (https://docs.gitlab.com/ee/ci/yaml/#rulesexists):
Use exists to run a job when certain files exist in the repository.
But here, .images/DESKTOP is in the GitLab runner's cache, not in your repository:
cache:
  paths:
    - .images
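For contrast, a minimal example of what exists is designed for - matching files committed to the repository (the job name and the Dockerfile path are illustrative):

build-image:
  rules:
    # Matches only when a Dockerfile is committed to the repository,
    # not when it is created at runtime or restored from a cache.
    - exists:
        - Dockerfile
  script:
    - docker build -t my-image .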

Related

GitLab job is running even if there are no changes in the scheduled pipeline

I set a schedule for my gitlab.yml file to run the pipeline. In my job I have set rules to run/not run the job. However, in my schedule the job runs regardless of whether any of my rules are met.
Here is the simplified yml file:
stages:
  - build

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  DOCKER_NETWORK: "gitlab-network"

.docker_dind_service: &docker_dind_service
  services:
    - name: docker:20.10-dind
      command: ["--insecure-registry", "my_server.net:7000"]

docker:custom:
  stage: build
  <<: *docker_dind_service
  tags:
    - docker_runner
  image: docker
  rules:
    - if: '$FORCE_BUILD_DOCKER_IMAGE == "1"'
      when: always
    - changes:
        - Dockerfile
    - when: never
  script:
    - docker build -t my_image .
For the case above, the job is added to the scheduled pipeline even though there is no change in my Dockerfile. I am lost, because when I make changes to my yml file and push them, this job is not added, which is right because there is no change in the Dockerfile. However, it runs in every scheduled pipeline.
Apparently, according to the GitLab documentation:
https://docs.gitlab.com/ee/ci/yaml/#using-onlychanges-without-pipelines-for-merge-requests
You should use rules: changes only with branch pipelines or merge request pipelines. You can use rules: changes with other pipeline types, but rules: changes always evaluates to true when there is no Git push event. Tag pipelines, scheduled pipelines, manual pipelines, and so on do not have a Git push event associated with them. A rules: changes job is always added to those pipelines if there is no if that limits the job to branch or merge request pipelines.
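Following that advice, a sketch of a possible fix: add an if clause so the changes rule is only evaluated for branch (push) pipelines. The $CI_PIPELINE_SOURCE check is one standard way to express this; adapt it to the pipeline types you actually use:

docker:custom:
  stage: build
  rules:
    - if: '$FORCE_BUILD_DOCKER_IMAGE == "1"'
      when: always
    # changes now only applies to push (branch) pipelines, so scheduled
    # pipelines no longer match this rule unconditionally.
    - if: '$CI_PIPELINE_SOURCE == "push"'
      changes:
        - Dockerfile
    - when: never
  script:
    - docker build -t my_image .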

Having a script run only when a manually triggered job fails in GitLab

I have the following script that pulls from a remote template. The remote template has the following stages: build, test, code_analysis, compliance, deploy.
The deploy step is manually triggered and executes the AWS CLI to deploy a SAM project.
I want to add an additional step such that when the deploy step fails, it executes a script to roll back the CloudFormation stack to its last operational state.
I created a "cleanup-cloudformation-stack-failure" job and tried adding "extends: .deploy", but that didn't work.
I then added an additional stage called "cloudformation_stack_rollback" in the serverless-template.yml file and tried to use a mix of rules and when to get it to trigger on failure, but I'm getting errors flagged by GitLab's linter.
Does anyone know what I'm doing wrong?
include:
  - remote: 'https://my-gitlab-server.com/ci-templates/-/raw/master/serverless-template.yml'

deploy-qas:
  extends: .deploy
  variables:
    ....
    PARAMETER_OVERRIDES: "..."
  environment: qas
  only:
    - qas
  tags:
    - serverless

cleanup-cloudformation-stack-failure:
  variables:
    STACK_NAME: $CI_PROJECT_NAME-$CI_ENVIRONMENT_NAME
  stage: cloudformation_stack_rollback
  rules:
    - if: '$CI_JOB_MANUAL == true'
      when: on_failure
  script:
    - aws cloudformation continue-update-rollback --stack-name ${STACK_NAME} --resources-to-skip ${STACK_NAME}
You forgot the double quotes around true ($CI_JOB_MANUAL == "true"). However, you can also use a directed acyclic graph (needs) to execute jobs conditionally:
include:
  - remote: 'https://my-gitlab-server.com/ci-templates/-/raw/master/serverless-template.yml'

deploy-qas:
  extends: .deploy
  variables:
    ....
    PARAMETER_OVERRIDES: "..."
  environment: qas
  only:
    - qas
  tags:
    - serverless

cleanup-cloudformation-stack-failure:
  needs:
    - deploy-qas
  when: on_failure
  variables:
    STACK_NAME: $CI_PROJECT_NAME-$CI_ENVIRONMENT_NAME
  stage: cloudformation_stack_rollback
  script:
    - aws cloudformation continue-update-rollback --stack-name ${STACK_NAME} --resources-to-skip ${STACK_NAME}

Pass run-time selected parameter to job without typing, GitLab CI (GUI)

I'm working on an application deployment pipeline.
I have multiple environments that should be populated with application instances on demand.
Desired behaviour: the init_1 job triggers the executor with a specific parameter
Run a parametrised job for deployment
The above job gets its parameter from the user
The user can only select parameter values from a predefined list
The user should provide the parameter to start the pipeline
What I tried:
(1) I have the deploy job with a manual trigger, where I set the parameter as an environment variable [eg: ENV_NAME].
This solution works but is error-prone and pretty hard to rerun properly.
(2) I have an init job that sets an environment variable [eg: ENV_NAME] with a preset value [eg: dev]. The deploy job is triggered after the init.
This solution works but holds no real value.
(3) I have multiple init jobs that can set a single environment variable [eg: ENV_NAME] with specific values [eg: dev, stage1, stage2]. The deploy job should be triggered after the environment variable is set by the init jobs.
This solution does not work, as the init and deploy jobs are in two separate stages and the latter does not start until all the jobs in the previous stage complete.
(3.a) Same as above, but here the init jobs are set to allow_failure.
This solution does not work, as the init jobs are skipped entirely, so the deploy job does not get the required parameter.
stages:
  - init
  - execute

init_1:
  stage: init
  rules:
    - when: manual
  script:
    - echo "INFRA_ID=pr1" >> build.env
  artifacts:
    reports:
      dotenv: build.env
  allow_failure: true

init_2:
  stage: init
  rules:
    - when: manual
  script:
    - echo "INFRA_ID=pr2" >> build.env
  artifacts:
    reports:
      dotenv: build.env
  allow_failure: true

executor:
  stage: execute
  script:
    - echo "Selected infrastructure $INFRA_ID"
(3.b) Same as above, but a dependency is declared between the init and deploy jobs.
This solution does not work, as the deploy job depends on all of the init jobs.
(4) I create N streams of init and deploy jobs.
This solution works but causes a lot of duplicated code.
Do you see any solutions to my use-case?
Thanks in advance
I have found a solution. It is not the best, as it needs two consecutive manual approvals, but it works.
Concept
Define initiator jobs in the init stage, setting the predefined values as environment variables (with artifacts.reports.dotenv)
Define a trigger job (trigger stage) that blocks the pipeline
Define the executor job in a later stage
Set the init and trigger jobs to manual
Set the init jobs to allow_failure to make them optional (you only need one set of params at a time)
Screenshots
Pipeline in idle state
Completed pipeline
Code
image: busybox:latest

stages:
  - init
  - trigger
  - execute

init_1:
  stage: init
  rules:
    - when: manual
      allow_failure: true
  script:
    - echo "INFRA_ID=pr1" >> build.env
  artifacts:
    reports:
      dotenv: build.env

init_2:
  stage: init
  rules:
    - when: manual
      allow_failure: true
  script:
    - echo "INFRA_ID=pr2" >> build.env
  artifacts:
    reports:
      dotenv: build.env

start:
  stage: trigger
  rules:
    - when: manual
  script:
    - echo "Pipeline is triggered with infra ID of $INFRA_ID"

executor:
  stage: execute
  script:
    - echo "Selected infrastructure $INFRA_ID"

How to create Gitlab CI rules that are evaluated as AND instead of OR

The following gitlab ci job will run if the variable $CI_COMMIT_TAG is set OR if the ./versions.txt file has changed.
some-job:
  script:
    - echo "Do some fancy stuff.";
  rules:
    - if: $CI_COMMIT_TAG
      when: always
    - changes:
        - ./versions.txt
However, what I need is for this job to run when $CI_COMMIT_TAG is set AND ./versions.txt has changed. I don't want the job to run if only one of these evaluates to true. This was the behaviour of the only/changes feature, but only (and except) is less powerful and deprecated.
Is what I want currently possible with gitlab ci?
From the docs:
In the following example:
We run the job manually if Dockerfile or any file in docker/scripts/ has changed AND $VAR == "string value". Otherwise, the job will not be included in the pipeline.
docker build:
  script: docker build -t my-image:$CI_COMMIT_REF_SLUG .
  rules:
    - if: '$VAR == "string value"'
      changes: # Will include the job and set it to when:manual if any of the following paths match a modified file.
        - Dockerfile
        - docker/scripts/*
      when: manual
Your code will look something like this.
some-job:
  script:
    - echo "Do some fancy stuff.";
  rules:
    - if: $CI_COMMIT_TAG
      changes:
        - versions.txt
      when: manual

Conditional variables in gitlab-ci.yml

Depending on the branch the build comes from, I need to use slightly different command-line arguments. In particular, I would like to upload snapshot Nexus artifacts when building from a branch, and a release artifact when building off master.
Is there a way to conditionally alter variables?
I tried to use the except/only keywords like this:
stages:
  - stage

variables:
  TYPE: Release

.upload_common:
  stage: stage
  tags: ["Win"]
  script:
    - echo Uploading %TYPE%

.upload_snapshot:
  variables:
    TYPE: "Snapshot"
  except:
    - master

upload:
  extends:
    - .upload_common
    - .upload_snapshot
Unfortunately, it skips the whole upload step when building off master.
The reason I am using the 'extends' pattern here is that I have Win and Mac platforms, which use slightly different variable substitution syntax ($ vs %). I also have a few different build configurations - Debug/Release, 32-bit/64-bit.
The code below actually works, but I had to duplicate the steps for release and snapshot; only one is enabled at a time.
stages:
  - stage

.upload_common:
  stage: stage
  tags: ["Win"]
  script:
    - echo Uploading %TYPE%

.upload_snapshot:
  variables:
    TYPE: "Snapshot"
  except:
    - master

.upload_release:
  variables:
    TYPE: "Release"
  only:
    - master

upload_release:
  extends:
    - .upload_common
    - .upload_release

upload_snapshot:
  extends:
    - .upload_common
    - .upload_snapshot
The code gets much larger when the snapshot/release configuration is multiplied by Debug/Release, Mac/Win, and 32/64 bits. I would like to keep the number of configurations to a minimum.
Having the ability to conditionally alter just a few variables would help me reduce this code a lot.
Another addition in GitLab 13.7 is rules:variables. This allows some logic in setting variables:
job:
  variables:
    DEPLOY_VARIABLE: "default-deploy"
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      variables:                               # Override DEPLOY_VARIABLE defined
        DEPLOY_VARIABLE: "deploy-production"   # at the job level.
    - if: $CI_COMMIT_REF_NAME =~ /feature/
      variables:
        IS_A_FEATURE: "true"                   # Define a new variable.
  script:
    - echo "Run script with $DEPLOY_VARIABLE as an argument"
    - echo "Run another script if $IS_A_FEATURE exists"
Unfortunately, neither YAML anchors nor GitLab CI's extends seem to allow combining things in the script array of commands as of today.
I use the built-in variable CI_COMMIT_REF_NAME in combination with a global or job-only before_script to solve similar tasks without repeating myself.
Here is an example of my workaround for dynamically setting an environment variable to different values for PROD and DEV during delivery or deployment:
.provide ssh private deploy key: &provide_ssh_private_deploy_key
  before_script:
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - |
      if [ "$CI_COMMIT_REF_NAME" == "master" ]; then
        echo "$SSH_PRIVATE_DEPLOY_KEY_PROD" > ~/.ssh/id_rsa
        MY_DYNAMIC_VAR="we are in master (PROD)"
      else
        echo "$SSH_PRIVATE_DEPLOY_KEY_DEV" > ~/.ssh/id_rsa
        MY_DYNAMIC_VAR="we are NOT in master (DEV)"
      fi
    - chmod 600 ~/.ssh/id_rsa

deliver-via-ssh:
  stage: deliver
  <<: *provide_ssh_private_deploy_key
  script:
    - echo Stage is deliver
    - echo $MY_DYNAMIC_VAR
    - ssh ...
Also consider this workaround for concatenating "script" commands: https://stackoverflow.com/a/57209078/470108
Hopefully it helps.
A nice way to prepare variables for other jobs is the dotenv report artifact. Unfortunately, these variables cannot be used in many places, but if you only need to access them from other jobs' scripts, this is the way:
# prepare environment variables for other jobs
env:
  stage: .pre
  script:
    # Set application version from GIT tag or fake it
    - echo "APPVERSION=${CI_COMMIT_TAG:-0.1-dev-$CI_COMMIT_REF_SLUG}+$CI_COMMIT_SHORT_SHA" | tee -a .env
  artifacts:
    reports:
      dotenv: .env
In the script of this job you can conditionally prepare and write your environment values into a file, then make a dotenv artifact out of it. Subsequent - or better yet, dependent - jobs will pick up the variables for their scripts from there.
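For example, a downstream job could consume them like this (the build job name and the needs relationship are illustrative):

build:
  stage: build
  needs: ["env"]  # fetches the env job's dotenv artifact, so APPVERSION is available
  script:
    - echo "Building version $APPVERSION"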