Below is my GitLab CI pipeline code:
stages:
  - image_build
  - test1
  - test2

image_build:
  stage: image_build
  tags:
    - ddc
  script:
    - echo "image build"
  rules:
    - changes:
        - Dockerfile

test1:
  stage: test1
  tags:
    - ddc
  script:
    - echo "Test1 stage"
  rules:
    - when: on_success

test2:
  stage: test2
  tags:
    - ddc
  script:
    - echo "Test2 stage"
  rules:
    - when: on_failure
I need the stages test1 and test2 to execute when no changes have been made to the Dockerfile, and those same stages test1 and test2 should not execute when there are changes to the Dockerfile.
The second scenario works fine, but the first one doesn't. Please help me get this pipeline up and running.
If you are using GitLab CI version 11.4 or above, you can use the only: changes or rules: changes parameters. Based on the official docs:
Using the changes keyword with only or except makes it possible to define if a job should be created based on files modified by a Git push event.
So your test1 and test2 stages might look like this:
⋮
test1:
  stage: test1
  tags:
    - ddc
  script:
    - echo "Test1 stage"
  when: on_success
  except:
    changes:
      - Dockerfile

test2:
  stage: test2
  tags:
    - ddc
  script:
    - echo "Test2 stage"
  when: on_failure
  except:
    changes:
      - Dockerfile
⋮
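For completeness, the same behaviour can also be written with rules: changes alone, which is what GitLab recommends going forward (note that rules: cannot be combined with only/except in the same job, which is why the example above uses a job-level when:). A minimal sketch on my part, not tested against your runner:

test1:
  stage: test1
  tags:
    - ddc
  script:
    - echo "Test1 stage"
  rules:
    # skip this job entirely when the Dockerfile changed
    - changes:
        - Dockerfile
      when: never
    # otherwise run with the normal on_success behaviour
    - when: on_success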
After adding dynamic child pipelines to our CI pipeline, the on_stop action (e.g. on deleting a branch) stopped working.
In the stop job we delete the created k8s resources, so it's important that it gets executed.
What I noticed is that defining an environment in the child pipeline is the probable cause (without the environment section, the on_stop action works).
Any ideas?
The .gitlab-ci.yml looks like this:
stages:
  ....
  - deploy
  - tests_prepare
  - maintenance_tests
  ....

deploy_branch_to_k8s:
  stage: deploy
  only:
    - branches
  except:
    - master
  dependencies:
    - build_api
  environment:
    name: branches/$CI_COMMIT_REF_NAME
    on_stop: stop_deployed_branch_in_k8s
  script:
    - deploy to k8s

stop_deployed_branch_in_k8s:
  stage: deploy
  only:
    - branches
  except:
    - master
  when: manual
  dependencies: []
  variables:
    GIT_STRATEGY: none
  environment:
    name: branches/$CI_COMMIT_REF_NAME
    action: stop
  script:
    - delete k8s resources

generate_config_tests:
  stage: tests_prepare
  only:
    - branches
  except:
    - master
  dependencies:
    - build_api
  ....
  script:
    - python3 ./utils/generate-jobs-config.py > generated-config.yml
  artifacts:
    paths:
      - generated-config.yml

create_maintenance_tests_pipeline:
  stage: maintenance_tests
  only:
    - branches
  except:
    - master
  trigger:
    strategy: depend
    include:
      - artifact: generated-config.yml
        job: generate_config_tests
  variables:
    PARENT_PIPELINE_ID: $CI_PIPELINE_ID
generated-config.yml looks something like this:

stages:
  - tests

run_maintenance_test_job___TEST_NAME__:
  stage: tests
  retry: 2
  environment:
    name: branches/$CI_COMMIT_REF_NAME
  needs:
    - pipeline: $PARENT_PIPELINE_ID
      job: generate_config_maintenance_tests
  script:
    - deploy and run tests in k8s
If I'm not wrong, you should skip the needs: part altogether in the child pipeline, since it is only used for jobs in the same pipeline. Its upstream will be the parent pipeline anyway.
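In other words, the generated child config would simply drop the needs: block; a minimal sketch of the generated job under that assumption:

stages:
  - tests

run_maintenance_test_job___TEST_NAME__:
  stage: tests
  retry: 2
  environment:
    name: branches/$CI_COMMIT_REF_NAME
  # needs: removed -- the child pipeline is already tied to its parent
  script:
    - deploy and run tests in k8s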
I am trying to run some jobs in a "test" stage followed by one job in a "monitor" stage.
The trouble is if the unit tests fail in the test stage, the entire pipeline fails and it skips my job in the monitor stage altogether.
I can set the unit tests to allow failure, which lets the monitor stage run, but the pipeline will pass if the unit tests fail, and I don't want that.
How do I have the monitor stage run its job while still having the pipeline fail if the unit tests fail?
Here is the relevant configuration:
include:
  - project: templates/kubernetes
    ref: master
    file: /.kube-api-version-checks.yaml
  - local: .choose-runner.yaml
    ref: master

.run_specs_script: &run_specs_script |
  ./kubernetes/integration/specs/run-specs.sh $CI_COMMIT_SHA $TEST_NAMESPACE $ECR_BASE_URL/test/$IMAGE_NAME $PROCESSES ${UNIT_TEST_INSTANCE_TYPE:-c5d.12xlarge}

.base_unit_tests:
  image: XXX
  stage: test
  coverage: '/TOTAL\sCOVERAGE:\s\d+\.\d+%/'
  variables:
    GIT_DEPTH: 1
  script:
    - *run_specs_script
  after_script:
    - kubectl delete ns $TEST_NAMESPACE
  artifacts:
    when: always
    reports:
      junit: tmp/*.xml
    paths:
      - tmp/*.xml
      - artifact.tar.gz

unit_tests:
  extends:
    - .base_unit_tests
    - .integration

unit_tests_dependency_update:
  extends:
    - .base_unit_tests
    - .low-priority

unit_tests_dependencies_next:
  image: XXX
  stage: test
  allow_failure: true
  except:
    - web
    - triggers
  tags:
    - integration-green-kube-runner
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^hint\/upgrade/
  variables:
    GIT_DEPTH: 1
    DEPENDENCIES_NEXT: 1
    IMAGE_NAME: next
  script:
    - *run_specs_script
  after_script:
    - kubectl delete ns $TEST_NAMESPACE
  artifacts:
    when: always
    reports:
      junit: tmp/*.xml
    paths:
      - tmp/*.xml
      - artifact.tar.gz

unit_tests_datadog:
  extends:
    - .integration
  stage: monitor
  image: node
  variables:
    DD_API_KEY: XXX
  before_script:
    - npm install -g @datadog/datadog-ci
  script:
    - DD_ENV=ci DATADOG_API_KEY="$DD_API_KEY" DATADOG_SITE=datadoghq.com datadog-ci junit upload --service <service> ./tmp
  dependencies:
    - unit_tests
  when: always
This might not be the best solution, but you could add a dedicated stage after your monitor stage with a job that only runs on_failure (https://docs.gitlab.com/ee/ci/yaml/index.html#when) and exits with a non-zero code, which fails the pipeline:
fail_pipeline:
  stage: status
  image: bash
  script:
    - exit 1
  when: on_failure
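For this to work, the extra stage has to be declared after monitor in your stages list, e.g.:

stages:
  - test
  - monitor
  - status   # runs last; fail_pipeline lives here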
I want to define a pipeline to compile, deploy to target and test my project.
This should happen in two distinct ways: an incremental (hopefully fast) build at each commit and a full build scheduled at night.
The following .gitlab-ci.yml has all jobs marked "manual" for testing purposes.
stages:
  - build
  - deploy
  - test

variables:
  BUILD_ARTIFACTS_DIR: "artifacts"

build-incremental:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  variables:
    BUILD_TOP_DIR: "/workspace/builds"
  tags:
    - yocto
  when: manual

build-nightly:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  tags:
    - yocto
  when: manual

deploy:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  dependencies:
    - build
  when: manual

test:
  stage: test
  script:
    - echo "Testing..."
    - ./ci/do-test
    - echo "done."
  tags:
    - yocto
  dependencies:
    - deploy
  when: manual
This fails with the message: deploy job: undefined dependency: build.
How do I explain to GitLab that the deploy stage needs either the build-incremental or the build-nightly artifacts?
Later I will have to understand how to trigger build-incremental at each commit and build-nightly using a schedule, but that seems to be a different problem.
To tell GitLab that your deploy stage needs certain artifacts from a specific job, name the dependencies by job name. In deploy you are defining a dependency on build, which is a stage name, not the name of the job whose artifacts you want to pick up.
Example:
deploy:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  dependencies:
    - build-incremental
  when: manual
More info and examples are in the dependencies docs: https://docs.gitlab.com/ee/ci/yaml/#dependencies
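If you are on GitLab 13.10 or newer, another option (my suggestion, not part of the answer above) is needs: with optional: true, so deploy can pick up artifacts from whichever build job is actually present; a sketch, worth verifying against your manual build jobs:

deploy:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  needs:
    - job: build-incremental
      optional: true   # don't fail validation if this job is not in the pipeline
    - job: build-nightly
      optional: true
  when: manual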
For your scenario, there are two separate paths through your pipeline depending on the "source": a 'push' or a 'schedule'. You can get the source of the pipeline with the CI_PIPELINE_SOURCE variable. I build these paths separately at first, then combine them:
# First source: push events
stages:
  - build
  - deploy
  - test

variables:
  BUILD_ARTIFACTS_DIR: "artifacts"

build-incremental:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  variables:
    BUILD_TOP_DIR: "/workspace/builds"
  tags:
    - yocto
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
      when: manual
    - when: never

deploy-incremental:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  needs: ['build-incremental']
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
      when: always
    - when: never

test-incremental:
  stage: test
  script:
    - echo "Testing..."
    - ./ci/do-test
    - echo "done."
  tags:
    - yocto
  needs: ['deploy-incremental']
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
      when: always
    - when: never
In this path, if the source is a push, the build job runs upon manual input; otherwise it never runs. Then deploy-incremental runs automatically as soon as its needs (build-incremental) finishes, without waiting for other jobs or stages, as long as the source is a push; otherwise it never runs. Finally, test-incremental behaves the same way, running as soon as deploy-incremental finishes if the source is a push.
Now we can build the schedule path:
# Scheduled path:
stages:
  - build
  - deploy
  - test

variables:
  BUILD_ARTIFACTS_DIR: "artifacts"

build-schedule:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  variables:
    BUILD_TOP_DIR: "/workspace/builds"
  tags:
    - yocto
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: manual
    - when: never

deploy-schedule:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  needs: ['build-schedule']
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: always
    - when: never

test-schedule:
  stage: test
  script:
    - echo "Testing..."
    - ./ci/do-test
    - echo "done."
  tags:
    - yocto
  needs: ['deploy-schedule']
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: always
    - when: never
This works the same way as the push path, but we check to see if the source is schedule.
Now we can combine the two paths:
Combined result:
stages:
  - build
  - deploy
  - test

variables:
  BUILD_ARTIFACTS_DIR: "artifacts"

build-incremental:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  variables:
    BUILD_TOP_DIR: "/workspace/builds"
  tags:
    - yocto
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
      when: manual
    - when: never

build-schedule:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  variables:
    BUILD_TOP_DIR: "/workspace/builds"
  tags:
    - yocto
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: manual
    - when: never

deploy-incremental:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  needs: ['build-incremental']
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
      when: always
    - when: never

deploy-schedule:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  needs: ['build-schedule']
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: always
    - when: never

test-incremental:
  stage: test
  script:
    - echo "Testing..."
    - ./ci/do-test
    - echo "done."
  tags:
    - yocto
  needs: ['deploy-incremental']
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
      when: always
    - when: never

test-schedule:
  stage: test
  script:
    - echo "Testing..."
    - ./ci/do-test
    - echo "done."
  tags:
    - yocto
  needs: ['deploy-schedule']
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: always
    - when: never
A pipeline like this is tedious and takes a while to write, but it works great when you have multiple paths/ways to build the project.
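One way to trim the duplication (my sketch, not part of the original answer) is to move the shared keys into hidden template jobs and extend them, so each variant only carries its own rules:

.build-base:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  variables:
    BUILD_TOP_DIR: "/workspace/builds"
  tags:
    - yocto

build-incremental:
  extends: .build-base
  rules:
    - if: $CI_PIPELINE_SOURCE == "push"
      when: manual
    - when: never

build-schedule:
  extends: .build-base
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"
      when: manual
    - when: never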
I'm trying to create a YAML file where the build calls another test (project) if the build succeeds.
So I tried to add a when: on_success rule inside the after_script block:
before_script:
  - echo "Before script section"
  - echo "For example you might run an update here or install a build dependency"
  - echo "Or perhaps you might print out some debugging details"

after_script:
  when: on_success
  script:
    - echo "success"

test2:
  stage: test
  script:
    - echo "Hello world"

deploy:
  stage: deploy
  script:
    - echo "Do your build here"

test1:
  stage: test
  script:
    - exit 1
With this, I get the error: GitLab CI configuration is invalid: after_script config should be an array containing strings and arrays of strings.
How do I implement this logic the correct way?
When you put before_script: and after_script: at the top level of the .gitlab-ci.yml file, you're basically saying that all jobs (that includes test2, deploy and test1) will run that before_script: and after_script:.
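Note that after_script: only accepts a list of commands, so the success condition has to live inside the command itself; a minimal sketch, assuming GitLab 13.5+ where CI_JOB_STATUS is exposed to after_script:

after_script:
  # after_script must be a plain list of shell commands;
  # branch on the job status instead of a when: key
  - if [ "$CI_JOB_STATUS" == "success" ]; then echo "success"; fi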
I'm trying to create a YAML file where the build calls another test (project) if the build succeeds.
In this case, take a look at allow_failure: (see: https://docs.gitlab.com/ee/ci/yaml/#allow_failure). By default, allow_failure: is already false. I can't really figure out from the sample .gitlab-ci.yml file you shared what you actually want to do, but I can give you a quick example:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - echo build something

test:
  stage: test
  script:
    - echo test something

deploy:
  stage: deploy
  script:
    - echo deploy something
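Since the question mentions having the build call tests in another project on success, a multi-project trigger job may be closer to what's wanted. A sketch with a hypothetical downstream project path; trigger jobs default to when: on_success, so the downstream pipeline only starts if the earlier stages passed:

trigger_other_project_tests:
  stage: test
  trigger:
    project: my-group/other-test-project   # hypothetical path
    strategy: depend   # mirror the downstream pipeline's status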
I need to upload the artifact binary.bin when the job succeeds and build_trace.log when it fails.
Looking at artifacts:when,
I don't see such an ability.
Is there some tricky hack?
I would like to see something like:
job:
  artifacts:
    - name: failed_trace_log
      when: on_failure
      paths:
        - build_trace.log
    - name: succeed
      when: on_success
      paths:
        - binary.bin
Current workaround is:

job:
  artifacts:
    when: always
    paths:
      - build_trace.log
      - binary.bin
One alternative is to use when: on_failure in a cleanup job that runs after the first one.
stages:
  - build
  - cleanup_build

job:
  stage: build
  script:
    - make build
  artifacts:
    paths:
      - binary.bin

cleanup_job:
  when: on_failure
  stage: cleanup_build
  script:
    - do cleanup
  artifacts:
    paths:
      - build_trace.log
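Note that artifacts:when defaults to on_success, so in this layout job uploads binary.bin only when the build passes, while cleanup_job, which runs only after a failure, is the one that uploads build_trace.log.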