I need to upload the artifact binary.bin when the job succeeds, and build_trace.log when it fails.
Looking at artifacts:when, I don't see such an ability.
Is there some tricky hack?
I would like to see something like:
job:
  artifacts:
    - name: failed_trace_log
      when: on_failure
      paths:
        - build_trace.log
    - name: succeed
      when: on_success
      paths:
        - binary.bin
My current workaround is:
job:
  artifacts:
    when: always
    paths:
      - build_trace.log
      - binary.bin
One alternative is to use when: on_failure in a cleanup job that runs after the first:
stages:
  - build
  - cleanup_build

job:
  stage: build
  script:
    - make build
  artifacts:
    paths:
      - binary.bin

cleanup_job:
  when: on_failure
  stage: cleanup_build
  script:
    - do cleanup
  artifacts:
    paths:
      - build_trace.log
I am seeing this error:
Found errors in your .gitlab-ci.yml:
  'app-deploy' job needs 'app-verify' job
  but 'app-verify' is not in any previous stage
  You can also test your .gitlab-ci.yml in CI Lint
even though both stages are defined.
The cache is set as below:
cache:
  key: ${CI_PIPELINE_ID}
  paths:
    - $CI_PROJECT_DIR/
    - $CI_PROJECT_DIR/$CONTEXT/
The jobs are defined as below (snippets):
app-build:
  stage: build
  # Extending the maven-build function from maven.yaml
  extends: .maven-build

app-deploy:
  stage: deploy
  extends: .docker-publish
  cache:
    key: ${CI_PIPELINE_ID}
    paths:
      - $CI_PROJECT_DIR/
      - $CI_PROJECT_DIR/$CONTEXT/
  variables:
    DOCKERFILE: Dockerfile
    CONTEXT: app
    OUTPUT: app.war
  needs:
    - app-build
    - app-verify
  dependencies:
    - app-build
    - app-verify
How do I resolve this error so that it goes away and the pipeline runs without errors?
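For reference, needs (and dependencies) can only point at jobs that actually exist in the pipeline. A minimal sketch of a shape that avoids the error (the .maven-verify template and the verify stage are assumptions, not from the snippets above):

```yaml
stages:
  - build
  - verify
  - deploy

app-build:
  stage: build
  extends: .maven-build

# The job referenced by needs/dependencies must be defined, and
# (on older GitLab versions) must live in an earlier stage.
app-verify:
  stage: verify
  extends: .maven-verify   # hypothetical template

app-deploy:
  stage: deploy
  extends: .docker-publish
  needs:
    - app-build
    - app-verify
```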
After adding dynamic child pipelines to our CI pipeline, the on_stop action (e.g. on deleting a branch) stopped working.
In the stop job we delete the created k8s resources, so it is important that it gets executed.
What I noticed is that defining an environment in the child pipeline is the probable cause (without the environment section, the on_stop action works).
Any ideas?
Our .gitlab-ci.yml looks like this:
stages:
  ....
  - deploy
  - tests_prepare
  - maintenance_tests
  ....

deploy_branch_to_k8s:
  stage: deploy
  only:
    - branches
  except:
    - master
  dependencies:
    - build_api
  environment:
    name: branches/$CI_COMMIT_REF_NAME
    on_stop: stop_deployed_branch_in_k8s
  script:
    - deploy to k8s

stop_deployed_branch_in_k8s:
  stage: deploy
  only:
    - branches
  except:
    - master
  when: manual
  dependencies: []
  variables:
    GIT_STRATEGY: none
  environment:
    name: branches/$CI_COMMIT_REF_NAME
    action: stop
  script:
    - delete k8s resources

generate_config_tests:
  stage: tests_prepare
  only:
    - branches
  except:
    - master
  dependencies:
    - build_api
  ....
  script:
    - python3 ./utils/generate-jobs-config.py > generated-config.yml
  artifacts:
    paths:
      - generated-config.yml

create_maintenance_tests_pipeline:
  stage: maintenance_tests
  only:
    - branches
  except:
    - master
  trigger:
    strategy: depend
    include:
      - artifact: generated-config.yml
        job: generate_config_tests
  variables:
    PARENT_PIPELINE_ID: $CI_PIPELINE_ID
generated-config.yml looks something like this:
stages:
  - tests

run_maintenance_test_job___TEST_NAME__:
  stage: tests
  retry: 2
  environment:
    name: branches/$CI_COMMIT_REF_NAME
  needs:
    - pipeline: $PARENT_PIPELINE_ID
      job: generate_config_maintenance_tests
  script:
    - deploy and run tests in k8s
If I'm not wrong, you should skip the needs part altogether in the child pipeline, since it is only used for jobs in the same pipeline. Its upstream will be the parent pipeline anyway.
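Following that suggestion, the generated child config would look roughly like this (a sketch based on the snippet above, not a verified fix):

```yaml
stages:
  - tests

run_maintenance_test_job___TEST_NAME__:
  stage: tests
  retry: 2
  environment:
    name: branches/$CI_COMMIT_REF_NAME
  # needs: removed -- the child pipeline's upstream
  # is the parent pipeline anyway
  script:
    - deploy and run tests in k8s
```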
I am trying to run some jobs in a "test" stage followed by one job in a "monitor" stage.
The trouble is that if the unit tests fail in the test stage, the entire pipeline fails and my job in the monitor stage is skipped altogether.
I can set the unit tests to allow_failure, which lets the monitor stage run, but then the pipeline passes even if the unit tests fail, and I don't want that.
How do I have the monitor stage run its job while still having the pipeline fail if the unit tests fail?
Here is the relevant configuration:
include:
  - project: templates/kubernetes
    ref: master
    file: /.kube-api-version-checks.yaml
  - local: .choose-runner.yaml
    ref: master

.run_specs_script: &run_specs_script |
  ./kubernetes/integration/specs/run-specs.sh $CI_COMMIT_SHA $TEST_NAMESPACE $ECR_BASE_URL/test/$IMAGE_NAME $PROCESSES ${UNIT_TEST_INSTANCE_TYPE:-c5d.12xlarge}

.base_unit_tests:
  image: XXX
  stage: test
  coverage: '/TOTAL\sCOVERAGE:\s\d+\.\d+%/'
  variables:
    GIT_DEPTH: 1
  script:
    - *run_specs_script
  after_script:
    - kubectl delete ns $TEST_NAMESPACE
  artifacts:
    when: always
    reports:
      junit: tmp/*.xml
    paths:
      - tmp/*.xml
      - artifact.tar.gz

unit_tests:
  extends:
    - .base_unit_tests
    - .integration

unit_tests_dependency_update:
  extends:
    - .base_unit_tests
    - .low-priority

unit_tests_dependencies_next:
  image: XXX
  stage: test
  allow_failure: true
  except:
    - web
    - triggers
  tags:
    - integration-green-kube-runner
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^hint\/upgrade/
  variables:
    GIT_DEPTH: 1
    DEPENDENCIES_NEXT: 1
    IMAGE_NAME: next
  script:
    - *run_specs_script
  after_script:
    - kubectl delete ns $TEST_NAMESPACE
  artifacts:
    when: always
    reports:
      junit: tmp/*.xml
    paths:
      - tmp/*.xml
      - artifact.tar.gz

unit_tests_datadog:
  extends:
    - .integration
  stage: monitor
  image: node
  variables:
    DD_API_KEY: XXX
  before_script:
    - npm install -g @datadog/datadog-ci
  script:
    - DD_ENV=ci DATADOG_API_KEY="$DD_API_KEY" DATADOG_SITE=datadoghq.com datadog-ci junit upload --service <service> ./tmp
  dependencies:
    - unit_tests
  when: always
This might not be the best solution, but you could add a dedicated stage after your monitor stage with a job that runs only on_failure (https://docs.gitlab.com/ee/ci/yaml/index.html#artifactswhen) and returns a non-zero exit code:
fail_pipeline:
  stage: status
  image: bash
  script:
    - exit 1
  when: on_failure
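For that job to be scheduled at all, the status stage also has to be declared last in the stages list — a sketch assuming the stage names used above:

```yaml
stages:
  - test
  - monitor
  - status   # runs last; fail_pipeline lives here and fails the pipeline
```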
I want to define a pipeline to compile, deploy to target and test my project.
This should happen in two distinct ways: an incremental (hopefully fast) build at each commit and a full build scheduled at night.
The following .gitlab-ci.yml has all jobs marked "manual" for testing purposes.
stages:
  - build
  - deploy
  - test

variables:
  BUILD_ARTIFACTS_DIR: "artifacts"

build-incremental:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  variables:
    BUILD_TOP_DIR: "/workspace/builds"
  tags:
    - yocto
  when: manual

build-nightly:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  tags:
    - yocto
  when: manual

deploy:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  dependencies:
    - build
  when: manual

test:
  stage: test
  script:
    - echo "Testing..."
    - ./ci/do-test
    - echo "done."
  tags:
    - yocto
  dependencies:
    - deploy
  when: manual
This fails with the message: deploy job: undefined dependency: build.
How do I explain to GitLab that the deploy stage needs either build-incremental or build-nightly artifacts?
Later I will have to understand how to trigger build-incremental at commit and build-nightly using a schedule, but that seems to be a different problem.
To tell GitLab that your deploy job needs artifacts from a specific job, name the job (not the stage) in dependencies. In deploy you are declaring a dependency on build, which is a stage name, not the name of the job whose artifacts you want to pick up.
Example:
deploy:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  dependencies:
    - build-incremental
  when: manual
More info and examples in the docs under dependencies.
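Since either build job can produce the artifacts, one option is to list both build jobs; artifacts are then fetched from whichever one actually ran. This is my own sketch, and how skipped manual jobs are treated in dependencies varies by GitLab version, so verify on your instance:

```yaml
deploy:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  # Fetch artifacts from whichever build job was actually run
  dependencies:
    - build-incremental
    - build-nightly
  when: manual
```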
For your scenario, there are two separate paths through your pipeline depending on the "source": a 'push' or a 'schedule'. You can get the source of the pipeline with the CI_PIPELINE_SOURCE variable. I build these paths separately at first, then combine them:
# First source: push events
stages:
  - build
  - deploy
  - test

variables:
  BUILD_ARTIFACTS_DIR: "artifacts"

build-incremental:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  variables:
    BUILD_TOP_DIR: "/workspace/builds"
  tags:
    - yocto
  rules:
    - if: $CI_PIPELINE_SOURCE == 'push'
      when: manual
    - when: never

deploy-incremental:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  needs: ['build-incremental']
  rules:
    - if: $CI_PIPELINE_SOURCE == 'push'
      when: always
    - when: never

test-incremental:
  stage: test
  script:
    - echo "Testing..."
    - ./ci/do-test
    - echo "done."
  tags:
    - yocto
  needs: ['deploy-incremental']
  rules:
    - if: $CI_PIPELINE_SOURCE == 'push'
      when: always
    - when: never
In this path, if the source is a push, the build job runs upon manual input; otherwise it never runs. Then deploy-incremental runs automatically (thanks to needs, without waiting for the rest of the stage) as long as the source is a push; otherwise it never runs. Finally, test-incremental likewise runs automatically as soon as deploy-incremental finishes, if the source is a push.
Now we can build the schedule path:
# Scheduled path:
stages:
  - build
  - deploy
  - test

variables:
  BUILD_ARTIFACTS_DIR: "artifacts"

build-schedule:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  variables:
    BUILD_TOP_DIR: "/workspace/builds"
  tags:
    - yocto
  rules:
    - if: $CI_PIPELINE_SOURCE == 'schedule'
      when: manual
    - when: never

deploy-schedule:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  needs: ['build-schedule']
  rules:
    - if: $CI_PIPELINE_SOURCE == 'schedule'
      when: always
    - when: never

test-schedule:
  stage: test
  script:
    - echo "Testing..."
    - ./ci/do-test
    - echo "done."
  tags:
    - yocto
  needs: ['deploy-schedule']
  rules:
    - if: $CI_PIPELINE_SOURCE == 'schedule'
      when: always
    - when: never
This works the same way as the push path, but we check to see if the source is schedule.
Now we can combine the two paths:
Combined result:
stages:
  - build
  - deploy
  - test

variables:
  BUILD_ARTIFACTS_DIR: "artifacts"

build-incremental:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  variables:
    BUILD_TOP_DIR: "/workspace/builds"
  tags:
    - yocto
  rules:
    - if: $CI_PIPELINE_SOURCE == 'push'
      when: manual
    - when: never

build-schedule:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  variables:
    BUILD_TOP_DIR: "/workspace/builds"
  tags:
    - yocto
  rules:
    - if: $CI_PIPELINE_SOURCE == 'schedule'
      when: manual
    - when: never

deploy-incremental:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  needs: ['build-incremental']
  rules:
    - if: $CI_PIPELINE_SOURCE == 'push'
      when: always
    - when: never

deploy-schedule:
  stage: deploy
  script:
    - echo "Deploying..."
    - ./ci/do-deploy
    - echo "done."
  tags:
    - yocto
  needs: ['build-schedule']
  rules:
    - if: $CI_PIPELINE_SOURCE == 'schedule'
      when: always
    - when: never

test-incremental:
  stage: test
  script:
    - echo "Testing..."
    - ./ci/do-test
    - echo "done."
  tags:
    - yocto
  needs: ['deploy-incremental']
  rules:
    - if: $CI_PIPELINE_SOURCE == 'push'
      when: always
    - when: never

test-schedule:
  stage: test
  script:
    - echo "Testing..."
    - ./ci/do-test
    - echo "done."
  tags:
    - yocto
  needs: ['deploy-schedule']
  rules:
    - if: $CI_PIPELINE_SOURCE == 'schedule'
      when: always
    - when: never
A pipeline like this is tedious and takes a while to build, but it works great when you have multiple paths or ways to build the project.
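As a side note (my own sketch, not part of the original answer): the duplication between the two paths can be reduced with hidden template jobs and extends, keeping only the rules and path-specific bits in each concrete job:

```yaml
# Shared build logic lives in a hidden job...
.build-template:
  timeout: 5h
  stage: build
  script:
    - echo "Building"
    - ./ci/do-prep
    - echo "done."
  artifacts:
    paths:
      - $BUILD_ARTIFACTS_DIR/
  variables:
    BUILD_TOP_DIR: "/workspace/builds"
  tags:
    - yocto

# ...and each path only adds its own rules.
build-incremental:
  extends: .build-template
  rules:
    - if: $CI_PIPELINE_SOURCE == 'push'
      when: manual
    - when: never

build-schedule:
  extends: .build-template
  rules:
    - if: $CI_PIPELINE_SOURCE == 'schedule'
      when: manual
    - when: never
```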
GitLab CI: Here is my pipeline for a project with several stages:
stages:
  - prepare
  - build
  - deploy
  - build_test
  - test
Some stages have more than one job to execute, e.g. one job per Oracle database environment (aca, spt, fin, ...).
The question is:
My pipeline skipped a test job (test:aca). I understand this happened because a related job failed: in that run the job deploy:spt failed, but test:aca was skipped.
Look at the test:aca job definition:
test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  dependencies:
    - test:build_test
    - deploy:aca
It doesn't have a dependency on deploy:spt, only on test:build_test and deploy:aca. How can I get the job test:aca to run?
Have you tried removing deploy:aca and only using test:build_test as a dependency?
test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  dependencies:
    - test:build_test