After adding dynamic child pipelines to our CI pipeline, the on_stop action (e.g. when deleting a branch) stopped working.
In the stop job we delete the created k8s resources, so it is important that it runs.
What I noticed is that defining an environment in the child pipeline is the probable cause (without the environment section, the on_stop action works).
Any ideas?
The gitlab-ci.yaml looks like this:
stages:
  ....
  - deploy
  - tests_prepare
  - maintenance_tests
  ....
deploy_branch_to_k8s:
  stage: deploy
  only:
    - branches
  except:
    - master
  dependencies:
    - build_api
  environment:
    name: branches/$CI_COMMIT_REF_NAME
    on_stop: stop_deployed_branch_in_k8s
  script:
    - deploy to k8s
stop_deployed_branch_in_k8s:
  stage: deploy
  only:
    - branches
  except:
    - master
  when: manual
  dependencies: []
  variables:
    GIT_STRATEGY: none
  environment:
    name: branches/$CI_COMMIT_REF_NAME
    action: stop
  script:
    - delete k8s resources
generate_config_tests:
  stage: tests_prepare
  only:
    - branches
  except:
    - master
  dependencies:
    - build_api
  ....
  script:
    - python3 ./utils/generate-jobs-config.py > generated-config.yml
  artifacts:
    paths:
      - generated-config.yml
create_maintenance_tests_pipeline:
  stage: maintenance_tests
  only:
    - branches
  except:
    - master
  trigger:
    strategy: depend
    include:
      - artifact: generated-config.yml
        job: generate_config_tests
  variables:
    PARENT_PIPELINE_ID: $CI_PIPELINE_ID
generated-config.yml looks something like this
stages:
  - tests

run_maintenance_test_job___TEST_NAME__:
  stage: tests
  retry: 2
  environment:
    name: branches/$CI_COMMIT_REF_NAME
  needs:
    - pipeline: $PARENT_PIPELINE_ID
      job: generate_config_maintenance_tests
  script:
    - deploy and run tests in k8s
If I'm not wrong, you should skip the needs part altogether in the child pipeline, since it is only used for jobs in the same pipeline. Its upstream will be the parent pipeline anyway.
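A minimal sketch of the generated child job with the needs section simply dropped (otherwise identical to the job in generated-config.yml above):

run_maintenance_test_job___TEST_NAME__:
  stage: tests
  retry: 2
  environment:
    name: branches/$CI_COMMIT_REF_NAME
  script:
    - deploy and run tests in k8s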
I am seeing this error:
Found errors in your .gitlab-ci.yml:
'app-deploy' job needs 'app-verify' job
but 'app-verify' is not in any previous stage
You can also test your .gitlab-ci.yml in CI Lint
whereas both stages are defined.
The cache is set as below:
cache:
  key: ${CI_PIPELINE_ID}
  paths:
    - $CI_PROJECT_DIR/
    - $CI_PROJECT_DIR/$CONTEXT/
The stages are defined as below (snippets):
app-build:
  stage: build
  # Extending the maven-build function from maven.yaml
  extends: .maven-build

app-deploy:
  stage: deploy
  extends: .docker-publish
  cache:
    key: ${CI_PIPELINE_ID}
    paths:
      - $CI_PROJECT_DIR/
      - $CI_PROJECT_DIR/$CONTEXT/
  variables:
    DOCKERFILE: Dockerfile
    CONTEXT: app
    OUTPUT: app.war
  needs:
    - app-build
    - app-verify
  dependencies:
    - app-build
    - app-verify
How do I resolve the above error? The error should go away and the pipeline should run without errors.
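For reference, needs can only point at a job that is defined in an earlier stage, so one likely fix is to actually define an app-verify job before the deploy stage. A hypothetical sketch (the verify stage name and the .maven-verify template are assumptions, not taken from the config above):

app-verify:
  stage: verify            # a stage that must come before deploy in the stages list (assumed)
  extends: .maven-verify   # hypothetical template, analogous to .maven-build above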
I am trying to run some jobs in a "test" stage followed by one job in a "monitor" stage.
The trouble is that if the unit tests fail in the test stage, the entire pipeline fails and my job in the monitor stage is skipped altogether.
I can set the unit tests to allow failure, which lets the monitor stage run, but the pipeline will pass if the unit tests fail, and I don't want that.
How do I have the monitor stage run its job while still having the pipeline fail if the unit tests fail?
Here is the relevant configuration:
include:
  - project: templates/kubernetes
    ref: master
    file: /.kube-api-version-checks.yaml
  - local: .choose-runner.yaml
    ref: master

.run_specs_script: &run_specs_script |
  ./kubernetes/integration/specs/run-specs.sh $CI_COMMIT_SHA $TEST_NAMESPACE $ECR_BASE_URL/test/$IMAGE_NAME $PROCESSES ${UNIT_TEST_INSTANCE_TYPE:-c5d.12xlarge}

.base_unit_tests:
  image: XXX
  stage: test
  coverage: '/TOTAL\sCOVERAGE:\s\d+\.\d+%/'
  variables:
    GIT_DEPTH: 1
  script:
    - *run_specs_script
  after_script:
    - kubectl delete ns $TEST_NAMESPACE
  artifacts:
    when: always
    reports:
      junit: tmp/*.xml
    paths:
      - tmp/*.xml
      - artifact.tar.gz
unit_tests:
  extends:
    - .base_unit_tests
    - .integration

unit_tests_dependency_update:
  extends:
    - .base_unit_tests
    - .low-priority

unit_tests_dependencies_next:
  image: XXX
  stage: test
  allow_failure: true
  except:
    - web
    - triggers
  tags:
    - integration-green-kube-runner
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_SOURCE_BRANCH_NAME =~ /^hint\/upgrade/
  variables:
    GIT_DEPTH: 1
    DEPENDENCIES_NEXT: 1
    IMAGE_NAME: next
  script:
    - *run_specs_script
  after_script:
    - kubectl delete ns $TEST_NAMESPACE
  artifacts:
    when: always
    reports:
      junit: tmp/*.xml
    paths:
      - tmp/*.xml
      - artifact.tar.gz
unit_tests_datadog:
  extends:
    - .integration
  stage: monitor
  image: node
  variables:
    DD_API_KEY: XXX
  before_script:
    - npm install -g @datadog/datadog-ci
  script:
    - DD_ENV=ci DATADOG_API_KEY="$DD_API_KEY" DATADOG_SITE=datadoghq.com datadog-ci junit upload --service <service> ./tmp
  dependencies:
    - unit_tests
  when: always
This might not be the best solution, but you could add a dedicated stage after your monitor stage that only runs on_failure (https://docs.gitlab.com/ee/ci/yaml/index.html#artifactswhen) and exits with a non-zero code.
fail_pipeline:
  stage: status
  image: bash
  script:
    - exit 1
  when: on_failure
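For this to work, the status stage has to be added after monitor in the stages list, for example:

stages:
  - test
  - monitor
  - status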
I have the following script that pulls from a remote template. The remote template has the following stages: build, test, code_analysis, compliance, deploy.
The deploy step is manually triggered and executes the AWS CLI to deploy a SAM project.
I want to add an additional step such that when the deploy step fails, it will execute a script to rollback the cloudformation stack to its last operational state.
I created a "cleanup-cloudformation-stack-failure" job and tried adding "extends: .deploy", but that didn't work.
I then added an additional stage called "cloudformation_stack_rollback" in the serverless-template.yml file and tried to use a mix of rules and when to get it to trigger on failure, but I'm getting errors flagged by GitLab's linter.
Does anyone know what I'm doing wrong?
include:
  - remote: 'https://my-gitlab-server.com/ci-templates/-/raw/master/serverless-template.yml'

deploy-qas:
  extends: .deploy
  variables:
    ....
    PARAMETER_OVERRIDES: "..."
  environment: qas
  only:
    - qas
  tags:
    - serverless

cleanup-cloudformation-stack-failure:
  variables:
    STACK_NAME: $CI_PROJECT_NAME-$CI_ENVIRONMENT_NAME
  stage: cloudformation_stack_rollback
  rules:
    - if: '$CI_JOB_MANUAL == true'
      when: on_failure
  script:
    - aws cloudformation continue-update-rollback --stack-name ${STACK_NAME} --resources-to-skip ${STACK_NAME}
You forgot the double quotes around true; however, you can also use Directed Acyclic Graphs (needs) to execute jobs conditionally:
include:
  - remote: 'https://my-gitlab-server.com/ci-templates/-/raw/master/serverless-template.yml'

deploy-qas:
  extends: .deploy
  variables:
    ....
    PARAMETER_OVERRIDES: "..."
  environment: qas
  only:
    - qas
  tags:
    - serverless

cleanup-cloudformation-stack-failure:
  needs:
    - deploy-qas
  when: on_failure
  variables:
    STACK_NAME: $CI_PROJECT_NAME-$CI_ENVIRONMENT_NAME
  stage: cloudformation_stack_rollback
  script:
    - aws cloudformation continue-update-rollback --stack-name ${STACK_NAME} --resources-to-skip ${STACK_NAME}
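If you prefer to keep the rules approach from the question instead, the missing quotes mentioned above would look roughly like this (still assuming $CI_JOB_MANUAL is set the way the question expects):

rules:
  - if: '$CI_JOB_MANUAL == "true"'   # value quoted, as noted above
    when: on_failure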
GitLab CI: here is my pipeline for a project with some stages:
stages:
  - prepare
  - build
  - deploy
  - build_test
  - test
And some stages have more than one job to execute, e.g. one job for each Oracle database environment (aca, spt, fin, ...).
The question is:
My pipeline skipped a test job (test:aca). I understood that this happened because a job of the same kind failed; in that run the job deploy:spt failed, but my test:aca was skipped.
Look at the test:aca job script:
test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  dependencies:
    - test:build_test
    - deploy:aca
It doesn't have a dependency on deploy:spt, only on test:build_test and deploy:aca. How can I get the job test:aca to run?
Have you tried removing deploy:aca and only using test:build_test as a dependency?
test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  dependencies:
    - test:build_test
Is it possible to configure the project's .gitlab-ci.yml in such a way that, after a tag has been pushed, it runs several commands? If so, how can this be done? I would also like to define the variables that I want to use in these commands.
My .gitlab-ci.yml looks like:
stages:
  - build
  - deploy

build:
  stage: build
  script:
    - composer install --no-ansi
    - vendor/bin/phar-composer build
  artifacts:
    paths:
      - example.phar
  tags:
    - php:7.0

deploy:
  stage: deploy
  only:
    - tags
  dependencies:
    - build
  script:
    - cp example.phar /opt/example/
  tags:
    - php:7.0
Specifically, I want to run example.phar bin/console command1 $VARIABLE1 $VARIABLE2 $VARIABLE3 $VARIABLE4.
Please help me because I am not completely familiar with these matters.
You can trigger a job when a tag is pushed using the only parameter:
build:
  stage: build
  image: alpine:3.6
  script:
    - echo "A tag has been pushed!"
  only:
    - tags
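Combining this with the deploy job from your config, running several commands with variables after a tag is pushed could look roughly like this (the variable values are placeholders, and the exact way example.phar is invoked may need adjusting):

variables:
  VARIABLE1: "value1"   # placeholder values, adjust as needed
  VARIABLE2: "value2"
  VARIABLE3: "value3"
  VARIABLE4: "value4"

deploy:
  stage: deploy
  only:
    - tags
  dependencies:
    - build
  script:
    - cp example.phar /opt/example/
    # command from the question; may need a full path or a php prefix
    - example.phar bin/console command1 $VARIABLE1 $VARIABLE2 $VARIABLE3 $VARIABLE4
  tags:
    - php:7.0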