I need to configure a GitLab CI job to be re-executed when it fails, more specifically the deploy job. I set the job up with a retry value and tried to force it to fail in order to test it, but I couldn't get the job to start again. Here is an example of what I'm trying to do:
deploy:
  stage: deploy
  retry: 2
  script:
    - echo "running..."
    - exit 1
  only: [qa_branch]
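For what it's worth, retry also has an expanded form in the GitLab CI reference that controls how many times and on which failure types a job is retried; a minimal sketch of the same job using it:

deploy:
  stage: deploy
  retry:
    max: 2
    when: script_failure  # only retry when the script itself fails
  script:
    - echo "running..."
    - exit 1
  only: [qa_branch]

Note that retried attempts show up as additional runs of the same job within the same pipeline, not as a new pipeline.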
I set up a schedule for my gitlab.yml file to run the pipeline. In my job I have set rules to decide whether the job runs. However, in my scheduled pipeline the job runs no matter whether any of my rules are met.
Here is the simplified yml file:
stages:
  - build

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  DOCKER_NETWORK: "gitlab-network"

.docker_dind_service: &docker_dind_service
  services:
    - name: docker:20.10-dind
      command: ["--insecure-registry", "my_server.net:7000"]

docker:custom:
  stage: build
  <<: *docker_dind_service
  tags:
    - docker_runner
  image: docker
  rules:
    - if: '$FORCE_BUILD_DOCKER_IMAGE == "1"'
      when: always
    - changes:
        - Dockerfile
    - when: never
  script:
    - docker build -t my_image .
For the case above, the job is added to the scheduled pipeline even though there is no change to my Dockerfile. I am confused, because when I make changes to my yml file and push them, this job is not added, which is correct because there is no change to the Dockerfile. However, it runs in every scheduled pipeline.
Apparently, according to the GitLab documentation:
https://docs.gitlab.com/ee/ci/yaml/#using-onlychanges-without-pipelines-for-merge-requests
You should use rules: changes only with branch pipelines or merge request pipelines. You can use rules: changes with other pipeline types, but rules: changes always evaluates to true when there is no Git push event. Tag pipelines, scheduled pipelines, manual pipelines, and so on do not have a Git push event associated with them. A rules: changes job is always added to those pipelines if there is no if that limits the job to branch or merge request pipelines.
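So the fix is to add an if clause that keeps the changes rule out of scheduled pipelines, for example by checking the predefined $CI_PIPELINE_SOURCE variable. A sketch of the rules block above with that guard added:

rules:
  - if: '$FORCE_BUILD_DOCKER_IMAGE == "1"'
    when: always
  # scheduled pipelines have no push event, so changes would always be true;
  # exclude them explicitly
  - if: '$CI_PIPELINE_SOURCE == "schedule"'
    when: never
  - changes:
      - Dockerfile
  - when: never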
I have a pipeline where teardown and destroy are the last two stages, as you see below. Teardown deletes the application from Kubernetes, and destroy deletes the Kubernetes cluster and other resources as a whole. I have set both to run automatically with allow_failure: true.
But I want the last destroy stage to become manual if the teardown stage fails, so that I can cross-check and resume the job later. If teardown passes successfully, destroy should run automatically.
How do I set that up?
Because .gitlab-ci.yml does not allow rules with when: on_failure and when: manual at the same time in the same job rule, I used parent-child pipelines to solve the problem. Here is an example.
First, create a cleanup job template:
# file name: cleanup_job_tmpl.yml
.cleanup job tmpl:
  script:
    - echo "run cleanup job"
Then create the auto cleanup job:
# file name: cleanup_auto.yml
include: cleanup_job_tmpl.yml

cleanup job auto:
  extends: .cleanup job tmpl
  before_script:
    - echo "auto run job"
Then create the manual cleanup job:
# file name: cleanup_manual.yml
include: cleanup_job_tmpl.yml

cleanup job manual:
  extends: .cleanup job tmpl
  when: manual
  before_script:
    - echo "manual run job"
And finally the .gitlab-ci.yml:
stages:
  - "teardown"
  - "cleanup"

default:
  image: ubuntu:20.04

teardown job:
  stage: teardown
  script:
    - echo "run teardown job and exit 10"
    - exit 10
  artifacts:
    reports:
      dotenv: cleanup.env

cleanup trigger auto:
  stage: cleanup
  when: on_success # on_success is the default
  trigger:
    include: cleanup_auto.yml

cleanup trigger manual:
  stage: cleanup
  when: on_failure
  trigger:
    include: cleanup_manual.yml
When teardown job ends with exit 10, the cleanup trigger manual job is triggered; when I remove the exit 10 from teardown job, the cleanup trigger auto job is triggered.
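One optional refinement, not part of the demo above: by default the parent's trigger job is marked successful as soon as the child pipeline is created. If you want the parent to wait for and mirror the child's result, trigger supports strategy: depend, for example:

cleanup trigger auto:
  stage: cleanup
  when: on_success
  trigger:
    include: cleanup_auto.yml
    strategy: depend  # parent job reflects the child pipeline's final status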
Here is my GitLab repo demonstrating this case: manual-job-on-previous-stagefailure · GitLab
When it succeeds, the auto job runs; pipeline: Success Pipeline · GitLab
When it fails, the manual job is offered; pipeline: Failure Pipeline · GitLab
I have a GitLab CI pipeline like the one below, and there is a bash script notify.sh which sends a different notification depending on the result of the build stage (success or failed). Currently I use an argument --result to control the logic, and I wrote two jobs for the notify stage (success-notify and failed-notify), assigning the value of --result manually. Is there a way to get the build stage result directly (like a STAGE_BUILD_STATE variable) instead of using when?
---
stages:
  - build
  - notify

build:
  stage: build
  script:
    - build something

success-notify:
  stage: notify
  script:
    - bash notify.sh --result success

failed-notify:
  stage: notify
  script:
    - bash notify.sh --result failed
  when: on_failure
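One option, assuming GitLab 13.5 or later: the predefined variable CI_JOB_STATUS holds the current job's result, but it is only populated during after_script. That lets a single job send the notification itself instead of using two notify jobs; a sketch:

build:
  stage: build
  script:
    - build something
  after_script:
    # CI_JOB_STATUS is "success", "failed", or "canceled" at this point
    - bash notify.sh --result "$CI_JOB_STATUS"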
I have build and test jobs in my GitLab CI yaml. I want to trigger the build job every evening at 16:00, and I want to trigger the test jobs every morning at 4:00 on GitLab. I know about GitLab CI/CD - Schedules - New Schedule, but I don't know how to write this in the GitLab CI yaml so that it works. I have uploaded my GitLab CI yaml file. Can you show me, please?
variables:
  MSBUILD_PATH: 'C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe'
  SOLUTION_PATH: 'Source/NewProject.sln'

stages:
  - build
  - test

build_job:
  stage: build
  script:
    - '& "$env:MSBUILD_PATH" "$env:SOLUTION_PATH" /nologo /t:Rebuild /p:Configuration=Debug'
    - pwd
  artifacts:
    paths:
      - 'Output'

test_job:
  stage: test
  only:
    - schedules
  script:
    - 'Output\bin\Debug\NewProject.exe'
Did you try only:variables / except:variables?
First you need to set a proper variable in your schedule, then add only:variables to your yml config. Example:
...
build_job:
  ...
  only:
    variables:
      - $SCHEDULED_BUILD == "True"

test_job:
  ...
  only:
    variables:
      - $SCHEDULED_TEST == "True"
If you always want a 12-hour delay, you could use just one schedule and add when: delayed:
when: delayed
start_in: 12 hours
UPDATE: As requested in the comments, here is a complete example of a simple pipeline configuration. The build job should run when SCHEDULED_BUILD is set to True, and the test job should run when SCHEDULED_TEST is set to True:
build:
  script:
    - echo only build
  only:
    variables:
      - $SCHEDULED_BUILD == "True"

test:
  script:
    - echo only test
  only:
    variables:
      - $SCHEDULED_TEST == "True"
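On newer GitLab versions, only/except are documented as deprecated in favour of rules, and the same behaviour can be written like this (a sketch using the same schedule variables as above):

build:
  script:
    - echo only build
  rules:
    - if: '$SCHEDULED_BUILD == "True"'

test:
  script:
    - echo only test
  rules:
    - if: '$SCHEDULED_TEST == "True"'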
GitLab CI: Here is the pipeline of a project with some stages:
stages:
  - prepare
  - build
  - deploy
  - build_test
  - test
And some stages have more than one job to execute, e.g. one job for each Oracle database environment (aca, spt, fin, ...).
The question is: my pipeline skipped a test job (test:aca). I understand this happened because a job in the same stage as its dependencies failed: in that screenshot the job deploy:spt failed, but it was my test:aca that was skipped.
Look at the test:aca job definition:
test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  dependencies:
    - test:build_test
    - deploy:aca
It doesn't have a dependency on deploy:spt, only on test:build_test and deploy:aca. How can I enable the test:aca job to run?
Have you tried removing deploy:aca and only using test:build_test as a dependency?
test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  dependencies:
    - test:build_test
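A side note, beyond what was asked: dependencies only controls which artifacts a job downloads; whether the job runs at all is still decided by stage ordering, so a failure anywhere in an earlier stage skips it. On GitLab 12.2 or later, the needs keyword builds a job-level DAG instead, which would let test:aca run as soon as the jobs it actually needs succeed, regardless of deploy:spt:

test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  needs:  # run as soon as these succeed; also downloads their artifacts
    - test:build_test
    - deploy:aca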