I have build and test jobs in my GitLab CI YAML. I want to trigger the build job every evening at 16:00, and I want to trigger the test job every morning at 4:00. I know about GitLab CI/CD - Schedules - New Schedule, but I don't know how to write this so that it works in the GitLab CI YAML. I have uploaded my GitLab CI YAML file.
Can you show me, please?
variables:
  MSBUILD_PATH: 'C:\Program Files (x86)\MSBuild\14.0\Bin\msbuild.exe'
  SOLUTION_PATH: 'Source/NewProject.sln'

stages:
  - build
  - test

build_job:
  stage: build
  script:
    - '& "$env:MSBUILD_PATH" "$env:SOLUTION_PATH" /nologo /t:Rebuild /p:Configuration=Debug'
    - pwd
  artifacts:
    paths:
      - 'Output'

test_job:
  stage: test
  only:
    - schedules
  script:
    - 'Output\bin\Debug\NewProject.exe'
Did you try only:variables / except:variables? First you need to set the proper variable in your schedule, then add only:variables to your YAML config. Example:
...
build_job:
  ...
  only:
    variables:
      - $SCHEDULED_BUILD == "True"

test_job:
  ...
  only:
    variables:
      - $SCHEDULED_TEST == "True"
If you always want a 12-hour delay between the two jobs, you could use just one schedule and add when: delayed:
when: delayed
start_in: 12 hours
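In job context, a minimal sketch of the delayed variant (reusing the test_job from the question; an illustration, not a tested configuration):

test_job:
  stage: test
  only:
    - schedules
  when: delayed
  start_in: 12 hours
  script:
    - 'Output\bin\Debug\NewProject.exe'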
UPDATE: As requested in the comments, here is a complete example of a simple pipeline configuration. The build job should run when SCHEDULED_BUILD is set to True, and the test job should run when SCHEDULED_TEST is set to True:
build:
  script:
    - echo only build
  only:
    variables:
      - $SCHEDULED_BUILD == "True"

test:
  script:
    - echo only test
  only:
    variables:
      - $SCHEDULED_TEST == "True"
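For completeness, the schedule side of this lives in CI/CD - Schedules - New Schedule, not in the YAML. For the times asked about above you would create two schedules, each with its own cron pattern and one pipeline variable (variable names matching the config above):

Schedule 1: cron 0 16 * * * (every day at 16:00), variable SCHEDULED_BUILD = "True"
Schedule 2: cron 0 4 * * * (every day at 04:00), variable SCHEDULED_TEST = "True"

Note that the times are interpreted in the time zone selected on the schedule.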
Related
I'm working on an application deployment pipeline.
I have multiple environments that should be populated with application instances on demand.
Desired behaviour: the init_1 job triggers the executor with a specific parameter.
Run a parametrized job for deployment
The above job gets its parameter from the user
The user can only select parameter values from a predefined list
The user should provide the parameter to start the pipeline
What I tried:
(1) I have the deploy job with a manual trigger, where I set the parameter as an environment variable [eg: ENV_NAME].
This solution works, but it is error-prone and pretty hard to rerun properly.
(2) I have an init job that sets an environment variable [eg: ENV_NAME] to a preset value [eg: dev]. The deploy job is triggered after the init.
This solution works, but it holds no real value.
(3) I have multiple init jobs that can each set a single environment variable [eg: ENV_NAME] to a specific value [eg: dev, stage1, stage2]. The deploy job should be triggered after the environment variable is set by one of the init jobs.
This solution does not work, as the init and deploy jobs are in two separate stages and the latter does not start until all the jobs in the previous stage complete.
(3.a) Same as above, but here the init jobs are set to allow_failure.
This solution does not work, as the init jobs are skipped entirely, so the deploy job does not get the required parameter.
stages:
  - init
  - execute

init_1:
  stage: init
  rules:
    - when: manual
  script:
    - echo "INFRA_ID=pr1" >> build.env
  artifacts:
    reports:
      dotenv: build.env
  allow_failure: true

init_2:
  stage: init
  rules:
    - when: manual
  script:
    - echo "INFRA_ID=pr2" >> build.env
  artifacts:
    reports:
      dotenv: build.env
  allow_failure: true

executor:
  stage: execute
  script:
    - echo "Selected infrastructure $INFRA_ID"
(3.b) Same as above, but a dependency is declared between the init and deploy jobs.
This solution does not work, as the deploy job depends on all of the init jobs.
(4) I create N parallel streams of init and deploy jobs.
This solution works but results in a lot of duplicated code.
Do you see any solution to my use case?
Thanks in advance
I have found a solution. It is not ideal, as it needs two consecutive manual approvals, but it works.
Concept
Define initiator jobs in the init stage that set the predefined values as environment variables (via artifacts:reports:dotenv)
Define a trigger job (trigger stage) that blocks the pipeline: a manual job that is not allowed to fail holds up all later stages until it is played
Define the executor job in a later stage
Set the init and trigger jobs to manual
Set the init jobs to allow_failure to make them optional (you only need one set of parameters at a time)
Screenshots
(Not reproduced here: the pipeline in its idle state; the completed pipeline.)
Code
image: busybox:latest

stages:
  - init
  - trigger
  - execute

init_1:
  stage: init
  rules:
    - when: manual
      allow_failure: true
  script:
    - echo "INFRA_ID=pr1" >> build.env
  artifacts:
    reports:
      dotenv: build.env

init_2:
  stage: init
  rules:
    - when: manual
      allow_failure: true
  script:
    - echo "INFRA_ID=pr2" >> build.env
  artifacts:
    reports:
      dotenv: build.env

start:
  stage: trigger
  rules:
    - when: manual
  script:
    - echo "Pipeline is triggered with infra ID of $INFRA_ID"

executor:
  stage: execute
  script:
    - echo "Selected infrastructure $INFRA_ID"
I have a job in gitlab-ci that looks like this:
job_name:
  script:
    - someExe.exe --auto-exit 120
    - script.py
  needs:
    - some_needs
  stage: stage
  tags:
    - tags
someExe.exe is an executable that will run for 120 seconds. I want to start this executable and, while it is running, start script.py. The problem is that GitLab waits until someExe.exe stops running and only then runs script.py.
Is there any way to do what I want? Preferably in only one job (having two jobs, one that starts the .exe and one that starts script.py, is not good).
Do your requirements allow for two different jobs with the same stage name? If so, gitlab-ci will run them in parallel:
stages:
  - my-stage

some-exe:
  script:
    - someExe.exe --auto-exit 120
  needs:
    - some_needs
  stage: my-stage
  tags:
    - tags

py-script:
  script:
    - script.py
  needs:
    - some_needs
  stage: my-stage
  tags:
    - tags
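If it really has to stay in a single job, one alternative sketch (assuming a Windows runner with a PowerShell shell, which the .exe invocation suggests; untested):

job_name:
  script:
    # Start-Process returns immediately, so someExe.exe keeps running in the background
    - Start-Process -FilePath '.\someExe.exe' -ArgumentList '--auto-exit','120'
    - script.py
  needs:
    - some_needs
  stage: stage
  tags:
    - tags

Bear in mind the job can finish before the background process does, at which point the runner may tear it down, so this only fits when script.py is the part whose exit status matters.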
I have a gitlab-ci pipeline like the one below. There is a bash script, notify.sh, which sends a different notification depending on the build stage's result (success or failed). Currently I use an argument, --result, to control the logic: I write two jobs for the notify stage (success-notify and failed-notify) and assign the value of --result manually. Is there a way to get the build stage's result directly (something like STAGE_BUILD_STATE) instead of using when?
---
stages:
  - build
  - notify

build:
  stage: build
  script:
    - build something

success-notify:
  stage: notify
  script:
    - bash notify.sh --result success

failed-notify:
  stage: notify
  script:
    - bash notify.sh --result failed
  when: on_failure
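One option worth knowing about, if the notification can live in the build job itself rather than in a separate stage: recent GitLab versions expose a CI_JOB_STATUS variable inside after_script, which runs whether the job succeeded or failed. A sketch (assuming notify.sh accepts the same status strings):

build:
  stage: build
  script:
    - build something
  after_script:
    # CI_JOB_STATUS is "success", "failed", or "canceled" by the time after_script runs
    - bash notify.sh --result "$CI_JOB_STATUS"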
I need to run the pipeline every time there is a commit on a non-master branch. The pipeline starts, but the code is from master. I need the code from the changed branch.
The pipeline looks like this:
variables:
  IMAGE_TAG: ${CI_PIPELINE_IID}
  BASE_NAME: ${CI_COMMIT_REF_NAME}

stages:
  - validate
  - build

check_image:
  stage: validate
  tags:
    - runner
  script:
    - cd ~/path/${BASE_NAME}-base && packer validate ${BASE_NAME}-base.json
  except: ['master']

create_image:
  stage: build
  tags:
    - runner
  script:
    - cd ~/path/${BASE_NAME}-base && packer build -force ${BASE_NAME}-base.json
  except: ['master']
Never mind, I figured it out. I was running gitlab-runner under a custom user, so the environment was already set up. I just had to add a before_script to check out the desired branch.
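For reference, a minimal sketch of such a before_script (assuming the jobs operate on a pre-existing clone under that user, as described; the exact commands depend on that setup):

before_script:
  # switch the working copy to the branch that triggered the pipeline
  - git fetch origin
  - git checkout ${CI_COMMIT_REF_NAME}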
Gitlab-ci: here is my pipeline for a project with some stages:
stages:
  - prepare
  - build
  - deploy
  - build_test
  - test
Some stages have more than one job to execute, e.g. one job for each Oracle database environment (aca, spt, fin, ...).
The question is:
my pipeline skipped a test job (test:aca). I understand that happened because a job of the same kind in a previous stage failed; in that run the job deploy:spt failed, and my test:aca was skipped.
Look at the test:aca job script:
test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  dependencies:
    - test:build_test
    - deploy:aca
It does not have a dependency on deploy:spt, only on test:build_test and deploy:aca. How can I get the test:aca job to run?
Have you tried removing deploy:aca and only using test:build_test as a dependency?
test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  dependencies:
    - test:build_test
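A related point that may explain the behaviour: dependencies only controls which artifacts a job downloads; jobs in a stage are still skipped when any job in an earlier stage fails. If your GitLab supports it (12.2 or later), the needs keyword builds an explicit job graph instead, so test:aca would run as soon as its own upstream jobs succeed, regardless of deploy:spt. A sketch:

test:aca:
  only:
    - branches
  allow_failure: true
  stage: test
  tags:
    - teste
  script:
    - test script
  needs:
    - test:build_test
    - deploy:aca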