Azure DevOps YAML trigger pipeline - azure-pipelines-yaml

Is there a way to trigger a pipeline when a particular variables YAML file is updated in Azure source control, and have the pipeline use exactly that updated file's values? I know how to gather values from a variables YAML file, but I cannot figure out how to get the main pipeline to use exactly the file that changed. Example folder hierarchy in source control:
Variables
  var1.yaml
  var2.yaml
  var3.yaml
My main pipeline, just a draft:
trigger:
  batch: true
  branches:
    include:
      - master
  paths:
    include:
      - variables/var*

pool:
  vmImage: ubuntu-latest

stages:
  - stage: stage1
    displayName: 'My stage'
    jobs:
      - job: Job1
        displayName: 'Run something'
        variables:
          - template: ../variables/...(not sure about this part yet)
        steps:
          - task: PowerShell@2
            displayName: 'Run PoSH command'
            inputs:
              targetType: 'inline'
              script: |
                write-host '${{ variables.varx }}' (again not sure here yet)
Basically, is there a way to run the main pipeline based on the variables YAML file that triggered it? There will be many variables YAML files in source control, and once a particular one, for example var2.yaml, is updated, the main pipeline should be triggered and use var2.yaml's values.
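One hedged approach, a sketch rather than a confirmed solution: because variables template paths are resolved at compile time, the pipeline can instead detect at runtime which variables file the triggering commit changed and promote its values with the task.setvariable logging command. This assumes each varN.yaml holds flat key: value pairs and that the parent of the triggering commit is available in the checkout:

steps:
  - task: PowerShell@2
    displayName: 'Load values from the variables file that triggered this run'
    inputs:
      targetType: 'inline'
      script: |
        # Assumption: HEAD is the triggering commit and HEAD~1 exists in the fetch
        $changed = git diff --name-only HEAD~1 HEAD -- 'variables/*.yaml'
        foreach ($file in $changed) {
          # Assumption: each varN.yaml holds flat "key: value" pairs only
          Get-Content $file | ForEach-Object {
            if ($_ -match '^\s*([A-Za-z_]\w*)\s*:\s*(.+?)\s*$') {
              # Expose the value to later steps as a pipeline variable
              Write-Host "##vso[task.setvariable variable=$($Matches[1])]$($Matches[2])"
            }
          }
        }

Later steps would then read the values as runtime variables, e.g. $(varx), rather than with the compile-time ${{ variables.varx }} syntax from the draft.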

Related

How to trigger downstream pipelines with a single parent and multiple dynamic children in GitLab?

I would like to trigger multiple downstream pipelines dynamically from a single parent stage.
.gitlab-ci.yml
stages:
  - gen-yml
  - run

gen-yml:
  stage: gen-yml
  script: bash gen_yml_env_wise.sh

trigger-downstream:
  include:
    - local: dynamic-gen.yml
  strategy: depend
dynamic-gen.yml
Env1:
  stage: test
  variables:
    env: "tst"
  trigger:
    project: main/test-checkes
    branch: master
    strategy: depend
  only:
    refs:
      - feature1

Env2:
  stage: uat
  variables:
    env: "uat"
  trigger:
    project: main/test-checkes
    branch: master
    strategy: depend
  only:
    refs:
      - feature1
When the pipeline runs I get an error, due to the multiple triggers I guess:
Error in .gitlab.yml: job: Env1 config contains unknown keys: trigger
I generate dynamic-gen.yml dynamically based on the environments available for testing, and I only pass the env variable into the downstream pipeline.
Here I would like to run two downstream pipelines, for tst and uat respectively; the set of environments is dynamic and may be one or more.
For reference, I am looking for a solution like the one below (dynamically triggering downstream jobs).
I hope to find a solution; thanks in advance.
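For what it's worth, the documented dynamic child-pipeline pattern (the same one shown in the "Invoke GitLab CI jobs from inside other jobs" answer below) publishes the generated file as an artifact and includes it with artifact: rather than local:, since local: only resolves files committed to the repository. A hedged sketch reusing the names from the question:

stages:
  - gen-yml
  - run

gen-yml:
  stage: gen-yml
  script: bash gen_yml_env_wise.sh
  artifacts:
    paths:
      - dynamic-gen.yml   # the generated child pipeline must be published

trigger-downstream:
  stage: run
  trigger:
    include:
      - artifact: dynamic-gen.yml   # include the artifact, not a repo-local file
        job: gen-yml
    strategy: depend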

GitLab CI job stuck because the runner tag value hasn't been assigned

I have a CI/CD configuration that looks something like this:
.rule_template: &rule_configuration
  rules:
    - changes:
        - file/dev/script1.txt
      variables:
        DESTINATION_HOST: somehost1
        RUNNER_TAG: somerunner1
    - changes:
        - file/test/script1.txt
      variables:
        DESTINATION_HOST: somehost2
        RUNNER_TAG: somerunner2

default:
  tags:
    - scripts

stages:
  - lint

deploy scripts 1/6:
  <<: *rule_configuration
  tags:
    - ${RUNNER_TAG}
  stage: lint
  script: |
    echo "Add linting here!"
....
In short, which runner to choose depends on which file was changed, so the runner tag has to be decided conditionally. However, these jobs never execute and the tag value never gets assigned, as I always get:
This job is stuck because you don't have any active runners online or available with any of these tags assigned to them: ${RUNNER_TAG}
Any idea why it behaves this way, and what can I do to resolve it?
gitlab-runner --version
Version:      14.7.0
Git revision: 98daeee0
Git branch:   14-7-stable
GO version:   go1.17.5
Built:        2022-01-19T17:11:48+0000
OS/Arch:      linux/amd64
Tags map jobs to runners. I tag my runners with the type of executor they use, e.g. shell, docker.
Based on the error message, you do not have any runners with the tag ${RUNNER_TAG}, which means that it is not resolving the variable the way you want it to.
Instead of combining rules like this, make separate jobs for each, and a rule for each to say when to trigger it.
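A hedged sketch of that suggestion, reusing the names from the question; each job carries a literal tag, so nothing has to be resolved from a variable:

lint dev scripts:
  stage: lint
  tags:
    - somerunner1          # literal tag; no variable resolution needed
  variables:
    DESTINATION_HOST: somehost1
  rules:
    - changes:
        - file/dev/script1.txt
  script: |
    echo "Add linting here!"

lint test scripts:
  stage: lint
  tags:
    - somerunner2
  variables:
    DESTINATION_HOST: somehost2
  rules:
    - changes:
        - file/test/script1.txt
  script: |
    echo "Add linting here!"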
I have faced this issue, and similar issues many times while trying to do some dynamic pipelines for a multi-client environment.
The config you have above should work for your purposes to the best of my knowledge, but since it does not, there is another way to accomplish this with trigger jobs.
Create a trigger job for each possible runner tag. You can use extends to reduce the total code required for this.
.gitlab-ci.yml
stages:
  - trigger
  - lint

.trigger:
  stage: trigger
  trigger:
    include:
      - local: ./lint-job.yml
    strategy: depend

trigger-lint-script1:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner1
  rules:
    - changes:
        - file/dev/script1.txt

trigger-lint-script2:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner2
  rules:
    - changes:
        - file/dev/script2.txt
Create a trigger job with associated rules for each possible tag. This way you can change more than one of the specified files in a single commit with no issues. Define the triggered job in lint-job.yml:
lint-job.yml
deploy scripts 1/6:
  tags: [$RUNNER_TAG]
  stage: lint
  script: |
    echo "Add linting here!"
There are other ways to accomplish this, but this method is by far the simplest and cleanest for this particular use.

Invoke GitLab CI jobs from inside other jobs

I have many different GitLab CI jobs in my repository, and depending on variables set by a user in a config file I want to execute different sequences of jobs. My approach is to create a scheduler job that analyzes the config file and executes the jobs accordingly. However, I cannot figure out how to execute another job from within a job.
Any help is appreciated!
This would be a good use case for dynamic child pipelines. This is pretty much the only way to customize a pipeline based on the outcome of another job.
From the docs:
generate-config:
  stage: build
  script: generate-ci-config > generated-config.yml
  artifacts:
    paths:
      - generated-config.yml

child-pipeline:
  stage: test
  trigger:
    include:
      - artifact: generated-config.yml
        job: generate-config
In your case, the script generate-ci-config would perform the analysis of your config file and conditionally emit a job configuration based on its contents.
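As a hedged illustration, with a hypothetical jobs.conf listing one job name per line and hypothetical make targets, the generator job could be as simple as:

generate-config:
  stage: build
  script:
    - |
      # Emit a minimal child job for every name listed in jobs.conf
      : > generated-config.yml
      while read -r name; do
        printf '%s:\n  script: make %s\n' "$name" "$name" >> generated-config.yml
      done < jobs.conf
  artifacts:
    paths:
      - generated-config.yml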

Variables in GitLab CI

I've just started implementing CI jobs with GitLab CI, and I'm trying to create a job template. Basically, the jobs use the same image, tags, and script, in which I use variables:
.job_e2e_template: &job_e2e
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side;
And here is one of the jobs using this anchor:
test-chrome:
  <<: *job_e2e
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"
  services:
    - selenium-hub
    - node-chrome
  artifacts:
    paths:
      - tests/
      - out_chrome/
I'd like this template to be more generic, and I was wondering if I could also use variables in the services and artifacts sections, so I could add a few more lines to my template like this:
services:
  - selenium-hub
  - node-$JOB_BROWSER
artifacts:
  paths:
    - tests/
    - out_$JOB_BROWSER/
However, I cannot find any example of this, and the docs only discuss using variables in scripts. I know that variables act like environment variables for jobs, but I'm not sure whether they can be used for other purposes.
Any suggestions?
Short answer: yes, you can. As described in this blog post, GitLab does a deep merge based on the keys.
You can see what your merged pipeline file looks like under CI/CD -> Editor -> View merged YAML.
If you want to modularize your pipeline even further, I would recommend using include instead of YAML anchors, so you can reuse your templates in different pipelines.
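A hedged sketch of that include-based layout, with a hypothetical template path:

# templates/e2e.yml (hypothetical path)
.job_e2e:
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side

# .gitlab-ci.yml
include:
  - local: templates/e2e.yml

test-chrome:
  extends: .job_e2e
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"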

"No Stages/Jobs jobs for this pipeline" for branch pipeline

Given the following .gitlab-ci.yml:
---
stages:
  - .pre
  - test
  - build

compile-build-pipeline:
  stage: .pre
  script: [...]
  artifacts:
    paths: [".artifacts/build.yaml"]

lint-source:
  stage: .pre
  script: [...]

run-tests:
  stage: test
  rules:
    - if: '$CI_COMMIT_BRANCH == "$CI_DEFAULT_BRANCH"'
  trigger:
    strategy: depend
    include:
      artifact: .artifacts/tests.yaml
      job: "compile-test-pipeline"
  needs: ["compile-test-pipeline"]

build-things:
  stage: test
  rules:
    - if: '$CI_COMMIT_BRANCH == "$CI_DEFAULT_BRANCH"'
  trigger:
    strategy: depend
    include:
      artifact: .artifacts/build.yaml
      job: "compile-build-pipeline"
  needs: ["compile-build-pipeline"]
...
The configuration should always run (any branch, any source); the test and build jobs should run only on the default branch.
However, no jobs are run for merge requests, and manually triggering the pipeline on a branch other than the default one gives the error "No Jobs/Stages for this Pipeline".
I've tried explicitly setting an always-run rule with rules: [{if: '$CI_PIPELINE_SOURCE'}] on the jobs in the .pre stage, but no dice.
What am I doing wrong?
As per the docs:
You must have a job in at least one stage other than .pre or .post.
In the above configuration, no jobs other than the ones in .pre are added on merge request events, hence no jobs are added at all.
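A hedged illustration of that requirement: even a placeholder job outside .pre/.post keeps such a pipeline non-empty (job name and script are hypothetical):

always-present:
  stage: test
  rules:
    - when: always
  script: echo "at least one job outside .pre/.post"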
I'm glad you added this info to the question: "However, no jobs are run for merge requests."
CI_COMMIT_BRANCH is not available in MR-based events. Here is a fragment from the official docs:
The commit branch name. Available in branch pipelines, including pipelines for the default branch.
Not available in merge request pipelines or tag pipelines.
When working with MRs and checking against a branch name, you might want to use
CI_MERGE_REQUEST_SOURCE_BRANCH_NAME or CI_MERGE_REQUEST_TARGET_BRANCH_NAME.
A list of these variables is available here.
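Applied to the question's jobs, a hedged sketch of a rule set matching both branch pipelines on the default branch and merge request pipelines targeting it:

run-tests:
  stage: test
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
    - if: '$CI_MERGE_REQUEST_TARGET_BRANCH_NAME == $CI_DEFAULT_BRANCH'
  trigger:
    strategy: depend
    include:
      artifact: .artifacts/tests.yaml
      job: "compile-test-pipeline"
  needs: ["compile-test-pipeline"]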