Passing variables from GitLab UI to child pipeline not working - gitlab-ci

I am having an issue with GitLab passing variables from a parent pipeline to a child pipeline. Variables declared globally in the YAML are passed, but the ones specified in the GitLab UI are not.
My example looks like this:
parent.yml

variables:
  ENVIRONMENT:
    value: dev
    options:
      - "dev"
      - "staging"
      - "prod"

stages:
  - child

deploy:
  stage: child
  trigger:
    include: child_dir/child-pipeline.yml
and the child_dir/child-pipeline.yml is:

before_script:
  - echo "$ENVIRONMENT"
  - echo "$SOME_OTHER_ENV_VARIABLES_PASSED_THROUGH_UI"
ENVIRONMENT -> is passed fine; the value "dev" is echoed in the child pipeline.
SOME_OTHER_ENV_VARIABLES_PASSED_THROUGH_UI -> is defined through the GitLab UI CI/CD variables - it's protected, but not masked. Nothing is echoed in the child pipeline.
I have also tried forwarding the variable explicitly in the trigger job:

deploy:
  stage: child
  variables:
    SOME_OTHER_ENV_VARIABLES_PASSED_THROUGH_UI: $SOME_OTHER_ENV_VARIABLES_PASSED_THROUGH_UI
  trigger:
    include: child_dir/child-pipeline.yml
but it didn't work.
Has anyone experienced this issue, or am I missing something here?
Thanks in advance!

Well, I found the issue. The variable must not be "protected". Unchecking the "Protected" flag on the GitLab UI variable made it work perfectly (protected variables are only exposed to pipelines that run on protected branches or tags).
Though, GitLab should maybe mention that in their documentation: https://docs.gitlab.com/ee/ci/pipelines/downstream_pipelines.html?#pass-cicd-variables-to-a-downstream-pipeline
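For completeness: newer GitLab releases (15.1 and later, if I remember correctly) also have a trigger:forward keyword that controls which CI/CD variables reach the child pipeline. A minimal sketch, assuming the parent job itself can see the variable in question (i.e. the variable is unprotected or the pipeline runs on a protected ref):

deploy:
  stage: child
  trigger:
    include: child_dir/child-pipeline.yml
    forward:
      yaml_variables: true       # forward variables defined in this .gitlab-ci.yml (the default)
      pipeline_variables: true   # also forward pipeline/trigger variables (off by default)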

Related

Gitlab CI sequence of instructions causing circular dependency

I have a CICD configuration that looks something like this:
.rule_template: &rule_configuration
  rules:
    - changes:
        - file/dev/script1.txt
      variables:
        DESTINATION_HOST: somehost1
        RUNNER_TAG: somerunner1
    - changes:
        - file/test/script1.txt
      variables:
        DESTINATION_HOST: somehost2
        RUNNER_TAG: somerunner2

default:
  tags:
    - scripts

stages:
  - lint

deploy scripts 1/6:
  <<: *rule_configuration
  tags:
    - $RUNNER_TAG
  stage: lint
  script: |
    echo "Add linting here!"

....
In short, which runner to choose depends on which file was changed, hence the runner tag has to be decided conditionally. However, these jobs never execute and the tag value never gets assigned, as I always get:
This job is stuck because you don't have any active runners online or available with any of these tags assigned to them: $RUNNER_TAG
I believe this is because the rules block isn't evaluated, and hence $RUNNER_TAG isn't resolved to its actual value, at the point when the job/workflow is being initialized and a runner is being searched for.
If that is correct, then it is essentially a circular dependency: job initialization requires $RUNNER_TAG, but the resolution of $RUNNER_TAG requires job initialization.
If the above is correct, what is the right way to handle this, and at what stage can I conditionally decide and assign $RUNNER_TAG its value so it doesn't hinder job/workflow initialization?
gitlab-runner --version
Version: 14.7.0
Git revision: 98daeee0
Git branch: 14-7-stable
GO version: go1.17.5
Built: 2022-01-19T17:11:48+0000
OS/Arch: linux/amd64
I think you are overcomplicating what you need to do.
Instead of trying to abstract the tag and dynamically create some variable, simply make each job responsible for registering itself within a pipeline run based on whether a particular file path changed.
It might feel like code duplication, but it actually keeps your CI a lot simpler and easier to understand.
Job 1:
  Runs when file 1 changes
  Tag: sometag1

Job 2:
  Runs when some other file changes
  Tag: sometag2

Job 3:
  Runs when a third, different file changes
  Tag: sometag3
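A minimal sketch of that layout, reusing the file paths and runner tags from the question (the job names are made up):

lint dev:
  stage: lint
  tags:
    - somerunner1
  rules:
    - changes:
        - file/dev/script1.txt
  script: |
    echo "Add linting here!"

lint test:
  stage: lint
  tags:
    - somerunner2
  rules:
    - changes:
        - file/test/script1.txt
  script: |
    echo "Add linting here!"

Each job carries a literal tag, so a runner can be matched when the pipeline is created, without any variable resolution.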

How to trigger downstream pipelines with a single parent and multiple dynamic children in GitLab?

I would like to trigger multiple downstream pipelines dynamically from a single parent/stage.
.gitlab-ci.yml

Stages:
  - gen-yml
  - run

gen-yml:
  stage: gen-yml
  script: bash gen_yml_env_wise.sh

trigger-downdtream:
  include:
    - local: dynamic-gen.yml
  strategy: depend
dynamic-gen.yml

Env1:
  stage: test
  Variable:
    env: "tst"
  trigger:
    project: main/test-checkes
    branch: master
    strategy: depend
  only:
    ref:
      - feature1

Env2:
  stage: uat
  Variable:
    env: "uat"
  trigger:
    project: main/test-checkes
    branch: master
    strategy: depend
  only:
    ref:
      - feature1
I get an error when running the pipeline, I guess due to the multiple triggers:
Error in .gitlab.yml: job: Env1 config contains unknown keys: trigger
I am generating the dynamic-gen.yml file dynamically based on the environments available for testing, and I only pass the env variable to the downstream pipeline.
Here I would like to run two downstream pipelines, for tst and uat respectively; the number of test environments is dynamic and may be one or more.
For reference, I am looking for a solution like the one below (dynamically triggered downstream jobs).
I hope to find a solution; thanks in advance.
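For illustration only - not a verified fix - a generated child file of that shape, using the standard keyword spellings (variables: rather than Variable:, only:refs: rather than only:ref:) and declaring its own stages, would look roughly like this:

stages:
  - test
  - uat

Env1:
  stage: test
  variables:
    env: "tst"
  trigger:
    project: main/test-checkes
    branch: master
    strategy: depend
  only:
    refs:
      - feature1

Env2:
  stage: uat
  variables:
    env: "uat"
  trigger:
    project: main/test-checkes
    branch: master
    strategy: depend
  only:
    refs:
      - feature1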

GitLab CI job stuck because the runner tag value hasn't been assigned

I have a CICD configuration that looks something like this:
.rule_template: &rule_configuration
  rules:
    - changes:
        - file/dev/script1.txt
      variables:
        DESTINATION_HOST: somehost1
        RUNNER_TAG: somerunner1
    - changes:
        - file/test/script1.txt
      variables:
        DESTINATION_HOST: somehost2
        RUNNER_TAG: somerunner2

default:
  tags:
    - scripts

stages:
  - lint

deploy scripts 1/6:
  <<: *rule_configuration
  tags:
    - ${RUNNER_TAG}
  stage: lint
  script: |
    echo "Add linting here!"

....
In short, which runner to choose depends on which file was changed, hence the runner tag has to be decided conditionally. However, these jobs never execute and the tag value never gets assigned, as I always get:
This job is stuck because you don't have any active runners online or available with any of these tags assigned to them: ${RUNNER_TAG}
Any idea why it behaves this way and what I can do to resolve it?
gitlab-runner --version
Version: 14.7.0
Git revision: 98daeee0
Git branch: 14-7-stable
GO version: go1.17.5
Built: 2022-01-19T17:11:48+0000
OS/Arch: linux/amd64
Tags map jobs to runners. I tag my runners with the type of executor they use, e.g. shell, docker.
Based on the error message, you do not have any runners with the literal tag ${RUNNER_TAG}, which means the variable is not being resolved the way you want it to be.
Instead of combining rules like this, make a separate job for each case, with a rule on each that says when to trigger it.
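A minimal sketch of that suggestion, with a hypothetical executor-style runner tag:

lint dev scripts:
  stage: lint
  tags:
    - docker               # literal tag of an existing runner (hypothetical)
  rules:
    - changes:
        - file/dev/script1.txt
  script: |
    echo "Add linting here!"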
I have faced this issue, and similar issues, many times while trying to build dynamic pipelines for a multi-client environment.
The config you have above should work for your purposes to the best of my knowledge, but since it does not, there is another way to accomplish this with trigger jobs.
Create a trigger job for each possible runner tag. You can use extends to reduce the total code required for this.
gitlab-ci.yml

stages:
  - trigger
  - lint

.trigger:
  stage: trigger
  trigger:
    include:
      - local: ./lint-job.yml
    strategy: depend

trigger-lint-script1:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner1
  rules:
    - changes:
        - file/dev/script1.txt

trigger-lint-script2:
  extends:
    - .trigger
  variables:
    RUNNER_TAG: somerunner2
  rules:
    - changes:
        - file/dev/script2.txt
Create a trigger job with associated rules for each possible tag. This way you can change more than one of the specified files in a single commit with no issues. Define the triggered job in lint-job.yml
lint-job.yml

deploy scripts 1/6:
  tags: [$RUNNER_TAG]
  stage: lint
  script: |
    echo "Add linting here!"
There are other ways to accomplish this, but this method is by far the simplest and cleanest for this particular use.

How to set gitlab-ci variables dynamically?

How can I set gitlab-ci variables through a script, not just in the "variables" section of .gitlab-ci.yml, so that I can set a variable in one job and use it in a different job?
There is currently no way in GitLab to pass an environment variable between stages or jobs.
But there is a feature request for it: https://gitlab.com/gitlab-org/gitlab/-/issues/22638
The current workaround is to use artifacts - basically, pass files.
We had a similar use case - getting the Java app version from pom.xml and passing it to various jobs later in the pipeline.
This is how we did it in .gitlab-ci.yml:
stages:
  - prepare
  - package

variables:
  VARIABLES_FILE: ./variables.txt  # "./" is required for images that have sh, not bash

get-version:
  stage: prepare
  script:
    - APP_VERSION=...
    - echo "export APP_VERSION=$APP_VERSION" > $VARIABLES_FILE
  artifacts:
    paths:
      - $VARIABLES_FILE

package:
  stage: package
  script:
    - source $VARIABLES_FILE
    - echo "Use env var APP_VERSION here as you like ..."
If you run a script you can set an environment variable:
export MY_VAR=the-value
Once the environment variable is set, it persists in the current environment, i.e. for the remainder of that job's script.
Now for why you do not want to do that.
A tool like GitLab CI is meant to achieve repeatability in your artifacts. Consistency is the matter here. What happens if a second job has to pick up a variable from the first? Then you have multiple paths!

# CI is a sequence
first -> second -> third -> fourth -> ...

# not a graph
first -> second A -> third ...
     \-> second B -/
How did you get to third? Now if you have to debug third, which path do you test? If the build in third is broken, who is responsible: second A or second B?
If you need a variable, use it now, not later in another job/script. Whenever you want to write a longer sequence of commands, make it a script and execute the script!
You can use either artifacts or cache to achieve this; see the official documentation for more information on how they differ:
https://docs.gitlab.com/ee/ci/caching/#how-cache-is-different-from-artifacts

gitlab-ci: provide environment variable(s) to custom docker image in a pipeline

I want to set up a test stage for my gitlab-ci pipeline which depends on a custom Docker image. I want to know how to provide some config to it (like setting an env variable or providing a .env file) so that the custom image, and hence the stage, runs properly.
Current config:

test_job:
  only:
    refs:
      - master
      - merge_requests
      - web
  stage: test
  services:
    - mongo:4.0.4
    - redis:5.0.1
    - registry.gitlab.com/myteam/myprivaterepo:latest
  variables:
    - PORT=3000
    - SERVER_HOST=myprivaterepo
    - SERVER_PORT=9090
  script: npm test
I want to provide environment variables to the myprivaterepo Docker image, which connects to the mongo:4.0.4 and redis:5.0.1 services for its functioning.
EDIT: The variables are MONGODB_URI="mongodb://mongo:27017/aics" and REDIS_CLIENT_HOST="redis". These have no meaning for the app being tested, but they do for the myprivaterepo image, without which the test stage will fail.
I figured it out. It is as simple as adding the environment variables in the variables: part of the YAML. This is what worked for me:
test_job:
  only:
    refs:
      - master
      - merge_requests
      - web
  stage: test
  services:
    - mongo:4.0.4
    - redis:5.0.1
    - name: registry.gitlab.com/myteam/myprivaterepo:latest
      alias: myprivaterepo
  variables:
    MYPRIVATEREPO_PORT: 9090  # Had to modify the image to use this variable
    MONGODB_URI: mongodb://mongo:27017/aics
    REDIS_CLIENT_HOST: redis
    PORT: 3000  # for the app being tested
    SERVER_HOST: myprivaterepo
    SERVER_PORT: 9090
  script: npm test
These variables seem to be applied to all services.
NOTE: There is a catch - you cannot use the same environment variable name for two images. For example, I initially used PORT=???? as an environment variable in both myprivaterepo and the app being tested, and an error popped up saying EADDRINUSE, so I had to update myprivaterepo to use MYPRIVATEREPO_PORT instead.
There is a ticket raised in gitlab-ce about this; who knows when it will be implemented.