Consider this example:
stage: ${opt:stage, 'dev'}
If the --stage parameter is not passed, the value dev will be used.
stage: ${opt:stage, ${env.STAGE, 'dev'}}
This leads to an error because the stage value is not a string.
stage: ${opt:stage, "${env.STAGE, 'dev'}"}
This resolves to dev even if I set the STAGE system variable.
So is there a way to implement the following logic:
1. If the STAGE system variable is present, it should be used.
2. If the --stage parameter is present, it should be used (overriding the STAGE system variable if both are present).
3. When neither the parameter nor the system variable is provided, the default dev is used.
How do I define the variables to achieve this?
Your last attempt is very close: you want env:STAGE instead of env.STAGE.
stage: ${opt:stage, "${env:STAGE, 'dev'}"}
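In context, a minimal serverless.yml sketch (the service name and AWS provider are assumptions, not from the question):

service: my-service # hypothetical

provider:
  name: aws # assumption
  # Precedence: the --stage CLI option wins, then the STAGE
  # environment variable, then the literal default 'dev'.
  stage: ${opt:stage, "${env:STAGE, 'dev'}"}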
More on environment variables can be found in the Serverless documentation.
I am having an issue with GitLab passing variables from a parent pipeline to a child pipeline. Variables declared globally are passed, but the ones specified in the GitLab UI are not.
My example is below:
parent.yml
variables:
  ENVIRONMENT:
    value: dev
    options:
      - "dev"
      - "staging"
      - "prod"

stages:
  - child

deploy:
  stage: child
  trigger:
    include: child_dir/child-pipeline.yml
and the child_dir/child-pipeline.yml is:
before_script:
  - echo "$ENVIRONMENT"
  - echo "$SOME_OTHER_ENV_VARIABLES_PASSED_THROUGH_UI"
ENVIRONMENT -> is passed fine; the value "dev" is echoed in the child pipeline.
SOME_OTHER_ENV_VARIABLES_PASSED_THROUGH_UI -> is set through the GitLab UI environment variables; it's protected, but not masked. Nothing is echoed in the child pipeline.
I have tried referencing the environment variable inside the trigger job:
deploy:
  stage: child
  variables:
    SOME_OTHER_ENV_VARIABLES_PASSED_THROUGH_UI: $SOME_OTHER_ENV_VARIABLES_PASSED_THROUGH_UI
  trigger:
    include: child_dir/child-pipeline.yml
but it didn't work.
Has anyone experienced such an issue, or am I missing something here?
Thanks in advance!
Well, I found the issue: variables must not be "protected". Unchecking the "protected" checkbox on the GitLab UI environment variable made it work perfectly. (Protected variables are only exposed to pipelines running on protected branches or tags, which presumably is why the child pipeline didn't receive it.)
Though, GitLab should maybe mention that in their documentation: https://docs.gitlab.com/ee/ci/pipelines/downstream_pipelines.html?#pass-cicd-variables-to-a-downstream-pipeline
I have a CICD configuration that looks something like this:
.rule_template: &rule_configuration
  rules:
    - changes:
        - file/dev/script1.txt
      variables:
        DESTINATION_HOST: somehost1
        RUNNER_TAG: somerunner1
    - changes:
        - file/test/script1.txt
      variables:
        DESTINATION_HOST: somehost2
        RUNNER_TAG: somerunner2

default:
  tags:
    - scripts

stages:
  - lint

deploy scripts 1/6:
  <<: *rule_configuration
  tags:
    - $RUNNER_TAG
  stage: lint
  script: |
    echo "Add linting here!"

....
In short, which runner to choose depends on which file was changed, hence the runner tag has to be decided conditionally. However, these jobs never execute and the value never gets assigned, as I always get:
This job is stuck because you don't have any active runners online or available with any of these tags assigned to them: $RUNNER_TAG
I believe this is because the rules block isn't evaluated, and hence the $RUNNER_TAG variable isn't resolved to its actual value, at the point when the job/workflow is being initialized and a runner is being searched for.
If my suspicion is correct, then it's probably a circular dependency: job initialization requires $RUNNER_TAG, but the resolution of $RUNNER_TAG requires job initialization.
If the above is correct, what is the right way to handle this, and at what stage can I conditionally decide and assign $RUNNER_TAG its value so it doesn't hinder job/workflow initialization?
gitlab-runner --version
Version: 14.7.0
Git revision: 98daeee0
Git branch: 14-7-stable
GO version: go1.17.5
Built: 2022-01-19T17:11:48+0000
OS/Arch: linux/amd64
I think you are overcomplicating what you need to do.
Instead of trying to abstract the tag and dynamically create a variable, simply make each job responsible for registering itself within a pipeline run based on whether a particular file path changed.
It might feel like code duplication, but it actually keeps your CI a lot simpler and easier to understand:
Job1:
  Run when: file 1 changes
  Tag: sometag1

Job2:
  Run when: some other file changes
  Tag: sometag2

Job3:
  Run when: a third different file changes
  Tag: sometag3
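A minimal sketch of that layout in .gitlab-ci.yml, reusing the file paths, tags, and lint stage from the question (the job names are illustrative):

deploy dev scripts:
  stage: lint
  tags:
    - somerunner1
  rules:
    - changes:
        - file/dev/script1.txt
  script:
    - echo "Add linting here!"

deploy test scripts:
  stage: lint
  tags:
    - somerunner2
  rules:
    - changes:
        - file/test/script1.txt
  script:
    - echo "Add linting here!"

Because each job's tags are now literal, the runner can be matched when the pipeline is created, with no variable resolution needed.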
I would like to trigger multiple downstream pipelines dynamically from a single parent/stage.
.gitlab-ci.yml
stages:
  - gen-yml
  - run

gen-yml:
  stage: gen-yml
  script: bash gen_yml_env_wise.sh

trigger-downstream:
  stage: run
  trigger:
    include:
      - local: dynamic-gen.yml
    strategy: depend
dynamic-gen.yml
Env1:
  stage: test
  variables:
    env: "tst"
  trigger:
    project: main/test-checkes
    branch: master
    strategy: depend
  only:
    refs:
      - feature1

Env2:
  stage: uat
  variables:
    env: "uat"
  trigger:
    project: main/test-checkes
    branch: master
    strategy: depend
  only:
    refs:
      - feature1
I get the following error when running the pipeline, due to the multiple triggers, I guess:
Error in .gitlab.yml: job: Env1 config contains unknown keys: trigger
I am creating the dynamic-gen.yml file dynamically, based on which environments are available for testing, and I am only passing the env variable to the downstream pipeline.
Here I would like to run two downstream pipelines, for tst and uat respectively; the set of environments is dynamic and may be one or more.
For reference, I am looking for a solution along these lines (dynamically triggering downstream jobs):
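A sketch of the kind of parent-side setup I have in mind, assuming GitLab's documented artifact-based include for generated child pipelines (the generated dynamic-gen.yml is published as an artifact of gen-yml and included from there):

gen-yml:
  stage: gen-yml
  script: bash gen_yml_env_wise.sh
  artifacts:
    paths:
      - dynamic-gen.yml

trigger-downstream:
  stage: run
  trigger:
    include:
      - artifact: dynamic-gen.yml
        job: gen-yml
    strategy: depend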
I hope to find a solution; thanks in advance.
I have tried to find out in what order the statements of the serverless file are evaluated (maybe it is more common to say that 'variables are resolved').
I haven't been able to find any information about this and to some extent it makes working with serverless feel like a guessing game for me.
As an example, the latest surprise I got was when I tried to run:
$ sls deploy
serverless.yaml
useDotenv: true

provider:
  stage: ${env:stage}
  region: ${env:region}
.env
region=us-west-1
stage=dev
I got an error message stating that env is not available at the time when stage is resolved. This was surprising to me since I have been able to use env to resolve other variables in the provider section, and there is nothing in the syntax to indicate that stage is resolved earlier.
In what order is the serverless file evaluated?
In effect, you've created a circular dependency. stage is special because it is needed to identify which .env file to load: ${env:stage} is being resolved from ${stage}.env, but Serverless needs to know what ${stage} is in order to find ${stage}.env, and so on.
This is why it's evaluated first.
Stage (and region, actually) are both optional CLI parameters. In your serverless.yml file what you're setting is a default, with the CLI parameter overriding it where different.
Example:
provider:
  stage: staging
  region: ca-central-1
Running serverless deploy --stage prod --region us-west-2 will result in prod and us-west-2 being used for stage and region (respectively) for that deployment.
I'd suggest removing any variable interpolation for stage and instead setting a default, and overriding via CLI when needed.
Then dotenv will know which environment file to use, and complete the rest of the template.
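Applied to the original serverless.yaml, a sketch of that suggestion (staging as the default is just an example):

useDotenv: true

provider:
  stage: staging # static default; override with --stage on the CLI
  region: ${env:region} # resolvable once the right .env file is known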
How can I set GitLab CI variables through a script, not just in the "variables" section of .gitlab-ci.yml? I want to set variables in one job and use them in a different job.
There is currently no way in GitLab to pass environment variables between stages or jobs.
But there is a request for that: https://gitlab.com/gitlab-org/gitlab/-/issues/22638
The current workaround is to use artifacts - basically, to pass files.
We had a similar use case - getting the Java app version from pom.xml and passing it to various jobs later in the pipeline.
How we did it in .gitlab-ci.yml:
stages:
  - prepare
  - package

variables:
  VARIABLES_FILE: ./variables.txt # "." is required for images that have sh, not bash

get-version:
  stage: prepare
  script:
    - APP_VERSION=...
    - echo "export APP_VERSION=$APP_VERSION" > $VARIABLES_FILE
  artifacts:
    paths:
      - $VARIABLES_FILE

package:
  stage: package
  script:
    - source $VARIABLES_FILE
    - echo "Use env var APP_VERSION here as you like ..."
If you run a script you can set an environment variable:
export MY_VAR=the-value
Once the environment variable is set, it should persist in the current environment - that is, within the same job's shell session.
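A quick sketch of that scope in .gitlab-ci.yml (job names are illustrative):

job-one:
  script:
    - export MY_VAR=the-value
    - echo "$MY_VAR" # prints "the-value" - same shell session

job-two:
  script:
    - echo "$MY_VAR" # prints nothing - each job starts a fresh environment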
Now for why you do not want to do that.
A tool like GitLab CI is meant to achieve repeatability in your artifacts. Consistency is what matters here. What happens if a second job has to pick up a variable from the first? Then you have multiple paths!
# CI is a sequence
first -> second -> third -> fourth -> ...

# not a graph
first -> second A -> third -> ...
     \-> second B -/
How did you get to third? If you had to debug third, which path do you test? If the build in third is broken, who is responsible: second A or second B?
If you need a variable, use it now, not later in another job/script. Whenever you want to write a longer sequence of commands, make it a script and execute the script!
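For example (the script path is hypothetical):

build:
  stage: build
  script:
    - ./scripts/build_and_package.sh # one script, one job, no cross-job state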
You can use either artifacts or cache to achieve this; see the official documentation for more information on how caches differ from artifacts:
https://docs.gitlab.com/ee/ci/caching/#how-cache-is-different-from-artifacts