I am using env as a (lowercase) input variable for my pipeline and
I want to be able to have this stage use the correct AWS account based on the environment I input. Right now I have it set just as AWS_ACCOUNT_DEV so I need to have separate stages.
I just want this one stage to be able to be used for all environments based on my input - how can I achieve this?
variables:
  AWS_ACCOUNT_DEV: 000000000
  AWS_ACCOUNT_NONPROD: 000000000
Import Alertmanager Endpoint:
  stage: import
  dependencies:
    - "Validate Credentials"
  tags:
    - ${runner}
  rules:
    - if: $CI_PROJECT_ID != $MONITORING_PROJECT_ID
  variables:
    AWS_IAM_DEPLOYER_ROLE: "arn:aws:iam::${AWS_ACCOUNT_DEV}:role/${runner_role}"
  ...
figured it out by adding:
rules:
  - if: $CI_PROJECT_ID != $MONITORING_PROJECT_ID
  - if: $ecs_env == "dev"
    variables:
      AWS_IAM_DEPLOYER_ROLE: "arn:aws:iam::${AWS_ACCOUNT_DEV}:role/${runner_role}"
  - if: $ecs_env == "nonprod"
    variables:
      AWS_IAM_DEPLOYER_ROLE: "arn:aws:iam::${AWS_ACCOUNT_NONPROD}:role/${runner_role}"
(How) Can I use variable value as a name for another variable?
I have a job with matrix and dotenv artifacts as follows:
build-names:
  stage: build
  ...
  script:
    ...
    <lines omitted>
    ...
    - echo "${NAME}_DEB_PACKAGE_VERSION=${DEB_PACKAGE_VERSION}" >> build.env
  artifacts:
    reports:
      dotenv: build.env
  parallel:
    matrix:
      - NAME:
          - name1
          - name2
        TYPE:
          - 0
          - 1
  environment: $NAME/$TYPE
Then I have a downstream trigger job, again using matrix, and I want to pass the appropriate package version based on ${NAME}:
build-images:
  stage: .post
  needs:
    - job: build-names
      artifacts: true
  variables:
    PACKAGE_VERSION_VARIABLE_NAME: ${NAME}_DEB_PACKAGE_VERSION
    PACKAGE_VERSION: ${${PACKAGE_VERSION_VARIABLE_NAME}}
    # OR
    PACKAGE_VERSION_VARIABLE_NAME: ${NAME}_DEB_PACKAGE_VERSION
    PACKAGE_VERSION: ${!PACKAGE_VERSION_VARIABLE_NAME}
  trigger:
    project: group/project-${NAME}
  parallel:
    matrix:
      - NAME:
          - name1
          - name2
Neither of the two approaches above (double ${${}} or !) works in the variables section.
I could 'generate' the variable within the script section, but AFAIK you cannot have both trigger and script within the same job.
Is there a workaround for similar use cases?
(Using self-hosted GitLab 15.4.)
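One possible workaround (a rough sketch, job and file names hypothetical, not taken from the thread): resolve the indirection in an ordinary job using Bash indirect expansion, publish the result through a dotenv artifact, and have the trigger job consume that. In recent GitLab versions a trigger job that needs the dotenv artifact should forward the variable to the downstream pipeline, though this is worth verifying on 15.4; the matrix would need one resolver job per NAME.

resolve-version-name1:
  needs:
    - job: build-names
      artifacts: true
  script:
    # Bash indirect expansion is available here, unlike in YAML variables:
    - VAR_NAME="name1_DEB_PACKAGE_VERSION"
    - echo "PACKAGE_VERSION=${!VAR_NAME}" >> resolve.env
  artifacts:
    reports:
      dotenv: resolve.env

build-images-name1:
  stage: .post
  needs:
    - job: resolve-version-name1
      artifacts: true
  trigger:
    project: group/project-name1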
I want TEST_VAR to be overridden by the workflow, but when I run a $NIGHTLY pipeline it still prints "normal". What am I doing wrong here? According to the documented precedence, I believe it should work...
image: alpine

variables:
  TEST_VAR: "normal"

workflow:
  rules:
    - if: $CI_COMMIT_BRANCH
    - if: $NIGHTLY
      variables:
        TEST_VAR: "nightly"

test:
  script:
    - echo $TEST_VAR
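A likely explanation (not part of the original thread): workflow rules are evaluated top to bottom and only the first matching rule applies. A $NIGHTLY run still has $CI_COMMIT_BRANCH set, so the first rule matches and the nightly rule's variables block is never reached. Reordering the rules, as in this sketch, should let the override take effect:

workflow:
  rules:
    - if: $NIGHTLY
      variables:
        TEST_VAR: "nightly"
    - if: $CI_COMMIT_BRANCH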
One more option is a hack with child pipelines:
nightly-run:
  variables:
    TEST_VAR: "nightly"
  rules:
    - if: $NIGHTLY
  trigger:
    include: .gitlab-ci.yml
    strategy: depend
It will override the global TEST_VAR when "nightly-run" triggers the child pipeline.
Note that all jobs triggered by the parent pipeline will run with CI_PIPELINE_SOURCE == 'parent_pipeline'.
Another solution that can resolve your issue is to update TEST_VAR in the script part:
script:
  - |
    if [ -n "$NIGHTLY" ]; then
      TEST_VAR="nightly"
    else
      TEST_VAR="other value"
    fi
I need to call a bash script from my GitLab CI/CD pipeline. When I call it, the input parameter needs to change depending on whether or not this is a merge into master. Basically what I want is this:
if master:

test:
  stage: test
  script:
    - INPUT="foo"
    - $(myscript.sh $INPUT)

if NOT master:

test:
  stage: test
  script:
    - INPUT=""
    - $(myscript.sh $INPUT)
I'm trying to figure out a way to set INPUT depending on which branch the pipeline is running on. I know there are rules as well as only/except, but they don't seem to allow you to set variables, only test them. I know the brute-force way is to just write this twice, one with "only: master" and another with "except: master", but I would really like to avoid that.
Thanks
I implement this kind of thing using yml anchors to extend tasks.
I find it easier to read and customization can include other things, not only variables.
For example:
.test_my_software: &test_my_software
  stage: test
  script:
    - echo ${MESSAGE}
    - bash test.sh

test stable:
  <<: *test_my_software
  variables:
    MESSAGE: testing stable code!
  only:
    - /stable.*/

test:
  <<: *test_my_software
  variables:
    MESSAGE: testing our code!
  except:
    - /stable.*/
you get the idea...
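For reference, the same reuse can be done without YAML anchors by using GitLab's extends: keyword; a sketch of one of the jobs, assuming nothing else changes:

.test_my_software:
  stage: test
  script:
    - echo ${MESSAGE}
    - bash test.sh

test stable:
  extends: .test_my_software
  variables:
    MESSAGE: testing stable code!
  only:
    - /stable.*/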
Why not have a single job run the script and use a shell conditional to set the input when it runs against master:
test:
  stage: test
  script:
    - if [ "$CI_COMMIT_REF_NAME" == "master" ]; then export INPUT="foo"; else export INPUT=""; fi
    - $(myscript.sh $INPUT)
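On recent GitLab versions, rules:variables (as used in the first thread above) can also set the input per branch without duplicating the job or using a shell conditional; a sketch, assuming a version that supports rule-level variables:

test:
  stage: test
  rules:
    - if: $CI_COMMIT_BRANCH == "master"
      variables:
        INPUT: "foo"
    - when: on_success
      variables:
        INPUT: ""
  script:
    - myscript.sh "$INPUT"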
I am building a YAML pipeline in Azure DevOps. I'm generating a couple of variables in the first job, passing them to another job, and I want to use them to conditionally run some steps.
Here's what the code looks like:
stages:
  - stage: ConditionalVars
    jobs:
      - job: outputVars
        steps:
          - pwsh: |
              Write-Host "##vso[task.setvariable variable=varExample;isOutput=true]testVar"
            name: setVars

      - job: useVars
        dependsOn: outputVars
        variables:
          varFromJob: $[ dependencies.outputVars.outputs['setVars.varExample'] ]
        steps:
          - ${{ if eq($(varFromJob), 'testVar') }}:
              - pwsh: |
                  Write-Output "Conditionally run"
I am able to print the $(varFromJob) variable in a step in the useVars job, but I'm not able to use it in this ${{ if }} condition example.
I was trying different approaches like variables.varFromJob and others taken from https://learn.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml.
Have you guys been struggling with a similar issue? How could I define it to make it work?
Thanks a lot!
Rafal
Try to use a condition for the step:
steps:
  - pwsh: |
      Write-Output "Conditionally run"
    condition: eq(variables['varFromJob'], 'testVar')
${{ if ... }} is a compile-time template expression that is evaluated when the YAML file is compiled, before any job has run, so it cannot see runtime values such as the output variable from the outputVars job; the condition above is evaluated at runtime instead.
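Putting it together, the downstream job might look like this (a sketch using the names from the question):

- job: useVars
  dependsOn: outputVars
  variables:
    varFromJob: $[ dependencies.outputVars.outputs['setVars.varExample'] ]
  steps:
    - pwsh: |
        Write-Output "Conditionally run"
      condition: eq(variables['varFromJob'], 'testVar')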
I'm fairly new to using gitlab-ci and as such, I've run into a problem where the following fails ci-lint because of my use of anchors/references:
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375

.install_thing1: &install_thing1
  - do things
  - to install
  - thing1

.install_thing2: &install_thing2
  - do things to
  - install thing2

.setup_thing1: &setup_thing1
  variables:
    VAR: var
    FOO: bar
  script:
    - all
    - the
    - things

before_script:
  ...

stages:
  - deploy-test
  - deploy-stage
  - deploy-prod

test:
  stage: deploy-test
  variables:
    RUN_ENV: "test"
    ...
  only:
    - tags
    - branches
  script:
    - *install_thing1
    - *install_thing2
    - *setup_thing1
    - other stuff
  ...

test:
  stage: deploy-stage
  variables:
    RUN_ENV: "stage"
    ...
  only:
    - master
  script:
    - *install_thing1
    - *install_thing2
    - *setup_thing1
    - other stuff
When I attempt to lint the gitlab-ci.yml, I get the following error:
Status: syntax is incorrect
Error: jobs:test:script config should be a string or an array of strings
The error alludes to just needing an array for the script piece, which I believe I have. Use of the <<: *anchor pragma causes an error as well.
So, how can one accomplish what I'm trying to do here without having to repeat the code in every block?
You can fix it and even make it more DRY; take a look at the Auto DevOps template GitLab created.
It can fix your issue and further improve your CI file: have a template job like their auto_devops job, include it in a before_script, and then you can combine and call multiple functions in a script block, as sketched below.
The anchors only give you limited flexibility.
(This concept made it possible for me to have one CI file for 20+ projects and a centralized functions file that I wget and load in my before_script.)
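For illustration, a minimal sketch of that pattern (the function bodies are placeholders, not the actual Auto DevOps template). It also avoids the lint error, which most likely comes from *setup_thing1 expanding to a mapping rather than a list of strings inside the script array:

.functions: &functions |
  # Placeholder shell functions; put the real install/setup commands here.
  function install_thing1() {
    echo "commands to install thing1"
  }
  function install_thing2() {
    echo "commands to install thing2"
  }

before_script:
  - *functions

test:
  stage: deploy-test
  script:
    - install_thing1
    - install_thing2
    - echo "other stuff"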