Setting Environment Variable in gitlab-ci.yml in script

Please help me convert the GitHub Actions step below to a GitLab CI script. I am new to scripting in GitLab.
From the GitHub documentation I understand that the line below sets the value of an environment variable, but I couldn't find any resource on setting environment variables in GitLab.
run: >
  DISPLAY=:0 xvfb-run -s '-screen 0 1024x768x24' julia --project=monorepo -e 'using Pkg; Pkg.test("GLMakie", coverage=true)'
  && echo "TESTS_SUCCESSFUL=true" >> $GITHUB_ENV

There are multiple ways to set environment variables, and it depends on what you want to achieve:
- use it within the same job
- use it in another job
Use it within the same job
In Bash or other shells you can set an environment variable via export. In your case it would look like:
job:
  script:
    - DISPLAY=:0 xvfb-run -s '-screen 0 1024x768x24' julia --project=monorepo -e 'using Pkg; Pkg.test("GLMakie", coverage=true)' && export TESTS_SUCCESSFUL=true
    - echo $TESTS_SUCCESSFUL # verification that it is set and can be used within the same job
Use it within another job
To hand variables over to another job you need to define an artifacts:reports:dotenv. It is a file which can contain a list of key-value pairs which will be injected as environment variables in the follow-up jobs.
The structure of the file looks like:
KEY1=VALUE1
KEY2=VALUE2
and the definition in the .gitlab-ci.yml looks like
job:
  # ...
  artifacts:
    reports:
      dotenv: <path to file>
and in your case this would look like
job:
  script:
    - DISPLAY=:0 xvfb-run -s '-screen 0 1024x768x24' julia --project=monorepo -e 'using Pkg; Pkg.test("GLMakie", coverage=true)' && echo "TESTS_SUCCESSFUL=true" >> build.env
  artifacts:
    reports:
      dotenv: build.env

job2:
  needs: ["job"]
  script:
    - echo $TESTS_SUCCESSFUL
see https://docs.gitlab.com/ee/ci/variables/#pass-an-environment-variable-to-another-job for further information.

Related

Show remote command output in CI job results

I have a CI pipeline with stages like this. As it shows, most of the work here is done on a remote machine, which is working fine.
The only issue is that I am unable to see the command outputs. For example, scp is used with -v, which, if run manually on a machine, shows a lot of verbose information useful for debugging; the same goes for cp -v, but the job results show no such information.
So is there a way I can re-route the command outputs from the remote machine to the local GitLab job output?
my job 1/6:
rules:
  - changes:
      - ${LOCA_FILE_PATH}
stage: prepare
allow_failure: true
script: |
  ssh ${USER}@${HOST} '([ -f "${PATH}/test_conf_1.txt" ] && cp -v "${PATH}/test_conf_1.txt" ${PATH}/test_yaml_$CI_COMMIT_TIMESTAMP.txt)'
my job 2/6:
rules:
  - changes:
      - ${LOCA_FILE_PATH}
stage: scp
script:
  - scp -v ${TF_ROOT}${LOCA_FILE_PATH} ${USER}@${HOST}:${PATH}/
Perhaps you can try something like this:
ssh user@host command 2>&1 | tee ssh-session.log
cat ssh-session.log
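Applied to the jobs in the question, that could look roughly like this (a sketch, untested; the remote command is shortened and the copy target name is made up):
script:
  # 2>&1 merges ssh's stderr into stdout so the remote command's verbose output lands in the job log
  - ssh ${USER}@${HOST} 'cp -v "${PATH}/test_conf_1.txt" "${PATH}/test_conf_1.bak"' 2>&1 | tee ssh-session.log
  - cat ssh-session.log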
In the script part you can define a variable and hold there the result of your command and you can print this out.
script:
  - RESULT=$(scp -v ${TF_ROOT}${LOCA_FILE_PATH} ${USER}@${HOST}:${PATH}/)
  - echo $RESULT
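One caveat: scp -v writes its verbose messages to stderr, not stdout, so the command substitution above captures only stdout. A sketch with the redirect added (assuming the same variables as in the question):
script:
  - RESULT=$(scp -v ${TF_ROOT}${LOCA_FILE_PATH} ${USER}@${HOST}:${PATH}/ 2>&1)
  - echo "$RESULT"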

CI-pipeline ignore any commands that fail in a given step

I'm trying to debug a CI pipeline and want to create a custom logger stage that dumps a bunch of information about the environment in which the pipeline is running.
I tried adding this:
stages:
  - logger

logger-commands:
  stage: logger
  allow_failure: true
  script:
    - echo 'Examining environment'
    - echo PWD=$(pwd) Using image ${CI_JOB_IMAGE}
    - git --version
    - echo --------------------------------------------------------------------------------
    - env
    - echo --------------------------------------------------------------------------------
    - npm --version
    - node --version
    - java -version
    - mvn --version
    - kaniko --version
    - echo --------------------------------------------------------------------------------
The problem is that the Java command is failing because java isn't installed. The error says:
/bin/sh: eval: line 217: java: not found
I know I could remove the java -version line, but I'm trying to come up with a canned logger that I could use in all my CI pipelines, covering Java, Maven, Node, npm, Python, and whatever else I want to include, and I realize that some of those commands will fail because they are not installed.
Searching for the above solution got me close.
GitLab CI: How to continue job even when script fails - which did help. By adding allow_failure: true I found that even if the logger job failed, the remaining stages would run (which is desirable). The answer also suggests wrapping commands in this syntax:
./script_that_fails.sh > /dev/null 2>&1 || FAILED=true
if [ $FAILED ]
then ./do_something.sh
fi
So that is helpful, but my question is this.
Is there anything built into gitlab's CI-pipeline syntax (or bash syntax) that allows all commands in a given step to run even if one command fails?
Is it possible to allow for a script in a CI/CD job to fail? - suggests adding the UNIX bash OR syntax as shown below:
- npm --version || echo npm failed
- node --version || echo node failed
- java -version || echo java failed
That is a little cleaner (syntax) but I'm trying to make it simpler.
The answers already mentioned are good, but I was looking for something simpler, so I wrote the following shell script. The script always returns a zero exit code, so the CI pipeline always thinks the command was successful.
If the command did fail, the command is printed along with its non-zero exit code.
#!/bin/sh
# File: runit
"$@"
EXITCODE=$?
if [ $EXITCODE -ne 0 ]
then
  echo "CMD: $@"
  echo "Ignored exit code ($EXITCODE)"
fi
exit 0
Testing it as follows:
./runit ls "/bad dir"
echo "ExitCode = $?"
Gives this output:
ls: cannot access /bad dir: No such file or directory
CMD: ls /bad dir
Ignored exit code (2)
ExitCode = 0
Notice that even though the command failed, the exit code of 0 is what the CI pipeline will see.
To use it in the pipeline, I have to have that shell script available. I'll research how to include it, but it must be in the CI runner job. For example,
stages:
  - logger-safe

logger-safe-commands:
  stage: logger-safe
  allow_failure: true
  script:
    - ./runit npm --version
    - ./runit java -version
    - ./runit mvn --version
I don't like this solution because it requires an extra file in the repo, but it is in the spirit of what I'm looking for. So far the simplest built-in solution is:
- some_command || echo command failed $?
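If the extra file is the only objection, a similar effect can be achieved without it by defining a shell function at the top of the script (a sketch, untested; runit is just an illustrative name, and this relies on GitLab running all script lines in one shell session so the function stays defined for later lines):
logger-safe-commands:
  allow_failure: true
  script:
    # helper that swallows failures but reports the command and its exit code
    - 'runit() { "$@" || echo "CMD: $* -> ignored exit code ($?)"; }'
    - runit npm --version
    - runit java -version
    - runit mvn --version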

Variable inside variable gitlab ci

Is there a way to use a predefined variable inside a custom variable in GitLab CI, like this:
before_script:
  - cat "${$CI_COMMIT_REF_NAME}" >> .env
to extract the name of the branch from $CI_COMMIT_REF_NAME and use it as the name of a custom variable?
Update:
Check out GitLab 14.3 (September 2021)
Use variables in other variables
CI/CD pipeline execution scenarios can depend on expanding variables declared in a pipeline or using GitLab predefined variables within another variable declaration.
In 14.3, we are enabling the “variables inside other variables” feature on GitLab SaaS.
Now you can define a variable and use it in another variable definition within the same pipeline.
You can also use GitLab predefined variables inside of another variable declaration.
This feature simplifies your pipeline definition and eliminates pipeline management issues caused by duplicating variable data.
Note - for GitLab self-managed users the feature is disabled by default.
To use this feature, your GitLab administrator will need to enable the feature flag.
(demo -- video)
See Documentation and Issue.
dba asks in the comments:
Does this include or exclude using globally defined variables?
dba's own answer:
Global variables can be reused, but they need the local_var: ${global_var} syntax with recursive expansion (independent of the shell).
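A minimal illustration of that syntax (the variable names here are made up):
variables:
  GLOBAL_VAR: "some global value"

job:
  variables:
    LOCAL_VAR: ${GLOBAL_VAR} # expanded by GitLab itself, not by the shell
  script:
    - echo $LOCAL_VAR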
Check if this matches gitlab-org/gitlab-runner issue 1809:
Description
In the .gitlab-ci.yml file, a user can define a variable and use it in another variable definition within the same .gitlab-ci.yml file.
A user can also use a GitLab pre-defined variable in a variable declaration.
Example
variables:
  variable_1: "foo"           # here, variable_1 is assigned the value foo
  variable_2: "${variable_1}" # variable_2 is assigned the value of variable_1.
                              # The expectation is that the value in variable_2 = value set for variable_1
If it is, it should be completed/implemented for GitLab 14.1 (July 2021)
Lots of options.
But you could just pass the predefined var into the .env
image: busybox:latest

variables:
  MY_CUSTOM_VARIABLE: $CI_JOB_STAGE
  ANIMAL_TESTING: "cats"

before_script:
  - echo "Before script section"
  - echo $CI_JOB_STAGE
  - echo $MY_CUSTOM_VARIABLE
  - echo $MY_CUSTOM_VARIABLE >> .env
  - echo $CI_COMMIT_BRANCH >> .env
  - cat .env
example pipeline output
$ echo "Before script section"
Before script section
$ echo $CI_JOB_STAGE
build
$ echo $MY_CUSTOM_VARIABLE
build
$ echo $MY_CUSTOM_VARIABLE >> .env
$ echo $CI_COMMIT_BRANCH >> .env
$ cat .env
build
exper/ci-var-into-env
$ echo "Do your build here"
Do your build here
or pass it in earlier.
image: busybox:latest

variables:
  MY_CUSTOM_VARIABLE: "${CI_JOB_STAGE}"
  ANIMAL_TESTING: "cats"

before_script:
  - echo "Before script section"
  - echo $CI_JOB_STAGE
  - echo $MY_CUSTOM_VARIABLE
  - echo $MY_CUSTOM_VARIABLE >> .env
  - cat .env
example: https://gitlab.com/codeangler/make-ci-var-custom-var-in-script/-/blob/master/.gitlab-ci.yml

How to run a script from file in another project using include in GitLab CI?

I'm trying to run a shell script from my template file located in another project via my include.
How should this be configured to work? The scripts below are simplified versions of my code.
Project A
template.yml
deploy:
  before_script:
    - chmod +x ./.run.sh
    - source ./.run.sh
Project B
gitlab-ci.yml
include:
  - project: 'project-a'
    ref: master
    file: '/template.yml'

stages:
  - deploy
Clearly, the commands are actually being run from ProjectB and not ProjectA where the template resides. This can further be confirmed by adding ls -a in the template file.
So how should we be calling run.sh? Both projects are on the same GitLab instance under different groups.
If you have access to projects A and B, you can use multi-project pipelines: you trigger a pipeline in project A from project B.
In project A, you clone project B and run your script.
Project B
job 1:
  variables:
    PROJECT_PATH: "$CI_PROJECT_PATH"
    RELEASE_BRANCH: "$CI_COMMIT_BRANCH"
  trigger:
    project: project-a
    strategy: depend
Project A
job 2:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "pipeline" && $PROJECT_PATH && $RELEASE_BRANCH'
  script:
    - git clone -b "${RELEASE_BRANCH}" --depth 50 https://gitlab-ci-token:${CI_JOB_TOKEN}@${CI_SERVER_HOST}/${PROJECT_PATH}.git $(basename ${PROJECT_PATH})
    - cd $(basename ${PROJECT_PATH})
    - chmod +x ../.run.sh
    - source ../.run.sh
We've also run into this problem, and kinda wish GitLab allowed includes to "import" non-YAML files. Nevertheless, the simplest workaround we've found is to build a small Docker image in repo A which contains the script you want to run; repo B's job then uses that Docker image as its image, so the file run.sh is available :)
Minimal Dockerfile:
FROM bash:latest
COPY run.sh /usr/local/bin/
CMD run.sh
(Note: make sure you chmod +x run.sh before building your image, or add a RUN chmod +x /usr/local/bin/run.sh step)
Then, you'd just add this to your Project B's .gitlab-ci.yml:
stages:
  - deploy

deploy:
  image: registry.gitlab.com/... # Wherever you pushed your docker image to
  script: run.sh
It's also possible to fetch a script with curl instead of copying a whole repository:
- curl -H "PRIVATE-TOKEN:$PRIVATE_TOKEN" --create-dirs "$CI_API_V4_URL/projects/$CI_DEPLOY_PROJECT_ID/repository/archive?path=pathToFolderWithScripts" -o $TEMP_DIR/archive.tar.gz
- tar zxvf $TEMP_DIR/archive.tar.gz -C $TEMP_DIR --strip-components 3
- bash $TEMP_DIR/run.sh
The three commands above:
- make a curl request to download an archive of the folder with the scripts
- unpack the scripts into a temporary folder
- execute the script
Reference: https://docs.gitlab.com/ee/api/repository_files.html#get-file-from-repository
GET /projects/:id/repository/files/:file_path/raw
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/13083/repository/files/app%2Fmodels%2Fkey%2Erb?ref=master"
This will display the file. To download it to a file instead, just redirect the output with >> as below:
curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/13083/repository/files/app%2Fmodels%2Fkey%2Erb?ref=master" >> file.extension
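Inside a job this can be combined to fetch and execute a script directly (a sketch; the project ID, token variable, and file path are placeholders):
deploy:
  script:
    - 'curl --header "PRIVATE-TOKEN: $PRIVATE_TOKEN" "$CI_API_V4_URL/projects/<project-id>/repository/files/run%2Esh/raw?ref=master" -o run.sh'
    - chmod +x run.sh
    - ./run.sh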
As hinted by the answer above, multi-project pipelines are the right approach for this.
Here's how it worked for me:
GroupX/ProjectA - contains reusable code
# .gitlab-ci.yml
stages:
  - deploy

reusable_deploy_job:
  stage: deploy
  rules:
    - if: '$CI_PIPELINE_SOURCE == "pipeline"' # run only if triggered by a pipeline
  script:
    - bash ./src/run.sh $UPSTREAM_CUSTOM_VARIABLE
GroupY/ProjectB - job that will reuse the code
# .gitlab-ci.yml
stages:
  - deploy

deploy_job:
  stage: deploy
  variables:
    UPSTREAM_CUSTOM_VARIABLE: CUSTOM_VARIABLE # pass this variable to downstream job
  trigger: groupx/projecta

Conditional variables in gitlab-ci.yml

Depending on the branch the build comes from, I need to use slightly different command-line arguments. In particular, I would like to upload snapshot Nexus artifacts when building from a branch, and a release artifact when building off master.
Is there a way to conditionally alter variables?
I tried to use the except/only keywords like this:
stages:
  - stage

variables:
  TYPE: Release

.upload_common:
  stage: stage
  tags: ["Win"]
  script:
    - echo Uploading %TYPE%

.upload_snapshot:
  variables:
    TYPE: "Snapshot"
  except:
    - master

upload:
  extends:
    - .upload_common
    - .upload_snapshot
Unfortunately it skips the whole upload step when building off master.
The reason I am using the 'extends' pattern here is that I have Win and Mac platforms which use slightly different variable substitution syntax ($ vs %). I also have a few different build configurations: Debug/Release, 32-bit/64-bit.
The code below actually works, but I had to duplicate the steps for release and snapshot; only one is enabled at a time.
stages:
  - stage

.upload_common:
  stage: stage
  tags: ["Win"]
  script:
    - echo Uploading %TYPE%

.upload_snapshot:
  variables:
    TYPE: "Snapshot"
  except:
    - master

.upload_release:
  variables:
    TYPE: "Release"
  only:
    - master

upload_release:
  extends:
    - .upload_common
    - .upload_release

upload_snapshot:
  extends:
    - .upload_common
    - .upload_snapshot
The code gets much larger when the snapshot/release configuration is multiplied by Debug/Release, Mac/Win, and 32/64 bits, and I would like to keep the number of configurations to a minimum.
Being able to conditionally alter just a few variables would help me reduce this code a lot.
Another addition in GitLab 13.7 is rules:variables. This allows some logic in setting vars:
job:
  variables:
    DEPLOY_VARIABLE: "default-deploy"
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      variables:                             # Override DEPLOY_VARIABLE defined
        DEPLOY_VARIABLE: "deploy-production" # at the job level.
    - if: $CI_COMMIT_REF_NAME =~ /feature/
      variables:
        IS_A_FEATURE: "true"                 # Define a new variable.
  script:
    - echo "Run script with $DEPLOY_VARIABLE as an argument"
    - echo "Run another script if $IS_A_FEATURE exists"
Unfortunately, YAML anchors and GitLab CI's extends don't seem to allow combining things in the script array of commands as of today.
I use the built-in variable CI_COMMIT_REF_NAME in combination with a global or job-only before_script to solve similar tasks without repeating myself.
Here is an example of my workaround for dynamically setting an environment variable to different values for PROD and DEV during delivery or deployment:
.provide ssh private deploy key: &provide_ssh_private_deploy_key
  before_script:
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - |
      if [ "$CI_COMMIT_REF_NAME" == "master" ]; then
        echo "$SSH_PRIVATE_DEPLOY_KEY_PROD" > ~/.ssh/id_rsa
        MY_DYNAMIC_VAR="we are in master (PROD)"
      else
        echo "$SSH_PRIVATE_DEPLOY_KEY_DEV" > ~/.ssh/id_rsa
        MY_DYNAMIC_VAR="we are NOT in master (DEV)"
      fi
    - chmod 600 ~/.ssh/id_rsa

deliver-via-ssh:
  stage: deliver
  <<: *provide_ssh_private_deploy_key
  script:
    - echo Stage is deliver
    - echo $MY_DYNAMIC_VAR
    - ssh ...
Also consider this workaround for concatenation of "script" commands: https://stackoverflow.com/a/57209078/470108
hopefully it helps.
A nice way to prepare variables for other jobs is the dotenv report artifact. Unfortunately, these variables cannot be used in many places, but if you only need to access them from other jobs' scripts, this is the way:
# prepare environment variables for other jobs
env:
  stage: .pre
  script:
    # Set application version from GIT tag or fake it
    - echo "APPVERSION=${CI_COMMIT_TAG:-0.1-dev-$CI_COMMIT_REF_SLUG}+$CI_COMMIT_SHORT_SHA" | tee -a .env
  artifacts:
    reports:
      dotenv: .env
In the script of this job you can conditionally prepare and write your environment values into a file, then make a dotenv artifact out of it. Subsequent - or better yet dependent - jobs will pick up the variables for their scripts from there.
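For instance, a conditional variant of the job above might look like this (a minimal sketch; DEPLOY_TARGET and the branch check are illustrative):
env:
  stage: .pre
  script:
    - |
      if [ "$CI_COMMIT_REF_NAME" = "$CI_DEFAULT_BRANCH" ]; then
        echo "DEPLOY_TARGET=production" >> .env  # hypothetical variable
      else
        echo "DEPLOY_TARGET=staging" >> .env
      fi
  artifacts:
    reports:
      dotenv: .env

deploy:
  script:
    - echo "Deploying to $DEPLOY_TARGET" # injected from the dotenv report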