Drone CI - How to set pipeline env var to result of CLI output - drone.io

I recognize that within a pipeline step I can run a simple export, like:
commands:
  - export MY_ENV_VAR=$(my-command)
...but if I want to use this env var throughout the whole pipeline, is it possible to do something like this:
environment:
  MY_ENV_VAR: $(my-command)
When I do this, I get yaml: unmarshal errors: line 23: cannot unmarshal !!seq into map[string]*yaml.Variable, which suggests this isn't possible. My end goal is to write a Drone plugin that accepts the output of $(...) as one of its settings. I'd prefer that the plugin not run the command itself, but just use its output.
I've also attempted to use step dependencies to export an env var; however, its state doesn't carry over between steps:
- name: export
  image: bash
  commands:
    - export MY_VAR=$(my-command)

- name: echo
  image: bash
  depends_on:
    - export
  commands:
    - echo $MY_VAR # empty

Writing the command output to a script file might be a better way to do what you want, since filesystem changes are persisted between individual steps.
---
kind: pipeline
type: docker
name: default

steps:
  - name: generate-script
    image: bash
    commands:
      # - my-command > plugin-script.sh
      - printf "echo Fetching Google;\n\ncurl -I https://google.com/" > plugin-script.sh

  - name: test-script-1
    image: curlimages/curl
    commands:
      - sh plugin-script.sh

  - name: test-script-2
    image: curlimages/curl
    commands:
      - sh plugin-script.sh
From Drone's Docker pipeline documentation:
Workspace
Drone automatically creates a temporary volume, known as your workspace, where it clones your repository. The workspace is the current working directory for each step in your pipeline.
Because the workspace is a volume, filesystem changes are persisted between pipeline steps. In other words, individual steps can communicate and share state using the filesystem.
⚠ Workspace volumes are ephemeral. They are created when the pipeline starts and destroyed after the pipeline completes.
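Building on that, the same workspace trick can carry the environment variable itself across steps: write it to a file in one step and load the file in later steps. A minimal, untested sketch (my-command is the placeholder from the question):

kind: pipeline
type: docker
name: default

steps:
  - name: capture
    image: bash
    commands:
      # persist the value to a file in the shared workspace
      - echo "export MY_ENV_VAR=$(my-command)" > .env

  - name: use
    image: bash
    commands:
      - . .env # load the variable; the file survives because the workspace is a shared volume
      - echo $MY_ENV_VAR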

If a command can't be executed in the environment section at all, maybe you can define a "command string" in the environment block instead, like:

environment:
  MY_ENV_VAR: 'echo "this is the command to execute"' # note the single quotes

then, in the commands block:

commands:
  - eval $MY_ENV_VAR

Worth a try.
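Wired into a full Drone pipeline, that suggestion would look roughly like this sketch:

kind: pipeline
type: docker
name: default

steps:
  - name: eval-example
    image: bash
    environment:
      MY_ENV_VAR: 'echo "this is the command to execute"'
    commands:
      - eval $MY_ENV_VAR # runs the command stored in the variable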

Related

Transferring strings across GitLab CI tasks/stages using variables

I want to store the output from a script in a variable for use in subsequent commands within GitLab CI.
Here is the script:
image: ...

build c-ares:
  variables:
    CARES_ARTIFACTS_DIR: "-"
  script:
    - CARES_ARTIFACTS_DIR=$(./build-c-ares.sh)
  after_script:
    - echo $CARES_ARTIFACTS_DIR
  artifacts:
    name: CARES_ARTIFACTS
    paths:
      - $CARES_ARTIFACTS_DIR
My intention is to:
- first declare the variable CARES_ARTIFACTS_DIR with global scope
- set the variable value using the output from the build-c-ares.sh script
- recover the output from the build-c-ares.sh script in a later command using the variable
My code does not behave as intended - on dereferencing the variable I find it contains the original value it was assigned at declaration:
$ CARES_ARTIFACTS_DIR=$(./build-c-ares.sh)
Cloning into 'c-ares'...
Running after_script
00:01
Running after script...
$ echo $CARES_ARTIFACTS_DIR
-
Uploading artifacts for successful job
00:00
Uploading artifacts...
WARNING: -: no matching files. Ensure that the artifact path is relative to the working directory
ERROR: No files to upload
It is probably easier to just redirect the script output to a file and define that as an artifact.
Something similar to:
image: ...

build c-ares:
  script:
    - ./build-c-ares.sh > script_output
    - cat script_output
  artifacts:
    paths:
      - script_output
Regarding the specific issue: the variables used in the artifacts step will again use the variable initialisation defined for the job. Both the artifacts and the script steps of the job start with the custom CARES_ARTIFACTS_DIR variable set to the value "-":
build c-ares:
  variables:
    CARES_ARTIFACTS_DIR: "-"
  script:
    # $CARES_ARTIFACTS_DIR == "-"
    - CARES_ARTIFACTS_DIR=$(./build-c-ares.sh)
    # $CARES_ARTIFACTS_DIR == "hello from build-c-ares.sh"
    - echo $CARES_ARTIFACTS_DIR # prints "hello from build-c-ares.sh"
  after_script:
    # $CARES_ARTIFACTS_DIR == "-"
    - echo $CARES_ARTIFACTS_DIR # prints "-"
Fundamentally, GitLab variables cannot feed information across job steps as intended in the original post. My subjective opinion is to keep steps independent where possible and to restrict input to artifacts from upstream jobs or variables explicitly defined in the pipeline script or settings.
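For completeness, a sketch of handing the value to a separate downstream job via the artifact (stage and job names here are assumed):

build c-ares:
  stage: build
  script:
    - ./build-c-ares.sh > script_output
  artifacts:
    paths:
      - script_output

use c-ares output:
  stage: deploy
  script:
    # read the persisted value back out of the artifact file
    - CARES_ARTIFACTS_DIR=$(cat script_output)
    - echo $CARES_ARTIFACTS_DIR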

How to load an image in gitlab-ci.yml when the version tag is inside a file?

I want to create a file .terraform-version with the content:
0.13.4
Then in gitlab-ci.yml I want to load the version to select the image, like:
variables:
  TERRAFORM_DOCKER_TAG_VERSION: $(head -n 1 .terraform-version)
# ...
PlanMRDev:
  image:
    name: hashicorp/terraform:$TERRAFORM_DOCKER_TAG_VERSION
But the content of $TERRAFORM_DOCKER_TAG_VERSION will be the literal string $(head -n 1 .terraform-version).
I also tried with:
before_script:
  - export TERRAFORM_DOCKER_TAG_VERSION=$(cat .terraform-version)
But in this case $TERRAFORM_DOCKER_TAG_VERSION will be empty, since the image name is resolved before before_script runs.
Can anyone help me?
Thanks :)
You could use GitLab dotenv artifacts. This is a GitLab feature that lets you define a special kind of artifact - a file containing environment variables - that is loaded automatically by all subsequent jobs that would normally download the artifacts (jobs in later stages, or jobs referencing it with a needs clause).
This is how the jobs would look:
prepare:
  script:
    # dotenv files are plain VARIABLE=value lines (no "export" prefix)
    - echo "TERRAFORM_DOCKER_TAG_VERSION=$(cat .terraform-version)" >> version.env
  artifacts:
    reports:
      dotenv: version.env

PlanMRDev:
  image:
    name: hashicorp/terraform:$TERRAFORM_DOCKER_TAG_VERSION
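If PlanMRDev runs in a later stage it picks the dotenv variables up automatically; with needs you can make the dependency explicit, roughly like this sketch:

PlanMRDev:
  needs:
    - job: prepare
      artifacts: true # fetch the dotenv report so the variable is available
  image:
    name: hashicorp/terraform:$TERRAFORM_DOCKER_TAG_VERSION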

extends in GitLab CI pipeline

I'm trying to include a file in which I declare some repetitive jobs, using extends.
I always get this error: did not find expected key while parsing a block
This is the template file:
.deploy_dev:
stage: deploy
  image: nexus
  script:
    - ssh -i ~/.ssh/id_rsa -o "StrictHostKeyChecking=no" sellerbot#sb-dev -p 10290 'sudo systemctl restart mail.service'
  only:
    - dev
This is the main file:
include:
  - project: 'sellerbot/gitlab-ci'
    ref: master
    file: 'deploy.yml'

deploy_dev:
  extends: .deploy_dev
Can anyone help me please?
It looks like just stage: deploy has to be indented. In cases like this it's a good idea to use the GitLab CI lint tool to check that the pipeline code is valid, or just a YAML validator. When I checked the section from the template file in a YAML linter, I got:
(<unknown>): mapping values are not allowed in this context at line 3 column 8
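With the indentation fixed, the template block would read:

.deploy_dev:
  stage: deploy
  image: nexus
  script:
    - ssh -i ~/.ssh/id_rsa -o "StrictHostKeyChecking=no" sellerbot#sb-dev -p 10290 'sudo systemctl restart mail.service'
  only:
    - dev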

Conditional variables in gitlab-ci.yml

Depending on the branch the build comes from, I need to use slightly different command-line arguments. In particular, I would like to upload snapshot Nexus artifacts when building from a branch, and a release artifact when building off master.
Is there a way to conditionally alter variables?
I tried to use the except/only keywords like this:
stages:
  - stage

variables:
  TYPE: Release

.upload_common:
  stage: stage
  tags: ["Win"]
  script:
    - echo Uploading %TYPE%

.upload_snapshot:
  variables:
    TYPE: "Snapshot"
  except:
    - master

upload:
  extends:
    - .upload_common
    - .upload_snapshot
Unfortunately it skips the whole upload step when building off master.
The reason I am using the extends pattern here is that I have Win and Mac platforms, which use slightly different variable-substitution syntax ($ vs %). I also have a few different build configurations - Debug/Release, 32-bit/64-bit.
The code below actually works, but I had to duplicate the steps for release and snapshot; only one is enabled at a time.
stages:
  - stage

.upload_common:
  stage: stage
  tags: ["Win"]
  script:
    - echo Uploading %TYPE%

.upload_snapshot:
  variables:
    TYPE: "Snapshot"
  except:
    - master

.upload_release:
  variables:
    TYPE: "Release"
  only:
    - master

upload_release:
  extends:
    - .upload_common
    - .upload_release

upload_snapshot:
  extends:
    - .upload_common
    - .upload_snapshot
The code gets much larger when the snapshot/release configuration is multiplied by Debug/Release, Mac/Win, and 32/64-bit. I would like to keep the number of configurations to a minimum.
Having the ability to conditionally alter just a few variables would help me reduce this code a lot.
Another addition in GitLab 13.7 is rules:variables. This allows some logic in setting vars:
job:
  variables:
    DEPLOY_VARIABLE: "default-deploy"
  rules:
    - if: $CI_COMMIT_REF_NAME == $CI_DEFAULT_BRANCH
      variables: # Override DEPLOY_VARIABLE defined
        DEPLOY_VARIABLE: "deploy-production" # at the job level.
    - if: $CI_COMMIT_REF_NAME =~ /feature/
      variables:
        IS_A_FEATURE: "true" # Define a new variable.
  script:
    - echo "Run script with $DEPLOY_VARIABLE as an argument"
    - echo "Run another script if $IS_A_FEATURE exists"
Unfortunately, YAML anchors and GitLab CI's extends don't seem to allow combining things in the script array of commands as of today.
I use the built-in variable CI_COMMIT_REF_NAME in combination with a global or job-level before_script to solve similar tasks without repeating myself.
Here is an example of my workaround for dynamically setting an environment variable to different values for PROD and DEV during delivery or deployment:
.provide ssh private deploy key: &provide_ssh_private_deploy_key
  before_script:
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - |
      if [ "$CI_COMMIT_REF_NAME" == "master" ]; then
        echo "$SSH_PRIVATE_DEPLOY_KEY_PROD" > ~/.ssh/id_rsa
        MY_DYNAMIC_VAR="we are in master (PROD)"
      else
        echo "$SSH_PRIVATE_DEPLOY_KEY_DEV" > ~/.ssh/id_rsa
        MY_DYNAMIC_VAR="we are NOT in master (DEV)"
      fi
    - chmod 600 ~/.ssh/id_rsa

deliver-via-ssh:
  stage: deliver
  <<: *provide_ssh_private_deploy_key
  script:
    - echo Stage is deliver
    - echo $MY_DYNAMIC_VAR
    - ssh ...
Also consider this workaround for concatenating script commands: https://stackoverflow.com/a/57209078/470108
Hopefully it helps.
A nice way to prepare variables for other jobs is the dotenv report artifact. Unfortunately, these variables cannot be used in many places, but if you only need to access them from other jobs' scripts, this is the way:
# prepare environment variables for other jobs
env:
  stage: .pre
  script:
    # Set application version from GIT tag or fake it
    - echo "APPVERSION=${CI_COMMIT_TAG:-0.1-dev-$CI_COMMIT_REF_SLUG}+$CI_COMMIT_SHORT_SHA" | tee -a .env
  artifacts:
    reports:
      dotenv: .env
In the script of this job you can conditionally prepare and write your environment values into a file, then make a dotenv artifact out of it. Subsequent - or better yet dependent - jobs will pick up the variables for their scripts from there.
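A dependent job consuming the prepared value could look roughly like this (job and stage names assumed):

build:
  stage: build
  needs: ["env"] # pulls the dotenv artifact and loads APPVERSION
  script:
    - echo "Building version $APPVERSION"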

Is there a way to define build stage in drone.yml?

I have some stages defined in the drone.yml file. Is there a way to specify which stage should run through a command-line parameter? For example, below is my drone.yml file. I want to build the buildOnContainer1 and buildOnContainer2 stages separately, so I am looking for a command such as drone exec buildOnContainer1 that runs only the commands under buildOnContainer1.
buildOnContainer1:
  image: container1
  pull: true
  commands:
    - npm test:uat

buildOnContainer2:
  image: container2
  pull: true
  commands:
    - npm test:dev
My first thought for implementing that level of control is through environment variables.
Drone can substitute environment variables at runtime, which lets us use dynamic build or commit details in our pipeline configuration.
You can pass environment variables on the command line and also to your Drone server using secrets. Check the Drone docs on env interpolation and the drone exec command.
You should build customized images for container1 and container2 that run the commands or skip them based on the values of specific environment variables.
A dirty example would be something like the following .drone.yml:
buildOnContainer1:
  image: container1
  pull: true
  environment:
    - SKIP=${skip.buildOnContainer1}
  commands:
    - ./myscript.sh test:uat

buildOnContainer2:
  image: container2
  pull: true
  environment:
    - SKIP=${skip.buildOnContainer2}
  commands:
    - ./myscript.sh test:dev
Your custom images should contain a myscript.sh bash script in the working directory. The script could check whether the value of the SKIP env var is true or false: if true, it would do nothing; if false, it would execute the npm command with whatever args you passed to the script.
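The wrapper script could be as simple as this sketch (the SKIP convention and the myscript.sh name are the hypothetical ones from above):

#!/bin/sh
# myscript.sh: run the given npm script unless SKIP is set to true
if [ "$SKIP" = "true" ]; then
  echo "SKIP=true, not running: npm run $1"
  exit 0
fi
npm run "$1"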
Then you could execute the build locally:
drone exec --secret skip.buildOnContainer1=true --secret skip.buildOnContainer2=false