YAML Variables: can you reference variables within variables?

I am using a variables.yml file as a template to store different variables, and I was curious if I am able to reference variables within the yml file itself to essentially nest them.
For example:
# variables.yml file
variables:
  food1: 'pineapple'
  food2: 'pizza'
  favoriteFood: '$(food1) $(food2)'
So that when I eventually call upon this variable "favoriteFood", I can just use ${{ variables.favoriteFood }} and the value should be "pineapple pizza".
Example:
# mainPipeline.yml file
variables:
- template: 'variables.yml'

steps:
- script: echo My favorite food is ${{ variables.favoriteFood }}.
Am I on the right track here? I can't seem to find any examples of whether this is possible.

Yes! It is in fact possible; just follow the syntax outlined above. Don't forget that spacing is critical in YAML files.
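For what it's worth, here is a minimal sketch of the consuming pipeline using macro (runtime) syntax instead of the template-expression syntax above; the expected output is shown as a comment, assuming the nested $(food1)/$(food2) references resolve at runtime as usual:

# mainPipeline.yml - a minimal sketch, not the only valid form
variables:
- template: 'variables.yml'

steps:
# $(favoriteFood) is macro (runtime) syntax; the nested $(food1) and $(food2)
# references are expanded when the step runs
- script: echo My favorite food is $(favoriteFood).
# expected output: My favorite food is pineapple pizza.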

Related

Env var required but not provided - dbt CLI

We have an environment variable set in dbt Cloud called DBT_SNOWFLAKE_ENV that selects the right database depending on which environment is used.
At the moment, I'm trying to set up dbt CLI with VSCode. I created a profiles.yml file that looks like this:
default:
  target: development
  outputs:
    development:
      type: snowflake
      account: skxxxx.eu-central-1
      user: <name>
      password: <pass>
      role: sysadmin
      warehouse: transformations_dw
      database: " {{ env_var('DBT_SNOWFLAKE_ENV', 'analytics_dev') }} "
      schema: transformations
      threads: 4
I added the env_var line after some suggestions, but I realise that the environment variable still doesn't exist yet. The problem I see is that even if I hardcode analytics_dev in that place (which would make sense), the error still persists.
I wouldn't want anybody who's going to use dbt to have to change the environment variable if they want to run something on production.
What are my options here?
You can set up a source file for the variables on the dbt CLI - for example, you would create a bash script called set_env_var.sh and then run source set_env_var.sh in your terminal.
An example of the bash script would be:
export SNOWFLAKE_ACCOUNT=xxxxx
export SNOWFLAKE_USER=xxxxx
export SNOWFLAKE_ROLE=xxxx
export SNOWFLAKE_SCHEMA=xxxx
export SNOWFLAKE_WAREHOUSE=xxxxx
and in your profiles.yml you can add all the variables you want, for example:
warehouse: "{{ env_var('SNOWFLAKE_WAREHOUSE') }}"
database: "{{ env_var('SNOWFLAKE_DATABASE') }}"
Hope this helps.
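A quick usage sketch under the same assumptions (set_env_var.sh is the script name from above; dbt debug is just one convenient way to check that profiles.yml renders and the connection works):

source set_env_var.sh   # export the variables into the current shell
dbt debug               # verify the profile resolves; then run e.g. dbt run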
First of all, you have to hard-code the database name; the other syntax is wrong. Secondly, try to make a dynamic variable for the environment and then pass it when you invoke dbt, i.e.:
dbt snapshot --profile --vars $DBT_SNOWFLAKE_ENV
That way, when you run it, dbt can easily pick the value up from the environment.
Currently I am working on dbt with everything dynamic; even the full profile is dynamic according to schema and db.
In my case, in my dbt model, the variable was declared as part of vars within my dbt_project.yml file, so instead of accessing the variable like
"{{ env_var('MY_VARIABLE') }}"
I should have used:
"{{ var('MY_VARIABLE') }}"

How to modify variable inside ansible jinja2 template

I am passing a variable called x_version=v5.5.9.1 to an Ansible Jinja2 template (a bash script).
But inside the receiving bash script (the Jinja2 template), the variable x_version should be modified to v5.5.9.
version_defined_in_ansible={{ x_version }}
The modification below helped me.
version_defined_in_ansible=v{{ x_version.split('v')[1][0:5] }}
Given the variable
x_version: v5.5.9.1
The simplest approach is to split the extension
{{ x_version|splitext|first }}
evaluates to
v5.5.9
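A minimal self-contained sketch to verify the filter; the play targets localhost purely for demonstration:

# demo.yml - run with: ansible-playbook demo.yml
- hosts: localhost
  gather_facts: false
  vars:
    x_version: v5.5.9.1
  tasks:
    - name: Strip the last version component
      debug:
        msg: "{{ x_version | splitext | first }}"   # prints v5.5.9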

how to pass value from commit to GitLab CI pipeline as variable?

I need to dynamically pass a value to a GitLab CI pipeline, which then passes the value on to jobs. The problem is: the value cannot be stored in the code, and no pipeline reconfiguration should be needed (e.g. I could pass the value in the "variables" section of .gitlab-ci.yml, but that means storing the value in the code, while changing the "Environment variables" section of "CI / CD Settings" means manual reconfiguration). The branch name cannot be used for that purpose either.
It is not a secret string but a keyword which modifies pipeline execution.
So, how can I do it?
You didn't specify the source of this value.
You say "pass value from commit to ..."
If it's some meta information about the commit itself, look at the list of Predefined environment variables
There are quite a few variables named CI_COMMIT_* which might work for you.
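For instance, a sketch of a job that just echoes some of that commit metadata (the job name is arbitrary; both variables are predefined by GitLab):

show-commit-info:
  script:
    - echo "Ref: $CI_COMMIT_REF_NAME"
    - echo "Message: $CI_COMMIT_MESSAGE"   # a keyword could be extracted from the message here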
However,
if it's some value that you generate in the pipeline in one job and want to pass to another job - it's a different case.
There is a long-standing request to Pass variables between jobs, which is still not implemented.
The workaround for the moment is to use artifacts, i.e. files, to pass information between jobs across stages.
Our use case is to extract the Java app version from pom.xml and pass it to a packaging job later.
Here is how we do it in our .gitlab-ci.yml:
...
variables:
  VARIABLES_FILE: ./variables.txt # "." is required for images that have sh, not bash
...

get-version:
  stage: prepare
  image: ...
  script:
    - APP_VERSION=...
    - echo "export APP_VERSION=$APP_VERSION" > $VARIABLES_FILE
  artifacts:
    paths:
      - $VARIABLES_FILE

...

package:
  stage: package
  image: ...
  script:
    - source $VARIABLES_FILE
    - echo "Use env var APP_VERSION here as you like ..."

Does Gitlab-CI support variable expansion within only:refs?

I've got a large .gitlab-ci.yml file with lots of jobs in it. Many of these jobs are filtered to only run on certain branches. When managing this file it would be convenient to define the names of these branches as variables at the top of the file so that only the variables need to be updated if the branch names change. This is a pretty standard practice for constants in most programming languages.
Unfortunately, it doesn't look like this works in Gitlab-CI:
variables:
  THIS_DOES_NOT_WORK: "this_works"

lots:
  only:
    refs:
      - this_works
  script:
    - echo "lots"

of:
  only:
    refs:
      - $THIS_DOES_NOT_WORK
  script:
    - echo "of"

jobs:
  only:
    refs:
      - $THIS_DOES_NOT_WORK
  script:
    - echo "jobs"
In the above example, only the "lots" job will be run since the THIS_DOES_NOT_WORK variable is not expanded in the "of" and "jobs" jobs.
The closest documentation which I can find doesn't mention anything about the only:refs keyword. It does go into details on the only:variables keyword. This keyword could provide a nice workaround if we could do something like this instead:
variables:
  THIS_DOES_NOT_WORK: "this_works"

lots:
  only:
    variables:
      - $CI_COMMIT_REF_NAME == "this_works"
  script:
    - echo "lots"

of:
  only:
    variables:
      - $CI_COMMIT_REF_NAME == $THIS_DOES_NOT_WORK
  script:
    - echo "of"

jobs:
  only:
    variables:
      - $CI_COMMIT_REF_NAME == $THIS_DOES_NOT_WORK
  script:
    - echo "jobs"
In this case it's explicitly stated in the documentation that this won't work.
The only:variables keyword used for filtering on variable comparisons is ironically incapable of expanding variables.
Is there some other workaround here? Am I missing something?
According to https://docs.gitlab.com/ee/ci/variables/where_variables_can_be_used.html, it seems that the capability of the CI pipeline has increased and it can now actually expand variables. For me the solution was to use parts of your example:
jobs:
  only:
    variables:
      - $THIS_DOES_WORK == $CI_COMMIT_REF_NAME
  script:
    - echo "jobs"
In my case $THIS_DOES_WORK is a variable passed from the GitLab UI via the CI/CD variables tab. GitLab states the following constraints on variables used in this scope:
The variable must be in the form of $variable. Not supported are the following:
Variables that are based on the environment’s name (CI_ENVIRONMENT_NAME, CI_ENVIRONMENT_SLUG).
Any other variables related to environment (currently only CI_ENVIRONMENT_URL).
Persisted variables.
Additionally, make sure the variable is not set to protected if you are working on an unprotected branch.

How to parse variables in Ansible group_vars dictionary?

I have previously been placing all of my variables within the inventory file, such as
dse_dir=/app/dse
dse_bin_dir={{ dse_dir }}/bin
dse_conf_dir={{ dse_dir }}/resources/dse/conf
dse_yaml_loc={{ dse_conf_dir }}/dse.yaml
cass_conf_dir={{ dse_dir }}/resources/cassandra/conf
cass_yaml_loc={{ cass_conf_dir }}/cassandra.yaml
cass_bin_dir={{ dse_dir }}/resources/cassandra/bin
I did not need to use any quotes for these variables in the inventory file and it worked quite well.
Now I am trying to make use of the group_vars functionality, to separate variables per group of hosts. This has a different format, being a dictionary. So now I have:
dse_dir: "/app/dse"
dse_bin_dir: "{{ dse_dir }}/bin"
dse_conf_dir: "{{ dse_dir }}/resources/dse/conf"
dse_yaml_loc: "{{ dse_conf_dir }}/dse.yaml"
cass_conf_dir: "{{ dse_dir }}/resources/cassandra/conf"
cass_yaml_loc: "{{ cass_conf_dir }}/cassandra.yaml"
cass_bin_dir: "{{ dse_dir }}/resources/cassandra/bin"
In order to avoid parsing complaints, I need to place quotes around these parameters. But now, when I have a playbook such as the following:
---
# Copy CQL files across
- include: subtasks/copy_scripts.yml

- name: Create users
  command: '{{ cass_bin_dir })/cqlsh'
I get the following error. Omitting the single quotes or replacing them with double quotes does not work either.
ERROR: There was an error while parsing the task 'command {{ cass_bin_dir })/cqlsh'.
Make sure quotes are matched or escaped properly
All of the documentation that I could find only shows hardcoded values in the dictionary, i.e. without variables referencing other variables, but I would assume that Ansible supports this.
Any advice on how to parse these properly?
See the “Gotchas” section of the Ansible YAML syntax documentation to understand why you needed to add the quotes in your group_vars. (It's the problematic YAML/Ansible ": {{" combination.)
To address the error in your command, fix the typo: you have a }) instead of }}.
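With that fixed, the task would read:

- name: Create users
  command: '{{ cass_bin_dir }}/cqlsh'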