Env var required but not provided - dbt CLI

We have an environment variable set in dbt Cloud called DBT_SNOWFLAKE_ENV that selects the right database depending on which environment is used.
At the moment, I'm trying to set up dbt CLI with VSCode. I created a profiles.yml file that looks like this:
default:
  target: development
  outputs:
    development:
      type: snowflake
      account: skxxxx.eu-central-1
      user: <name>
      password: <pass>
      role: sysadmin
      warehouse: transformations_dw
      database: " {{ env_var('DBT_SNOWFLAKE_ENV', 'analytics_dev') }} "
      schema: transformations
      threads: 4
I added the env_var line after some suggestions, but I realise the environment variable still doesn't exist yet. The problem is that even if I hardcode analytics_dev in that place (which would make sense), the error still persists.
I wouldn't want anybody who's going to use dbt to have to change the environment variable if they want to run something on production.
What are my options here?

You can set up a source file for the variables with the dbt CLI - for example, create a bash script called set_env_var.sh and then run source set_env_var.sh in your terminal.
An example of the bash script would be:
export SNOWFLAKE_ACCOUNT=xxxxx
export SNOWFLAKE_USER=xxxxx
export SNOWFLAKE_ROLE=xxxx
export SNOWFLAKE_SCHEMA=xxxx
export SNOWFLAKE_WAREHOUSE=xxxxx
and in your profiles.yml you can add all the variables you want, for example:
warehouse: "{{ env_var('SNOWFLAKE_WAREHOUSE') }}"
database: "{{ env_var('SNOWFLAKE_DATABASE') }}"
Hope this helps.
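As a minimal sketch of that workflow (file path and values hypothetical), one step writes the exports to a script, and sourcing it puts the variables into the current shell, where any dbt process you start can read them via env_var():

```shell
# Hypothetical set-up: write the export script, then source it so the
# variables land in the current shell environment (where dbt reads them).
cat > /tmp/set_env_var.sh <<'EOF'
export SNOWFLAKE_WAREHOUSE=transformations_dw
export SNOWFLAKE_DATABASE=analytics_dev
EOF
source /tmp/set_env_var.sh
echo "$SNOWFLAKE_DATABASE"
```

Note also that env_var() accepts a default as its second argument, e.g. env_var('DBT_SNOWFLAKE_ENV', 'analytics_dev'), so anyone who never sources the script simply falls back to the development database.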

First of all, you have to hard-code the database name; the other syntax is wrong. Secondly, try to make a dynamic variable for the environment and then pass it when you invoke dbt, i.e.
dbt snapshot --profile --vars $DBT_SNOWFLAKE_ENV
so that when it runs, it can easily pick the value up from the environment.
Currently I am working on dbt with everything dynamic - even the full profile is dynamic according to schema and db.

In my case, in my dbt model, my variable was declared as part of vars within my dbt_project.yml file, so instead of accessing the variable like
"{{ env_var('MY_VARIABLE') }}"
I should have used:
"{{ var('MY_VARIABLE') }}"

Related

Can you use a variable to build a variable in GitLab CI?

Is it possible in a GitLab CI/CD pipeline to build a variable dynamically from another variable?
Sample:
i have a variable in gitlab "TEST_MASTER".
script:
- echo "$TEST_"$CI_COMMIT_BRANCH""
I need the result from the variable TEST_MASTER, but the part of MASTER must come from the branch variable.
That looks like a bash script to me, so assuming TEST_MASTER already has a value, you should be able to echo it like this:
script:
  - myvar=TEST_"$CI_COMMIT_BRANCH"
  - echo "${!myvar}"
For more information, check this question and its answers.
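The trick above relies on bash indirect expansion, ${!name}; a standalone sketch with made-up values:

```shell
# Bash-only (won't work in plain sh): ${!myvar} expands the variable
# whose *name* is stored in myvar.
TEST_MASTER="value for master"
CI_COMMIT_BRANCH="MASTER"
myvar=TEST_"$CI_COMMIT_BRANCH"   # myvar now holds the string "TEST_MASTER"
echo "${!myvar}"                 # expands $TEST_MASTER
```

In GitLab this requires a runner whose shell is bash; on an image that only has sh you would need eval instead.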

YAML Variables, can you reference variables within variables?

I am using a variables.yml file as a template to store different variables, and I was curious if I am able to reference variables within the yml file itself to essentially nest them.
For example:
#variables.yml file
variables:
  food1: 'pineapple'
  food2: 'pizza'
  favoriteFood: '$(food1) $(food2)'
So that when I eventually call upon this variable "favoriteFood", I can just use ${{ variables.favoriteFood }} and the value should be "pineapple pizza"
Example:
#mainPipeline.yml file
variables:
- template: 'variables.yml'
steps:
- script: echo My favorite food is ${{ variables.favoriteFood }}.
Am I on the right track here? I can't seem to google to any examples of if this is possible.
Yes! It is in fact possible - just follow the syntax outlined above. Don't forget that spacing is critical in YAML files.

how to pass value from commit to GitLab CI pipeline as variable?

I need to dynamically pass a value to a GitLab CI pipeline so it can be passed on to jobs. The problem: the value cannot be stored in the code, and no pipeline reconfiguration should be needed (e.g. I could pass the value in the "variables" section of .gitlab-ci.yml, but that means storing it in the code; changing the "Environment variables" section of the "CI / CD Settings" means manual reconfiguration). The branch name cannot be used for this purpose either.
It is not a secret string but a keyword which modifies pipeline execution.
So, how can I do it?
You didn't specify the source of this value.
You say "pass value from commit to ..."
If it's some meta information about the commit itself, look at the list of Predefined environment variables
There's quite a lot of vars named CI_COMMIT_* which might work for you.
However,
if it's some value that you generate in the pipeline in one job and want to pass to another job - it's a different case.
There is a long-standing feature request, Pass variables between jobs, which is still not implemented.
The workaround for this moment is to use artifacts - files to pass information between jobs in stages.
Our use case is to extract Java app version from pom.xml and pass it to some packaging job later.
Here is how we do it in our .gitlab-ci.yml:
...
variables:
  VARIABLES_FILE: ./variables.txt # "." is required for images that have sh, not bash
...
get-version:
  stage: prepare
  image: ...
  script:
    - APP_VERSION=...
    - echo "export APP_VERSION=$APP_VERSION" > $VARIABLES_FILE
  artifacts:
    paths:
      - $VARIABLES_FILE
...
package:
  stage: package
  image: ...
  script:
    - source $VARIABLES_FILE
    - echo "Use env var APP_VERSION here as you like ..."
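Outside GitLab, the mechanism can be sketched in a single shell session (path and version number hypothetical): one step writes a file of export lines, and a later step sources it to recover the value:

```shell
VARIABLES_FILE=/tmp/variables.txt
# "get-version" step: compute the value and persist it as an export line
APP_VERSION=1.2.3
echo "export APP_VERSION=$APP_VERSION" > "$VARIABLES_FILE"
# "package" step: a fresh shell starts without APP_VERSION ...
unset APP_VERSION
# ... until it sources the artifact file
source "$VARIABLES_FILE"
echo "$APP_VERSION"
```

In the real pipeline the file crosses the job boundary as an artifact; here the unset stands in for the fresh environment of the second job.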

How to use Bamboo plan variables in an inline script task?

When defining a Bamboo plan variable, the page has this.
For task configuration fields, use the syntax
${bamboo.myvariablename}. For inline scripts, variables are exposed as
shell environment variables which can be accessed using the syntax
$BAMBOO_MY_VARIABLE_NAME (Linux/Mac OS X) or %BAMBOO_MY_VARIABLE_NAME%
(Windows).
However, that doesn't work in my Linux inline script. For example, I have the following defined as a plan variable:
name: my_plan_var
value: some_string
My inline script is simply...
PLAN_VAR=$BAMBOO_MY_PLAN_VAR
echo "Plan var: $PLAN_VAR"
and I just get a blank string.
I've tried this
PLAN_VAR=${bamboo.my_plan_var}
But I get
${bamboo.my_plan_var}: bad substitution
on the log viewer window.
Any pointers?
I tried the following and it works:
On the plan, I set my_plan_var to "it works" (w/o quotes)
In the inline script (don't forget the first line):
#!/bin/sh
PLAN_VAR=$bamboo_my_plan_var
echo "testing: $PLAN_VAR"
And I got the expected result:
testing: it works
I also wanted to create a Bamboo variable, and the only way I've found to share it between scripts is with inject-variables, like the following:
Add the following to your bamboo-spec.yaml after the script that creates the variable:
Build:
  tasks:
    - script: create-bamboo-var.sh
    - inject-variables:
        file: bamboo-specs/vars.yaml
        scope: RESULT
        # namespace: plan
    - script: echo ${bamboo.inject.GIT_VERSION} # just for testing
Note: Namespace defaults to inject.
In create-bamboo-var.sh create the file bamboo-specs/vars.yaml:
#!/bin/bash
versionStr=$(git describe --tags --always --dirty --abbrev=4)
echo "GIT_VERSION: ${versionStr}" > ./bamboo-specs/vars.yaml
Or for multiple lines you can use:
SW_NUMBER_DIGITS=${1} # Passed as first parameter to build script
cat <<EOT > ./bamboo-specs/vars.yaml
GIT_VERSION: ${versionStr}
SW_NUMBER_APP: ${SW_NUMBER_DIGITS}
EOT
Scope can be LOCAL or RESULT. Local means the variable is only available in the current job; result means it can be used in subsequent stages of this plan and in releases created from the result.
Namespace is just used to avoid naming collisions with other variables.
With the above you can use that variable in later scripts with ${bamboo.inject.GIT_VERSION}. The last script task is just to see that it is working in other scripts. You can also see the variables in the web app as build meta data.
I'm using the above script before the build (in my case compiling C-Code) takes place so I can also create a version.h file that can be used by the source code.
This is still a bit cumbersome, but I'm happy with it and I hope it helps others configure Bamboo. The Bamboo documentation could be better (still a lot of trial and error).

How to use the value of an Ansible variable to target and get the value of a hostvar?

I'm in a situation where I have multiple Ansible roles using multiple group_vars. Spread around each host's vars (depending on the host) are a number of directory paths, each in a different place within the hostvar tree.
I need to ensure that a certain number of these directories exist when provisioning, so I created a role that uses the file module to ensure that they exist. Well, it would, if I could figure out how to get it to work.
I have a group_var something similar to:
ensure_dirs:
  - "daemons.builder.dirs.pending"
  - "processor.prep.logdir"
  - "shed.logdir"
Each of these 3 values maps directly to a group var that contains a string value that represents the corresponding filesystem path for that var, for example:
daemons:
  builder:
    dirs:
      pending: /home/builder/pending
I would like to somehow iterate over ensure_dirs and evaluate each item's value in order to resolve it to the FS path.
I've tried several approaches, but I can't seem to get the value I need. The following is the most success I've had, which simply returns the literal of the constructed string.
- file:
    dest: "hostvars['{{ ansible_hostname }}']['{{ item.split('.') | join(\"']['\") }}']"
    state: directory
  with_items: "{{ ensure_dirs }}"
This results in directories named, for example, hostvars['builder']['daemons']['builder']['dirs']['pending'] in the working directory. Of course, what I want is for the file module to use the value stored at that path in the hostvars, so that it instead ensures that /home/builder/pending exists.
Anybody have any ideas?
There is a simple way – template your group variable.
group_var
ensure_dirs:
  - "{{ daemons.builder.dirs.pending }}"
  - "{{ processor.prep.logdir }}"
  - "{{ shed.logdir }}"
task
- file:
    path: "{{ item }}"
    state: directory
  with_items: "{{ ensure_dirs }}"
I suggest you create and use a lookup plugin.
Ansible defines lots of lookup plugins; the most popular is 'items', which you use via 'with_items'. The convention is 'with_<plugin name>'.
To create your lookup plugin:
Edit the file ansible.cfg and uncomment the key 'lookup_plugins' with the value './plugins/lookup'
Create a plugin file named 'dirs.py' in './plugins/lookup'
Use it in your playbook:
- file:
    dest: "{{ item }}"
    state: directory
  with_dirs: "{{ ensure_dirs }}"
Implement your plugin in dirs.py with something like this (see lookup plugins for more examples):
from ansible.plugins.lookup import LookupBase

class LookupModule(LookupBase):
    def run(self, terms, variables=None, **kwargs):
        return [term.replace('.', '/') for term in terms]
Advantages:
* Your playbook is easier to read
* You can write unit tests for your plugin and improve it
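For completeness, the dotted-path lookup the question originally asked about (resolving a string like 'daemons.builder.dirs.pending' against the nested vars) can be sketched in plain Python; the data below is hypothetical:

```python
from functools import reduce

def resolve(path, tree):
    """Walk a nested mapping using a dotted path like 'a.b.c'."""
    return reduce(lambda node, key: node[key], path.split('.'), tree)

# Stand-in for the hostvars tree from the question
hostvars = {
    "daemons": {"builder": {"dirs": {"pending": "/home/builder/pending"}}},
    "shed": {"logdir": "/var/log/shed"},
}
print(resolve("daemons.builder.dirs.pending", hostvars))  # /home/builder/pending
print(resolve("shed.logdir", hostvars))                   # /var/log/shed
```

A lookup plugin built on this idea would return the resolved paths themselves, rather than rewriting dots to slashes as the simpler dirs.py above does.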