Referencing Azure DevOps Variable Group values in a YAML Pipeline

I have created three Variable Groups in Azure DevOps:
WebApp-DEV
WebApp-QA
WebApp-Prod
Each variable group has a variable named environment whose value corresponds to that environment.
What I'd like to do now is reference each environment value in my Azure Pipeline YAML file. In the DEV stage of the pipeline I'm currently referencing the variable group WebApp-DEV.
How do I reference the environment value of each variable group in the same way I do for WebApp-DEV? In other words, what is the syntax for referencing the value of a variable from a variable group?

When you reference a variable group, all of its variables become available in the scope where you reference it. So if you have WebApp-Dev referenced at the job level, you can use variables from that group in that job. As for the syntax: if you have a variable named env defined in the variable group, you reference it as $(env), and that's it.
stages:
- stage: DEV
  jobs:
  - job: DEV
    variables:
    - group: WebApp-Dev
    steps:
    - script: echo '$(env)' # env comes from the WebApp-Dev variable group
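To get the per-environment value, reference a different variable group in each stage (or job) and use the same $(environment) expression everywhere; whichever group is scoped to that stage determines the value it resolves to. A sketch along those lines, assuming each group defines a variable named environment as described in the question (the stage and job names here are placeholders):

stages:
- stage: DEV
  variables:
  - group: WebApp-DEV   # environment resolves to the DEV value in this stage
  jobs:
  - job: Deploy_DEV
    steps:
    - script: echo '$(environment)'

- stage: QA
  variables:
  - group: WebApp-QA    # environment resolves to the QA value in this stage
  jobs:
  - job: Deploy_QA
    steps:
    - script: echo '$(environment)'

- stage: Prod
  variables:
  - group: WebApp-Prod  # environment resolves to the Prod value in this stage
  jobs:
  - job: Deploy_Prod
    steps:
    - script: echo '$(environment)'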

Related

How to replace Database name in SQL file in Gitlab Pipeline?

I am deploying a database through GitLab. I have a case where the database name should be passed as a variable and replaced in the SQL file, but it's not working.
For example, the script has {{ db_name}}.schema.table.
I have set db_name as a variable in gitlab-ci.yml, but this variable doesn't get picked up and the pipeline fails.
variables:
  db_name: xxxx
Is there a way to set the database name as a variable so that it replaces the database name in the SQL file when the pipeline runs?
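For what it's worth, GitLab exposes db_name to the job as an environment variable but never rewrites file contents, so a {{ db_name }} placeholder inside a .sql file has to be substituted explicitly in the job script. A minimal sketch of that idea (the file name deploy.sql and the psql command are assumptions, not from the question):

variables:
  db_name: xxxx

deploy_db:
  stage: deploy
  script:
    # GitLab exposes db_name as an environment variable, but it does not rewrite file
    # contents, so replace the {{ db_name }} placeholder explicitly before deploying.
    - sed -i "s/{{ *db_name *}}/${db_name}/g" deploy.sql
    - psql -f deploy.sql   # hypothetical deploy command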

How to best implement dynamic dbt datasets

I'm cleaning up a dbt + BigQuery environment and trying to implement a staging environment that pulls from a staging dataset. Problem is that the current .yml files with source information all explicitly point to a production dataset.
One option that I am considering is a source wrapper function that will serve as an adapter and inject the proper dataset depending on some passed CLI var or profile target (which is different for the staging vs prod environments).
However, I'm fairly new to dbt so unsure if this is the best way to go about this. Would appreciate any insight you kind folks have :)
EDIT: I'm realizing that a source wrapper is not the way to go as it would mess with the generated DAG
You can supply the name of the schema for a source in a variable or environment variable, and set that variable at runtime.
In your sources.yml:
version: 2

sources:
  - name: jaffle_shop
    schema: "{{ var('source_jaffle_shop_schema') }}"
    tables:
      - name: orders
In your dbt_project.yml:
vars:
  source_jaffle_shop_schema: MY_DEFAULT_SCHEMA
And then to override at runtime:
dbt run --vars "{source_jaffle_shop_schema: my_other_schema}"
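As the answer notes, an environment variable works too: dbt's built-in env_var() function can be dropped into the same schema line, with an optional default as its second argument. A sketch, where the variable name SOURCE_JAFFLE_SHOP_SCHEMA is just an assumed choice:

version: 2

sources:
  - name: jaffle_shop
    # env_var() reads the shell environment; the second argument is the fallback default
    schema: "{{ env_var('SOURCE_JAFFLE_SHOP_SCHEMA', 'MY_DEFAULT_SCHEMA') }}"
    tables:
      - name: orders

Then override at runtime with something like SOURCE_JAFFLE_SHOP_SCHEMA=my_other_schema dbt run.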

how to change default parameter values at pipeline level dynamically in azure data factory while moving from dev to prod

I have a few parameters specified at pipeline level in ADF, and I have used default values in the dev environment. Now I want to move this pipeline to the prod environment and change the parameter values to suit production.
Earlier, in SSIS, we used configurations (SQL, XML, ...) to make such changes without changing anything in the SSIS package.
Can we do the same thing in ADF, i.e. without changing the default values manually in the pipeline, can we pass values stored in a SQL table as pipeline parameters?
You don't need to worry about the default values defined on a pipeline parameter as long as the pipeline is going to be run by a trigger. Just make sure to publish different versions of the trigger in the dev and prod repositories and pass different values to the pipeline parameters.
If, however, you want to change the parameters at runtime, you can invoke the pipeline from a parent pipeline through an Execute Pipeline activity. The values you pass as parameters to the Execute Pipeline activity can come from a Lookup activity (over some configuration file or table).

Getting the JOB_ID variable in Pentaho Data Integration

When you log a job in Pentaho Data Integration, one of the fields is ID_JOB, described as "the batch id- a unique number increased by one for each run of a job."
Can I get this ID? I can see it in my logging tables, but I want to set up a transformation to get it. I think there might be a runtime variable that holds an ID for the running job.
I've tried using the Get Variables and Get System Info transformation steps to no avail. I am a new Kettle user.
The batch IDs of the current transformation and of the parent job are available in the Get System Info step. On PDI 5.0 they come before the "command line arguments", but the order changes with each version, so you may have to look it up.
You need to create the variable yourself to house the parent job batch ID. The way to do this is to add another transformation as the first step in your job that sets the variable and makes it available to all the other subsequent transformations and job steps that you'll call from the job. Steps:
1) As you have probably already done, enable logging on the job
JOB SETTINGS -> SETTINGS -> CHECK: PASS BATCH ID
JOB SETTINGS -> LOG -> ENABLE LOGGING, DEFINE DATABASE LOG TABLE, ENABLE: ID_JOB FIELD
2) Add a new transformation, call it "Set Variables", as the first step after the start of your job
3) Create a variable that will be accessible to all your other transformations and that contains the value of the current job's batch ID
3a) ADD A GET SYSTEM INFO STEP. GIVE A NAME TO YOUR FIELD - "parentJobBatchID" AND TYPE OF "parent job batch ID"
3b) ADD A SET VARIABLES STEP AFTER THE GET SYSTEM INFO STEP. DRAW A HOP FROM THE GET SYSTEM INFO STEP TO THE SET VARIABLES STEP AS ITS MAIN OUTPUT
3c) IN THE SET VARIABLES STEP SET FIELDNAME: "parentJobBatchID", SET A VARIABLE NAME - "myJobBatchID", VARIABLE SCOPE TYPE "Valid in the Java Virtual Machine", LEAVE DEFAULT VALUE EMPTY
And that's it. After that, you can go back to your job and add subsequent transformations and steps and they will all be able to access the variable you defined by substituting ${myJobBatchID} or whatever you chose to name it.
IT IS IMPORTANT THAT THE SET VARIABLES STEP IS THE ONLY THING THAT HAPPENS IN THE "Set Variables" TRANSFORMATION AND ANYTHING ELSE YOU WANT TO ACCESS THAT VARIABLE IS ADDED ONLY TO OTHER TRANSFORMATIONS CALLED BY THE JOB. This is because transformations in Pentaho are multi-threaded and you cannot guarantee that the Set Variables step will happen before other activities in that transformation. The parent job, however, executes sequentially, so you can be assured that once you establish the variable containing the parent job batch ID in the first transformation of the job, all other transformations and job steps will be able to use that variable.
You can test that it worked before you add other functionality by adding a "Write To Log" step after the Set Variables transformation that writes the variable ${myJobBatchID} to the log for you to view and confirm it is working.

Within SSIS - Is it possible to deploy one package multiple times in the same instance and set different ConfigFilters (I'm using SQL for config)

In my environment my Dev and QA Database Instances are on the same server. I would like to deploy the same package (or different versions of the package) into SSIS and set the filter to select different rows in the Config table. Is this possible? This is SQL 2005.
For the sake of this question, let's say I have one variable, which is a directory path. I would like to have this variable in the table twice, with different filters applied (Dev and QA), as below (simplified):
Filter / Variable Value / Variable Name
Dev / c:\data\dev / FilePath
QA / c:\data\qa / FilePath
Do I need to apply a change within the settings of the package in SSIS or is it changed on the job step within Agent?
Any help would be appreciated.
See http://www.sqlservercentral.com/articles/SSIS/69739/