Resolve Azure YAML Pipeline overlapping variable names in multiple variable groups

We're working on converting our Classic Azure Pipelines to YAML Pipelines. One thing that is not clear is how to ensure that two different variable groups containing variables with the same name but different meanings don't step on each other.
For example, if I have variable groups vg1 and vg2, each with a variable named secretDataDestination, how do I ensure that the correct secretDataDestination is used in a YAML Pipeline?
A more concerning example: if we initially have two variable groups without overlapping variable names, how do we ensure that adding a newly-overlapping variable name to a group doesn't replace the use of the variable as originally intended?

A workaround is to leverage output variables in Azure DevOps with a little inline PowerShell task code.
First, create two jobs, each with its own variable group; in this case Staging and Prod. Both groups contain the variables apimServiceName and apimPrefix. Expose the variables as job outputs by echoing them with isOutput=true, like this:
- job: StagingVars
  dependsOn: []
  variables:
  - group: "Staging"
  steps:
  - powershell: |
      echo "##vso[task.setvariable variable=apimServiceName;isOutput=true]$(apimServiceName)"
      echo "##vso[task.setvariable variable=apimPrefix;isOutput=true]$(apimPrefix)"
    name: setvarStep
- job: ProdVars
  dependsOn: []
  variables:
  - group: "Prod"
  steps:
  - powershell: |
      echo "##vso[task.setvariable variable=apimServiceName;isOutput=true]$(apimServiceName)"
      echo "##vso[task.setvariable variable=apimPrefix;isOutput=true]$(apimPrefix)"
    name: setvarStep
Then use the variables in a new job, where you assign each output to a new variable name by navigating to the source job's output. This works because each variable group is placed in its own job, so the groups cannot overwrite each other's variables:
- job:
  dependsOn:
  - StagingVars
  - ProdVars
  variables:
    ServiceNameSource: "$[ dependencies.StagingVars.outputs['setvarStep.apimServiceName'] ]"
    UrlprefixSource: "$[ dependencies.StagingVars.outputs['setvarStep.apimPrefix'] ]"
    ServiceNameDestination: "$[ dependencies.ProdVars.outputs['setvarStep.apimServiceName'] ]"
    UrlprefixDestination: "$[ dependencies.ProdVars.outputs['setvarStep.apimPrefix'] ]"
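For illustration, a hypothetical step in that job (the echo lines here are placeholders, not part of the original answer) could then consume the remapped values:
  steps:
  - powershell: |
      echo "Copying from $(ServiceNameSource) to $(ServiceNameDestination)"
      echo "URL prefixes: $(UrlprefixSource) -> $(UrlprefixDestination)"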

if I have variable groups vg1 and vg2, each with variable named secretDataDestination, how do I ensure that the correct secretDataDestination is used in a YAML Pipeline?
Whether we use classic mode or YAML, it is not recommended to define a variable with the same name in different variable groups, because when you reference variable groups that contain the same variable name in the same pipeline, you cannot prevent them from stepping on each other.
When you use the same variable name in different variable groups in the same pipeline, as Matt said:
"You can reference multiple variable groups in the same pipeline. If
multiple variable groups include the same variable, the variable group
included last in your YAML file will set the variable's value."
variables:
- group: variable-group1
- group: variable-group2
That means the variable value from the group listed later will overwrite the value from the group listed first.
I guess you already know this, which is why you posted your second question. Let us turn to that now.
if we initially have two variable groups without overlapping variable names, how do we ensure that adding a newly-overlapping variable name to a group doesn't replace use of the variable as originally intended?
Indeed, Azure DevOps currently has no function or mechanism to detect that different variable groups contain the same variable name and prompt you about it.
I think this is a reasonable request, so I have added it as a feature suggestion on our UserVoice site, which is our main forum for product suggestions:
The ability to detect the same variable in a variable group
As a workaround, the simplest and most direct way is to open the variable groups linked to your pipeline in the Library tab and use Ctrl+F to search for duplicate variable names.
Another way is to use the REST API Variablegroups - Get Variable Groups By Id to fetch all the variables, then loop over them and check whether the variable you are about to add already exists under the same name.
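As a rough sketch of that second approach (the organization, project, PAT, group IDs, and candidate variable name below are all placeholders, and the exact api-version may vary):
# Placeholders - substitute your own organization, project, PAT and variable group IDs
$org       = "https://dev.azure.com/yourorg"
$project   = "yourproject"
$pat       = "<personal-access-token>"
$groupIds  = "1,2"                      # variable groups linked to the pipeline
$candidate = "secretDataDestination"    # the variable name you are about to add

# Basic auth header built from the PAT
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

# Variablegroups - Get Variable Groups By Id
$url    = "$org/$project/_apis/distributedtask/variablegroups?groupIds=$groupIds&api-version=6.0-preview.2"
$groups = (Invoke-RestMethod -Uri $url -Headers $headers -Method Get).value

# Warn for every group that already defines the candidate name
foreach ($g in $groups) {
    if ($g.variables.PSObject.Properties.Name -contains $candidate) {
        Write-Host "Variable '$candidate' already exists in group '$($g.name)'"
    }
}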

Related

DBT Test configuration for particular scenario

Hello. Could anyone help me with how to simulate this scenario? For example, I want to validate these 3 fields on my table, "symbol_type", "symbol_subtype", "taker_symbol", and return their unique combination/result.
I tried to use this command, however it's not working properly in my test. I'm not sure if this is the correct syntax for my scenario. Your response is highly appreciated.
Expected result: these 3 fields should return my unique combination using dbt commands.
I'd recommend either:
use the generate_surrogate_key (docs) macro in the model, or
use the dbt_utils.unique_combination_of_columns (docs) generic test.
For the first case, you would need to define the following in the model:
select
    {{- dbt_utils.generate_surrogate_key(['symbol_type', 'symbol_subtype', 'taker_symbol']) }} as hashed_key_,
    (...)
from your_model
This would create a hashed value of the three columns. You could then use a unique test in your YAML file.
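For instance, the unique test on that hashed column could look like this in the model's YAML file (model and column names follow the example above):
- name: your_model_name
  columns:
    - name: hashed_key_
      tests:
        - unique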
For the second case, you would only need to add the generic test in your YAML file as follows:
# your model's YAML file
- name: your_model_name
  description: ""
  tests:
    - dbt_utils.unique_combination_of_columns:
        combination_of_columns:
          - symbol_type
          - symbol_subtype
          - taker_symbol
Both these approaches will let you check whether the combination of the three columns is unique over the whole model's output.

read specific files names in adf pipeline

I have got a requirement saying that blob storage has multiple files with the names file_1.csv, file_2.csv, file_3.csv, file_4.csv, file_5.csv, file_6.csv, file_7.csv. From these I have to read only files 5 to 7.
How can we achieve this in an ADF/Synapse pipeline?
I have reproduced this in my lab; please see the repro steps below.
ADF:
Using the Get Metadata activity, get a list of all files.
(Parameterize the source file name in the source dataset to pass ‘*’ in the dataset parameters to get all files.)
Pass the Get Metadata output child items to ForEach activity.
@activity('Get Metadata1').output.childItems
Add an If Condition activity inside the ForEach and add the true-case expression to copy only the required files to the sink. Note that substring() is zero-based, so index 5 picks out the digit in names like file_5.csv:
@and(greater(int(substring(item().name,5,1)),4),lessOrEquals(int(substring(item().name,5,1)),7))
When the If Condition is true, add a Copy Data activity to copy the current item (file) to the sink.
I took a slightly different approach using a Filter activity and the endsWith function:
The filter expression is:
@or(or(endsWith(item().name, '_5.csv'),endsWith(item().name, '_6.csv')),endsWith(item().name, '_7.csv'))
Slightly different approaches, similar results; it depends what you need.
You can always do what @NiharikaMoola-MT suggested. But since you already know the range of the files (5-7), I suggest:
Declare two parameters as the upper and lower range.
Create a ForEach loop and pass the parameters to create a range [lowerlimit, upperlimit].
Create a parameterized dataset for the source.
Use the file number from the ForEach loop to build a dynamic expression like
@concat('file_',item(),'.csv')
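For example, the ForEach items could be built from two hypothetical pipeline parameters lowerlimit and upperlimit; note that ADF's range() takes a start index and a count, not an end value:
@range(pipeline().parameters.lowerlimit, add(sub(pipeline().parameters.upperlimit, pipeline().parameters.lowerlimit), 1))
With lowerlimit = 5 and upperlimit = 7 this yields [5, 6, 7], and each item feeds the concat expression above.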

Terraform, how to call 100 times module with seperate values

I have a problem with a Terraform configuration which I really don't know how to resolve. I have written a module for policy assignments; the module takes as a parameter an object with 5 attributes. The question is whether it is possible to split the tfvars file into a folder structure, e.g.:
I have a main folder subscriptions -> folder_subscription_name -> some number of tfvars files, one with the configuration for each of the policy assignments.
Example of each file:
testmap = {
  var1 = "test1"
  var2 = "test2"
  var3 = "test3"
  var4 = "test4"
  var5 = "test5"
}
In the module I would like to iterate over all of the maps combined into a list of maps. Is that a good approach? How do I achieve it, or should I use something else, like Terragrunt?
Please give me some tips on the best way to achieve this. Basically, the goal is that I don't want one insanely big tfvars file with a list of 100 maps, but rather the configuration split into 100 files, one for each assignment.
Based on the comment on the original question:
The question is pretty simple: how to keep the input variables for each resource in a separate file instead of keeping all of them in one very big file
I would say you are aiming at having different .tfvars files. For example, you could have a dev.tfvars and a prod.tfvars file.
To plan your deployment, you can pass those files with:
terraform plan -var-file=dev.tfvars -out=planoutput
To apply those changes
terraform apply planoutput
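If you still want one module call fed by per-assignment configuration, a minimal sketch of the for_each approach (the variable shape and module path here are assumptions, not from the question):
# variables.tf - one map entry per policy assignment
variable "policy_assignments" {
  type = map(object({
    var1 = string
    var2 = string
    var3 = string
    var4 = string
    var5 = string
  }))
  default = {}
}

# main.tf - call the module once per map entry
module "policy_assignment" {
  source   = "./modules/policy_assignment" # hypothetical module path
  for_each = var.policy_assignments

  var1 = each.value.var1
  var2 = each.value.var2
  var3 = each.value.var3
  var4 = each.value.var4
  var5 = each.value.var5
}
Keep in mind that Terraform does not merge a map variable assigned in several .tfvars files (the last -var-file wins), so to keep 100 separate files you would need to generate one consolidated .tfvars from them before running terraform plan, or reach for a wrapper like Terragrunt as you mention.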

Where does SimObject name get set?

I want to know where SimObject names like mem_ctrls, membus, and replacement_policy get set in gem5. After looking at the code, I understood that these names are used in stats.txt.
I have looked into the SimObject code files (the py, cc, and hh files). I printed all SimObject names by stepping through root.descendants() in Simulation.py and then searched for some of the names, like mem_ctrls, using vscode, but could not find the place where these names are set.
for obj in root.descendants():
    print("object name: %s\n" % obj.get_name())
These names are the Python variable names from the configuration/run script.
For instance, from the Learning gem5 simple.py script...
from m5.objects import *
# create the system we are going to simulate
system = System()
# Set the clock frequency of the system (and all of its children)
system.clk_domain = SrcClockDomain()
system.clk_domain.clock = '1GHz'
system.clk_domain.voltage_domain = VoltageDomain()
# Set up the system
system.mem_mode = 'timing' # Use timing accesses
system.mem_ranges = [AddrRange('512MB')] # Create an address range
The names will be system, clk_domain, mem_ranges.
Note that only the SimObjects will have a name. The other parameters (e.g., integers, etc.) will not have a name.
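As a concrete sketch (the classes here are just examples from recent gem5 versions), the name is the dotted attribute path from the root, which is what stats.txt uses as a prefix:
system = System()
system.mem_ctrl = MemCtrl()    # this SimObject's name becomes "system.mem_ctrl"
system.membus = SystemXBar()   # and this one "system.membus"
# stats.txt entries are then prefixed accordingly, e.g. "system.mem_ctrl.<stat>"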
You can see where this is set here: https://gem5.googlesource.com/public/gem5/+/master/src/python/m5/SimObject.py#1352

How to get string length and subtring an User Defined Variables in Jmeter

I defined a User Defined Variables element with:
message_title: "Test searching by title message"
Then I need to run a test case whose input is a substring of the above variable, such as "search" or "title".
I used a User Parameters element and defined 2 variables:
len : ${__strLen(${message_title})}
middle_search: ${__substring(${message_title}, 5, ${__intSum(${len},-5)})}
But when I run the test case it throws this error:
51 ERROR - jmeter.threads.JMeterThread: Test failed! java.lang.NumberFormatException: For input string: "${__strLen(${message_title})}....
How can I get the length and a substring of a User Defined Variable?
Thanks,
For the length this works for me; I store the result in the len variable:
${__strLen(${message_title},len)}
Then:
${__substring(${message_title},5,${__intSum(${len},-5)},)}
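As a side note, assuming JMeter 3.1 or later, the same result could be computed in a single __groovy expression (commas inside the function call must be escaped with a backslash):
${__groovy(vars.get('message_title').substring(5\, vars.get('message_title').length() - 5),)}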
As per User Defined Variables documentation:
UDVs are processed in the order they appear in the Plan, from top to bottom.
So you can basically go for 2 User Defined Variables instances:
Add User Defined Variables #1 to your Test Plan and define the following variables there:
message_title = Test searching by title message
len = ${__strLen(${message_title},)}
Add User Defined Variables #2 to your Test Plan and define the following variable there:
middle_search = ${__substring(${message_title},5,${__intSum(${len},-5)},)}
That's it, you should be able to access the defined variables in your Thread Group(s).
Just in case check out Using User Defined Variables article to learn more about User Defined Variables concept.