ADF: same code, different processing depending on environment

I'd like to delete files in the dev data lake but archive them to long-term storage in the production environment. How would I check which environment I'm running ADFv2 in so that I can switch/branch depending on the environment?

You could have a pipeline with an If Condition activity that checks the data factory name, or you could set a global parameter called pENV which is set to "DEV" or "PROD" per environment.
In the If Condition activity, check the value of the global parameter (or the factory name) and branch either to the execution path with the Delete activity or to the path with your archive logic.
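For example (a minimal sketch, assuming the global parameter is named pENV as above and that the production factory is named adf-prod, which is just a placeholder), the If Condition expression could be either of:

@equals(pipeline().globalParameters.pENV, 'PROD')
@equals(pipeline().DataFactory, 'adf-prod')

With that expression, the archive activities go under the True activities and the Delete activity under the False activities.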

Related

ADF: Using ForEach and Execute Pipeline with Pipeline Folder

I have a folder of pipelines, and I want to execute the pipelines inside the folder using a single pipeline. There will be times when there will be another pipeline added to the folder, so creating a pipeline filled with Execute Pipelines is not an option (well, it is the current method, but it's not very "automate-y" and adding another Execute Pipeline whenever a new pipeline is added is, as you can imagine, a pain). I thought of the ForEach Activity, but I don't know what the approach is.
I have not tried this approach, but I think we can use the ADF REST API to get the details of all the pipelines that need to be executed. Since the response is JSON, you can write it back to a temp blob, add a filter, and focus on what you need.
https://learn.microsoft.com/en-us/rest/api/datafactory/pipelines/list-by-factory?tabs=HTTP
You can use the Create Run API to trigger the pipeline.
https://learn.microsoft.com/en-us/rest/api/datafactory/pipelines/create-run?tabs=HTTP
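A rough, untested sketch of that approach in Python follows; the subscription, resource group, factory name and folder name are placeholders, and the bearer token is assumed to come from Azure AD:

import requests

# Placeholders - substitute your own values and token acquisition
token = "<bearer token from Azure AD>"
base = ("https://management.azure.com/subscriptions/<subscriptionId>"
        "/resourceGroups/<resourceGroup>/providers/Microsoft.DataFactory"
        "/factories/<factoryName>")
headers = {"Authorization": "Bearer " + token}

# Pipelines - List By Factory: returns every pipeline in the factory
pipelines = requests.get(base + "/pipelines?api-version=2018-06-01",
                         headers=headers).json()["value"]

# Keep only the pipelines that sit in the folder we care about
to_run = [p["name"] for p in pipelines
          if p.get("properties", {}).get("folder", {}).get("name") == "<folderName>"]

# Pipelines - Create Run: trigger each of them
for name in to_run:
    requests.post(base + "/pipelines/" + name + "/createRun?api-version=2018-06-01",
                  headers=headers, json={})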
As Joel called out, if different pipelines have different parameter counts, it will be a little messy to maintain.
Folders are really just organizational structures for the code assets that describe pipelines (same for Datasets and Data Flows), they have no real substance or purpose inside the executing environment. This is why pipeline names have to be globally unique rather than unique to their containing folder.
Another problem you are going to face is that the "Execute Pipeline" activity is not very dynamic. The pipeline name has to be known at design time, and while parameter values are dynamic, the parameter names are not. For these reasons, you can't have a ForEach loop that dynamically executes child pipelines.
If I were tackling this problem, it would be through an external pipeline management system that you would have to build yourself. This is not trivial, and in your case would have additional challenges because of the folder level focus.

Update a variable group snapshot for an Azure DevOps release

I have an Azure DevOps release pipeline that references several variable groups, with each stage in the release being linked to a different group.
When I first deploy to a stage, one of the values required for that variable group cannot be known (a key for a function on a Function App). However, it's required for some post deployment checks that need to be completed.
It's not ideal, but I thought I'd be able to do the deployment, have it fail, update the variable group and then try again. This is based on the following statement when you edit a release -
You can edit approvals, tasks, and variables
However, it seems that if you edit a variable group there is no way to pull the updated values in to the release. This means that the only way I can get my deployment out is to create a second release.
I'm really hoping that I'm missing something, because other tools (e.g. Octopus) offer this functionality out of the box. Is it possible to update a variable group snapshot for a release?
A variable group snapshot cannot currently be modified in a deployed release; Azure DevOps Release does not support this feature yet.
However, variables defined under the Variables tab can be modified in a previously deployed release.
So as a workaround, you can define a variable with the same name under Variables to override the variable defined in the variable group. (For example, if I have a variable Name in the variable group, I define a new variable under Variables that is also named Name.)
See below:
1. Edit your release and choose Edit release.
2. Click the Variables tab --> add a new variable (e.g. Name) with the same name as the variable in the variable group --> Save.
3. After saving the changes, go to the Pipeline tab and redeploy your release. The updated value will then be pulled into the release.
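If you would rather script that override than do it in the UI, the same release-level variable can be set through the Releases - Update Release REST API. This is only a hedged sketch (organization, project, release id, variable name and PAT below are placeholders), not part of the original workaround:

import requests

org, project, release_id = "<organization>", "<project>", 123   # placeholders
url = ("https://vsrm.dev.azure.com/" + org + "/" + project +
       "/_apis/release/releases/" + str(release_id) + "?api-version=7.0")
auth = ("", "<personal access token>")   # PAT as the basic-auth password

# Fetch the existing release, add/override a release-level variable,
# then PUT the whole release object back.
release = requests.get(url, auth=auth).json()
release.setdefault("variables", {})["Name"] = {"value": "<function key>"}
requests.put(url, auth=auth, json=release)

# The affected stage still has to be redeployed afterwards for the
# overridden value to be picked up.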
You can also submit a user voice to the Microsoft development team (click "Suggest a feature" and choose Azure DevOps). Hopefully they will consider adding this feature in a future sprint.

To sync different environments, should we do a database refresh or a package import?

I am very new to RSA Archer and want to know: to keep different environments in sync, should we do a database refresh or a package import?
It depends on how far out of sync your environments are... The further out of sync they are, the more you should be looking at a full database "refresh".* Otherwise you should generally develop in environment A and package to the other environments, and keep up that cycle.
*When I say refresh, I mean take a backup of your 'updated' instance environment and then restore your 'out of sync' instance environment using that backup.
As "ArcherHero" said - you can clone your instance and configuration databases from one environment to another. This is the fastest way to sync environments. In fact per RSA internal team they sync their internal environments every 3 months (based on 2016 video they published).
You may not have an option to clone databases, in this case you can delete application/questionnaire and install the package from another environment. Don't forget to package the app/questionnaires referencing the module you deleted, roles and workspaces. In addition you need to move data feeds that use deleted module....
So cloning data bases is much faster...

Qlikview - set up a global variable defining a server

Intro:
Every QVW has its own variables, manually defined, that specify where to store/load the records pulled in its SQL scripts.
ex:
SET vLoadPath = \\dev_server\Extract QVD\;
SET vStorePath = \\dev_server\Transform QVD\;
Scenario:
As QlikView admins, we promote tested QVWs from the DEV to the PROD environment.
The variable path is always hardcoded in DEV (see the example code above).
When we promote the QVW to PROD, we change the defined path variable to point at 'prod_server'.
Question:
Is there a way to make the sample variables above global variables, so that they change value when we promote the QVW from the DEV to the PROD environment?
Reason:
I think manually changing hardcoded path variables is poor practice.
What if the server should change? Or the subdirectory?
Then you’ll have to go back into every single QVW script and change the hardcoded subdirectory path.
Not only is that a high cost, it also introduces the possibility of error.
The way I've generally handled such things is via a master config file. You $include (or probably $must_include) this file in all QVWs and it sets your environment variables. Make the path to the config file relative so when you deploy between environments the config file in the new environment is picked up.
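As a sketch of that setup (the file name and paths below are just examples): each environment keeps its own copy of the config script at the same relative location, and every QVW includes it instead of hardcoding the paths.

// config.qvs - copy on the DEV server (the PROD copy points at prod_server instead)
SET vLoadPath = \\dev_server\Extract QVD\;
SET vStorePath = \\dev_server\Transform QVD\;

// At the top of every QVW load script:
$(Must_Include=..\config\config.qvs);

When the QVW is promoted, nothing in its script changes; it simply picks up the config file that lives next to it in the new environment.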

Accessing User-created Parameters

This question is regarding Spinnaker. Within each Pipeline, you have the ability to define custom parameters. When a Pipeline is triggered, you have the ability to use the default value, or supply a new value to those parameters.
I assume I can create Stages within that Pipeline that will use the value of the parameters, when the Pipeline is triggered. However, I can't figure out how to access these values in any Stage of the Pipeline.
For example, I have a Pipeline "Test". I create a parameter "Version", in the configuration for "Test".
[Screenshot: Creating a parameter]
Then, I add a Pipeline Stage to execute a Jenkins job. The job I have selected has a parameter, "Version".
[Screenshot: Using a parameter's value]
When the Pipeline "Test" is triggered, I want it to use the value of the Pipeline parameter "Version" and supply it to the Jenkins job. I tried the following syntax for the Jenkins job's Version field: $Version, {{Version}}, #Version, ((Version)), (Version), {Version}, #Version, and more. Nothing seems to translate into the value of the Pipeline parameter "Version", when the Pipeline is triggered. How do I do this?
On a related note, is there a way to use the Application name or Pipeline name in Pipeline Stages?
Parameters (and templated pipelines in general) are accessed via Spring Expression Language.
If your pipeline has a Version parameter and your Jenkins stage has a Version parameter, then in the Jenkins stage configuration you explicitly have to map the pipeline's Version to the Jenkins stage's Version using the value ${parameters.Version}.
Any pipeline parameter is accessible via the ${parameters.XXX} syntax.
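For instance, with the Version parameter from the question, the Version field of the Jenkins stage would literally contain:

${parameters.Version}

(or ${parameters['Version']}, which can be handy when the parameter name is not a simple identifier). The expression is evaluated when the pipeline runs, so whatever value was supplied at trigger time is what gets passed to the Jenkins job.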
Have a look at the pipeline expressions guide for more examples.