How do I set a Synapse Integrate pipeline parameter during deployment?
I am using the Synapse deployment task with Git to deploy the workspace to multiple environments. There is one environment-specific parameter that I need to pass to the job.
I don't see this parameter in the TemplateParametersForWorkspace.json in the workspace_publish branch.
Is it supposed to show up here or do I need to follow some other method to set the parameter to Synapse Integrate Pipelines?
It looks like you are expecting the value to show up in the ARM template .json file shown in your screenshot. Pass the value as a pipeline parameter and, after publishing, check whether it appears in the ARM template in the workspace_publish branch.
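If the parameter does appear there after publishing, one way to set it per environment is to override it in the deployment task. Below is a minimal sketch assuming the Synapse workspace deployment extension task; the input names (in particular OverrideArmParameters) may differ between task versions, and myEnvParameter plus all the placeholder values are made up:
steps:
- task: Synapse workspace deployment@2
  inputs:
    operation: 'deploy'
    TemplateFile: '<path to TemplateForWorkspace.json>'             # placeholder
    ParametersFile: '<path to TemplateParametersForWorkspace.json>' # placeholder
    azureSubscription: '<service connection>'                       # placeholder
    ResourceGroupName: '<resource group>'                           # placeholder
    TargetWorkspaceName: '<target workspace>'                       # placeholder
    # override the environment-specific value; the name must match the
    # parameter as it appears in TemplateParametersForWorkspace.json
    OverrideArmParameters: '-myEnvParameter myDevValue'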
Related
I work for an enterprise with a number of different GitLab repositories, all deploying applications using Terraform. The majority of these code bases use a standardised module that defines certain tags for resources in our cloud provider. I want to add the GitLab project id as one of the default tags, and while I know how to use predefined variables in Terraform by defining an environment variable starting with TF_VAR_, I don't want to have to modify all the code bases in order to set it.
What I want to do is set the project id (env var) in the module so that any codebase that consumes this module will automatically have this environment variable set for use in the GitLab pipeline.
Does anyone have any ideas how I might do this?
Thanks,
Adam
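Not a full answer, but one possible sketch: since Terraform picks up any environment variable prefixed with TF_VAR_, a shared GitLab CI template that the consuming repositories include could map the predefined CI_PROJECT_ID into such a variable. The template/project names below are made up, and this assumes the shared Terraform module declares a matching gitlab_project_id variable:
# terraform-base.gitlab-ci.yml, published alongside the shared module
variables:
  # Terraform reads this as var.gitlab_project_id in every consuming pipeline
  TF_VAR_gitlab_project_id: $CI_PROJECT_ID

# consuming repository's .gitlab-ci.yml
include:
  - project: 'my-group/ci-templates'           # hypothetical template project
    file: 'terraform-base.gitlab-ci.yml'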
I have a dynamically created variable inside a CI Pipeline, let's call it "var: $(version.number).$(Date:yyyyMMdd)". I wish to reuse this as a part of the Publish Test Results task in a CD Pipeline so I can link both together and have a valid reference. But I can't fathom how to do this.
This is the YAML for the Publish Test Results task in its most basic form.
steps:
- task: PublishTestResults@2
  displayName: 'Publish Test Results **/TEST-*.xml'
  inputs:
    testResultsFiles: '**/TEST-*.xml'
Any pointers will be gratefully accepted.
We could write it out to a JSON/XML file via a PowerShell task, and publish the file as an artifact. Then read that file back in via PowerShell in your release definition.
Build Definition
# wrap the value(s) to pass in an object, then serialize it (the property name is just an example)
@{ buildVersion = "$(Build.BuildNumber)" } | ConvertTo-Json | Out-File "file.json"
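Publishing that file as a build artifact can then be a normal publish step; for a YAML build it might look roughly like this (the artifact name and path are just examples):
steps:
- task: PublishBuildArtifacts@1
  inputs:
    # assumes file.json was written to the default working directory in the step above
    PathtoPublish: '$(System.DefaultWorkingDirectory)/file.json'
    ArtifactName: 'variables'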
Release Definition
# read the file back into an object (e.g. $data.buildVersion)
$data = Get-Content "file.json" -Raw | ConvertFrom-Json
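To make later tasks in the release see it as a variable again, a PowerShell step can read the file and re-emit it with a logging command; a rough sketch (the artifact path depends on where the artifact is downloaded and is only illustrative):
steps:
- powershell: |
    # read the published file back and re-create the pipeline variable
    $data = Get-Content "$(System.ArtifactsDirectory)/variables/file.json" -Raw | ConvertFrom-Json
    Write-Host "##vso[task.setvariable variable=buildVersion]$($data.buildVersion)"
  displayName: 'Restore variables from file.json'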
Also, we could pass the variable from build to release via the extension Variable Tools for Azure DevOps Services.
Steps:
Build Definition and result
Release Definition and result
In addition, I found a blog that saves the variable to a CSV file; you could also refer to Passing variables in VSTS, from Build to Release and between environments.
I have a task to deploy an ASP.NET Core React application to 2 different environments: development and production. Each of these environments should be configured separately.
I use Azure DevOps for CI/CD.
The ASP.NET project contains the following commands for building the application:
<Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
<Exec WorkingDirectory="$(SpaRoot)" Command="npm run build" />
I use ADAL for authorization, which is why I have to pass some secret variables that are different for Dev and Prod:
const adalConfig = {
  tenant: process.env.REACT_APP_TENANT,
  clientId: process.env.REACT_APP_CLIENT_ID,
  redirectUri: process.env.REACT_APP_REDIRECT_URI,
};
In Azure DevOps I set the parameters with this command:
echo ##vso[task.setvariable variable=REACT_APP_TENANT;isOutput=true]c00000-00ce-000-0f00-0000000004000
In Azure DevOps I have the following standard steps for the ASP.NET Core build:
.NET Core installer
Restore
Run command (to set the environment variables)
Build
Publish
Issues:
The environment variable is not set.
I don't even know how to build a separate artifact for production rather than for development.
Maybe you have already had the task of deploying an ASP.NET Core React app to 2 different environments? Or please advise whether I need to change the deployment strategy altogether.
The only solution I found is to use a .env file, but I would have to commit this file to Git in order to deploy it from master. And I still don't know how to use different files for dev and prod.
TLDR;
You have isOutput=true in your task.setvariable command. This only sets a variable in the pipeline engine, to be available to other steps as an output variable, but doesn't actually map to an environment variable. Remove isOutput and you will see the REACT_APP_TENANT env variable.
But only in the following steps - the env variable is not immediately accessible within the same pipeline step.
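For example, a minimal sketch (the tenant value is the one from the question; everything else is illustrative):
steps:
- script: echo "##vso[task.setvariable variable=REACT_APP_TENANT]c00000-00ce-000-0f00-0000000004000"
  displayName: 'Set the variable (no isOutput)'
# only the steps below this one see it, both as $(REACT_APP_TENANT)
# and as the REACT_APP_TENANT environment variable
- script: echo "REACT_APP_TENANT is now $(REACT_APP_TENANT)"
  displayName: 'Read it in a later step'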
You can define variables at pipeline level if you know their values upfront - that should simplify things. task.setvariable is more useful for dynamic variables.
If you need a different process (or a different set of variables) for different environments, I recommend using multistage YAML pipelines or classic Releases. Both allow you to set up different stages, each with its own set of variables.
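A rough multistage sketch of that idea (stage names, tenant values and the build command are placeholders):
stages:
- stage: Development
  variables:
    REACT_APP_TENANT: 'dev-tenant-id'    # placeholder
  jobs:
  - job: Build
    steps:
    - script: npm install && npm run build
      displayName: 'Build with dev settings'
      env:
        REACT_APP_TENANT: $(REACT_APP_TENANT)
- stage: Production
  variables:
    REACT_APP_TENANT: 'prod-tenant-id'   # placeholder
  jobs:
  - job: Build
    steps:
    - script: npm install && npm run build
      displayName: 'Build with prod settings'
      env:
        REACT_APP_TENANT: $(REACT_APP_TENANT)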
Long story
We need to distinguish two separate processes:
1. The deployment pipeline that's executed on a CI agent.
2. The web application, which may be hosted in many different ways: Azure Web Apps, self-hosting, docker/k8s, etc.
Doing echo ##vso[task.setvariable ...] sets the variable in the pipeline (1).
The place where the variable is read (like tenant: process.env.REACT_APP_TENANT) isn't as obvious. If it's Node.js server-side code, it'll be executed in (2). If it's part of some build script, it'll be read in (1).
React is tricky, because:
It behaves differently in development and release mode. In release mode, during the build phase, the whole client-side code is compiled down to a static JS file, so the env variables you set in your pipeline should work.
It cannot simply access any env variable (to protect you from accidentally exposing your server env variables in the client browser). If you are using create-react-app (which is what the ASP.NET React app template uses by default), you have to prefix env variables with REACT_APP_ to use them. If you are using Webpack directly, you'll need a plugin for this.
I'm using Drone 0.5. Our build process compiles code to generate an artifact that is deployed to an artifact repository. I need a reference to this artifact for use in later build steps.
Is there a way to pass arbitrary data between build steps? Maybe through environment variables?
I don't know about creating env variables during the build process and persisting them, but the file system (the build workspace) is definitely preserved between steps. So you could put whatever data you need into a file for the next step.
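For example, a 0.5-style .drone.yml sketch (the images, file name and artifact reference are arbitrary):
pipeline:
  build:
    image: alpine
    commands:
      # write the artifact reference into the shared workspace
      - echo "my-artifact-1.2.3.jar" > .artifact_ref
  deploy:
    image: alpine
    commands:
      # later steps run against the same workspace, so the file is still there
      - ARTIFACT=$(cat .artifact_ref)
      - echo "deploying $ARTIFACT"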
I've started using AS 7 after a migration and am trying to work out whether hot deployment works the same way as the console method of uploading applications.
If a hot deployment stays in the deployments folder, where do the applications "go" when they are loaded via the console (or the CLI)? Which method should I be using in an admin role? What happens if I use both?
If you use hot deployment your application stays in the "deployments" folder; if you deploy via the CLI it is stored in the "data" folder.
You can use both hot deployment and CLI deployment; the last one deployed is the current one.
Here is the documentation for the deploy command:
[standalone@localhost:9999 /] deploy --help

SYNOPSIS

deploy (file_path [--name=deployment_name] [--runtime_name=deployment_runtime_name] [--force | --disabled] |
        --name=deployment_name)
       [--server-groups=group_name (,group_name)* | --all-server-groups]
       [--headers={operation_header (;operation_header)*}]

DESCRIPTION

Deploys the application designated by the file_path, or enables an already existing
but disabled deployment in the repository designated by the name argument.
If executed without arguments, it will list all the existing deployments.

ARGUMENTS

file_path - the path to the application to deploy. Required in case the deployment
    doesn't exist in the repository. The path can be either absolute or relative
    to the current directory.

--name - the unique name of the deployment. If the file path argument is specified,
    the name argument is optional, with the file name being the default value.
    If the file path argument isn't specified then the command is supposed to
    enable an already existing but disabled deployment, and in this case the
    name argument is required.

--runtime_name - optional, the runtime name for the deployment.

--force - if the deployment with the specified name already exists, by default
    deploy will be aborted and the corresponding message will be printed.
    Switch --force (or -f) will force the replacement of the existing deployment
    with the one specified in the command arguments.

--disabled - indicates that the deployment has to be added to the repository disabled.

--server-groups - comma-separated list of server group names the deploy command
    should apply to. Either server-groups or all-server-groups is required in
    domain mode. This argument is not applicable in standalone mode.

--all-server-groups - indicates that deploy should apply to all the available
    server groups. Either server-groups or all-server-groups is required in
    domain mode. This argument is not applicable in standalone mode.

-l - in case none of the required arguments is specified, the command will print
    all of the existing deployments in the repository. The presence of the -l
    switch will make the existing deployments be printed one deployment per line,
    instead of in columns (the default).

--headers - a list of operation headers separated by a semicolon. For the list of
    supported headers, please refer to the domain management documentation or use
    tab-completion.
I believe the only way to have hot deployment is to use file system deployments, i.e. the deployment scanner. You can get some information about that in the application deployment documentation.
When you deploy through the console or the CLI, the deployment stays compressed and goes into the content directory. There's not much you can really do with its contents there, though.
For production it's advised not to use the deployment scanner. There are several ways to deploy your application, but the easiest tend to be the web console, the CLI or the Maven plug-in. There are Java APIs as well, or you could write a script to execute CLI commands.