This question is regarding Spinnaker. Within each Pipeline, you can define custom parameters. When a Pipeline is triggered, you can either use the default value or supply a new value for those parameters.
I assume I can create Stages within that Pipeline that use the values of those parameters when the Pipeline is triggered. However, I can't figure out how to access these values in any Stage of the Pipeline.
For example, I have a Pipeline "Test". I create a parameter "Version", in the configuration for "Test".
Creating a parameter
Then, I add a Pipeline Stage to execute a Jenkins job. The job I have selected has a parameter, "Version".
Using a parameter's value
When the Pipeline "Test" is triggered, I want it to use the value of the Pipeline parameter "Version" and supply it to the Jenkins job. I tried the following syntax for the Jenkins job's Version field: $Version, {{Version}}, #Version, ((Version)), (Version), {Version}, and more. Nothing translates into the value of the Pipeline parameter "Version" when the Pipeline is triggered. How do I do this?
On a related note, is there a way to use the Application name or Pipeline name in Pipeline Stages?
Parameters (and templated pipelines in general) are accessed via Spring Expression Language.
If your pipeline has a Version parameter and your Jenkins stage has a Version parameter, then in the Jenkins stage configuration you explicitly have to map the pipeline's Version to the Jenkins stage's Version using the value ${parameters.Version}.
Any pipeline parameter is accessible via the ${parameters.XXX} syntax.
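For example, with the setup above, setting the Jenkins stage's Version field to the value below makes Spinnaker substitute the pipeline parameter at execution time (parameter names are case-sensitive):
${parameters.Version}
The bracket form ${parameters['Version']} is equivalent and handy when a parameter name contains spaces. Regarding the related note: the execution context is exposed to expressions as well; if I recall correctly, something like ${execution.application} gives the application name, but verify the exact property names in the docs.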
Have a look at the pipeline expressions guide for more examples.
Related
I have a Synapse pipeline with 10 notebooks executed in sequence. These notebooks take various parameters, some of which are common to all or a few of the notebooks. Rather than define the value of these parameters for each notebook (which is repetitive) I wonder can I define them once at the pipeline level and pass them into each notebook that uses them?
So far I tried defining one of the parameters at the pipeline level, myparam, with a default value, and then in the notebook parameters I reference the pipeline parameter as @pipeline().parameter.myparam, which I thought would take the default value defined at the pipeline level - but it doesn't. Is what I'm trying to do even possible? Thanks in advance.
Yes, you can pass parameters to multiple notebooks in a Synapse pipeline.
As per this official document:
You can use parameters to pass external values into pipelines, datasets, linked services, and data flows. Once the parameter has been passed into the resource, it cannot be changed. By parameterizing resources, you can reuse them with different values each time. Parameters can be used individually or as a part of expressions. JSON values in the definition can be literal or expressions that are evaluated at runtime.
Below is an example of passing a parameter to a notebook, which might help you.
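As a minimal sketch (using the names from your question): define myparam on the pipeline's Parameters tab with its default value, then in each Notebook activity's settings, under Base parameters, add a parameter named myparam whose value is the expression:
@pipeline().parameters.myparam
Note that it is parameters (plural); a reference like @pipeline().parameter.myparam will not resolve. Inside each notebook, declare myparam in a cell marked as the parameters cell, so the value passed in from the pipeline overrides the notebook's default.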
I have a Node.js app running on ECS/Fargate. I want to set some environment variables, and from what I've read, this should be done in the Task definition. However, there does not seem to be any way to edit environment variables after the task is initially defined. When I view the task, they are not editable, and there does not seem to be any way to edit the task. Is there a way to do this?
Container solutions are built to be immutable, which means any form of change should force a new deployment. This leaves us with the option of retrieving the current Task Definition, updating its environment variables, and updating the Service with the new definition:
aws ecs describe-task-definition --task-definition my_task_def
This retrieves the ACTIVE Task Definition. From here you can update the environment variables and register a new Task Definition (note that read-only fields in the describe output, such as taskDefinitionArn, revision, and status, must be removed before re-registering):
aws ecs register-task-definition \
--cli-input-json file://<path_to_json_file>/task_def.json
Then update the service:
aws ecs update-service --service my-service --task-definition my_task_def
The service will then pick up the new ACTIVE Task Definition.
I used the CLI for illustration, but using an SDK like Boto3 makes handling the JSON much easier.
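As a minimal Boto3 sketch of the same flow (the task definition, cluster, service, and environment variable names are assumptions):
import boto3

ecs = boto3.client("ecs")

# Fetch the current ACTIVE task definition
task_def = ecs.describe_task_definition(taskDefinition="my_task_def")["taskDefinition"]

# Update the environment variables on the first container
task_def["containerDefinitions"][0]["environment"] = [
    {"name": "NODE_ENV", "value": "production"},  # assumed variable
]

# Drop read-only fields that register_task_definition does not accept
for field in ("taskDefinitionArn", "revision", "status", "requiresAttributes",
              "compatibilities", "registeredAt", "registeredBy", "deregisteredAt"):
    task_def.pop(field, None)

# Register the new revision and point the service at it
new_def = ecs.register_task_definition(**task_def)["taskDefinition"]
ecs.update_service(
    cluster="my-cluster",  # assumed cluster name
    service="my-service",
    taskDefinition=new_def["taskDefinitionArn"],
)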
I have a Drone file containing multiple pipelines that run in a sequence via dependencies.
In the first pipeline a value is generated that I would like to store as a variable and use in one of the other pipelines.
How would I go about doing this? I've seen that variables can be passed between steps via a file, but from what I've seen and tried, this isn't possible between pipelines.
Thanks
The way my ADF setup currently works is that I have multiple pipelines, each containing at least one activity. Then I have one big pipeline that chains these pipelines together.
However, now in the big "master" pipeline, I would like to use the output of an activity from one pipeline and then pass it to another pipeline. All of this orchestrated from the "master" pipeline.
My "master" pipeline would look something like this:
What I have tried is adding a parameter to "Execute Pipeline2" and passing:
@activity('Execute Pipeline1').output.pipeline.runId.output.runOutput
@activity('Execute Pipeline1').output.pipelineRunId.output.runOutput
@activity('Execute Pipeline1').output.runOutput
How would one go about doing this?
Unfortunately, we don't have a way to pass the output of an activity across pipelines. Right now, pipelines don't have outputs (only activities do).
We have a work item that will allow a user to choose what the output of a pipeline should be (imagine a pipeline with 40 activities, where the user can choose the output of activity 3 as the pipeline output). However, this work item is in very early stages, so don't expect to see this soon.
For now, the only way is to save the output you want to storage (Blob, for example), then read it and pass it to the other pipeline. Another method is a Web activity that gets the pipeline run (passing the run ID), from which you get the output using the ADF SDK or REST API and pass it to the next Execute Pipeline activity.
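As a rough sketch of the SDK route (Python, azure-mgmt-datafactory; the subscription, resource group, factory, and activity names are placeholders):
from datetime import datetime, timedelta
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Query all activity runs belonging to the given pipeline run
runs = client.activity_runs.query_by_pipeline_run(
    "<resource-group>", "<factory-name>", "<pipeline-run-id>",
    RunFilterParameters(
        last_updated_after=datetime.utcnow() - timedelta(days=1),
        last_updated_before=datetime.utcnow(),
    ),
)

# Pick out the output of the activity you care about
for activity_run in runs.value:
    if activity_run.activity_name == "Execute Pipeline1":  # placeholder name
        print(activity_run.output)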
I want to dynamically change a connection string in a Custom task and have that reflected in the ADF pipeline. Is there a way I can set the pipeline parameter value in the Custom Code task and make my connection string parameterized in the ADF pipeline?
Thanks
This feature is now supported by data factory, read more here: https://learn.microsoft.com/en-us/azure/data-factory/parameterize-linked-services
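For example, a parameterized linked service along the lines of that document declares its own parameters and references them with @{linkedService().<name>} in the connection string (names here are illustrative):
{
    "name": "AzureSqlLinkedService",
    "properties": {
        "type": "AzureSqlDatabase",
        "parameters": {
            "DBName": { "type": "String" }
        },
        "typeProperties": {
            "connectionString": "Server=tcp:myserver.database.windows.net,1433;Database=@{linkedService().DBName}"
        }
    }
}
A dataset or activity that uses this linked service then supplies DBName, and that value can in turn come from a pipeline parameter.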
Always think of the context where it will be running: for example, if you reference a pipeline parameter from the linked service, you will receive a warning; but if at runtime there is a pipeline that supplies what you configured in the linked service, you will have no problems.
Hope this helped!