How to access a parameter value in the Configuration stage of a Spinnaker pipeline? - spinnaker

I've added a parameter, SERVER_PROPERTY_NAME, to a Spinnaker pipeline and have a stage conditional on that parameter being set. The condition is ${!SERVER_PROPERTY_NAME.isEmpty()} but it's evaluating as false even when the parameter is set. What's the right way to access the value of the parameter?

Use ${!parameters.SERVER_PROPERTY_NAME.isEmpty()}. Pipeline parameters are exposed to pipeline expressions under the parameters map, so they need to be referenced as parameters.<name> rather than by the bare parameter name.
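For context, this is roughly where the expression ends up in the pipeline JSON, as far as I recall: the stage's stageEnabled block, which is what the "Conditional on Expression" field in the UI populates. Everything except the expression itself is boilerplate here:

"stageEnabled": {
  "type": "expression",
  "expression": "!parameters.SERVER_PROPERTY_NAME.isEmpty()"
}

trigger.parameters.SERVER_PROPERTY_NAME should also resolve to the same value if you prefer to read it off the trigger.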

Related

Use `needs` keyword in GitLab CI with variable stage name

My pipeline has two different contexts, so to speak. If a developer is working on a branch other than main, a job called scan_sandbox is created in the merge request pipeline to scan the Dockerfile the developer is currently working on.
When the branch is merged into main, a scan_production job is created instead, implying that the image is going to be pushed to the registry and later used in the production environment.
My problem is dealing with this variable job name, either scan_sandbox or scan_production, in the needs statement, in order to fetch and publish the scan results. I've tried...
needs: ["scan_production", "scan_sandbox"]
But it returns an error, since the two jobs are never declared together in the same pipeline. Also tried...
needs: ["container_scan"]
Which is the name of the stage both scan jobs run in, but GitLab CI doesn't interpret it that way either, since needs takes job names rather than stage names.
Does anyone have any ideas?
You can mark a dependency as optional in GitLab CI YAML. If it's optional, GitLab won't fail the pipeline when the needed job was not created. This was added specifically to handle jobs gated by rules, only, or except conditions.
So you can specify
needs:
  - job: scan_sandbox
    optional: true
  - job: scan_production
    optional: true
Notes:
This works in GitLab 13.9 and later.
Doc link: https://docs.gitlab.com/ee/ci/yaml/#needsoptional
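For a fuller picture, here is a minimal sketch of how the pieces could fit together, with both scan jobs gated by rules and a downstream job using optional needs. The report stage, scripts, and branch rules here are illustrative, not taken from the original pipeline:

stages:
  - container_scan
  - report

scan_sandbox:
  stage: container_scan
  script: ./scan.sh
  rules:
    - if: '$CI_COMMIT_BRANCH != "main"'

scan_production:
  stage: container_scan
  script: ./scan.sh
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'

publish_results:
  stage: report
  script: ./publish_results.sh
  needs:
    - job: scan_sandbox
      optional: true
    - job: scan_production
      optional: true

Whichever scan job actually ran is the one publish_results waits for and downloads artifacts from; the missing one is simply ignored.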

Can one build pipeline send a value as a parameter to the next pipeline it triggers in Azure DevOps

I have a build pipeline, let's say A, that stores a file (this file holds a variable value that is set within that build pipeline) inside a folder. Pipeline A triggers another Pipeline B that publishes the folder as an artifact using the Publish Artifact task. But the folder name is dynamic, as it is fetched from that file within Pipeline A. I need to pass the file with that variable value from Pipeline A to Pipeline B while triggering it. Is there any way to do this in Azure DevOps without using YAML pipelines?
I have a fairly complex set of pipelines that I set up using the Classic mode, and converting them all to YAML would take a long time, so I would like to know if there is any workaround.
There are a few workarounds:
Create a variable group, and during Pipeline A set the variable's value there with the REST API; Pipeline B then uses this variable.
During Pipeline A, update the Pipeline B definition with the new value via the REST API.
In Pipeline A, trigger Pipeline B with the Trigger Build Task, where you can pass the variable value to Pipeline B in the "Build Parameters" field.
I don't think there's a clean way to do this if you need to trigger the build by adding Pipeline A under the triggers section of Pipeline B.
Consider triggering Pipeline B when Pipeline A completes using the REST API. That way, you can have your 'file path' as a variable on Pipeline B and pass it in the parameters collection.
Something like:
POST https://dev.azure.com/{organization}/{project}/_apis/build/builds?ignoreWarnings={ignoreWarnings}&checkInTicket={checkInTicket}&sourceBuildId={sourceBuildId}&api-version=5.0
{
  "definition": {
    "id": 1234
  },
  "parameters": "{\"fileName\":\"yourfilename\"}"
}
fileName would be the name of your variable in Pipeline B
Have a look at the Builds - Queue documentation for more info.
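As a rough usage sketch, Pipeline A could queue Pipeline B with that body from a PowerShell script step. The organization, project, definition id (1234), and file name below are placeholders, and "Allow scripts to access the OAuth token" has to be enabled for $(System.AccessToken) to be available:

# Queue Pipeline B from a script step in Pipeline A (placeholders throughout).
$body = @{
    definition = @{ id = 1234 }                  # Pipeline B's definition id
    parameters = '{"fileName": "yourfilename"}'  # surfaced as variables in Pipeline B
} | ConvertTo-Json

Invoke-RestMethod `
    -Uri "https://dev.azure.com/{organization}/{project}/_apis/build/builds?api-version=5.0" `
    -Method Post `
    -ContentType "application/json" `
    -Headers @{ Authorization = "Bearer $(System.AccessToken)" } `
    -Body $body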

Unable to create VM from existing template using powershell

I am getting the below error message when creating a VM from existing parameter.json and template.json files.
Code : InvalidDeploymentParameterValue
Message : The value of deployment parameter 'publicIPAddresses_azuse2qaautovm2_ip_name' is null. Please specify the
value or use the parameter reference. See https://aka.ms/arm-deploy/#parameter-file for details.
You need to provide a value for the parameter publicIPAddresses_azuse2qaautovm2_ip_name in your parameters file, or give that parameter a default value in the template.
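For example, a sketch of the parameters file entry; the parameter name comes from the error message, while the value itself is a placeholder you'd replace with your actual public IP resource name:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "publicIPAddresses_azuse2qaautovm2_ip_name": {
      "value": "azuse2qaautovm2-ip"
    }
  }
}

Alternatively, give the parameter a defaultValue in the parameters section of template.json so the deployment no longer requires it to be supplied.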

Urban Code deploy: How to use ${p:stepName/propName}?

I have a Groovy script step in my process, and this script sets an output property. I want to use this property value to set a property on a subsequent step.
Or
Simply use this property value in another Groovy script step in my process.
The documentation says I need to use ${p:stepName/propName}. But how do I use it? Can anyone give me an example? Assume the process is the following:
1) A Groovy step named 'Run Groovy Step', which sets the value of the property 'CityName' to 'London'.
2) A Groovy step that wants to use the value of 'CityName'.
How do I use ${p:stepName/propName}?
Is it ${p:Run Groovy Step/CityName}?
Yes, that's basically it. You set the output property either with Groovy or in the post-processing script of a step. Then you access it with ${p:stepName/propName} or with properties.get(stepName/propName). Your code ${p:Run Groovy Step/CityName} should work.
For an example:
http://ibm.com/support/knowledgecenter/en/SS4GSP_6.2.2/com.ibm.udeploy.doc/topics/output_properties.html
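For instance, a minimal sketch of the second step's script body, assuming the first step is literally named 'Run Groovy Step' and has already set CityName, and assuming (as is usual for plugin step fields) that UrbanCode resolves the ${p:...} reference before the Groovy script runs:

// The ${p:...} token below is replaced by UrbanCode with the output property
// value (e.g. 'London') before Groovy ever sees this script.
def cityName = '${p:Run Groovy Step/CityName}'
println "CityName from the previous step: " + cityName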

Salt: Pass parameters to custom module executed inside a pillar

I am coding a custom module that is executed inside a pillar (to set a pillar variable) but I need it to retrieve an external parameter.
The idea is to retrieve a parameter from the master server. For example, if I execute
salt 'myminion' state.highstate
the custom module will be called and it should retrieve a parameter to generate the pillar.
I was looking into options like:
Using environment variables: it doesn't work, as it seems that execution modules do not have access to the shell environment of the salt command.
Using command line parameters: I don't know if this is even possible, as I couldn't find any documentation.
Using an additional pillar on the command line: it doesn't work, as the module is executed during pillar evaluation, so it does not have access to __pillar__ or __salt__['pillar.get'] (both are empty).
Reading from stdin: does not work from a custom module.
Using a file to read the info: I didn't even try this because it is not an option for me for security reasons; I don't want the information stored.
Any ideas whether or how this is possible?
Thanks a lot!
By:
a custom module that is executed inside a pillar (to set a pillar variable)
do you mean an external pillar?
If so, passing it parameters is covered in that document:
You can pass a single argument, a list of arguments or a dictionary of arguments to your pillar:
ext_pillar:
  - example_a: some argument
  - example_b:
      - argumentA
      - argumentB
  - example_c:
      keyA: valueA
      keyB: valueB
External pillars merge their data into the pillar dictionary, and are "custom modules", so I think that would fit your case.
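To illustrate, here is a rough sketch of what the example_a module above could look like; the file location and the returned key name are assumptions on my part:

# Saved as pillar/example_a.py under the master's extension_modules directory.
def ext_pillar(minion_id, pillar, argument):
    """Called by Salt as ext_pillar(minion_id, compiled_pillar, <args>).

    A single argument from the master config arrives as one positional
    argument, a list arrives as *args, and a dict arrives as **kwargs.
    """
    # Whatever dict is returned gets merged into the minion's pillar;
    # the key name here is purely illustrative.
    return {"my_parameter": argument}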
If that's not what you're trying to do, can you update the question? Where is this parameter coming from? Is it different depending on the minion (minion_id is always passed to an external pillar)?
(edit) Adding a couple of links about safely storing secrets:
using vault
dotgpg
blackbox