What are drone.io 0.8.5 plugin/gcr secrets' acceptable values?

I'm having trouble pushing to GCR with the following:
gcr:
  image: plugins/gcr
  registry: us.gcr.io
  repo: dev-221608/api
  tags:
    - ${DRONE_BRANCH}
    - ${DRONE_COMMIT_SHA}
    - ${DRONE_BUILD_NUMBER}
  dockerfile: src/main/docker/Dockerfile
  secrets: [GOOGLE_CREDENTIALS]
  when:
    branch: [prod]
GOOGLE_CREDENTIALS works, but if the secret is named, say, GOOGLE_CREDENTIALS_DEV, it is not picked up properly. GCR_JSON_KEY works fine as well. I recall reading legacy documentation that spelled out the acceptable variable names, listing GOOGLE_CREDENTIALS and GCR_JSON_KEY among other variants, but as of version 1 the docs have been updated and that information was omitted.
So the question is: is the plugin capable of accepting any variable name, or does it expect specific variable names, and if so, what are they?

The Drone GCR plugin accepts the credentials in a secret named PLUGIN_JSON_KEY, GCR_JSON_KEY, GOOGLE_CREDENTIALS, or TOKEN (see the plugin source code).
If you stored the credentials in Drone as GOOGLE_CREDENTIALS_DEV, you can rename it in the .drone.yml file like this:
...
secrets:
  - source: GOOGLE_CREDENTIALS_DEV
    target: GOOGLE_CREDENTIALS
...
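
Putting that together with the step from the question, the whole step would look roughly like this (same image, repo, tags, and trigger as in the question; only the secrets section changes):

gcr:
  image: plugins/gcr
  registry: us.gcr.io
  repo: dev-221608/api
  tags:
    - ${DRONE_BRANCH}
    - ${DRONE_COMMIT_SHA}
    - ${DRONE_BUILD_NUMBER}
  dockerfile: src/main/docker/Dockerfile
  secrets:
    - source: GOOGLE_CREDENTIALS_DEV
      target: GOOGLE_CREDENTIALS
  when:
    branch: [prod]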

Related

Bitbucket Pipelines: set variable value at runtime

For my deployment I would like to be able to set the container tag at runtime. For example, I have 2 containers:
container-1:1.0.2
container-2:0.1.0
I have a manually triggered deployment step. I would like to be able to do something like this in my code:
- helm install ${container_name}_chart --version=${helm_version} --set container_version=${container_version}
Where container_name, helm_version, and container_version are set by the user at runtime.
At runtime the user can enter (or even better, if possible select from a list) the container/app name and version.
Is this possible?
It turns out you can use runtime parameters with custom pipelines only.
https://support.atlassian.com/bitbucket-cloud/docs/pipeline-triggers/
pipelines:
  custom:
    custom-name-and-region: # name of this pipeline
      - variables: # list variable names under here
          - name: Username
          - name: Region
      - step:
          script:
            - echo "User name is $Username"
            - echo "and they are in $Region"
Also, there is no drop-down functionality.
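
Applied to the deployment from the question, a custom pipeline could look roughly like the following sketch (the pipeline name deploy-container is invented; the helm command is taken from the question, and the user types the three values when triggering the pipeline):

pipelines:
  custom:
    deploy-container:
      - variables:
          - name: container_name
          - name: helm_version
          - name: container_version
      - step:
          script:
            - helm install ${container_name}_chart --version=${helm_version} --set container_version=${container_version}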

How to mix env variables with output variables in the environment declaration

So I have an env.yml file which lets me have different variables for each stage:
provider:
  name: aws
  environment: ${file(env.yml):${opt:stage}}
I also need to share some output variables to Lambda which are declared like so:
Outputs:
  UserPoolId:
    Value:
      Ref: QNABUserPool
    Export:
      Name: ${self:provider.stage}-UserPoolId
  UserPoolClientId:
    Value:
      Ref: QNABUserPoolClient
    Export:
      Name: ${self:provider.stage}-UserPoolClientId
I've seen I can do this by adding the following to my provider, but it conflicts with my env.yml:
environment:
  COGNITO_USER_POOL_ID: ${cf:${self:service}-${self:provider.stage}.UserPoolId}
  COGNITO_USER_POOL_CLIENT_ID: ${cf:${self:service}-${self:provider.stage}.UserPoolClientId}
I tried putting these into the env.yml but that didn't work:
Trying to request a non exported variable from CloudFormation. Stack name: "XXXX-alpha" Requested variable: "UserPoolId".
I tried using custom instead of environment and it deployed but the Lambda functions no longer had access to the variables.
So how can I mix these two together?
Thank you so much!
You can reference the Output values from your current service using the Fn::ImportValue function.
The Serverless framework adds sls-[service_name] to the export name, but you can find the exact names in the Outputs area of the CloudFormation stack.
Navigate to CloudFormation --> Stacks --> [select your service] --> Outputs (tab). There you'll see a column called Export name.
Use that export name for the import.
For example, say you have a WebSocket service and you need the service endpoint. The Outputs tab will show an export like sls-wss-[your_service_name]-[stage]-ServiceEndpointWebsocket. You can then import that into an environment variable:
Environment:
  Variables:
    ENDPOINT:
      Fn::ImportValue: sls-wss-[your_service_name]-${opt:stage}-ServiceEndpointWebsocket

What is "hellostepfunc1" in the serverless documenation for setup AWS stepfunctions?

In this documentation from the Serverless website - How to manage your AWS Step Functions with Serverless - and the GitHub repo serverless-step-functions, we can find the word hellostepfunc1: in the serverless.yml file. I don't understand what it is, and I can't find any reference to it, even after the state machine was created in AWS.
If I delete it I get the following error:
Cannot use 'in' operator to search for 'role' in myStateMachine
But if I change its name to someName, for example, there is no error and the state machine works fine.
I assume it is only an identifier, but I'm not sure.
Where can I find a reference to it?
This is specific to the library you are using and how it names the state machine being created, based on whether the name: field is provided under hellostepfunc1: or not.
Have a look at the test cases here and here to understand better.
In short, a YAML like

stateMachines:
  hellostepfunc1:
    definition:
      Comment: 'comment 1'
      .....

gives the state machine a name like hellostepfunc1StepFunctionsStateMachine, because no name was specified.
Whereas for a YAML like

stateMachines:
  hellostepfunc1:
    name: 'alpha'
    definition:
      Comment: 'comment 1'
      .....

the name of the state machine is alpha, because a name was specified.

How to override environment variables in jenkins_job_builder at job level?

I am trying to find a way to inherit/override environment variables in Jenkins jobs defined via jenkins-job-builder (JJB).
Here is one template that does not work:
#!/usr/bin/env jenkins-jobs test
- defaults: &sample_defaults
    name: sample_defaults

- job-template:
    name: 'sample-{product_version}'
    project-type: pipeline
    dsl: ''
    parameters:
      - string:
          name: FOO
          default: some-foo-value-defined-at-template-level
      - string:
          name: BAR
          default: me-bar

- project:
    defaults: sample_defaults
    name: sample-{product_version}
    parameters:
      - string:
          name: FOO
          value: value-defined-at-project-level
    jobs:
      - 'sample-{product_version}':
          product_version:
            - '1.0':
                parameters:
                  - string:
                      name: FOO
                      value: value-defined-at-job-level-1
            - '2.0':
                # this job should have:
                # FOO=value-defined-at-project-level
                # BAR=me-bar
Please note that it is key to be able to override these parameters at the job or project level rather than at the template level.
Requirements:
* be able to add as many environment variables like this as needed, without having to add one JJB variable for each of them
* the user should not be forced to define these at the template or job level
* these variables need to end up being exposed as environment variables at runtime, for both pipeline and freestyle jobs
* the syntax is flexible, but a dictionary approach would be highly appreciated, like:

vars:
  FOO: xxx
  BAR: yyy
The first thing to understand is how JJB prioritizes where it will pull variables in from, from highest to lowest precedence:
1. job-group section definition
2. project section definition
3. job-template variable definition
4. defaults definition
(This is not an exhaustive list, but it covers the features I use.)
From this list we can immediately see that if we want job-template variables to be overridable, then using the JJB defaults configuration is useless, as it has the lowest precedence when JJB is deciding where to pull from.
On the other side of the spectrum, job-groups have the highest precedence, which unfortunately means that if you define a variable in a job-group with the intention of overriding it at the project level, you are out of luck. For this reason I avoid setting variables in job-groups unless I want to enforce a setting for a set of jobs.
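
For example, a minimal sketch of that last point (the group name enforced-jobs and the branch values are invented for illustration): a variable set in a job-group wins over the same variable set at the project level.

- job-group:
    name: enforced-jobs
    branch: production        # set at job-group level: highest precedence
    jobs:
      - '{project-name}-verify'

- project:
    name: foo
    branch: master            # ignored for the verify job; the job-group value wins
    jobs:
      - enforced-jobs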
Declaring variable defaults
With that out of the way there are 2 ways JJB allows us to define defaults for a parameter in a job-template:
Method 1) Using {var|default}
In this method we can define the default along with the definition of the variable. For example:
- job-template:
    name: '{project-name}-verify'
    parameters:
      - string:
          name: BRANCH
          default: {branch|master}
However, this method falls apart if you need to use the same JJB variable in more than one place, as you will then have multiple places defining the default value for the template. For example:
- job-template:
    name: '{project-name}-verify'
    parameters:
      - string:
          name: BRANCH
          default: {branch|master}
    scm:
      - git:
          refspec: 'refs/heads/{branch|master}'
As you can see, we now have 2 places where we are declaring {branch|master}, which is not ideal.
Method 2) Defining the default variable value in the job-template itself
With this method we declare the default value of the variable in the job-template itself just once. I like to section off my job-templates like this:
- job-template:
    name: '{project-name}-verify'

    #####################
    # Variable Defaults #
    #####################
    branch: master

    #####################
    # Job Configuration #
    #####################
    parameters:
      - string:
          name: BRANCH
          default: {branch}
    scm:
      - git:
          refspec: 'refs/heads/{branch}'
In this case there are still 2 references to {branch} in the job-template. However, we also provide the default value for the {branch} variable at the top of the job-template, just once. This will be the value that the job takes on if it is not passed in by a project using the template.
Overriding job-template variables
When a project now wants to use a job-template I like to use one of 2 methods depending on the situation.
- project:
    name: foo
    jobs:
      - '{project-name}-merge'
      - '{project-name}-verify'
    branch: master
This is the standard way that most folks use, and it will set branch: master for every job-template in the list. However, sometimes you may want to provide an alternative value for only 1 job in the list. In this case the more specific declaration takes precedence.
- project:
    name: foo
    jobs:
      - '{project-name}-merge':
          branch: production
      - '{project-name}-verify'
    branch: master
In this case the verify job will get the value "master", but the merge job will instead get the branch value "production".

Salt: Pass parameters to custom module executed inside a pillar

I am coding a custom module that is executed inside a pillar (to set a pillar variable) but I need it to retrieve an external parameter.
The idea is to retrieve a parameter from the master server. For example, if I execute
salt 'myminion' state.highstate
the custom module will be called and it should retrieve a parameter to generate the pillar.
I was looking into options like:
* Using environment variables: it doesn't work, as execution modules do not have access to the shell environment of the salt command.
* Using command line parameters: I don't know if it is even possible, as I couldn't find any documentation.
* Using an additional pillar on the command line: it doesn't work, as the execution module runs during pillar evaluation, so it does not have access to __pillar__ or __salt__['pillar.get'] (both are empty).
* Reading from stdin: does not work from a custom module.
* Using a file to read the info: I didn't even try this, because it is not an option for me for security reasons. I don't want the information stored.
Any ideas on whether, and how, this is possible?
Thanks a lot!
By "a custom module that is executed inside a pillar (to set a pillar variable)", do you mean an external pillar?
If so, passing it parameters is covered in that document:
You can pass a single argument, a list of arguments or a dictionary of arguments to your pillar:
ext_pillar:
  - example_a: some argument
  - example_b:
      - argumentA
      - argumentB
  - example_c:
      keyA: valueA
      keyB: valueB
External pillars merge their data into the pillar dictionary, and are "custom modules", so I think that would fit your case.
If that's not what you're trying to do, can you update the question? Where is this parameter coming from? Is it different depending on the minion (minion_id is always passed to an external pillar)?
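
For completeness, here is a minimal sketch of what the module side could look like for the dictionary form (example_c) above. The file name, argument names, and returned keys are placeholders; only the leading minion_id and pillar parameters are part of the standard ext_pillar interface.

# example_c.py -- hypothetical external pillar module matching the ext_pillar key above
def ext_pillar(minion_id, pillar, keyA=None, keyB=None):
    # Salt passes the minion ID and the pillar compiled so far as the first
    # two arguments; the dictionary configured under ext_pillar arrives as
    # keyword arguments. The returned dictionary is merged into the pillar.
    return {'from_master': {'keyA': keyA, 'keyB': keyB, 'minion': minion_id}}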
(edit) Adding a couple of links about safely storing secrets:
using vault
dotgpg
blackbox