How to override environment variables in jenkins_job_builder at job level? - jenkins-job-builder

I am trying to find a way to inherit/override environment variables in jenkins jobs defined via jenkins-job-builder (jjb).
Here is one template that does not work:
#!/usr/bin/env jenkins-jobs test
- defaults: &sample_defaults
    name: sample_defaults

- job-template:
    name: 'sample-{product_version}'
    project-type: pipeline
    dsl: ''
    parameters:
      - string:
          name: FOO
          default: some-foo-value-defined-at-template-level
      - string:
          name: BAR
          default: me-bar

- project:
    defaults: sample_defaults
    name: sample-{product_version}
    parameters:
      - string:
          name: FOO
          value: value-defined-at-project-level
    jobs:
      - 'sample-{product_version}':
          product_version:
            - '1.0':
                parameters:
                  - string:
                      name: FOO
                      value: value-defined-at-job-level-1
            - '2.0':
                # this job should have:
                #   FOO=value-defined-at-project-level
                #   BAR=me-bar
Please note that the key requirement is being able to override these parameters at the job or project level rather than in the template.
Requirements
* be able to add any number of environment variables like this without having to add one JJB variable for each of them
* the user should not be forced to define these at the template or job level
* these variables need to end up being exposed as environment variables at runtime, for both pipeline and freestyle jobs
* the syntax is flexible, but a dictionary approach would be highly appreciated, like:
  vars:
    FOO: xxx
    BAR: yyy

The first thing to understand is how JJB prioritizes the places it pulls variables from.
1. job-group section definition
2. project section definition
3. job-template variable definition
4. defaults definition
(This is not an exhaustive list, but it covers the features I use; items are ordered from highest to lowest precedence.)
From this list we can immediately see that if we want job-template variables to be overridable, then the JJB defaults configuration is useless for that purpose, as it has the lowest precedence when JJB decides where to pull a value from.
On the other side of the spectrum, job-groups have the highest precedence. Unfortunately, this means that if you define a variable in a job-group with the intention of overriding it at the project level, you are out of luck. For this reason I avoid setting variables in job-groups unless I want to enforce a setting for a set of jobs.
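As a rough mental model (a sketch, not JJB's actual implementation), you can think of this precedence as a series of dictionary merges in which later, higher-precedence sources win:

```python
# Rough model of JJB variable resolution (illustrative only, not JJB's code).
# Merge from lowest to highest precedence so higher-precedence sources win.
defaults = {"branch": "master", "timeout": "30"}   # lowest precedence
job_template = {"branch": "stable"}
project = {"branch": "develop"}
job_group = {}                                     # highest precedence (empty here)

merged = {**defaults, **job_template, **project, **job_group}
print(merged["branch"])  # → develop (the project-level value wins)
```

Note how the defaults value for branch survives only if no higher-precedence source defines it, which is exactly why defaults cannot be used to override anything.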
Declaring variable defaults
With that out of the way there are 2 ways JJB allows us to define defaults for a parameter in a job-template:
Method 1) Using {var|default}
In this method we can define the default along with the definition of the variable. For example:
- job-template:
    name: '{project-name}-verify'
    parameters:
      - string:
          name: BRANCH
          default: '{branch|master}'
However, this method falls apart if you need to use the same JJB variable in more than one place, as you will then have multiple places defining the default value for the template. For example:
- job-template:
    name: '{project-name}-verify'
    parameters:
      - string:
          name: BRANCH
          default: '{branch|master}'
    scm:
      - git:
          refspec: 'refs/heads/{branch|master}'
As you can see, we now have two places where we declare {branch|master}, which is not ideal.
Method 2) Defining the default variable value in the job-template itself
With this method we declare the default value of the variable in the job-template itself just once. I like to section off my job-templates like this:
- job-template:
    name: '{project-name}-verify'

    #####################
    # Variable Defaults #
    #####################
    branch: master

    #####################
    # Job Configuration #
    #####################
    parameters:
      - string:
          name: BRANCH
          default: '{branch}'
    scm:
      - git:
          refspec: 'refs/heads/{branch}'
In this case there are still two {branch} references in the job-template, but we now declare the default value for the {branch} variable just once, at the top of the template. This is the value the job takes on if a project using the template does not pass one in.
Overriding job-templates variables
When a project now wants to use a job-template I like to use one of 2 methods depending on the situation.
- project:
    name: foo
    jobs:
      - '{project-name}-merge'
      - '{project-name}-verify'
    branch: master
This is the standard way most folks use: it sets branch: master for every job-template in the list. However, sometimes you may want to provide an alternative value for only one job in the list. In that case, the more specific declaration takes precedence.
- project:
    name: foo
    jobs:
      - '{project-name}-merge':
          branch: production
      - '{project-name}-verify'
    branch: master
In this case the verify job will get the value "master", but the merge job will instead get the branch value "production".

Related

Problem passing user defined variables (JMeter Script)

I don't know how to pass User Defined Variables (from a JMeter .jmx script) via jenkins-taurus.yml (the Taurus/BlazeMeter configuration file).
It keeps pushing the fixed variables:
[1]: https://i.stack.imgur.com/igIK3.png
I need these fields (User Defined Variables) to be blank, and the info inside them to be pushed from the Taurus configuration file:
As you can see, I'm trying to pass the parameters through Taurus configuration file (.yml)
[2]: https://i.stack.imgur.com/kMpRx.png
I need to know how to pass these variables inside the Taurus script:
should I use user.{userDefinedParametersHere} or is there another kind of syntax?
This is necessary because the server URL and login/password could be changed easily this way.
You're using the incorrect keyword: if you want to populate the User Defined Variables via Taurus, you should use variables, not properties.
---
execution:
  - scenario:
      variables:
        foo: bar
        baz: qux
      script: test.jmx
It will create another instance of User Defined Variables called "Variables from Taurus".
If you additionally need to disable all existing User Defined Variables instances you could do something like:
---
execution:
  - scenario:
      variables:
        foo: bar
        baz: qux
      script: test.jmx
      # if you want to additionally disable User Defined Variables:
      modifications:
        disable: # Names of the tree elements to disable
          - User Defined Variables
If you have defined your variables at the Test Plan level, don't worry: just override them via Taurus and the script will use the "new" values (the ones you supply via the variables keyword).

BitBucket pipelines, set variable value at runtime

For my deployment I would like to be able to set the container tag at runtime. For example.
I have 2 containers:
container-1:1.0.2
container-2:0.1.0
I have a manually triggered deployment step. I would like to be able to do something like this in my code:
- helm install ${container_name}_chart --version=${helm_version} --set container_version=${container_version}
Where container_name, helm_version, and container_version are set by the user at runtime.
At runtime the user can enter (or even better, if possible select from a list) the container/app name and version.
Is this possible?
Turns out you can use runtime parameters with custom pipelines only.
https://support.atlassian.com/bitbucket-cloud/docs/pipeline-triggers/
pipelines:
  custom:
    custom-name-and-region: # name of this pipeline
      - variables: # list variable names under here
          - name: Username
          - name: Region
      - step:
          script:
            - echo "User name is $Username"
            - echo "and they are in $Region"
Also, there is no drop down functionality.

How to mix env variables with output variables in the environment declaration

So I have a env.yml file which lets me have a different variables for each stage:
provider:
  name: aws
  environment: ${file(env.yml):${opt:stage}}
I also need to share some output variables to Lambda which are declared like so:
Outputs:
  UserPoolId:
    Value:
      Ref: QNABUserPool
    Export:
      Name: ${self:provider.stage}-UserPoolId
  UserPoolClientId:
    Value:
      Ref: QNABUserPoolClient
    Export:
      Name: ${self:provider.stage}-UserPoolClientId
I've seen that I can do this by adding the following to my provider, but it conflicts with my env.yml:
environment:
  COGNITO_USER_POOL_ID: ${cf:${self:service}-${self:provider.stage}.UserPoolId}
  COGNITO_USER_POOL_CLIENT_ID: ${cf:${self:service}-${self:provider.stage}.UserPoolClientId}
I tried putting these into the env.yml but that didn't work:
Trying to request a non exported variable from CloudFormation. Stack name: "XXXX-alpha" Requested variable: "UserPoolId".
I tried using custom instead of environment and it deployed but the Lambda functions no longer had access to the variables.
So how can I mix these two together?
Thank you so much!
You can reference the Output values from your current service using the Fn::ImportValue function.
The Serverless Framework prefixes the export name with sls-[service_name], and you can find the exports in the Outputs area of the CloudFormation stack.
Navigate to CloudFormation --> Stacks --> [select your service] --> Outputs (tab). There you'll see a column called Export name; use that name for the import.
For example, if you have a WebSocket service and you need the service endpoint, the tab will show an export like sls-wss-[your_service_name]-[stage]-ServiceEndpointWebsocket. You can then import it into an environment variable:
Environment:
  Variables:
    ENDPOINT:
      Fn::ImportValue: sls-wss-[your_service_name]-${opt:stage}-ServiceEndpointWebsocket
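Alternatively, to keep env.yml for stage-specific variables while still importing the pool IDs, one option is a function-level environment block (a sketch; it relies on the Serverless Framework merging function-level environment entries on top of the provider-level map, and the function name hello is hypothetical):

```yaml
provider:
  name: aws
  # stage-specific variables still come from env.yml
  environment: ${file(env.yml):${opt:stage}}

functions:
  hello:
    handler: handler.hello
    # merged on top of provider.environment for this function only
    environment:
      COGNITO_USER_POOL_ID:
        Fn::ImportValue: ${self:provider.stage}-UserPoolId
      COGNITO_USER_POOL_CLIENT_ID:
        Fn::ImportValue: ${self:provider.stage}-UserPoolClientId
```

This avoids touching the provider-level environment map at all, so there is no conflict with env.yml.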

What are drone.io 0.8.5 plugin/gcr secrets' acceptable values?

I'm having trouble pushing to gcr with the following
gcr:
  image: plugins/gcr
  registry: us.gcr.io
  repo: dev-221608/api
  tags:
    - ${DRONE_BRANCH}
    - ${DRONE_COMMIT_SHA}
    - ${DRONE_BUILD_NUMBER}
  dockerfile: src/main/docker/Dockerfile
  secrets: [GOOGLE_CREDENTIALS]
  when:
    branch: [prod]
...where GOOGLE_CREDENTIALS works, but a secret named, say, GOOGLE_CREDENTIALS_DEV is not properly picked up. GCR_JSON_KEY works fine. I recall legacy documentation that spelled out the acceptable variable names, of which GOOGLE_CREDENTIALS and GCR_JSON_KEY were listed among other variants, but as of version 1 the docs have been updated and omit that info.
So, question is, is the plugin capable of accepting whatever variable name or is it expecting specific variable names and if so what are they?
The Drone GCR plugin accepts the credentials in a secret named PLUGIN_JSON_KEY, GCR_JSON_KEY, GOOGLE_CREDENTIALS, or TOKEN (see code here)
If you stored the credentials in drone as GOOGLE_CREDENTIALS_DEV then you can rename it in the .drone.yml file like this:
...
secrets:
  - source: GOOGLE_CREDENTIALS_DEV
    target: GOOGLE_CREDENTIALS
...
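Put together, the step from the question would then look something like this (a sketch; only the secrets section changes from the original):

```yaml
gcr:
  image: plugins/gcr
  registry: us.gcr.io
  repo: dev-221608/api
  dockerfile: src/main/docker/Dockerfile
  secrets:
    # rename the stored secret to a name the plugin accepts
    - source: GOOGLE_CREDENTIALS_DEV
      target: GOOGLE_CREDENTIALS
  when:
    branch: [prod]
```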

How to set constants in an ansible role?

In an Ansible role, I need to define constants for some paths that are not changeable by users in their playbook.
Here's my need:
the role will have an {{app_base_path}} variable (changeable by the user), and then I want to set two constants:
app_instance_path: "{{app_base_path}}/appinstance"
app_server_path: "{{app_instance_path}}/appserver"
I need each value several times in my tasks, so I can't set only one variable for it.
What's the best way to do it?
Thanks.
As far as I know, ansible has no constants.
You can do the following:
In the file <rolename>/defaults/main.yml:
---
# Don't change these variables
app_instance_path: "{{ app_base_path }}/appinstance"
app_server_path: "{{ app_instance_path }}/appserver"
And add an assertion task into the <rolename>/tasks/main.yml file:
---
# ...
- name: Check some constants
  assert:
    that:
      - "app_instance_path == app_base_path + '/appinstance'"
      - "app_server_path == app_instance_path + '/appserver'"
Furthermore, you can document for users that they should only set app_base_path and leave app_instance_path and app_server_path as they are.
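For completeness, a minimal sketch of how tasks in the role might consume these derived paths (the file tasks here are purely illustrative):

```yaml
# <rolename>/tasks/main.yml (illustrative usage of the derived paths)
- name: Create the app instance directory
  file:
    path: "{{ app_instance_path }}"
    state: directory

- name: Create the app server directory
  file:
    path: "{{ app_server_path }}"
    state: directory
```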
Finally, I got it working with set_fact. Unfortunately, it seems to have a very low priority in the variable precedence order, so my role execution can fail if the user defines extra_vars in their playbook...