How to pass dynamically generated variables from one job to the next within the same Jenkins build flow

I have 3 jobs configured in a Jenkins build flow, and the goal is to pass the dynamic variables produced by the post-build task of b1 into b2, the variables of b2 into b3, and so on:
list = ["foo", "bar"]
b1 = build("ExecuteJob1", param1: list[idx])
b2 = build("ExecuteJob2", param1: <some dynamic variable from b1>)
b3 = build("ExecuteJob3", param1: <some dynamic variable from b2>, param2: <some dynamic variable from b1>)
As specified above, each previous job generates dynamic variables as part of its post-build actions. In one instance I'm using the Description Setter plugin to generate a dynamic variable; in another I want the BUILD_URL of b1 to be used in b3.
In order to accomplish this, I came across a post at this link and used the EnvInject plugin. With it I did the following:
I created a job (envInj) in order to inject those dynamic variables into the environment.
I used that envInj job as a post-condition job for b1 and added a timeout between b1 and b2 to make sure the post-condition job executes before b2 begins.
This does inject the required variables into the environment (as confirmed in the console logs as well as in the environment variables of the envInj job).
But the issue I am facing is that those newly injected variables are not available for b2 to access, and the same goes for b3.
So, is there any option to get those b1 variables into b2 (and so on and so forth), or is there a better way to achieve this?

I found a solution to the above question, following the approach described by Dave Bacher in the linked post:
I dumped the parameters to a file using a batch script in the Post Build Task plugin.
This file was injected back into the environment using the EnvInject plugin.
This allowed me to access those parameters in the other jobs of the build flow:
b1 = build("ExecuteJob1", param1: list[idx])
b2 = build("ExecuteJob2", param1: b1.dynamicVariableX)
b3 = build("ExecuteJob3", param1: b1.dynamicVariableY, param2: b2.dynamicVariableZ)
This works perfectly, allowing dynamic variables to be accessed through the environment.
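For illustration, a minimal sketch of the whole round trip, assuming the flow DSL exposes each build's environment via b1.environment; the file and variable names here are hypothetical:

// Build flow DSL sketch; file and variable names are hypothetical.
// ExecuteJob1's Post Build Task ran a batch script along the lines of
//   echo dynamicVariableX=%SOME_VALUE%> vars.properties
// and EnvInject re-injected vars.properties into that build's environment.
b1 = build("ExecuteJob1", param1: list[idx])
// Read the injected value back out of b1's environment:
def dynX = b1.environment.get("dynamicVariableX")
b2 = build("ExecuteJob2", param1: dynX)
// Built-in variables such as BUILD_URL are reachable the same way:
b3 = build("ExecuteJob3", param1: b1.environment.get("BUILD_URL"), param2: b2.environment.get("dynamicVariableZ"))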

Related

How to pass environment variables in gitlab dynamically?

I am working on database deployment using GitLab CI/CD. There are two databases, e.g. ABC and XYZ. One team is working on DB ABC and we are working on DB XYZ. The logic is the same, but we need to pass the DB name according to the team in the GitLab pipeline. What's the process for that? For example, if team 1 is working, they select DB ABC and all changes are reflected on ABC, and the same for the other team. I have already set up variables in gitlab-ci.yml, but the task is manual, as one team has to overwrite the other team's DB name, and every merge to master changes the variable value, which is hard to manage.
variables:
  DB_NAME_dev: DEMO_DB
  DB_NAME_qa: DEMO_DB
  DB_NAME_prod: DEMO_DB
Now if team 2 wants to work on their pipeline, they have to change the value of DB_NAME_dev to their database, which is a manual task. Is there a smarter way to select the DB name so that the pipeline runs only for that database, rather than manually editing the name?
How do you pass variables in GitLab?
An alternative is to use GitLab variables. Go to your project page, Settings tab -> CI/CD, find Variables and click on the Expand button. Here you can define variable names and values, which will be automatically passed into the GitLab pipelines and are available as environment variables there.
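For example, a hypothetical job reading a project-level variable DB_NAME defined that way:

# gitlab-ci.yml sketch; DB_NAME is a hypothetical project-level variable
# defined under Settings -> CI/CD -> Variables.
deploy:
  script:
    - echo "Deploying to database $DB_NAME"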
You can also use the git branch method. Let's say the ABC and XYZ teams push their code to specific branches (e.g. branches starting with abc or xyz). For those branches, you export the variables in before_script and restrict the jobs with the only parameter.
Create branch-specific jobs in your CI file:
abc-dev-job:
  before_script:
    - export DB_NAME_dev=$DEMO_DB_abc
    - export DB_NAME_qa=$DEMO_DB_abc
    - export DB_NAME_prod=$DEMO_DB_abc
  only:
    - /^abc.*$/

xyz-dev-job:
  before_script:
    - export DB_NAME_dev=$DEMO_DB_xyz
    - export DB_NAME_qa=$DEMO_DB_xyz
    - export DB_NAME_prod=$DEMO_DB_xyz
  only:
    - /^xyz.*$/
This pipeline will only run when Team 'XYZ' or 'ABC' pushes their code to their team-specific branches which might start with the prefix xyz or abc (eg. xyz-dev, xyz/dev, abc-dev, etc.)
And it will use variables accordingly.
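The job's script section can then consume the exported name; a minimal sketch (the deployment command itself is hypothetical):

abc-dev-job:
  before_script:
    - export DB_NAME_dev=$DEMO_DB_abc
  script:
    # hypothetical deployment command using the exported variable
    - ./deploy.sh --database "$DB_NAME_dev"
  only:
    - /^abc.*$/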
Note: you need to define the $DEMO_DB_abc and $DEMO_DB_xyz variables in the CI/CD settings.
Thank you!

Problem passing user defined variables (JMeter Script)

I don't know how to pass User Defined Variables (from JMeter .jmx Script) on jenkins-taurus.yml (Taurus BlazeMeter configuration file).
It keeps pushing the fixed variables (screenshot: https://i.stack.imgur.com/igIK3.png).
I need these fields (User Defined Variables) to be blank, and the info inside them to be pushed from the Taurus configuration file. As you can see, I'm trying to pass the parameters through the Taurus configuration file (.yml) (screenshot: https://i.stack.imgur.com/kMpRx.png).
I need to know how to pass these variables in the Taurus script; should I use user.{userDefinedParametersHere}, or is there another kind of syntax? This is necessary because the server URL and login/password can then be changed easily.
You're using the incorrect keyword: if you want to populate the User Defined Variables via Taurus, you should use variables, not properties:
---
execution:
- scenario:
    variables:
      foo: bar
      baz: qux
    script: test.jmx
It will create another User Defined Variables instance called "Variables from Taurus".
If you additionally need to disable all existing User Defined Variables instances you could do something like:
---
execution:
- scenario:
    variables:
      foo: bar
      baz: qux
    script: test.jmx
    # if you want to additionally disable User Defined Variables:
    modifications:
      disable: # names of the tree elements to disable
        - User Defined Variables
If you have defined your variables at Test Plan level, don't worry: just override them via Taurus and the script will use the "new" values (the ones you supply via the variables keyword).
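Applied to this question's use case, a sketch might look like the following; the variable names and values are hypothetical, and inside the .jmx they are referenced with the usual ${...} syntax (e.g. ${SERVER_URL}):

---
execution:
- scenario:
    variables:
      SERVER_URL: https://staging.example.com   # hypothetical values
      LOGIN: testuser
      PASSWORD: secret
    script: test.jmx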

Can one build pipeline send a value as a parameter to the next pipeline it triggers in Azure DevOps

I have a build pipeline, let's say A, that stores a file (this file has a variable value that is set within that build pipeline) in a folder. Pipeline A triggers another pipeline, B, that publishes the folder as an artifact using the Publish Artifact task. But the folder name is dynamic, as it is fetched from that file within Pipeline A. I need to pass the file with that variable value from Pipeline A to Pipeline B while triggering it. Is there any way to do this in Azure DevOps without using YAML pipelines?
I have a fairly complex set of pipelines that I set up using the Classic mode, and converting them all to YAML would take a long time, so I would like to know if there is any workaround.
There are a few workarounds:
Create a variable group, and during Pipeline A set the variable value there with the REST API; Pipeline B then uses this variable.
During Pipeline A, update the Pipeline B definition with the new value via the REST API.
In Pipeline A, trigger Pipeline B with the Trigger Build Task, where you can pass the variable value to Pipeline B (in the "Build Parameters" field).
I don't think there's a clean way to do this if you need to trigger the build by adding Pipeline A under the triggers section of Pipeline B.
Consider triggering Pipeline B when Pipeline A completes using the REST API. That way, you can have your 'file path' as a variable on Pipeline B and pass it in the parameters collection.
Something like:
POST https://dev.azure.com/{organization}/{project}/_apis/build/builds?ignoreWarnings={ignoreWarnings}&checkInTicket={checkInTicket}&sourceBuildId={sourceBuildId}&api-version=5.0
{
  "definition": {
    "id": 1234
  },
  "parameters": "{\"fileName\":\"yourfilename\"}"
}
fileName would be the name of your variable in Pipeline B.
Have a look at the Builds - Queue documentation for more info.
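For instance, a minimal curl sketch of that request (the organization, project, PAT, and definition id are all placeholders):

# Queue Pipeline B, passing the file name in the parameters collection.
# {organization}, {project}, $AZURE_DEVOPS_PAT and the id are placeholders.
curl -u ":$AZURE_DEVOPS_PAT" \
     -H "Content-Type: application/json" \
     -d '{"definition": {"id": 1234}, "parameters": "{\"fileName\": \"yourfilename\"}"}' \
     "https://dev.azure.com/{organization}/{project}/_apis/build/builds?api-version=5.0"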

Cannot change value of variable in groovy soapui

def activeEnvironment = "a"
activeEnviornment="b"
log.info("active environment = $activeEnvironment")
When I run the code above, the log shows active environment = a. Why doesn't it show b?
Your spelling of environment is wrong in the second assignment, so instead of reassigning the existing variable you've created a new one.
Change
activeEnviornment="b"
To
activeEnvironment="b"
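For completeness, the corrected script:

def activeEnvironment = "a"
activeEnvironment = "b"   // same variable now that the spelling matches
log.info("active environment = $activeEnvironment")   // logs: active environment = b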

How do I dynamically trigger downstream builds in jenkins?

We want to dynamically trigger integration tests in different downstream builds in jenkins. We have a parametrized integration test project that takes a test name as a parameter. We dynamically determine our test names from the git repo.
We have a parent project that uses jenkins-cli to start a build of the integration project for each test found in the source code. The parent project and integration project are related via matching fingerprints.
The problem with this approach is that aggregating the test results doesn't work. I think the problem is that the "downstream" integration tests are started via jenkins-cli, so Jenkins doesn't realize they are downstream builds.
I've looked at many jenkins plugins to try to get this working. The Join and Parameterized Trigger plugins don't help because they expect a static list of projects to build. The parameter factories available for Parameterized Trigger won't work either because there's no factory to create an arbitrary list of parameters. The Log Trigger plugin won't work.
The Groovy Postbuild Plugin looks like it should work, but I couldn't figure out how to trigger a build from it.
// Run as an "Execute system groovy script" build step.
import hudson.model.*

def currentBuild = Thread.currentThread().executable
def job = hudson.model.Hudson.instance.getJob("job")
def params = new StringParameterValue('PARAMTEST', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)
The script above is what finally worked for me.
NOTE: The Pipeline Plugin should render this question moot, but I haven't had a chance to update our infrastructure.
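(For reference, a minimal scripted-pipeline sketch of the same dynamic triggering; the job and parameter names here are hypothetical:)

// Scripted Pipeline sketch: trigger one downstream build per dynamically
// discovered test name. Job and parameter names are hypothetical.
node {
    def testNames = ['testA', 'testB']   // e.g. discovered from the git repo
    for (t in testNames) {
        build job: 'integration-test',
              parameters: [string(name: 'TEST_NAME', value: t)]
    }
}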
To start a downstream job without parameters:
job = manager.hudson.getItem(name)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
manager.hudson.queue.schedule(job, 0, causeAction)
To start a downstream job with parameters, you have to add a ParametersAction. Suppose Job1 has parameters A and C which default to "B" and "D" respectively. I.e.:
A == "B"
C == "D"
Suppose Job2 has the same A and C parameters, but also takes parameter E, which defaults to "F". The following post-build script in Job1 will copy its A and C parameters and set parameter E to the concatenation of A's and C's values:
params = []
val = ''
manager.build.properties.actions.each {
    if (it instanceof hudson.model.ParametersAction) {
        it.parameters.each {
            value = it.createVariableResolver(manager.build).resolve(it.name)
            params += it
            val += value
        }
    }
}
params += new hudson.model.StringParameterValue('E', val)
paramsAction = new hudson.model.ParametersAction(params)
jobName = 'Job2'
job = manager.hudson.getItem(jobName)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
def waitingItem = manager.hudson.queue.schedule(job, 0, causeAction, paramsAction)
def childFuture = waitingItem.getFuture()
def childBuild = childFuture.get()
hudson.plugins.parameterizedtrigger.BuildInfoExporterAction.addBuildInfoExporterAction(
    manager.build, jobName, childBuild.number, childBuild.result
)
You have to add $JENKINS_HOME/plugins/parameterized-trigger/WEB-INF/classes to the Groovy Postbuild plugin's Additional groovy classpath.
Execute this Groovy script:
import hudson.model.*
import jenkins.model.*
def build = Thread.currentThread().executable
def jobPattern = "PUTHEREYOURJOBNAME"
def matchedJobs = Jenkins.instance.items.findAll { job ->
    job.name =~ /$jobPattern/
}
matchedJobs.each { job ->
    println "Scheduling job name is: ${job.name}"
    job.scheduleBuild(1, new Cause.UpstreamCause(build), new ParametersAction([
        new StringParameterValue("PROPERTY1", "PROPERTY1VALUE"),
        new StringParameterValue("PROPERTY2", "PROPERTY2VALUE")
    ]))
}
If you don't need to pass properties from one build to the other, just take the ParametersAction out.
The build you schedule will have the same "cause" as your initial build, which is a nice way to pass along the "Changes". If you don't need this, just omit new Cause.UpstreamCause(build) from the function call.
Since you are already starting the downstream jobs dynamically, how about waiting until they are done and copying the test result files to the parent workspace? (I would archive them on the downstream jobs and then just download the build artifacts.) You might need to aggregate the files manually, depending on whether the test plugin can work with several test result pages. Then configure the appropriate test plugin in the post-build step of the parent job.
Using the Groovy Postbuild Plugin, maybe something like this will work (haven't tried it):
def job = manager.hudson.getItem(jobname)
manager.hudson.queue.schedule(job, 0)
I am actually surprised that if you fingerprint both jobs (e.g. with the BUILD_TAG variable of the parent job), the aggregated results are not picked up (see "Aggregate downstream test results"). Triggering via the CLI should not affect aggregating results. Somehow, there is something additional going on to maintain the upstream/downstream relation that I am not aware of...
This worked for me using "Execute system groovy script":
import hudson.model.*
def currentBuild = Thread.currentThread().executable
def job = hudson.model.Hudson.instance.getJob("jobname")
def params = new StringParameterValue('paramname', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)