We want to dynamically trigger integration tests in different downstream builds in jenkins. We have a parametrized integration test project that takes a test name as a parameter. We dynamically determine our test names from the git repo.
We have a parent project that uses jenkins-cli to start a build of the integration project for each test found in the source code. The parent project and integration project are related via matching fingerprints.
The problem with this approach is that aggregating the test results doesn't work. I think the problem is that the "downstream" integration tests are started via jenkins-cli, so Jenkins doesn't realize they are downstream.
I've looked at many jenkins plugins to try to get this working. The Join and Parameterized Trigger plugins don't help because they expect a static list of projects to build. The parameter factories available for Parameterized Trigger won't work either because there's no factory to create an arbitrary list of parameters. The Log Trigger plugin won't work.
The Groovy Postbuild Plugin looks like it should work, but I couldn't figure out how to trigger a build from it.
import hudson.model.*

def currentBuild = Thread.currentThread().executable
def job = hudson.model.Hudson.instance.getJob("job")
def params = new StringParameterValue('PARAMTEST', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)
This is what finally worked for me.
NOTE: The Pipeline Plugin should render this question moot, but I haven't had a chance to update our infrastructure.
To start a downstream job without parameters:
job = manager.hudson.getItem(name)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
manager.hudson.queue.schedule(job, 0, causeAction)
To start a downstream job with parameters, you have to add a ParametersAction. Suppose Job1 has parameters A and C which default to "B" and "D" respectively. I.e.:
A == "B"
C == "D"
Suppose Job2 has the same A and C parameters, but also takes parameter E, which defaults to "F". The following post-build script in Job1 will copy its A and C parameters and set parameter E to the concatenation of A's and C's values:
params = []
val = ''
manager.build.properties.actions.each {
    if (it instanceof hudson.model.ParametersAction) {
        it.parameters.each {
            value = it.createVariableResolver(manager.build).resolve(it.name)
            params += it
            val += value
        }
    }
}
params += new hudson.model.StringParameterValue('E', val)
paramsAction = new hudson.model.ParametersAction(params)
jobName = 'Job2'
job = manager.hudson.getItem(jobName)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
def waitingItem = manager.hudson.queue.schedule(job, 0, causeAction, paramsAction)
def childFuture = waitingItem.getFuture()
def childBuild = childFuture.get()
hudson.plugins.parameterizedtrigger.BuildInfoExporterAction.addBuildInfoExporterAction(
    manager.build, jobName, childBuild.number, childBuild.result
)
You have to add $JENKINS_HOME/plugins/parameterized-trigger/WEB-INF/classes to the Groovy Postbuild plugin's Additional groovy classpath.
Execute this Groovy script
import hudson.model.*
import jenkins.model.*
def build = Thread.currentThread().executable
def jobPattern = "PUTHEREYOURJOBNAME"
def matchedJobs = Jenkins.instance.items.findAll { job ->
    job.name =~ /$jobPattern/
}
matchedJobs.each { job ->
    println "Scheduling job name is: ${job.name}"
    job.scheduleBuild(1, new Cause.UpstreamCause(build), new ParametersAction([
        new StringParameterValue("PROPERTY1", "PROPERTY1VALUE"),
        new StringParameterValue("PROPERTY2", "PROPERTY2VALUE")
    ]))
}
If you don't need to pass in properties from one build to the other, just take the ParametersAction out.
The build you scheduled will have the same "cause" as your initial build, which is a nice way to pass in the "Changes". If you don't need this, simply don't pass new Cause.UpstreamCause(build) in the call.
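For instance, a stripped-down sketch of the script above that queues a single job by its exact name with no parameters (the job name is a placeholder; drop the UpstreamCause if you don't want the builds linked):
import hudson.model.*
import jenkins.model.*

def build = Thread.currentThread().executable

// Look the job up by its exact name instead of a pattern.
def job = Jenkins.instance.getItem("PUTHEREYOURJOBNAME")

// Queue it after a 1-second quiet period, with no ParametersAction.
job.scheduleBuild(1, new Cause.UpstreamCause(build))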
Since you are already starting the downstream jobs dynamically, how about waiting until they are done and copying the test result files (I would archive them on the downstream jobs and then just download those build artifacts) into the parent workspace? You might need to aggregate the files manually, depending on whether the test plugin can work with several test result pages. In the post-build step of the parent job, configure the appropriate test plugin.
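A rough Groovy Postbuild sketch of that idea, assuming the downstream job archives its JUnit XML reports as build artifacts (the job name and the include pattern are placeholders):
// Schedule the downstream job and wait for it to finish.
def job = manager.hudson.getItem('integration-tests')          // placeholder job name
def cause = new hudson.model.Cause.UpstreamCause(manager.build)
def queued = manager.hudson.queue.schedule(job, 0, new hudson.model.CauseAction(cause))
def childBuild = queued.future.get()

// Pull the child's archived JUnit reports into the parent workspace so the
// parent's test-report publisher can aggregate them.
def childArtifacts = new hudson.FilePath(childBuild.artifactsDir)
def copied = childArtifacts.copyRecursiveTo('**/TEST-*.xml', manager.build.workspace)
manager.listener.logger.println("Copied ${copied} test report file(s) from ${childBuild}")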
Using the Groovy Postbuild Plugin, maybe something like this will work (haven't tried it)
def job = manager.hudson.getItem(jobname)
manager.hudson.queue.schedule(job)
I am actually surprised that if you fingerprint both jobs (e.g. with the BUILD_TAG variable of the parent job) the aggregated results are not picked up. In my understanding, Jenkins simply looks at md5sums to relate jobs (see "Aggregate downstream test results"), and triggering via the CLI should not affect aggregating results. Somehow there is something additional going on to maintain the upstream/downstream relation that I am not aware of...
This worked for me using "Execute system groovy script":
import hudson.model.*
def currentBuild = Thread.currentThread().executable
def job = hudson.model.Hudson.instance.getJob("jobname")
def params = new StringParameterValue('paramname', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)
This question arose when I was trying to reboot my Nexus3 container on a weekly schedule and connect it to an S3 bucket I have. I have my container set up to connect to the S3 bucket just fine (it creates a new [A-Z,0-9]-metrics.properties file each time), but the previous artifacts are not found when looking through the UI.
I used the Repair - Reconcile component database from blob store task from the UI settings and it works great!
But... all the previous steps are done automatically through scripts and I would like the same for the final step of Reconciling the blob store.
Connecting to the S3 blob store is done with reference to examples from nexus-book-examples. As below:
Map<String, String> config = new HashMap<>()
config.put("bucket", "nexus-artifact-storage")
blobStore.createS3BlobStore('nexus-artifact-storage', config)
AWS credentials are provided during the docker run step so the above is all that is needed for the blob store set up. It is called by a modified version of provision.sh, which is a script from the nexus-book-examples git page.
Is there a way to either:
Create a task with a groovy script? or,
Reference one of the task types and run the task that way with a POST?
Depending on the specific version of repository manager that you are using, there may be REST endpoints for listing and running scheduled tasks. This was introduced in 3.6.0 according to this ticket: https://issues.sonatype.org/browse/NEXUS-11935. For more information about the REST integration in 3.x, check out the following: https://help.sonatype.com/display/NXRM3/Tasks+API
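As a rough illustration of that Tasks API (the host, credentials, and task name below are placeholders), a script could list the scheduled tasks and trigger one by name:
import groovy.json.JsonSlurper

def nexusUrl = 'http://localhost:8081'                       // placeholder host
def auth = 'admin:admin123'.bytes.encodeBase64().toString()  // placeholder credentials

// List the scheduled tasks.
def conn = new URL("${nexusUrl}/service/rest/v1/tasks").openConnection()
conn.setRequestProperty('Authorization', "Basic ${auth}")
def tasks = new JsonSlurper().parse(conn.inputStream).items

// Find the reconcile task by its display name and run it.
def reconcile = tasks.find { it.name == 'reconcile-s3-blobstore' }   // placeholder task name
if (reconcile) {
    def run = new URL("${nexusUrl}/service/rest/v1/tasks/${reconcile.id}/run").openConnection()
    run.requestMethod = 'POST'
    run.setRequestProperty('Authorization', "Basic ${auth}")
    println "Run request returned HTTP ${run.responseCode}"
}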
For creating a scheduled task, you will have to add some groovy code. Perhaps the following would be a good start:
import org.sonatype.nexus.scheduling.TaskConfiguration
import org.sonatype.nexus.scheduling.TaskInfo
import org.sonatype.nexus.scheduling.TaskScheduler
import groovy.json.JsonOutput
import groovy.json.JsonSlurper
class TaskXO
{
    String typeId
    Boolean enabled
    String name
    String alertEmail
    Map<String, String> properties
}
TaskXO task = new JsonSlurper().parseText(args)
TaskScheduler scheduler = container.lookup(TaskScheduler.class.name)
TaskConfiguration config = scheduler.createTaskConfigurationInstance(task.typeId)
config.enabled = task.enabled
config.name = task.name
config.alertEmail = task.alertEmail
task.properties?.each { key, value -> config.setString(key, value) }
TaskInfo taskInfo = scheduler.scheduleTask(config, scheduler.scheduleFactory.manual())
JsonOutput.toJson(taskInfo)
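For completeness, here is a sketch of the args payload that could be passed to that script for the "Reconcile component database from blob store" task. The typeId and property name are assumptions on my part, not verified; check them against an existing task on your instance before relying on them.
// Hypothetical args for the task-creation script above; the typeId and the
// "blobstoreName" property name are assumptions and should be verified.
def args = '''{
  "typeId": "blobstore.rebuildComponentDB",
  "enabled": true,
  "name": "reconcile-s3-blobstore",
  "alertEmail": "admin@example.com",
  "properties": {
    "blobstoreName": "nexus-artifact-storage"
  }
}'''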
I'm looking for a way to query components or assets by groupId and artifactId.
The documentation doesn't provide any help about how to create this request.
This doc has been a great help.
Unfortunately, it's not enough to solve my problem, so I tried to create a query to request components by groupId and artifactId like this:
Query.builder().where('group = ').param('###').and('name = ').param('###').and('version = ').param('###').build()
The last time I ran my script it threw java.lang.StackOverflowError. After increasing memory, I got the same result. It seems there are too many components to return, but in my Nexus repository there is only one component with that group, name and version.
What's wrong with this query?
Has anyone gotten past this difficulty (it was so easy with Nexus 2 and the REST API!) and retrieved component information with a Groovy script?
Here is the script I successfully uploaded and use. You may easily add the version to your requests; the parameter will be, for example, "snapshots-lib,com.m121.somebundle,someBundle,7.2.1-SNAPSHOT,all". As you can see, I decided to filter the sequence locally, because I did not find a way to specify the version parameter in the query.
{
"name": "listgroup",
"type": "groovy",
"content": "import org.sonatype.nexus.repository.storage.Query;
import org.sonatype.nexus.repository.storage.StorageFacet;
import groovy.json.JsonOutput;
def repositoryId = args.split(',')[0];
def groupId = args.split(',')[1];
def artifactId = args.split(',')[2];
def baseVersion = args.split(',')[3];
def latestOnly = args.split(',')[4];
def repo = repository.repositoryManager.get(repositoryId);
StorageFacet storageFacet = repo.facet(StorageFacet);
def tx = storageFacet.txSupplier().get();
tx.begin();
def components = tx.findComponents(Query.builder().where('group = ').param(groupId).and('name = ').param(artifactId).build(), [repo]);
def found = components.findAll{it.attributes().child('maven2').get('baseVersion')==baseVersion}.collect{def version = it.attributes().child('maven2').get('version');\"${version}\"};
// found = found.unique().sort();
def latest = found.isEmpty() ? found : found.last();
tx.commit();
def result = latestOnly == 'latest' ? JsonOutput.toJson(latest) : JsonOutput.toJson(found);
return result;"
}
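Once the script above is uploaded, it can be invoked through the Script API's run endpoint with the comma-separated parameter string as a plain-text body. The sketch below assumes a recent 3.x instance exposing /service/rest/v1/script (older versions used /service/siesta/rest/v1/script); host and credentials are placeholders.
def nexusUrl = 'http://localhost:8081'                       // placeholder host
def auth = 'admin:admin123'.bytes.encodeBase64().toString()  // placeholder credentials

def conn = new URL("${nexusUrl}/service/rest/v1/script/listgroup/run").openConnection()
conn.with {
    requestMethod = 'POST'
    doOutput = true
    setRequestProperty('Authorization', "Basic ${auth}")
    setRequestProperty('Content-Type', 'text/plain')
    // The script splits this on commas: repository, group, artifact, baseVersion, latest|all
    outputStream.withWriter { it << 'snapshots-lib,com.m121.somebundle,someBundle,7.2.1-SNAPSHOT,all' }
}
println "HTTP ${conn.responseCode}: ${conn.inputStream.text}"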
OK,
in fact my needs are:
I want to retrieve the last release version of a component, and its date, by group and artifact.
I have definitely given up on the Groovy API option.
One way I found is to use the extdirect API. This "REST" API is used by the Nexus frontend to communicate with the backend. No documentation exists.
I make one call to the extdirect API to retrieve all versions of a component by group and artifact, and parse the results to get the last release version (across snapshots and releases).
It's not really good, because this call retrieves all the versions across all repositories and could be huge.
Another call to the extdirect API finds the release date from the component id of the last release version.
I hope someday Nexus publishes official documentation for a useful REST API.
I have multiple Environments and a lot of test cases, but not all test cases need to be run in every environment. Is there a way to run only specific test cases from a test suite based on the selected Environment?
For Example
If I select Environment1, it will run the following test cases
TC0001
TC0002
TC0003
TC0004
TC0005
If I select Environment2, it will run only the following test cases
TC0001
TC0003
TC0005
There can be different solutions to achieve this since you have multiple environments, i.e., the Pro software is being used.
I would achieve it using the Test Suite's Setup Script:
Create a Test Suite level custom property for each environment, using the environment name as the property name. For instance, if DEV is the environment defined, use DEV as the test suite property name and provide the list of test case names, separated by commas, as its value, say TC1, TC2, etc. (see the example values after these steps).
Similarly, define the other environments and their values as well.
Copy the below script into the Setup Script for the test suite and execute it; it enables or disables the test cases according to the environment and property value.
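With the environments from the question, the suite-level custom properties would look like this (property names must match the environment names exactly; the values are the comma-separated test case names):
Environment1 = TC0001, TC0002, TC0003, TC0004, TC0005
Environment2 = TC0001, TC0003, TC0005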
Test Suite's Setup Script
/**
* This is soapui's Setup Script
* which enables / disables required
* test cases based on the user list
* for that specific environment
**/
def disableTestCase(testCaze) {
    testCaze.disabled = true
}

def enableTestCase(testCaze) {
    testCaze.disabled = false
}

def getEnvironmentSpecificList(def testSuite) {
    def currentEnv = testSuite.project.activeEnvironment.NAME
    def enableList = testSuite.getPropertyValue(currentEnv).split(',').collect { it.trim() }
    log.info "List of test cases to enable: ${enableList}"
    enableList
}

def userList = getEnvironmentSpecificList(testSuite)

testSuite.testCaseList.each { kase ->
    if (userList.contains(kase.name)) {
        enableTestCase(kase)
    } else {
        disableTestCase(kase)
    }
}
Another way to achieve this is to use the Event feature of ReadyAPI: with a TestRunListener.beforeRun() handler you can filter whether each test case should be executed or ignored, as sketched below.
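A minimal sketch of such a TestRunListener.beforeRun handler, assuming the same environment-to-test-case mapping as in the question (the map is hard-coded here purely for illustration):
// TestRunListener.beforeRun event handler: skip test cases that are not
// listed for the active environment.
def allowed = [
    'Environment1': ['TC0001', 'TC0002', 'TC0003', 'TC0004', 'TC0005'],
    'Environment2': ['TC0001', 'TC0003', 'TC0005']
]

def testCase = testRunner.testCase
def env = testCase.testSuite.project.activeEnvironment.NAME

if (!(testCase.name in allowed[env])) {
    log.info "Skipping ${testCase.name} for environment ${env}"
    testRunner.cancel("Not applicable for ${env}")
}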
EDIT:
If you are using ReadyAPI, then you can use the new feature that lets you tag test cases. A test case can be tagged with multiple values and you can execute tests by specific tags. In this case, you may not need the setup script, as that approach is meant for the Open Source edition. Refer to the documentation for more details.
This tag feature is specific to the Pro software; the Open Source edition does not have it.
I have a batch job in AX 2012 R2 that runs, essentially iterating over a table and creating an instance of a class (that extends RunBaseBatch) that gets added as a task.
I also have some post processing items I need to do, after all the tasks have completed.
So far, the following is working:
while select stagingTable where stagingTable.OperationNo == params.paramOperationNo()
{
    batchHeader = this.getCurrentBatchHeader();
    batchTask = OperationTask::construct();
    batchHeader.addRuntimeTask(batchTask, this.getCurrentBatchTask().RecId);
}
batchHeader.save();
postTask = PostProcessingTask::construct();
batchHeader.addRuntimeTask(postTask,this.getCurrentBatchTask().RecId);
batchHeader.addDependency(postTask,batchTask,BatchDependencyStatus::FinishedOrError);
batchHeader.save();
My thought is that this will add a dependency on the post process task to not start until we get Finished or Error on the last task added in the loop. What I get instead is an exception "The dependency could not be created because task '' does not exist."
I'm uncertain what I'm missing, as the tasks all get added and executed successfully; it seems that just the dependency doesn't want to work.
Several things. Where this code is being called from matters: is the code already running in batch? Is it called in doBatch() before/after the super? etc.
You have a while select; does it create multiple batch tasks? If it does, then you need to create a dependency on each batch task object. This is one problem I see. If your while select statement only selects one record and adds one task, then the problem is something else, but you shouldn't use a while select to select one record.
Also, you call batchHeader.save(); two times. I'd probably remove the first call. I'd need to see what is instantiating your code.
Where you have this.getCurrentBatchTask().RecId, depending on if your code is in batch or not, try replacing that with BatchHeader::getCurrentBatchTask().RecId
And where you have batchHeader = this.getCurrentBatchHeader(); replace that with batchHeader = BatchHeader::getCurrentBatchHeader();
EDIT: Try this code (fix whatever is needed to make it compile):
BatchHeader batchHeader = BatchHeader::getCurrentBatchHeader();
Set set = new Set(Types::Class);
SetEnumerator se;
BatchTask batchTask;
PostTask postTask;
while select stagingTable where stagingTable.OperationNo == params.paramOperationNo()
{
    batchTask = OperationTask::construct();
    set.add(batchTask);
    batchHeader.addRuntimeTask(batchTask, BatchHeader::getCurrentBatchTask().RecId);
}

// Create post task
postTask = PostProcessingTask::construct();
batchHeader.addRuntimeTask(postTask, BatchHeader::getCurrentBatchTask().RecId);

// Create dependencies
se = set.getEnumerator();
while (se.moveNext())
{
    batchTask = se.current(); // Task to make dependent on
    batchHeader.addDependency(postTask, batchTask, BatchDependencyStatus::FinishedOrError);
}
batchHeader.save();
I have a Maven project that uses the jaxb2-maven-plugin to compile some xsd files. It uses the staleFile to determine whether or not any of the referenced schemaFiles have been changed. Unfortunately, the xsd files in question use <xs:include schemaLocation="../relative/path.xsd"/> tags to include other schema files that are not listed in the schemaFile argument so the staleFile calculation in the plugin doesn't accurately detect when things need to be actually recompiled. This winds up breaking incremental builds as the included schemas evolve.
Obviously, one solution would be to list all the recursively referenced files in the execution's schemaFile. However, there are going to be cases where developers don't do this and break the build. I'd like instead to automate the generation of this list in some way.
One approach that comes to mind would be to somehow parse the top-level XSD files and then either set a property or output a file that I can then pass into the schemaFile or schemaFiles parameter. The Groovy gmaven plugin seems like it might be a natural way to embed that functionality right into the POM, but I'm not familiar enough with Groovy to get started.
Can anyone provide some sample code? Or offer an alternative implementation/solution?
Thanks!
Not sure how you'd integrate it into your Maven build -- Maven isn't really my thing :-(
However, if you have the path to an xsd file, you should be able to get the files it references by doing something like:
def rootXsd = new File( 'path/to/xsd' )
def refs = new XmlSlurper().parse( rootXsd ).depthFirst().findAll { it.name()=='include' }*.@schemaLocation*.text()
println "$rootXsd references $refs"
So refs is a list of Strings which should be the paths to the included xsds
Based on tim_yates's answer, the following is a workable solution, which you may have to customize based on how you are configuring the jaxb2 plugin.
Configure a gmaven-plugin execution early in the lifecycle (e.g., in the initialize phase) that runs with the following configuration...
Start with a function to collect File objects of referenced schemas (this is a refinement of Tim's answer):
def findRefs = { f ->
    def relPaths = new XmlSlurper().parse(f).depthFirst().findAll {
        it.name() == 'include'
    }*.@schemaLocation*.text()
    relPaths.collect { new File(f.absoluteFile.parent + "/" + it).canonicalFile }
}
Wrap that in a function that iterates on the results until all children are found:
def recursiveFindRefs = { schemaFiles ->
    def outputs = [] as Set
    def inputs = schemaFiles as Queue
    // Breadth-first examine all refs in all schema files
    while (xsd = inputs.poll()) {
        outputs << xsd
        findRefs(xsd).each {
            if (!outputs.contains(it)) inputs.add(it)
        }
    }
    outputs
}
The real magic then comes in when you parse the Maven project to determine what to do.
First, find the JAXB plugin:
jaxb = project.build.plugins.find { it.artifactId == 'jaxb2-maven-plugin' }
Then, parse each execution of that plugin (if you have multiple). The code assumes that each execution sets schemaDirectory, schemaFiles and staleFile (i.e., does not use the defaults!) and that you are not using schemaListFileName:
jaxb.executions.each { ex ->
    log.info("Processing jaxb execution $ex")
    // Extract the schema locations; the configuration is an Xpp3Dom
    ex.configuration.children.each { conf ->
        switch (conf.name) {
            case "schemaDirectory":
                schemaDirectory = conf.value
                break
            case "schemaFiles":
                schemaFiles = conf.value.split(/,\s*/)
                break
            case "staleFile":
                staleFile = conf.value
                break
        }
    }
Finally, we can open the schemaFiles, parse them using the functions we've defined earlier:
def schemaHandles = schemaFiles.collect { new File("${project.basedir}/${schemaDirectory}", it) }
def allSchemaHandles = recursiveFindRefs(schemaHandles)
...and compare their last modified times against the stale file's modification time,
unlinking the stale file if necessary.
def maxLastModified = allSchemaHandles.collect {
    it.lastModified()
}.max()
def staleHandle = new File(staleFile)
if (staleHandle.lastModified() < maxLastModified) {
    log.info(" New schemas detected; unlinking $staleFile.")
    staleHandle.delete()
}
}