How can I query Jenkins using its API to find the last few builds it executed? I don't know the names of the build jobs. I just want Jenkins to return the last n builds it executed, or the builds executed between two timestamps.
In order to query build results via the API, you have to know the job name in Jenkins. Append the suffix /api/json to your Jenkins job URL to get the JSON data as a string.
For example, if your Jenkins server has a job named A_SLAVE_JOB, do an HTTP GET in your Java REST client against this endpoint: http://<YourJenkinsURL>:<PortNumber>/job/A_SLAVE_JOB/api/json
This returns a string with all the build history URLs (with numbers), plus the last successful and last failed build status.
You can traverse the builds of a given job using a for loop. All you need is a JSON parser to extract values from keys in the JSON string; the org.json library in Java can do the parsing. A sample goes like this:
import org.json.*;

class MyJenkinsJobParser {
    public static void main(String... args) {
        // Paste (or fetch) the raw JSON string returned by /api/json here.
        JSONObject obj = new JSONObject("YOUR_API_RESPONSE_STRING");
        // Each entry in the "builds" array carries the build number and its URL.
        JSONArray arr = obj.getJSONArray("builds");
        for (int i = 0; i < arr.length(); i++) {
            int number = arr.getJSONObject(i).getInt("number");
            String url = arr.getJSONObject(i).getString("url");
            System.out.println("#" + number + " -> " + url);
        }
    }
}
To summarize, this is how you request the info; the {0,5} range below selects the latest 5 builds.
curl -g "${SERVER}/job/${JOB}/api/json?pretty=true&tree=builds[number,url,result]{0,5}" \
--user $USER:$TOKEN
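If you don't know the job names at all, the same tree parameter works against the Jenkins root URL, enumerating every job and its recent builds in one call. Below is a minimal sketch using the org.json library from above; the URL placeholders match the ones earlier, and it assumes anonymous read access (add basic auth if your server needs it).

import java.io.InputStream;
import java.net.URL;
import org.json.JSONArray;
import org.json.JSONObject;

public class LastBuildsAcrossJobs {
    public static void main(String[] args) throws Exception {
        // Root-level API call: every job, and per job the latest 5 builds.
        String api = "http://<YourJenkinsURL>:<PortNumber>"
                + "/api/json?tree=jobs[name,builds[number,timestamp,result]{0,5}]";
        try (InputStream in = new URL(api).openStream()) {
            JSONObject root = new JSONObject(new String(in.readAllBytes(), "UTF-8"));
            JSONArray jobs = root.getJSONArray("jobs");
            for (int i = 0; i < jobs.length(); i++) {
                JSONObject job = jobs.getJSONObject(i);
                JSONArray builds = job.optJSONArray("builds");
                if (builds == null) continue; // e.g. a job that has never run
                for (int j = 0; j < builds.length(); j++) {
                    JSONObject build = builds.getJSONObject(j);
                    // "timestamp" is epoch millis; compare it against your two timestamps here.
                    System.out.println(job.getString("name") + " #" + build.getInt("number")
                            + " " + build.optString("result") + " @ " + build.getLong("timestamp"));
                }
            }
        }
    }
}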
I have created a custom step in which I'm doing some calculations. I need to pass or fail the step according to the outcome of the calculations. Presently the step always shows pass in the report, even when the calculation fails. Also, I would like to know how to pass the fail note to the report, as I can see it is implemented in the common steps.
I'm using QAF version 3.0.1
Below is a sample example:
@QAFTestStep(description = "I check quantity")
public static void iCheckQuantity() {
    String product = getBundle().getString("prj.product");
    int availableStock = getBundle().getInt("prj.aStock", 0);
    int minStock = getBundle().getInt("prj.minStock", 0);
    if (availableStock < minStock) {
        // I want to fail the step here, reporting: minimum stock required to run
        // the test for "product" is "minStock" but presently available is "availableStock"
    }
}
I was able to figure out the answer.
import static com.qmetry.qaf.automation.util.Validator.*;
assertFalse(true, "FAIL message here", "SUCCESS message here");
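Applied to the step from the question, a sketch might look like the following (assuming the prj.* properties hold integers, and using the three-argument assertTrue from Validator, which fails and aborts the step with the first message when the condition is false and reports the second message on success):

import static com.qmetry.qaf.automation.core.ConfigurationManager.getBundle;
import static com.qmetry.qaf.automation.util.Validator.assertTrue;

import com.qmetry.qaf.automation.step.QAFTestStep;

public class StockSteps {

    @QAFTestStep(description = "I check quantity")
    public static void iCheckQuantity() {
        String product = getBundle().getString("prj.product");
        int availableStock = getBundle().getInt("prj.aStock", 0); // 0 is an assumed default
        int minStock = getBundle().getInt("prj.minStock", 0);
        // Fails the step (and the report shows the first message) when the condition is false.
        assertTrue(availableStock >= minStock,
                "Minimum stock required to run the test for " + product + " is " + minStock
                        + " but presently available is " + availableStock,
                "Sufficient stock of " + product + " available: " + availableStock);
    }
}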
The links below were useful for understanding verifications, and also the different types of verifications/assertions available in QAF.
Refer:
https://qmetry.github.io/qaf/latest/assertion_verification.html
https://qmetry.github.io/qaf/latest/javadoc/com/qmetry/qaf/automation/util/Validator.html
I am working on a school project where I am going to analyse a company's defect database.
They are using Microsoft Team Foundation Server (TFS).
I am all new to TFS and the TFS API.
I am having trouble getting the right data from TFS using the TFS Client Object Model.
I can retrieve all test plans, their respective test suites, and every test case that a specific test suite uses, but the problem comes when I want to see in which test suite a specific test result from a test case originated. Since more than one suite can use the same test cases, I can't see which suite the result came from.
This is how I am getting test cases from suites:
foreach (ITestSuiteEntry testcase in suiteEntrys)
{
    Console.WriteLine("\t \t Test Case: {0}", testcase.Title + ", " + "Test case priority: " + testcase.TestCase.Priority + ", Test case Id: " + testcase.TestCase.Id);
    //Console.Write(", Test Case: {0}", testcase.ToString());
    // get each test result:
    getTestResult(testcase, proj);
}

private void getTestResult(ITestSuiteEntry testcase, ITestManagementTeamProject proj)
{
    var testResults = proj.TestResults.ByTestId(testcase.TestCase.Id);
    foreach (ITestCaseResult result in testResults)
    {
        Console.WriteLine("\t \t \t" + result.DateCreated + "," + result.Outcome);
    }
}
So my question is, how can I see the Test Suite which was used to execute the test?
Have a look at the following code snippet.
It shows how to get Test Results for a specific Test Suite using Test Point ids.
You can use a similar approach to achieve your goal.
var tfsCollection = new TfsTeamProjectCollection(
new Uri(tfsUrl),
new System.Net.NetworkCredential(<user>, <password>));
tfsCollection.EnsureAuthenticated();
var testManagementService = tfsCollection.GetService<ITestManagementService>();
var teamProject = testManagementService.GetTeamProject(projectName);
var testPlan = teamProject.TestPlans.Find(testPlanId);
// Get all Test Cases belonging to a particular Test Suite.
// (If you are using several Test Configurations and want to take only one of them into account,
// you will have to add 'AND ConfigurationId = <your Test Configuration Id>' to the WHERE clause.)
string queryForTestPointsForSpecificTestSuite = string.Format("SELECT * FROM TestPoint WHERE SuiteId = {0}", suiteId );
var testPoints = testPlan.QueryTestPoints(queryForTestPointsForSpecificTestSuite);
// We are going to use these ids when analyzing Test Results
List<int> testPointsIds = (from testPoint in testPoints select testPoint.Id).ToList();
var testResults = teamProject.TestResults.ByTestId(testCaseId);
var testResultsForSpecificTestSuite = testResults.Where(testResult => testPointsIds.Contains(testResult.TestPointId));
This blog post will help you when creating queries: WIQL for Test
I want to execute a Pig script file from an embedded Pig program. The script is shown below:
----testPig.pig-----
A = load '/user/biadmin/student' using PigStorage() as (name:chararray);
B = foreach A generate name;
store B into '/user/biadmin/myoutput001';
For this I have written the code shown below:
PigServer pigServer = new PigServer(ExecType.MAPREDUCE);
pigServer.registerScript("testPig.pig");
but it is not working. I have checked the script in grunt shell mode, and there it works fine.
So I made changes like this:
---testPig.pig -----
A = load '/user/biadmin/student' using PigStorage() as (name:chararray);
B = foreach A generate name;
--store B into '/user/biadmin/myoutput001';
The embedded Pig code for this is:
PigServer pigServer = new PigServer(ExecType.MAPREDUCE, prt);
pigServer.registerScript(path);
pigServer.store("B", "/user/biadmin/myoutput20");
Now the modified code is working fine.
So now my doubts are:
Why was I not able to execute a Pig script containing a store command?
How can I execute a Pig script file that contains a store command?
Your PigServer code is not working because, when you call .registerScript(), PigServer by default sets the interactive mode flag on GruntParser to false. From the PigServer source code:
public void registerScript(InputStream in, Map<String,String> params,List<String> paramsFiles) throws IOException {
try {
String substituted = doParamSubstitution(in, params, paramsFiles);
GruntParser grunt = new GruntParser(new StringReader(substituted));
/********************************************/
grunt.setInteractive(false);
/********************************************/
grunt.setParams(this);
grunt.parseStopOnError(true);
} catch (org.apache.pig.tools.pigscript.parser.ParseException e) {
log.error(e.getLocalizedMessage());
throw new IOException(e.getCause());
}
}
Quoting from the GruntParser source code:
In interactive mode, executes the plan right away whenever a STORE command is encountered.
This means that when interactive mode is not active, STORE commands will be ignored (that is, they won't run automatically) until a subsequent PigServer.openIterator or PigServer.store call (that is, until you explicitly make a call that requires the stored relation).
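Incidentally, one way to make registerScript honor the STORE line without editing the script is batch mode. A minimal sketch, assuming a Pig release where PigServer exposes setBatchOn() and executeBatch():

import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class RunScriptWithStore {
    public static void main(String[] args) throws Exception {
        PigServer pigServer = new PigServer(ExecType.MAPREDUCE);
        // In batch mode, statements (including STORE) are queued while the
        // script is registered, and executeBatch() then runs all of them.
        pigServer.setBatchOn();
        pigServer.registerScript("testPig.pig");
        pigServer.executeBatch();
        pigServer.shutdown();
    }
}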
As for your second question, you might want to have a look at the PigRunner class.
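A minimal sketch of that approach: PigRunner takes the same arguments as the pig command line tool, so the STORE statement in the script executes exactly as it does in grunt shell mode.

import org.apache.pig.PigRunner;
import org.apache.pig.tools.pigstats.PigStats;

public class RunWithPigRunner {
    public static void main(String[] args) {
        // Equivalent to: pig -x mapreduce testPig.pig
        String[] pigArgs = { "-x", "mapreduce", "testPig.pig" };
        PigStats stats = PigRunner.run(pigArgs, null); // null: no progress listener
        System.out.println("Script succeeded: " + stats.isSuccessful());
    }
}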
I am trying to write a custom step that generates steps.
My code looks like:
/**
* @Then /^Check_raoul$/
*/
public function checkRaoul()
{
// grab the content ...
// get players ...
$to_return = array();
foreach ($players as $player) {
$player = $player->textContent;
if (preg_match('/^.*video=([^&]*)&.*$/', $player, $matches))
{
array_push($to_return, new Step\Then('I check the video of id "'.$matches[1].'"'));
}
}
return $to_return;
}
/**
* @Then /^I check the video of id "([^"]*)"$/
*/
public function iCheckTheVideoOfId($id)
{
// ...
}
It works fine, but when integrating with Jenkins or running on the CLI, if many executions of iCheckTheVideoOfId fail, I see just one error. I want to generate a number of reported steps equal to the number of iCheckTheVideoOfId calls.
What am I doing wrong?
We abandoned using Jenkins for BDD checks due to the differences in how test feedback is presented and what Jenkins is capable of. We found that running our suites locally, with a full check before pushing code to the repo, produced better results and helped everyone get better at using the framework.
To answer your question directly, I would suggest configuring your Jenkins job not to fail when a test fails.
This can be accomplished by not outputting results at all: modify your command line options so failures are not reported and results are just logged to an output file. You can then run a script at the end to check for failures.
We want to dynamically trigger integration tests in different downstream builds in jenkins. We have a parametrized integration test project that takes a test name as a parameter. We dynamically determine our test names from the git repo.
We have a parent project that uses jenkins-cli to start a build of the integration project for each test found in the source code. The parent project and integration project are related via matching fingerprints.
The problem with this approach is that aggregating test results doesn't work. I think the problem is that the "downstream" integration tests are started via jenkins-cli, so Jenkins doesn't realize they are downstream.
I've looked at many jenkins plugins to try to get this working. The Join and Parameterized Trigger plugins don't help because they expect a static list of projects to build. The parameter factories available for Parameterized Trigger won't work either because there's no factory to create an arbitrary list of parameters. The Log Trigger plugin won't work.
The Groovy Postbuild Plugin looks like it should work, but I couldn't figure out how to trigger a build from it.
import hudson.model.*

def currentBuild = Thread.currentThread().executable
def job = hudson.model.Hudson.instance.getJob("job")
def params = new StringParameterValue('PARAMTEST', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)
This is what finally worked for me.
NOTE: The Pipeline Plugin should render this question moot, but I haven't had a chance to update our infrastructure.
To start a downstream job without parameters:
job = manager.hudson.getItem(name)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
manager.hudson.queue.schedule(job, 0, causeAction)
To start a downstream job with parameters, you have to add a ParametersAction. Suppose Job1 has parameters A and C which default to "B" and "D" respectively. I.e.:
A == "B"
C == "D"
Suppose Job2 has the same A and C parameters, but also takes parameter E, which defaults to "F". The following post-build script in Job1 will copy its A and C parameters and set parameter E to the concatenation of A's and C's values:
params = []
val = ''
manager.build.properties.actions.each {
    if (it instanceof hudson.model.ParametersAction) {
        it.parameters.each {
            value = it.createVariableResolver(manager.build).resolve(it.name)
            params += it
            val += value
        }
    }
}
params += new hudson.model.StringParameterValue('E', val)
paramsAction = new hudson.model.ParametersAction(params)
jobName = 'Job2'
job = manager.hudson.getItem(jobName)
cause = new hudson.model.Cause.UpstreamCause(manager.build)
causeAction = new hudson.model.CauseAction(cause)
def waitingItem = manager.hudson.queue.schedule(job, 0, causeAction, paramsAction)
def childFuture = waitingItem.getFuture()
def childBuild = childFuture.get()
hudson.plugins.parameterizedtrigger.BuildInfoExporterAction.addBuildInfoExporterAction(
    manager.build, jobName, childBuild.number, childBuild.result
)
You have to add $JENKINS_HOME/plugins/parameterized-trigger/WEB-INF/classes to the Groovy Postbuild plugin's Additional groovy classpath.
Execute this Groovy script
import hudson.model.*
import jenkins.model.*
def build = Thread.currentThread().executable
def jobPattern = "PUTHEREYOURJOBNAME"
def matchedJobs = Jenkins.instance.items.findAll { job ->
job.name =~ /$jobPattern/
}
matchedJobs.each { job ->
println "Scheduling job name is: ${job.name}"
job.scheduleBuild(1, new Cause.UpstreamCause(build), new ParametersAction([ new StringParameterValue("PROPERTY1", "PROPERTY1VALUE"),new StringParameterValue("PROPERTY2", "PROPERTY2VALUE")]))
}
If you don't need to pass properties from one build to the other, just leave out the ParametersAction.
The build you schedule will have the same "cause" as your initial build, which is a nice way to pass in the "Changes". If you don't need this, just don't use new Cause.UpstreamCause(build) in the function call.
Since you are already starting the downstream jobs dynamically, how about waiting until they are done and copying the test result files to the parent workspace (I would archive them on the downstream jobs and then just download the build artifacts)? You might need to aggregate the files manually, depending on whether the test plugin can work with several test result pages. In the post-build step of the parent job, configure the appropriate test plugin.
Using the Groovy Postbuild Plugin, maybe something like this will work (haven't tried it):
def job = manager.hudson.getItem(jobname)
manager.hudson.queue.schedule(job, 0)
I am actually surprised that if you fingerprint both jobs (e.g. with the BUILD_TAG variable of the parent job), the aggregated results are not picked up. In my understanding, Jenkins simply looks at md5sums to relate jobs ("Aggregate downstream test results"), and triggering via the CLI should not affect aggregating results. Somehow there is something additional going on to maintain the upstream/downstream relation that I am not aware of...
This worked for me using "Execute system groovy script":
import hudson.model.*
def currentBuild = Thread.currentThread().executable
def job = hudson.model.Hudson.instance.getJob("jobname")
def params = new StringParameterValue('paramname', "somestring")
def paramsAction = new ParametersAction(params)
def cause = new hudson.model.Cause.UpstreamCause(currentBuild)
def causeAction = new hudson.model.CauseAction(cause)
hudson.model.Hudson.instance.queue.schedule(job, 0, causeAction, paramsAction)