How to get discarded builds and the date of build execution from Jenkins API?

I'd like to retrieve information about all builds of a specified job from the Jenkins API.
In the job configuration I've set Discard old builds:
Days to keep builds - 20
Max # of builds to keep - 10
1) Is it possible to get information about builds that are discarded?
I can also get the following information about each build that has not been discarded:
"builds" : [
{
"_class" : "hudson.model.FreeStyleBuild",
"number" : 34,
"url" : "http://myUrl/view/Myjob/34/"
}
]
I use the following URL for this: http://MyUrl/view/MyJob/api/json?tree=builds[url,number]&pretty=true
2) Is it possible to get the date of build execution?

To discard a build means, well, to discard it, i.e. its information is gone.
I can imagine a solution where you:
- collect, store and update (= extend) information about all existing builds on a regular basis
- at times you'd like to know which builds have been discarded:
  - collect the information of all existing builds at this very moment
  - don't update your store, but ...
  - create a diff between the current information (C) and the information previously stored in your store (S). The builds that are in S but not in C are the discarded builds.
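A minimal Groovy sketch of that diff approach, reusing the URL from the question (the stored snapshot here is an assumption, standing in for whatever store you keep). Asking for the builds' timestamp field also answers question 2: it is the build start time in milliseconds since the epoch.
import groovy.json.JsonSlurper

// Fetch the currently known builds (C); 'timestamp' answers question 2.
def api = "http://MyUrl/view/MyJob/api/json?tree=builds[number,url,timestamp]"
def current = new JsonSlurper().parse(new URL(api)).builds

current.each { b ->
    println "#${b.number} executed on ${new Date(b.timestamp)} -> ${b.url}"
}

// Diff against the previously stored snapshot (S) to find discarded builds.
def stored = [31, 32, 33, 34] as Set     // assumed contents of your store
def discarded = stored - current*.number // in S but not in C
println "Discarded builds: ${discarded}"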

Running manifests (classes) from a task or plan in Puppet Enterprise

TL;DR
In Puppet Enterprise, how do I run a manifest (testpp.pp) from a task or plan (not Bolt)?
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp': }
  }
  $apply_results.each | $result | {
    notice($result.report)
  }
}
apply_prep seems to succeed, but apply is failing with the following error:
{
  "msg" : "Evaluation Error: Unknown function: 'report'. (file: /opt/puppetlabs/server/data/orchestration-services/code/environments/development/modules/base_windows/plans/testplan.pp, line: 16, column: 19)",
  "kind" : "bolt/plan-failure",
  "details" : {
    "class" : "Bolt::PAL::PALError"
  }
}
If I change the code to:
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    # Is this how to call a class? I cannot find an example.
    class { 'base_windows::testpp': }
  }
  $apply_results.each |$result| {
    $target = $result.target.name
    if $result.ok {
      out::message("${target} returned a value: ${result.value}")
    } else {
      out::message("${target} errored with a message: ${result.error.message}")
    }
  }
}
The plan tells me it has failed, but there are no errors in the node's report. In fact, there is no entry for the time the plan was executed.
I cannot find any examples on how to call a class from a plan, so the above apply() is a guess, based on this documentation.
I have installed the puppetlabs_reboot module and successfully ran a plan using it, therefore, I conclude my system is set up correctly, it's just my code that is wrong.
Background
I may be going about this all wrong, so here is some background to the problem. Currently, I have a series of manifests that install various packages from the public Chocolatey repository depending on a node's classification. Package definitions are stored in Hiera data and each package's version is set to latest. At the end of the Package{} resource, some manifests include a reboot.
These manifests are used to provision new nodes and keep existing nodes up-to-date with the latest package version.
The Puppet agent is set to run once per hour and if the source package is updated in the Chocolatey repo, on the next Puppet run, the manifest will update the package, rebooting the node, if required.
Goal
New nodes are provisioned with the latest package version.
Prevent package updates at undetermined times on existing nodes.
Continue to allow Puppet agent runs every hour.
Make use of existing manifests.
Ideas
Split out the package{} code from the profile manifests and place it in tasks / plans, allowing packages to be updated out-of-hours.
Specify the actual package version in Hiera. Although this is more declarative and idempotent, it means keeping an eye on over 100 package versions. I guess it would be fairly simple to interrogate the Chocolatey repos with code to pull the latest version number, but even so I am no better off.
Create a task with a script that runs choco upgrade all; however, the next Puppet run would revert package versions according to the version defined in Hiera, meaning Hiera still needs to be kept up-to-date.
Problems
As per the main crux of this question, how do I run manifests (classes) from plans? If I understand correctly, tasks are for ad-hoc scripts, whereas plans can run tasks and manifests. As a lot of time has been invested in writing manifests, I would prefer not to rewrite all my manifests as scripts.
I am confused by the Puppet documentation, as it seems to switch between PE and Bolt syntax. I am using Puppet Enterprise, where Puppet says they don't recommend using Bolt, but their examples seem to cite Bolt commands.
No errors in the node's report. apply_prep() reports that it executed successfully (albeit taking far longer than the puppetlabs_reboot module), but apply() results in a failure, and nothing is logged in the node's reports.
Using the puppetlabs_reboot module as a reference, it appears their plan uses a bunch of tasks, and that they don't use apply() to run their reboot{} class. Is this not duplicating the work?
If anyone has any suggestions or ideas, I'd be grateful if you could share.
I've got it to work. The class I was trying to run required parameters that I hadn't provided!
plan base_windows::testplan (
  TargetSpec $targets,
  Optional[String] $contents = undef,
  String $filename,
){
  apply_prep($targets)
  $apply_results = apply($targets, '_catch_errors' => true) {
    class { 'base_windows::testpp':
      filename => $filename,
      contents => $contents,
    }
  }
  # Output the whole result_set in the PE console
  return $apply_results
}
I found this out using the logs.
Turn on debug level logging in /etc/puppetlabs/puppetserver/logback.xml (root level="debug")
Tail the following logs:
tail -f /var/log/puppetlabs/bolt-server/bolt-server.log
tail -f /var/log/puppetlabs/puppetserver/puppetserver.log | grep -B 5 -A 5 'testplan'
tail -f /var/log/puppetlabs/orchestration-services/orchestration-services.log

Jenkins Allure report is not showing all the results when we have multiple scenarios

I have the following scenario outline and I am generating an Allure report, but the report does not contain all the scenarios' data; it shows only the data from the last run.
It is showing only the | uat1_nam | Password01 | test data result.
The Jenkins plugin version I am using is 2.13.6.
Scenario Outline: Find a transaction based on different criteria and view the details
Given I am login to application with user "<user id>" and password "<password>"
When I navigate to Balances & Statements -> Find a transaction
Then I assert I am on screen Balances & Statements -> Find a transaction -> Find a transaction
#UAT1
Examples:
| user id | password |
| uat1_moz | Password01 |
| uat1_nam | Password01 |
I got a similar issue.
We are running tests with the same software on Linux and Windows, generating the results into 2 separate folders.
Then we have:
allure-reports
|_ linux-report
|_ windows-report
Then we are using the following command in the Jenkinsfile:
allure([
    includeProperties: false,
    jdk: '',
    properties: [],
    reportBuildPolicy: 'ALWAYS',
    results: [[path: 'allure-reports/linux-report'], [path: 'allure-reports/windows-report']]
])
Similar to Sarath, only the results from the last run are available...
I also tried to run the CLI directly on my machine, with the same results:
allure serve allure-reports/linux-report allure-reports/windows-report
I have already found many methods; this one is actually very similar to my use case, but I do not understand why it works there and not for me...
https://github.com/allure-framework/allure2/issues/1051
I also tried the following method, but the Docker container does not run properly on Linux due to permission issues... even though I am running the container from a folder where I have all permissions, and I get the same results if I pass my user ID in the parameters:
https://github.com/fescobar/allure-docker-service#MULTIPLE-PROJECTS---REMOTE-REPORTS
I was able to dig deeper into the topic, and I can finally prove why the data is overwritten.
I used a very simple example to generate 2 different reports, where only allure.epic was different.
As I thought, if we generate 2 different reports from the same source folder, then only the latest report is considered (the allure.epic name was updated in between).
If I have 2 different folders with the same code (where only the allure.epic is different), then all the data is available, stored in different Suites!
So, to make Allure consider the reports as different and build a separate classification for each OS, the tests have to run from code stored in different locations. That does not fit my use case, as the same code is tested on both Linux and Windows.
Or maybe there is an option in pytest-allure to specify the root classification?

How to define run-once context steps with Gauge?

Using Gauge we can repeat a set of steps before each scenario using Context Steps right after a test specification heading. For example:
Delete project
==============
* User log in as "mike"
Delete single project
---------------------
* Delete the "example" project
* Ensure "example" project has been deleted
Delete multiple projects
------------------------
* Delete all the projects in the list
* Ensure project list is empty
In the above Delete project test specification, the context step User log in as "mike" is going to be executed twice, once for each of the two delete scenarios.
How do I define steps that run once, before all scenarios of a test specification?
Since you cannot have a step run only once per spec file out of the box, a workaround could be to use the suite data store:
import com.thoughtworks.gauge.datastore.DataStoreFactory;

public void loginAsMike() {
    // Execute the login steps only the first time; the suite store flag skips later runs.
    if (DataStoreFactory.getSuiteDataStore().get("loggedIn") == null) {
        // execute steps
        DataStoreFactory.getSuiteDataStore().put("loggedIn", true);
    }
}
This way it will only run once. The only issue would be if you were to run multiple specs in parallel. However, if you only log in as mike in one spec file, then this is a good solution.

Jenkins' EnvInject Plugin does not persist values

I have a build that uses the EnvInject plugin to set an environment variable.
A different job needs to scan the last good Jenkins build of that job and get the value of that environment variable.
This all works well, except that sometimes the variable will disappear from the build history. It seems that after some time passes, when I look at the 'Environment Variables' section in the build history, the injected value simply disappears.
How can I make this persist? Is this a bug, or part of the design?
If it makes any difference, the value of the injected variable is over 1500 characters and in the following format: 'component1=1.1.2;component2=1.1.3,component3=4.1.2,component4=1.1.1,component4=1.3.2,component4=1.1.4'
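For reference, this is roughly how the other job reads the variable from the last good build, via the injectedEnvVars REST endpoint that the EnvInject plugin exposes (a sketch; the host, job name and variable name are placeholders):
import groovy.json.JsonSlurper

// Read the environment variables EnvInject recorded for the last successful build.
def url = "http://jenkins.example.com/job/run_deploy_mock/lastSuccessfulBuild/injectedEnvVars/api/json"
def envMap = new JsonSlurper().parse(new URL(url)).envMap
println envMap['deployedArtifacts']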
Looks like EnvInject and/or JobDSL have a bug.
Steps to reproduce:
Set up a job that runs this JobDSL:
job('run_deploy_mock') {
    steps {
        environmentVariables {
            env('deployedArtifacts', 'component1=1.0.0.2')
        }
    }
}
Run it and it will create a job called 'run_deploy_mock'.
Run the 'run_deploy_mock' job. After build #1 is done, go to the build details and check the 'Environment Variables' section for an entry called 'component1'.
Run the JobDSL job again.
Check the 'Environment Variables' section for 'run_deploy_mock' build #1. The 'component1' variable is now missing.
If I substitute the '=' for something else, it works as expected.
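For example, a workaround along those lines might look like this (using ':' instead of '=' inside the value is my assumption of "something else"; whatever consumes the variable must then split on ':'):
job('run_deploy_mock') {
    steps {
        environmentVariables {
            // ':' instead of '=' inside the value avoids the disappearing-variable bug
            env('deployedArtifacts', 'component1:1.0.0.2')
        }
    }
}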
I have created a Jenkins Jira issue for this.

Issue in getting iteration data from Rally API

I am using the following URL to get the iteration data from Rally.
I then parse the JSON data received.
def query = URLEncoder.encode("(Project.Name contains \"1 Prime Infrastructure\")", "UTF-8")
def rallyURL = "https://us1.rallydev.com/slm/webservice/v2.0/iteration?query="+query+"&fetch=true&start=1&pagesize=200"
The issue is that it returns 0 records, but when I change the name to some other project, the data comes back.
It is probably because of the default workspace for my username and password. I want project data from a different workspace, and I have access to all of these workspaces.
Can someone tell me how to set the workspace before making an API call so that I can get the iteration data?
Thanks,
You can simply include a workspace parameter in your URL to override the default:
&workspace=/workspace/12345
You can also further refine your results to a specific project:
&project=/project/12345
Or to a specific hierarchy:
&projectScopeUp=true
&projectScopeDown=true
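Putting it together with the Groovy snippet from the question (the workspace OID below is a placeholder; substitute the OID of the workspace you want from your Rally subscription):
def query = URLEncoder.encode("(Project.Name contains \"1 Prime Infrastructure\")", "UTF-8")
def workspace = "/workspace/12345" // placeholder OID of the non-default workspace
def rallyURL = "https://us1.rallydev.com/slm/webservice/v2.0/iteration" +
        "?query=" + query +
        "&workspace=" + workspace +
        "&fetch=true&start=1&pagesize=200"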