Export all jobs from Jenkins including run history - API

Ideally I need a script that outputs the following information in a CSV format that's easy to import into Excel:
job name,number of times run in last year,number of times run overall,last run status
I don't need individual run details for each job.
Tried this on my Jenkins:
List Jenkins job build details for last one year along with the user who triggered the build.
but got an error:
java.lang.NullPointerException: Cannot invoke method getShortDescription() on null object
at org.codehaus.groovy.runtime.NullObject.invokeMethod(NullObject.java:91)
at org.codehaus.groovy.runtime.callsite.PogoMetaClassSite.call(PogoMetaClassSite.java:48)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
Any idea what needs changing in the Groovy? Or is there a better solution?
Thanks all!

Thanks to @daggett and @Ian. Both worked.
I went with Ian's:
def jobNamePattern ='.*'   // adjust to folder/job regex as needed
def daysBack = 365   // adjust to how many days back to report on
def timeToDays = 24*60*60*1000L  // msec per day (the L suffix avoids int overflow in daysBack*timeToDays)

println "Job Name: ( # builds: last ${daysBack} days / overall )  Last Status\n   Number | Trigger | Status | Date | Duration\n"

Jenkins.instance.allItems.findAll() {
  it instanceof Job && it.fullName.matches(jobNamePattern)
}.each { job ->
  def builds = job.getBuilds().byTimestamp(System.currentTimeMillis() - daysBack*timeToDays, System.currentTimeMillis())
  println job.fullName + ' ( ' + builds.size() + ' / ' + job.builds.size() + ' )  ' + job.getLastBuild()?.result
  
  // individual build details
  builds.each { build ->
    println '   ' + build.number + ' | ' + build.getCauses()[0]?.getShortDescription() + ' | ' + build.result + ' | ' + build.getTimestampString2() + ' | ' + build.getDurationString()
  }
}
return
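
Since the goal was a CSV that imports cleanly into Excel, here is a minimal sketch of the same approach that emits one summary row per job instead of the per-build detail lines (same script-console environment and APIs as above; the CSV quoting is my own addition):

// CSV variant (a sketch, not Ian's original): one row per job with the
// summary columns from the question.
def jobNamePattern = '.*'   // adjust to folder/job regex as needed
def daysBack = 365          // adjust to how many days back to report on
def msecPerDay = 24*60*60*1000L

println 'job name,number of times run in last year,number of times run overall,last run status'
Jenkins.instance.allItems.findAll {
  it instanceof Job && it.fullName.matches(jobNamePattern)
}.each { job ->
  def recent = job.getBuilds().byTimestamp(System.currentTimeMillis() - daysBack*msecPerDay, System.currentTimeMillis())
  // quote the job name in case it contains a comma
  println "\"${job.fullName}\",${recent.size()},${job.builds.size()},${job.getLastBuild()?.result}"
}
return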

Related

How to manage a response assertion if the values are the same but the sequence changes in JMeter

I am getting a response like {"profiles":{"HLS_1200":[{"profile_id":38,"quality_id":11}],"ADAPTIVE":["L","XL","XXL"]}}. Every time we hit the API the response content is the same, but the sequence changes, and therefore the assertion fails.
The next time I might get {"profiles":{"HLS_1200":[{"profile_id":38,"quality_id":11}],"ADAPTIVE":["XL","L","XXL"]}}.
I want this response assertion to pass even if the sequence is changed.
Go for the JSONAssert library, which performs "deep scans" and doesn't care about the order of the attributes.
Obtain the following libraries and drop them into the JMeter Classpath:
android-json-0.0.20131108.vaadin1.jar
jsonassert-1.5.0.jar
Restart JMeter to pick up the libraries
Add a JSR223 Assertion as a child of the request which returns your JSON.
Put the following code into the "Script" area:
def expected = '{\n' +
' "profiles": {\n' +
' "HLS_1200": [\n' +
' {\n' +
' "profile_id": 38,\n' +
' "quality_id": 11\n' +
' }\n' +
' ],\n' +
' "ADAPTIVE": [\n' +
' "L",\n' +
' "XL",\n' +
' "XXL"\n' +
' ]\n' +
' }\n' +
'}'
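// the third argument (false) selects lenient mode, so attribute and array ordering are ignored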
org.skyscreamer.jsonassert.JSONAssert.assertEquals(expected, prev.getResponseDataAsString(), false)
More information:
Introduction to JSONassert
Scripting JMeter Assertions in Groovy - A Tutorial

Extracting and comparing data from a JSON array not working in Karate

I am trying to search for a certain value in a JSON array using a value stored in a variable and then compare, but somehow this is not working for me. Can you please help? billId1 is always returning a blank value.
Given url buyerApi
Given url paymentHub
Then path '/BPAY/v' + version + '/billers'
And header Authorization = 'Bearer ' + token
When method get
Then status 200
* def id = response[0].savedBillerId
Then url paymentHub
Then path '/BPAY/v' + version + '/billers/' +id
And header Authorization = 'Bearer ' + token
And request {billerCode:<billerCode>, billerCRN:'<billerCRN>'}
When method put
Then status 200
Then url paymentHub
Then path '/BPAY/v' + version + '/billers'
And header Authorization = 'Bearer ' + token
When method get
Then status 200
* print id
* def billId1 = get[0] response[?(#.savedBillerId=='#id')].savedBillerId
* print billId1
And match billId1 == id
Examples:
| billerCode | billerCRN   |
| 65284      | 65112345675 |
The array looks like this:
[
  {
    "savedBillerId": "ebfa2b9f-f49c-4b0c-c6ee-08d7e671944a",
    "billerId": "26c67edb-b3dc-44ea-aa74-08d7d6890798",
    "billerName": "test case 21c",
    "billerCode": 65284,
    "crn": "65112345675"
  },
  {
    "savedBillerId": "500dfde7-e31c-408d-c6ef-08d7e671944a",
    "billerId": "26c67edb-b3dc-44ea-aa74-08d7d6890798",
    "billerName": "test case 21c",
    "billerCode": 65284,
    "crn": "65112345672"
  }
]
@ptrthomas
Please read the docs: https://github.com/intuit/karate#jsonpath-filters
* def id = '500dfde7-e31c-408d-c6ef-08d7e671944a'
* def billId1 = karate.jsonPath(response, "$[?(@.savedBillerId=='" + id + "')].savedBillerId")[0]
* match billId1 == id
Also, I feel this is a pointless check: you find by id and then assert that same id?
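That said, if the intent is just to assert that the id exists somewhere in the array, a shorter check should work; a sketch using the match contains syntax from the same Karate docs:
* match response[*].savedBillerId contains id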

How to send Ionic push notifications using Parse-Server?

I want to send push notifications from my Ionic app. I wrote Parse Cloud Code and plain TypeScript, but neither is working. My requirement is to send a push notification to all devices and also to a specific device. Please review my code below and help me.
My cloud code:
Parse.Cloud.define("send", (request) => {
    return Parse.Push.send({
        channels: ["News"],
        data: {
            title: "Hello from the Cloud Code",
            alert: "Back4App rocks!",
        }
    }, { useMasterKey: true });
});
TypeScript code calling the cloud code:
Parse.Cloud.run('send').then(function (ratings) {
    debugger;
    console.log("updated"); // result should be 'Update object successfully'
}).catch((error) => {
    console.log(error);
    console.log("fail");
});
You should add a query parameter; that way the push knows which user to send the push to.
You don't need the query parameter for channels. Does your installation have the channel?
Feel free to open an issue on the JS SDK.
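For illustration, a sketch of targeting a specific device by querying the Installation class (the sendToUser function name and the user field are hypothetical; adapt them to your own Installation schema):

// Sketch only: push to the installations matched by a query.
Parse.Cloud.define("sendToUser", (request) => {
    const query = new Parse.Query(Parse.Installation);
    query.equalTo("user", request.params.userId); // hypothetical Installation field
    return Parse.Push.send({
        where: query, // target only the matched installations instead of a channel
        data: {
            title: "Hello from the Cloud Code",
            alert: "Targeted push"
        }
    }, { useMasterKey: true });
});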

Calling Google Spreadsheets and Vue

Good morning,
I am using this library (https://github.com/theoephraim/node-google-spreadsheet) to work with Google Sheets. The authentication seems to be working properly, but when I retrieve the sheets to work with them, it throws a weird error that I don't know how to fix.
The following code is not working (in doc.getInfo):
function getInfoAndWorksheets (step) {
  console.log('jj')
  doc.getInfo(function (err, info) {
    console.log('cvcv')
    console.log(info)
    console.log('Loaded doc: ' + info.title + ' by ' + info.author.email)
    var sheet = info.worksheets[0]
    console.log('sheet 1: ' + sheet.title + ' ' + sheet.rowCount + 'x' + sheet.colCount)
    console.log(err)
    step()
  })
}
The error is the following: err = Error: incorrect header check at Zlib._handle.onerror (webpack-internal:///./node_modules/browserify-zlib/lib/index.js:352:17) at Zlib._error (
You can see the error in this screenshot:
https://www.photobox.co.uk/my/photo/full?photo_id=501798366536
Try this:
const doc = new GoogleSpreadsheet('<Spreadsheet ID>', null, { gzip: false })
The "incorrect header check" error appears to come from browserify-zlib failing to decompress the response, so turning gzip off avoids that decompression path.

Scrapy can't figure out an SQL query in an AJAX call

I am trying to scrape data from this link https://www.flatstats.co.uk/racing-system-builder.php using Scrapy.
I want to automate the AJAX call using Scrapy.
When I click the "Full SP" button (inspecting in Firebug), the POST parameters contain the SQL string, which is "strange":
race|2|eq|Ordinary|0|~tRIDER_TYPE
What dialect is this?
My code :
import scrapy
import urllib

class FlatStat(scrapy.Spider):
    name = "flatstat"
    allowed_domains = ["flatstats.co.uk"]
    start_urls = ["https://www.flatstats.co.uk/racing-system-builder.php"]

    def parse(self, response):
        query_lst = response.xpath('//table[@id="system"]//tr/td[last()]/text()').extract()
        query_str = ' '.join(query_lst)
        url = 'https://www.flatstats.co.uk/ajax/sb_report.php'
        body_dict = {'a_e_max': '9.99',
                     'a_e_min': '0',
                     'arch_min': '0',
                     'exp_min': '0',
                     'report_type': 'S',
                     # copied from the POST parameters by inspecting. Actually I tried everything.
                     'sqlFullString': u'''Type%20(Rider)%7C%3D%7COrdinary%20(Exclude%20Amatr%2C%20App%2C%20Lady%20Races
)%7CAND%7Crace%7C2%7C0%7COrdinary%7C0%7C~tRIDER_TYPE%7C-t%7Ceq''',
                     # I tried copying this from the POST parameters as well, but no success.
                     # I also tried the SQL from the table //td text(), which is "normal" SQL, but no success.
                     'sqlString': query_str}
        # here I tried everything, FormRequest as well, though there is no form.
        return scrapy.Request(url, method="POST", body=urllib.urlencode(body_dict),
                              callback=self.parse_page)

    def parse_page(self, response):
        with open("response.html", "w") as f:
            f.write(response.body)
So my questions are:
What is this SQL?
Why isn't it returning the required page, and how can I run the right query?
I tried Selenium as well to click the button and let it do the work itself, but that is another unsuccessful story. :(
It's not easy to say what the website creator is doing with the submitted sqlString. It probably means something very specific to how the data is processed by their backend.
This is an extract of the page's inline JavaScript code:
...
function system_report(type) {
    sqlString = '', sqlFullString = '', rowcount = 0;
    $('#system tr').each(function() {
        if (rowcount > 0) {
            var editdata = this.cells[6].innerHTML.split("|");
            sqlString += editdata[0] + '|' + editdata[1] + '|' + editdata[7] + '|' + editdata[3] + '|' + editdata[4] + '|' + editdata[5] + '^';
            sqlFullString += this.cells[0].innerHTML + '|' + encodeURIComponent(this.cells[1].innerHTML) + '|' + this.cells[2].innerHTML + '|' + this.cells[3].innerHTML + '|' + this.cells[6].innerHTML + '^';
        }
        rowcount++;
    });
    sqlString = sqlString.slice(0, -1)
...
Looks non-trivial to reverse-engineer.
Although it's not a solution to your "sql" question above, I suggest that you try using Splash (an alternative to Selenium in some cases).
You can launch it with Docker (the easiest way):
$ sudo docker run -p 5023:5023 -p 8050:8050 -p 8051:8051 scrapinghub/splash
With the following script:
function main(splash)
    local url = splash.args.url
    assert(splash:go(url))
    assert(splash:wait(0.5))
    -- this clicks the "Full SP" button
    assert(splash:runjs("$('#b-full-report').click()"))
    -- loading the report takes some time
    assert(splash:wait(5))
    return {
        html = splash:html()
    }
end
you can get the page HTML including the report popup.
You can integrate Splash with Scrapy using scrapyjs (a.k.a. scrapy-splash).
See https://stackoverflow.com/a/35851072/ for an example of how to do so with a custom script.
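For illustration, a minimal sketch of that integration reusing the Lua script above (it assumes scrapy-splash is installed, with its middlewares and SPLASH_URL configured in settings.py; the spider and callback names are my own):

import scrapy
from scrapy_splash import SplashRequest

# the same Lua script as above, clicking "Full SP" and waiting for the report
LUA_SCRIPT = """
function main(splash)
    local url = splash.args.url
    assert(splash:go(url))
    assert(splash:wait(0.5))
    assert(splash:runjs("$('#b-full-report').click()"))
    assert(splash:wait(5))
    return { html = splash:html() }
end
"""

class FlatStatSplash(scrapy.Spider):
    name = "flatstat_splash"

    def start_requests(self):
        yield SplashRequest(
            'https://www.flatstats.co.uk/racing-system-builder.php',
            self.parse_report,
            endpoint='execute',              # run the custom Lua script
            args={'lua_source': LUA_SCRIPT},
        )

    def parse_report(self, response):
        # response.text is the HTML rendered after the button click
        with open('response.html', 'w') as f:
            f.write(response.text)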