Get Jenkins Metrics data through API

What I need from the API call
I need the timeline info for each build via the API for the Jenkins Metrics plugin. I am using the web API to get data for the jobs on my Jenkins instance. However, calling $JenkinsUrl/metrics/APIkey is leading me nowhere. Any idea how I can get this information?

All three bits of timeline info for each build (queue time, building time, total time) are available via the "get build" API.
On the screen where it shows "8.1 sec waiting in the queue" etc, click on the "REST API" link in the page footer, then "JSON API", then add &depth=2 to the end of the resulting API URL.
Note, if you're searching for specific values in the API JSON, that the times are in milliseconds. After you parse the JSON, the build time is under the "duration" property, and the other two are in the array under the "actions" property. For me it was the third element of the array, but that may vary (find the one whose _class is "jenkins.metrics.impl.TimeInQueueAction"):
{
  "_class": "jenkins.metrics.impl.TimeInQueueAction",
  "queuingDurationMillis": 16,
  "totalDurationMillis": 4365
}
So in my example the build time was 4349 and the queue time was 16, making the total time 4349 + 16 = 4365 milliseconds.
Instead of adding &depth=2 to the end of the URL, you might be able to get exactly the three values you want by appending this instead: &tree=duration,actions[queuingDurationMillis,totalDurationMillis]
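If you want to script the lookup, here is a minimal Python sketch of that call (using the requests library; the Jenkins URL, job name, build number and credentials are all placeholders):

import requests

JENKINS_URL = "https://jenkins.example.com"  # placeholder
JOB = "my-job"                               # placeholder
BUILD = 42                                   # placeholder

url = (JENKINS_URL + "/job/" + JOB + "/" + str(BUILD) + "/api/json"
       "?tree=duration,actions[queuingDurationMillis,totalDurationMillis]")
data = requests.get(url, auth=("user", "api-token")).json()

# With the tree filter, actions without these fields come back as empty objects
timing = next(a for a in data["actions"] if a and "queuingDurationMillis" in a)
print("queue time:", timing["queuingDurationMillis"], "ms")
print("build time:", data["duration"], "ms")
print("total time:", timing["totalDurationMillis"], "ms")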

Related

Auth0: How to retrieve over 1000 users (and make this call via a python script that can be run as a cron job)

I am trying to use Auth0 to get a list of users when my user list is >1000 (approx 2000).
So I understand a bit better now how this works after following the steps at:
https://auth0.com/docs/manage-users/user-migration/bulk-user-exports
There are three steps:
Use a POST call to the https://MY_DOMAIN/oauth/token endpoint to get an auth token (done)
Then take this token and insert it into the next POST call to the endpoint: https://MY_DOMAIN/api/v2/jobs/users-exports
Then take the job_id and insert it into the 3rd GET call to the endpoint: https://MY_DOMAIN/api/v2/jobs/MY_JOB_ID
But this just gives me a link to a document that I download. Essentially it is the same end result as using the User Import / Export extension.
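For reference, a stripped-down Python version of those three steps (the domain, client credentials and connection id are placeholders; the field names follow the bulk-export docs linked above):

import time
import requests

DOMAIN = "MY_DOMAIN"  # placeholder

# Step 1: get a Management API token
token = requests.post("https://" + DOMAIN + "/oauth/token", json={
    "grant_type": "client_credentials",
    "client_id": "MY_CLIENT_ID",          # placeholder
    "client_secret": "MY_CLIENT_SECRET",  # placeholder
    "audience": "https://" + DOMAIN + "/api/v2/",
}).json()["access_token"]
headers = {"Authorization": "Bearer " + token}

# Step 2: start a user-export job
job = requests.post("https://" + DOMAIN + "/api/v2/jobs/users-exports",
                    headers=headers,
                    json={"connection_id": "MY_CONNECTION_ID",  # placeholder
                          "format": "json"}).json()

# Step 3: poll the job until it completes; the result is a download link
while True:
    status = requests.get("https://" + DOMAIN + "/api/v2/jobs/" + job["id"],
                          headers=headers).json()
    if status["status"] == "completed":
        print(status["location"])  # URL of the export file
        break
    time.sleep(5)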
This is NOT what I want. I want to be able to call an endpoint and have it return a list of all the users (similar to Retrieve Users with the Get Users endpoint). I need it done this way so that I can write a python script and run it as a cron job.
However, since I have over 1000 users, I am getting the error below when I call the GET /api/v2/users endpoint.
auth0.v3.exceptions.Auth0Error: 400: You can only page through the first 1000 records. See https://auth0.com/docs/users/search/v3/view-search-results-by-page#limitation
Can anyone help? Can this be done the way I want?

How do I ingest tide gauge data from the NOAA HTTP API into Thingsboard Professional?

NOAA provides tidal and weather data through their own HTTP API, and I would like to use that API to get data into ThingsBoard (Professional) every six minutes to overlay with my device data (their data are updated every 6 minutes). Can someone walk me through the details of using the correct Integrations or Rule Chains to get the time series data added to the database? It would also be nice to only use the metadata once. Below you can see how to get the most recent tide gauge level (water level) using their API.
For example, to see the latest tide gauge water level for a tide gauge (in this case, tide gauge 8638610), the API allows for getting the most recent water level information -- https://api.tidesandcurrents.noaa.gov/api/prod/datagetter?date=latest&station=8638610&product=water_level&datum=navd&units=metric&time_zone=lst_ldt&application=web_services&format=json
That call produces the following JSON:
{
  "metadata": {"id": "8638610", "name": "Sewells Point", "lat": "36.9467", "lon": "-76.3300"},
  "data": [{"t": "2022-02-08 22:42", "v": "-0.134", "s": "0.003", "f": "1,0,0,0", "q": "p"}]
}
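(For what it's worth, the same call is easy to test from a script, e.g. Python with the requests library:)

import requests

# Same query as above: latest water level for station 8638610 (Sewells Point)
resp = requests.get(
    "https://api.tidesandcurrents.noaa.gov/api/prod/datagetter",
    params={"date": "latest", "station": "8638610", "product": "water_level",
            "datum": "navd", "units": "metric", "time_zone": "lst_ldt",
            "application": "web_services", "format": "json"})
reading = resp.json()["data"][0]
print(reading["t"], reading["v"])  # e.g. 2022-02-08 22:42  -0.134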
The Data Converter was fairly easy to construct (except maybe the indexing into noaa_data.data used in the code below):
//function Decoder(payload, metadata)

// Helper functions: the payload arrives as a byte array
function decodeToString(payload) {
    return String.fromCharCode.apply(String, payload);
}

function decodeToJson(payload) {
    return JSON.parse(decodeToString(payload));
}

var noaa_data = decodeToJson(payload);

var deviceName = noaa_data.metadata.id;
var dataType = 'water_level';
var latitude = noaa_data.metadata.lat;
var longitude = noaa_data.metadata.lon;
// data is an array of readings; [0] is the latest one
// (the original [0, 0] is the JavaScript comma operator and also evaluates to [0])
var waterLevelData = noaa_data.data[0];

var result = {
    deviceName: deviceName,
    dataType: dataType,
    time: waterLevelData.t,
    waterLevel: waterLevelData.v,
    waterLevelStDev: waterLevelData.s,
    latitude: latitude,
    longitude: longitude
};

return result;
which produces the output:
{
  "deviceName": "8638610",
  "dataType": "water_level",
  "time": "2022-02-08 22:42",
  "waterLevel": "-0.134",
  "waterLevelStDev": "0.003",
  "latitude": "36.9467",
  "longitude": "-76.3300"
}
I am not sure what process to use to get the data into ThingsBoard to be displayed as a device alongside my other device data.
Thank you for your help.
If you have a specific (and small) number of stations to grab, then you can do the following:
Create the devices in Thingsboard manually
Go into rule chains, create a water stations rule chain
For each water station place a 'Generator' node, selecting the originator as required.
Route these into an external "Rest API" node.
Route the result of the post into a blue script node and put your decoder in there
Route result to telemetry
Example rule chain
More complex solution but more scalable:
Use a single generator node
Route the message into a blue script node. This will contain the list of station ids that you want to pull info for. By setting the output of the script to an array like the following, you can make it send out multiple messages in sequence:
return [{msg: msg, metadata: metadata, msgType: msgType}, ...etc...]
Route the blue script into the rest api call and get the station data
Do some post processing with another blue script node if you need to. Don't decode the data here though.
Route all this into another rest api node and POST the data back to your HTTP integration endpoint (if you don't have one you will need to create it; it's fairly simple).
Connect your data converter to this integration.
Finally, modify your output so that it matches the format the converter expects:
{
  "deviceName": "8638610",
  "deviceType": "water-station",
  "telemetry": {
    "dataType": "water_level",
    "time": "2022-02-08 22:42",
    "waterLevel": "-0.134",
    "waterLevelStDev": "0.003",
    "latitude": "36.9467",
    "longitude": "-76.3300"
  }
}
Rough example
Above is how I would do it if I didn't want to use any external services. If you're AWS savvy, I'd say set up a cron job to trigger a Lambda function every 6 minutes and POST into your platform, for example as sketched below. Either will work.
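A rough Python sketch of such a lambda (the integration URL and station list are placeholders, and the requests library would need to be bundled with the function):

import requests

STATIONS = ["8638610"]  # placeholder: your station ids
TB_ENDPOINT = "https://my.thingsboard.host/api/v1/integrations/http/MY_ROUTING_KEY"  # placeholder

def handler(event, context):
    for station in STATIONS:
        resp = requests.get(
            "https://api.tidesandcurrents.noaa.gov/api/prod/datagetter",
            params={"date": "latest", "station": station, "product": "water_level",
                    "datum": "navd", "units": "metric", "time_zone": "lst_ldt",
                    "application": "web_services", "format": "json"})
        # Forward the raw NOAA JSON; the data converter on the integration decodes it
        requests.post(TB_ENDPOINT, json=resp.json())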

GoogleFit on iphone: problem with REST API calls

I'm trying to query, with the Fit REST API, segmented data from an iphone that has GoogleFit installed and the sync between apple health and GoogleFit configured.
From my android phone, I get the data as expected with this POST:
(*)
{
  "aggregateBy": [
    {
      "dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps",
      "dataTypeName": "com.google.step_count.delta"
    },
    {
      "dataSourceId": "derived:com.google.distance.delta:com.google.android.gms:merge_distance_delta",
      "dataTypeName": "com.google.distance.delta"
    },
    {
      "dataSourceId": "derived:com.google.active_minutes:com.google.android.gms:merge_active_minutes",
      "dataTypeName": "com.google.active_minutes"
    }
  ],
  "endTimeMillis": 1643325227000,
  "startTimeMillis": 1640991600000,
  "bucketByActivitySegment": {
    "minDurationMillis": 600000
  }
}
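(For context, I post these bodies roughly like this; a trimmed Python sketch where the access token is a placeholder:)

import requests

ACCESS_TOKEN = "ya29...."  # placeholder, obtained via the usual OAuth flow

body = {
    "aggregateBy": [{
        "dataSourceId": "derived:com.google.active_minutes:com.google.android.gms:merge_active_minutes",
        "dataTypeName": "com.google.active_minutes"
    }],
    "startTimeMillis": 1640991600000,
    "endTimeMillis": 1643325227000,
    "bucketByActivitySegment": {"minDurationMillis": 600000}
}
resp = requests.post(
    "https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    json=body)
print(resp.json().get("bucket", []))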
But for the iphone user, this returns an empty bucket.
I checked the available dataSources for the apple user. I did some trial and error on the dataSourceIds connected to "active_minutes", in particular:
derived:com.google.active_minutes:com.google.ios.fit:appleinc.:iphone:1148c16f:top_level
derived:com.google.active_minutes:com.google.ios.fit:appleinc.:watch:f40f5c4a:top_level
The trial and error were conducted with aggregateBy POSTs using one of the above sources, so no distance or step_count involved. The two dataSourceIds above were obtained from a "list post query" for available dataSources, done by the iphone user, with the following scopes:
fitness.activity.read
fitness.location.read
No segmented data is returned from either dataSourceId (an empty bucket as well).
Contents within the apple user's app indicate that there should be segmented data somewhere, see screenshot link.
(Edit: I also tried setting "minDurationMillis": 0.)
Meanwhile, queries such as:
(**)
{
  "aggregateBy": [
    {
      "dataSourceId": "derived:com.google.distance.delta:com.google.android.gms:merge_distance_delta",
      "dataTypeName": "com.google.distance.delta"
    }
  ],
  "endTimeMillis": 1643325227000,
  "startTimeMillis": 1640991600000,
  "bucketByTime": {
    "durationMillis": 2333627000
  }
}
does return data from the apple user. But I'm really interested in segments (minimum 10 minutes long).
So, question: does anyone have experience getting segmented data from apple GoogleFit users?
Figure:
iphone screenshot
Update.
Since (**) (see first post) did return data from the iphones, I went with a 1-hour "bucketByTime" solution for both androids and iphones.
If segments are important, it is possible to parse/filter the "bucketByTime" data into segments, as sketched below. However, activity type is not obtained with this POST.
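Roughly like this (a sketch, assuming a "segment" is just a run of consecutive non-empty buckets):

def buckets_to_segments(buckets, min_duration_ms=600000):
    # buckets: the "bucket" array from a bucketByTime aggregate response
    segments = []
    current = None
    for b in buckets:
        has_data = any(ds.get("point") for ds in b.get("dataset", []))
        start, end = int(b["startTimeMillis"]), int(b["endTimeMillis"])
        if has_data:
            if current is not None and current[1] == start:
                current[1] = end  # extend the running segment
            else:
                current = [start, end]
                segments.append(current)
        else:
            current = None
    # keep only segments of at least min_duration_ms (10 minutes here)
    return [s for s in segments if s[1] - s[0] >= min_duration_ms]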
New problems have arisen:
As mentioned, the (**) POST returned data, particularly when:
"startTimeMillis" = [start of 2022]
and
"endTimeMillis" = [now].
A cron executor is configured so that (**) is repeated once a day, where:
"startTimeMillis" = ["previous now"]
and
"endTimeMillis" = [now].
However, this does not return any data from the iphone users. To clarify, no daily data is received from the iphones.
Some performed check-ups:
The iphone users see data as normal on the front-end view of the GoogleFit app (see screenshot link in first post).
Attempted (**) with every "dataStreamId" and "name" available (returned from the "list post query" for available dataSources) - nothing is returned.
Asked the iphone users to check GoogleFit's permissions, in accordance with this support page. (I'll have to take their word that it's configured correctly)
Important finding:
For one iphone user, "incomplete"[2] data is occasionally returned. I imagine this is data recorded by his apple watch. I asked him to wear his watch today.
You'd figure that, when asking for e.g. merge_distance_delta, the app merges the available sources into a neat timeline. Questions at this level are not answered publicly (to my knowledge).
[2] The sum does not come close to matching the GoogleFit front-end results (as it does for the android users).

Counting the number of response codes in JMeter 4.0

I run some load tests (all endpoints) and we have a known issue in our code: if multiple POST requests are sent at the same time, we get a duplicate error based on a timestamp field in our database.
All I want to do is count the timeouts (based on the message received, "Service is not available. Request timeout") in a variable and accept this as normal behavior (don't fail the tests).
For now I've added a Response Assertion for this (in order to keep the tests running), but I cannot tell if or how many timeouts actually happen.
How can I count this?
Thank you
I would recommend doing this as follows:
Add JSR223 Listener to your Test Plan
Put the following code into "Script" area:
// prev is the SampleResult of the sampler that has just finished
if (prev.getResponseDataAsString().contains('Service is not available. Request timeout')) {
    // Relabel the sample so these results can be counted separately
    prev.setSampleLabel('False negative')
}
That's it - if a sampler's response body contains Service is not available. Request timeout, JMeter will change its title to False negative.
You can even mark it as passed by adding a prev.setSuccessful(true) line to your script. See the Apache Groovy - Why and How You Should Use It article for more information on what else you can do with Groovy in JMeter tests.
If you just need to find the count based on the response message, then you can save the performance results to a CSV file using the Simple Data Writer (configured for CSV only) and then filter the CSV on the response message to get the required count, for example as sketched below. Or you can use the Display only "Errors" option to get all the errors and then filter based on the expected error message.
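For instance, with the results saved as results.csv, something like this Python snippet would do the counting (assuming the message is stored in the standard responseMessage column):

import csv

with open("results.csv", newline="") as f:
    rows = list(csv.DictReader(f))

timeouts = sum("Service is not available. Request timeout" in (row.get("responseMessage") or "")
               for row in rows)
print(timeouts, "timeouts out of", len(rows), "samples")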
If you need to find the count at runtime, then you can use the Aggregate Report listener and its "Errors" checkbox to get the failure count, but this will include other failures as well.
But if you need the count at run time in order to use it later, that is a different case. I am assuming that is not the case here.
Thanks,

Does Import.io api support status of the extractor?

I've just created an extractor with import.io. This extractor uses chaining: first I extract some URLs from one page, and with these extracted URLs I extract detail pages. When the detail pages' extraction finishes, I want to get the results. But how can I be sure that the extraction is complete? Is there an API endpoint for checking the status of an extraction?
I found the "GET /store/connector/{id}" endpoint in the legacy docs, but when I try it I get a 404. You can take a look at the screenshot.
Another question: I want to schedule my extractor twice a day. Is this possible?
Thanks
Associated with each extractor are crawl runs. A crawl run represents one running of an extractor with a specific configuration (training, list of URLs, etc.). The state of a crawl run can have one of the following values:
STARTED => Currently running
CANCELLED => Started but cancelled by the user
FINISHED => Run was complete
Additional metadata that is included is as follows:
Started At - When the run started
Stopped At - When the run finished
Total URL Count - Total number of URLs in the run
Success URL Count - # of successful URLs queried
Failed URL Count - # of failed URLs queried
Row Count - Total number of rows returned in the run
The REST API call to get the list of crawl runs associated with an extractor is as follows:
curl -s -X GET "https://store.import.io/store/crawlrun/_search?_sort=_meta.creationTimestamp&_page=1&_perPage=30&extractorId=$EXTRACTOR_ID&_apikey=$IMPORT_IO_API_KEY"
where
$EXTRACTOR_ID - the extractor whose crawl runs to list
$IMPORT_IO_API_KEY - the import.io API key from your account
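If you want to block until the most recent run completes, here is a small Python sketch of polling that endpoint (the extractor id and API key are placeholders, and the response shape follows the store search format as I understand it):

import time
import requests

EXTRACTOR_ID = "my-extractor-id"  # placeholder
API_KEY = "my-import-io-api-key"  # placeholder

def latest_run_state():
    resp = requests.get(
        "https://store.import.io/store/crawlrun/_search",
        params={"_sort": "_meta.creationTimestamp", "_page": 1, "_perPage": 1,
                "extractorId": EXTRACTOR_ID, "_apikey": API_KEY})
    hits = resp.json()["hits"]["hits"]
    return hits[0]["fields"]["state"] if hits else None

# Poll once a minute until the latest crawl run reports FINISHED
while latest_run_state() != "FINISHED":
    time.sleep(60)
print("Extraction complete; results can be fetched now.")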