API Cursor-Based Pagination with curl

Currently I'm working on a project for university. The goal is to make API calls from the command line (curl) and transfer the data into a BI tool. I don't have any experience with coding/scripting and so on. The problem I currently have is that a single API call brings back 100 resources, but I need all of the data to visualize it.
curl -X GET "https://(url)" -H "accept: application/json" -H "authorization: Basic (Username:Password)" --proxy () --output () --output-dir ()
So my goal here is to create a loop so that all of the data gets saved in one file.
Any ideas?
Thank you!

It depends on the API: either the documentation explains how to get all the pages, or there are Range headers, or there are Link headers or hyperlinks attached to the response body pointing to the next/previous pages, or there is a MIME type (e.g. JSON-LD, HTML, XML) you can request with the Accept header which contains such hyperlinks. Without knowing the exact API it is not possible to tell which implementation the API developers chose. Maybe they were lazy and implemented none of them.
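If the API does send Link headers, a minimal sketch of such a loop could look like this (the URL, credentials, and header format are assumptions; adjust them to whatever your API actually returns):
#!/usr/bin/env bash
# Minimal sketch: follow rel="next" Link headers until none is returned.
# The URL and credentials below are placeholders, not a real endpoint.
URL='https://api.example.com/resources?limit=100'
USERNAME='<username>'
PASSWORD='<password>'
: > all_data.json                    # start with an empty output file
while [ -n "${URL}" ]; do
  # -D writes the response headers to a file so the Link header can be inspected
  curl --silent -u "${USERNAME}:${PASSWORD}" -H "Accept: application/json" \
       -D headers.txt "${URL}" >> all_data.json
  # Pull the rel="next" target out of the Link header, if there is one
  URL=$(grep -i '^link:' headers.txt | grep -o '<[^>]*>; *rel="next"' | sed 's/^<//;s/>.*//')
done
Each iteration appends one page of results to all_data.json; the loop ends when the API stops advertising a next page.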

Related

Jira -- How to get issue changelog via REST API - but ALL, not single issue

I've seen this question many times, but no sufficient answer.
We're trying to dump all JIRA data into our data warehouse / BI system. Or at least, the interesting parts.
One thing you can do is track status times, cycle time, and lead time directly from field durations. This is very easy via JIRA's underlying SQL database, using the changeItem and changeGroup tables.
Of course the REST JSON API has less performance impact on the database.
However ... there appears to be no equivalent in the rest API of fetching ALL issue change history. Yes, you can fetch the changelog of one issue directly via an API call. If you have 100k issues, are you expected to make 100k API calls, iterating through issue IDs? Sounds like madness.
Is it somehow possible to expand changelogs through the search API, which amasses all issue data? I haven't seen it. Is what I'm seeking here possible? Or will we have to stick to the SQL route?
I think you are asking pretty much the same question as before (How can I fetch (via GET) all JIRA issues? Do I go to the Search node?), but are additionally interested in getting changelog data.
Yes, again you have to do it in batches, requesting the JIRA API several times.
Here is a little bash script which could help you do that:
#!/usr/bin/env bash
LDAP_USERNAME='<username>'
LDAP_PASSWORD='<password>'
JIRA_URL='https://jira.example.com/rest/api/2/search?'
JQL_QUERY='project=FOOBAR'
START_AT=0
MAX_RESULTS=50
TOTAL=$(curl --silent -u "${LDAP_USERNAME}:${LDAP_PASSWORD}" -X GET -H "Content-Type: application/json" "${JIRA_URL}maxResults=0&jql=${JQL_QUERY}" | jq '.total')
echo "Query would export ${TOTAL} issues."
while [ ${START_AT} -lt ${TOTAL} ]; do
  echo "Exporting from ${START_AT} to $((START_AT + MAX_RESULTS))"
  curl --silent -u "${LDAP_USERNAME}:${LDAP_PASSWORD}" -X GET -H "Content-Type: application/json" "${JIRA_URL}maxResults=${MAX_RESULTS}&startAt=${START_AT}&jql=${JQL_QUERY}&expand=changelog" | jq -c '.issues[]' >> issues.json
  START_AT=$((START_AT + MAX_RESULTS))
done
Please note the expand parameter, which additionally puts the full changelog into the JSON dump. Alternatively you can use the issue dumper Python solution: implement the callback to store data to a DB and you're done.
Another service worth considering, especially if you need a feed-like list of changes:
/plugins/servlet/streams?maxResults=99&issues=activity+IS+issue%3Aupdate&providers=issues
This returns a feed of the last changes to issues in XML format for some criteria, such as users. Actually, you may play around with the "Activity Stream" gadget on a Dashboard to see how it works.
The service has a limit of 99 changes at once, but there is paging (see the "Show More..." button).
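A minimal sketch of calling that servlet with curl, reusing the credentials from the script above (the host is a placeholder):
# Sketch: fetch the latest 99 issue updates as an XML activity feed
curl --silent -u "${LDAP_USERNAME}:${LDAP_PASSWORD}" \
  "https://jira.example.com/plugins/servlet/streams?maxResults=99&issues=activity+IS+issue%3Aupdate&providers=issues"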

Getting file diff with Github API

I'm working on a .NET project for which I need to detect and parse changes made to a specific single text file in a repository between different pull requests.
I've been able to access the pull requests and the commits using the GitHub API, but how do I retrieve the lines that changed in the last commit?
Is this possible using the API? What would be the best approach?
If not should I try to read the last two file versions and implement a differ algorithm locally?
Thanks!
A pull request contains a diff_url entry like
"diff_url": "https://github.com/octocat/Hello-World/pull/1347.diff"
You can do this for any commit. For example, to get the diff of commit 7fd1a60b01f91b3 on octocat's Hello-World it's https://github.com/octocat/Hello-World/commit/7fd1a60b01f91b314f59955a4e4d4e80d8edf11d.diff.
This also works for branches. Here's master on octocat's Hello-World. https://github.com/octocat/Hello-World/commit/master.diff.
The general form is:
https://github.com/<owner>/<repo>/commit/<commit>.diff
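For example, a quick sketch of downloading one of those diffs with curl (using the master example above):
# Sketch: save the diff for a branch or commit to a local file; -L follows redirects
curl -sL "https://github.com/octocat/Hello-World/commit/master.diff" -o master.diff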
For private repositories:
curl -H "Accept: application/vnd.github.v3.diff" https://<personal access token>:x-oauth-basic@api.github.com/repos/<org>/<repo>/pulls/<pull request>
Also works with the normal cURL -u parameter.
See: https://docs.github.com/en/rest/reference/pulls#get-a-pull-request
The crux is in the requested media type. You can pass the Accept header with value
application/vnd.github.diff
(see the documentation). For full reference, a GET request to https://api.github.com/repos/{orgName}/{repoName}/pulls/{prId} with the above Accept header and an Authorization header does the trick.
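Putting that together, a minimal sketch of such a request (the token placeholder and the {orgName}/{repoName}/{prId} coordinates are the ones from above):
# Sketch: fetch a pull request as a unified diff via the media type
curl --silent \
  -H "Accept: application/vnd.github.diff" \
  -H "Authorization: Bearer <personal access token>" \
  "https://api.github.com/repos/{orgName}/{repoName}/pulls/{prId}"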

API to GET the confluence page content via page name?

I want to curl an API to get the contents of a Confluence page using the page name. I have an API to get the page details via page ID:
curl -u <userid>:<password> -X GET confluence-url/confluence/rest/prototype/1/content/<pageid>
But I want an API to get the same via page name. Is that possible?
Check out the Confluence REST API examples.
Maybe you are looking for something like this:
curl -u admin:admin -X GET "http://localhost:8080/confluence/rest/api/content?title=myPage%20Title&spaceKey=TST&expand=body.storage"
EDIT:
"title" is the name of the page
"spaceKey" is the key for your space. Confluence is organized in spaces. You can read more about spaces here
Expansions are documented here. expand specifies which elements should be expanded in the response; as stated in the documentation:
If your GET returns a list of results and you don't choose to expand anything, the response is terse, displaying only a basic representation of the resource. It will, however, include a list of the expandable items for the resource.
So if you want the response to include the content of your page, you need to expand "body.storage". If you want to expand multiple things, you can separate them with commas.
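As a minimal sketch, you could then pull just the page body out of that response with jq (assuming the results-array layout shown in the documentation):
# Sketch: fetch a page by title and print only its storage-format body
curl --silent -u admin:admin \
  "http://localhost:8080/confluence/rest/api/content?title=myPage%20Title&spaceKey=TST&expand=body.storage" \
  | jq -r '.results[0].body.storage.value'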

How to get application uptime report with New Relic API?

I need to get a weekly report of my applications' uptime.
The metric is available on the "SLA Report" screen (http://awesomescreenshot.com/0f02fsy34e), but I can't find a way to get it programmatically.
Other SLA metrics are available using the API: http://docs.newrelic.com/docs/features/sla-report-examples#metrics
The uptime information is not considered a metric, so it is not available via the REST API. If you like, you may contact support.newrelic.com to request this as a new feature.
Even though there is no direct API request for getting uptime from New Relic, you can run NRQL queries inside curl.
curl -X POST -H "Accept: application/xml" -H "X-Query-Key: your_query_api_key" -d "nrql=SELECT+percentage(count(*)%2c+WHERE+result+%3d+'SUCCESS')+FROM+SyntheticCheck+SINCE+1+week+ago+WHERE+monitorName+%3d+'your+monitor+name'+FACET+dateOf(timestamp)" "https://insights-api.newrelic.com/v1/accounts/your_account_id/query"
The above curl command will give uptime percentages broken down by day of the week. If you don't know how to URL-encode your NRQL query, try http://string-functions.com/urlencode.aspx. Hope this helps.
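Decoded, the NRQL inside that request reads as follows (the monitor name is the placeholder from above):
SELECT percentage(count(*), WHERE result = 'SUCCESS')
FROM SyntheticCheck
SINCE 1 week ago
WHERE monitorName = 'your monitor name'
FACET dateOf(timestamp)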
I figured out that the following gives CSV-formatted text content and could be useful for extracting data (monthly), but beware: it will not ask you for credentials on the tool / command line you use. Since I export the monthly metrics from my machine, it's simple for me to make it work, and it really gives me much more flexibility.
https://rpm.newrelic.com/optimize/sla_report/run?account_id=<account_id>&application_id=<app_id>&format=csv&interval=months

Retrieving multiple articles & images via the new Freebase API

I want to get the article text & images for a bunch of topics from freebase. Using the old API this was easy, via either MQL extensions or the topic API (also now deprecated?). But what is now the best way of doing this via the new API?
I see from the docs I can get the text for an individual topic, like this:
https://www.googleapis.com/freebase/v1/text/en/bob_dylan
So I could loop through each topic one by one, but it seems slow to have to hit the API so many times, especially when I only needed one before. Am I missing some clever way of retrieving text / images for multiple topics?
Cheers,
Ben
It is possible to batch multiple calls to /text using JSON-RPC (http://en.wikipedia.org/wiki/JSON-RPC).
Here's an example:
curl "https://www.googleapis.com/rpc" -d "[{'method': 'freebase.text.get', 'apiVersion': 'v1', 'params': {'id': ['en','bob_dylan']}},{'method': 'freebase.text.get', 'apiVersion': 'v1', 'params': {'id': ['en','blade_runner']}}]" -H "Content-Type: application/json"
We are working on improving our documentation for doing this, but this should get you going.
The name of the method you want to call is freebase.text.get and the rest of the parameters are documented here:
http://wiki.freebase.com/wiki/ApiText#Parameters
You can pass the id using an "id" parameter.
What exactly are you looking for regarding images? How would you expect to get back multiple pieces of binary content?