My company uses Power BI, and we are trying to get incident data from the PagerDuty API.
I have been struggling to find a way to get all the data, but I am not very knowledgeable about Power BI, Power Query, or working with APIs; I am in a database role.
I used the Get Data option in Power BI and selected Web as the data source, entered my API URL, and passed the headers. The result I get is limited to 25 records by default; I can raise that to 100, but my main issue is not being able to get the next set of results.
I need some kind of loop that increases the offset parameter and refetches the query.
My current query in Power BI looks like this:
= Json.Document(Web.Contents("https://api.pagerduty.com/incidents",
    [Headers=[Accept="application/vnd.pagerduty+json;version=2",
              #"Content-Type"="application/json",
              Authorization="Token token=MY API KEY"]]))
It returns a record. I can expand the list inside it and get the data I need, but I only see 25 results right now.
You'll need a way to keep updating an offset parameter in the URL. For example, if you've set the limit to 25, you'd set offset to 25 to get the next page, 50 for the page after that, and so on, like so:
https://api.pagerduty.com/incidents?offset=25
I'm not sure how you'd accomplish that in your tool, though! Here's a link to the PagerDuty documentation on pagination in case you need extra details.
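In case a concrete sketch helps, here is the same loop outside of Power BI: a minimal TypeScript example, assuming Node 18+ for the global fetch. fetchAllIncidents is an illustrative name, and the more flag is the end-of-pages indicator that PagerDuty's pagination docs describe.

// Minimal sketch: page through incidents with limit/offset until `more` is false.
async function fetchAllIncidents(apiKey: string): Promise<unknown[]> {
  const limit = 100;                 // maximum page size
  let offset = 0;
  const incidents: unknown[] = [];
  for (;;) {
    const res = await fetch(
      `https://api.pagerduty.com/incidents?limit=${limit}&offset=${offset}`,
      {
        headers: {
          Accept: "application/vnd.pagerduty+json;version=2",
          Authorization: `Token token=${apiKey}`,
        },
      }
    );
    const body = await res.json();
    incidents.push(...body.incidents);
    if (!body.more) break;           // no further pages
    offset += limit;                 // advance to the next page
  }
  return incidents;
}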
Good day everyone,
We are trying to use the integrations of Google's Apigee service to retrieve all the rows in a BigQuery table that have a certain value in a field.
This operation is quite easy, but problems arise when the result has more than 200 rows.
The problem is that when I use the integration to connect to BigQuery, no listEntitiesPageToken value is returned, and no listEntitiesNextPageToken value either,
so I can't figure out how to navigate the result pages.
Has anyone had the same problem? What do you suggest?
The tutorial at "https://cloud.google.com/apigee/docs/api-platform/integration/connectors-task#configure-the-connectors-task" says: "For example, if you are expecting 1000 records in your result set, you can set the listEntitiesPageSize to 100. So when the Connectors task runs for the first time, it returns the first 100 records, the next 100 records in the second run and so on."
And there is a tip: "Use the listEntitiesPageSize parameter in conjunction with the listEntitiesPageToken parameter to navigate through the pages."
I used the tutorial to understand how to use the loop task, and I understood that I should create a "sub-integration" that is called by a "main integration" for each element present in a list/array.
But what can I do, since these tokens are empty?
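For reference, the token-based paging pattern those parameters describe generally looks like the sketch below (TypeScript; the listEntities client function and the Page shape are hypothetical stand-ins for the connector call):

// Generic token-based paging loop: feed each page's nextPageToken back in
// until no token is returned.
interface Page<T> {
  entities: T[];
  nextPageToken?: string;            // absent on the last page
}

async function listAll<T>(
  listEntities: (pageSize: number, pageToken?: string) => Promise<Page<T>>
): Promise<T[]> {
  const all: T[] = [];
  let token: string | undefined;
  do {
    const page = await listEntities(100, token);  // pageSize = 100
    all.push(...page.entities);
    token = page.nextPageToken;
  } while (token);
  return all;
}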
I am trying to fill in the Value box inside the Pagination rules explained in this article, which was published recently (on May 25th, 2021).
My Request URL is this one:
So, based on my URL's data, I would like to know how to insert all 15,315 rows instead of only 500.
I am new to REST APIs, and I guess I was looking for an indicator that points to the next set of records.
Currently, when I run Azure Data Factory V2 without pagination rules, it only inserts 500 rows of data into the Azure database.
This is the solution I got from my issue:
https://learn.microsoft.com/en-us/answers/questions/468561/azure-data-factory-pagination-issue.html#answer-470345
I'm facing a problem in Splunk: if I choose the current session (2020) from the filter, then I should get the data of the previous session (2019).
I wrote a Splunk query like this:
index="entab_due" Session=2019 ClassName="* *"
| eval n=(tonumber(Session)-1)
| where totalBalance > 0 and Session = n
But I didn't get any results.
Problem: get the data of the previous session after selecting a session from the filter.
Please help me find a solution.
If two different panels in your dashboard need different data, then they probably should use different searches. Or use a base search that gathers the data needed for both, and use post-processing to filter the data needed by each panel.
I have a web application in which I get data from my database and show it in a datatable. I am facing an issue doing this, as the data I am fetching has too many rows (200,000). So when I run a query like select * from table_name; my application gets stuck.
Is there a way to handle this problem with JavaScript?
I tried pagination, but I cannot figure out how I would do that, as the datatable creates pagination only for already-rendered data.
Is there a way through which I can run my query with pagination at the backend?
I came across the same problem when working with MongoDB and AngularJS. I used server-side paging. Since you have a huge number of records, you can try the same approach.
Assume you are displaying 25 records per page.
Backend:
Get the total count of the records using a COUNT query.
Use select * from table_name LIMIT 25 OFFSET ${req.query.pageNumber*25} to query a limited set of records based on the page number.
Frontend:
Instead of using the datatable, display the data in a plain HTML table.
Define buttons for the next page and the previous page.
Define a global variable for pageNumber in the controller/JS file.
Increment pageNumber by 1 when the next-page button is clicked and decrement it by 1 when the previous-page button is pressed.
Use the result of the COUNT query to put an upper limit on the pageNumber variable (if there are 200 records, the limit will be 200/25 = 8 pages).
So basically, select * from table_name LIMIT 25 OFFSET ${req.query.pageNumber*25} will limit the number of records to 25. When req.query.pageNumber=1, it skips the first 25 records and sends the next 25; similarly, if req.query.pageNumber=2, it skips the first 2*25 records and sends records 51-75.
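If it helps, here is a minimal backend sketch of that approach in TypeScript with Express, assuming the mysql2/promise driver; the route, table name, and connection settings are placeholders.

// Server-side paging endpoint: LIMIT/OFFSET derived from req.query.pageNumber.
import express from "express";
import mysql from "mysql2/promise";

const app = express();
const pool = mysql.createPool({ host: "localhost", user: "app", database: "mydb" });
const PAGE_SIZE = 25;

app.get("/records", async (req, res) => {
  const pageNumber = Number(req.query.pageNumber) || 0;  // zero-based page index
  // Parameterized LIMIT/OFFSET instead of interpolating into the SQL string.
  const [rows] = await pool.query(
    "SELECT * FROM table_name LIMIT ? OFFSET ?",
    [PAGE_SIZE, pageNumber * PAGE_SIZE]
  );
  const [countRows] = await pool.query("SELECT COUNT(*) AS total FROM table_name");
  const total = (countRows as any)[0].total;             // row count for the page limit
  res.json({ rows, totalPages: Math.ceil(total / PAGE_SIZE) });
});

app.listen(3000);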
There are two ways to handle this.
First way - handle paging on the client side:
Get all the data from the database and apply custom paging.
Second way - handle paging on the server side:
Each time, call the database and get records according to the page size.
You can use the LIMIT and OFFSET clauses for pagination in MySQL. I understand that 2 lakh (200,000) rows at a time makes performance slower. But you mention that you have to use JS for that, so to be clear: if you want to do this in frontend JS alone, it is not going to help you. Since you mention that you have a web application, if that application runs on Node (as the server), then I can suggest an approach that can help you a lot.
Use two variables, named var_pageNo and var_limit. Now use a raw MySQL query like:
select * from <tbl_name> LIMIT var_limit OFFSET (var_pageNo * var_limit);
Write your code around this query, replacing the variables with your desired values. This will improve performance and fetch the data according to your specified limit.
Hope this is helpful.
I am trying to get information about files in a folder using https://apis.live.net/v5.0/folderid/files?
This particular folder of mine has around 5,200 files, so I am getting a read timeout when I make the above-mentioned request. Is there any restriction on the number of files for which I can make the request?
Note: I am able to successfully retrieve the file information from the folder if I restrict the file count to 500, e.g. https://apis.live.net/v5.0/folderid/files?limit=500
In general, it's good to page queries that could potentially return a large number of results. You could try using the limit query parameter in combination with the offset query parameter to read sets of the children at a time and see if that works better for you.
I'll quote the relevant information from the documentation for ease of reference:
Specify the first item to get by setting the offset parameter in the preceding code to the index of the first item that you want to get. For example, to get two items starting with the third item, use FOLDER_ID/files?limit=2&offset=3.
Note In the JavaScript Object Notation (JSON)-formatted object that's returned, you can look in the paging object for the previous and next structures, if they apply, to get the offset and limit parameter values of the previous and next entries, if they exist.
You may also want to consider swapping to the new API, which has its own paging model (using the next links).
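For what it's worth, following those next links tends to look like the sketch below: a hedged TypeScript example, assuming the v5.0 response wraps results in a data array and exposes a paging.next URL as the quoted docs describe; the access token and folder id are placeholders.

// Walk a folder's files by following paging.next until it is absent.
async function listAllFiles(folderId: string, accessToken: string): Promise<unknown[]> {
  const files: unknown[] = [];
  let url: string | undefined =
    `https://apis.live.net/v5.0/${folderId}/files?limit=500`;
  while (url) {
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${accessToken}` },
    });
    const body = await res.json();
    files.push(...body.data);        // v5.0 wraps results in a `data` array
    url = body.paging?.next;         // absent once the last page is reached
  }
  return files;
}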