Get more than 100 results from the GitHub API

I want to develop an issue tracker similar to GitHub's.
For that I have been working with the API below:
https://api.github.com/repos/facebook/react/issues?per_page=100
But this API returns only 100 results per request, as per the docs.
Is there a way I can get all of the issues and not just 100? I could make multiple requests, but I don't think that is a feasible way of doing it.
The issue object itself contains the author, labels, and assignees, so I need all of the results at once.
Is there any way to do it?

No, there is no way to get all of the results without pagination. GitHub, like almost all major web sites, has a time limit on the amount of time a request can take. If you have a repository with, say, 150 000 issues, then any reasonable operation on all of those issues will take longer than the timeout. Therefore, it doesn't make sense for GitHub to allow you to disable pagination in this way because the request would invariably fail anyway.
Even if you use the GraphQL API, you still get a limited number of results. If you want to fetch all of the issues, you'll need to make multiple requests.
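For what it's worth, paging through the REST endpoint is straightforward. Here is a minimal sketch, assuming Node 18+ so the global fetch is available; an Authorization header is advisable in practice because the unauthenticated rate limit is low:

// Sketch: collect all issues by walking the paginated REST endpoint page by page.
// Assumes Node 18+ (global fetch); add an Authorization header to raise the rate limit.
async function fetchAllIssues(owner: string, repo: string): Promise<any[]> {
  const perPage = 100;
  const issues: any[] = [];
  for (let page = 1; ; page++) {
    const url = `https://api.github.com/repos/${owner}/${repo}/issues?per_page=${perPage}&page=${page}`;
    const res = await fetch(url, { headers: { Accept: "application/vnd.github+json" } });
    if (!res.ok) throw new Error(`GitHub API returned ${res.status}`);
    const batch = (await res.json()) as any[];
    issues.push(...batch);
    if (batch.length < perPage) break; // a short page means we've reached the end
  }
  return issues;
}

// Usage:
// fetchAllIssues("facebook", "react").then(all => console.log(`${all.length} issues fetched`));

Since every issue object already carries its author, labels, and assignees, collecting the pages into one array gives you the "all results at once" view, just assembled on your side rather than GitHub's.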

Related

Should Vue filtering use the REST API or URL parameters?

I'm designing a website with a REST API using Django Rest Framework and Vue for the front end and I'm trying to work out what the proper way to do filtering is.
As far as I can see, I can either:
a) Allow filtering via the API by using URL parameters like /?foo=bar
or
b) Do all the filtering on the Vue side by only displaying the returned items that have foo=bar
Are there any strong reasons for doing one over the other?
The real answer to this question is "it depends".
Here are a few questions to ask yourself to help determine what the best approach is:
How much data will be returned if I don't filter at the API level?
If you're returning just a few records, there won't be a noticeable performance hit when the query runs. If you're returning thousands, you'll likely want to consider server side querying/paging.
If you're building an application where the amount of data will grow over time, it's best to build the server side querying from the get-go.
What do I want the front-end experience to be like?
For API calls that return small amounts of data, the user experience will be much more responsive if you return all records up front and do client-side filtering. That way if users change filters or click through paged data, the UI can update almost instantaneously.
Will any other applications be consuming my API?
If you plan to build other apps that consume the API, you may want to build the filtering at the API level so you don't need to recreate front-end filtering logic in every consuming application.
Hopefully these questions can help guide you to the best answer for your use case.
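To make option (a) concrete, here is a minimal sketch of the call from the Vue side, assuming a hypothetical /api/items/ endpoint whose Django REST Framework view filters on the foo query parameter (both names are placeholders, not part of the question):

// Sketch of API-level filtering (option a): the server receives ?foo=bar and returns
// only the matching records. The /api/items/ path and the foo parameter are placeholders.
async function fetchFilteredItems(foo: string): Promise<any[]> {
  const params = new URLSearchParams({ foo });
  const res = await fetch(`/api/items/?${params.toString()}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}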
Whenever I come across this issue I ask myself just one question: how many items am I working with? If you're only returning a few items from the API, you can easily do the filtering on the front-end side and save yourself a bunch of requests whenever the results are filtered. Also, if the result set is quite small, it's a lot faster to do it this way rather than sending off a request every time the filters change.
However, if you're working with a very large number of items, it's probably best to just filter them out in the API, or even via your database query if that's what you're working with. This will save you from returning a large number of results to the front-end. Also, filtering large numbers of items on the front-end can significantly impact performance since it usually involves looping over a collection.
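For the small-result-set case, a minimal sketch of option (b), filtering client-side over a list that has already been loaded (in Vue this would typically sit in a computed property so it re-runs when the filter changes); the Item shape is made up for the example:

// Sketch of client-side filtering (option b) over an already-loaded result set.
interface Item {
  foo: string;
  [key: string]: unknown;
}

function filterItems(items: Item[], foo: string): Item[] {
  return items.filter(item => item.foo === foo);
}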

How to fetch results from an offset when the API doesn't support offset (HERE Maps API)

I have a search functionality that gets data from HERE API's Search endpoint. I maintain records of each search's results so I can add metadata that I need for my own purposes and also so I can provide results without always going back to HERE API.

The problem I have is with paginating, specifically with providing a starting index when fetching results from HERE. Similar to how Algolia does it, I want to be able to search for a term and begin with the results at a certain index, the offset. HERE API apparently doesn't allow this at all. The closest it comes to such a feature is that it provides the URL for the next search, as described here. This is limited because it doesn't allow me to start the search results at a particular index that I specify.

So essentially I want to know if there's a "standard" way of getting such functionality even when it's not provided by the API.
My own solution
The HERE API provides a size parameter that allows specifying the total number of results that I want, so I can specify a larger size than I need, and basically use code to start the results from my desired index. But this feels a bit hacky, and I wonder if there's a better/more established way of doing this.
Happy to listen to any ideas! Thanks. :)
Such an 'offset' for starting the paging after a specific number of results is indeed not supported by the Places API itself.
You have to set up a workaround within your application.
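A sketch of the over-fetch-and-slice workaround described in the question, where searchHere stands in for whatever call you already make against the HERE Search endpoint (only the size idea comes from the question; every name below is a placeholder):

// Sketch: emulate an offset by requesting a larger result set and slicing locally.
// searchHere is a placeholder for your existing HERE API client call that accepts a size.
async function searchWithOffset(
  searchHere: (term: string, size: number) => Promise<any[]>,
  term: string,
  offset: number,
  limit: number
): Promise<any[]> {
  // Ask for enough results to cover the offset plus the page we actually want.
  const results = await searchHere(term, offset + limit);
  return results.slice(offset, offset + limit);
}

It is still the same "hacky" idea, just isolated behind one function so the rest of the application can pretend the API supports an offset.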

Relational Database: Best practice regarding amount of queries per API call

I am working with React Native/Redux and an express/postgresql backend.
I have an API call to create a comment that returns the created comment as well as some of the information concerning the user.
Those are two different API calls.
Now I also need the updated comment count to send to the feed reducer, so that the count is still correct when they close the comment tab.
I was wondering whether it is still okay to have three queries in one API call and, more generally, whether there might be a better solution if that isn't the case.
Kind regards
It's perfectly fine to do multiple queries in a single API call. You can do as many as are required to complete a single coherent interaction between the server and client.
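As a hypothetical sketch with Express and node-postgres, the three queries can even share one transaction so the returned count is consistent with the freshly inserted comment; all table and column names below are made up, not the asker's actual schema:

import express from "express";
import { Pool } from "pg";

const app = express();
const pool = new Pool(); // connection settings come from the usual PG* env vars
app.use(express.json());

// One API call, three queries: insert the comment, load the author, recount the post's comments.
app.post("/posts/:postId/comments", async (req, res) => {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    const inserted = await client.query(
      "INSERT INTO comments (post_id, user_id, body) VALUES ($1, $2, $3) RETURNING *",
      [req.params.postId, req.body.userId, req.body.body]
    );
    const author = await client.query(
      "SELECT id, username, avatar_url FROM users WHERE id = $1",
      [req.body.userId]
    );
    const count = await client.query(
      "SELECT COUNT(*)::int AS comment_count FROM comments WHERE post_id = $1",
      [req.params.postId]
    );
    await client.query("COMMIT");
    res.json({
      comment: inserted.rows[0],
      user: author.rows[0],
      commentCount: count.rows[0].comment_count,
    });
  } catch (err) {
    await client.query("ROLLBACK");
    console.error(err);
    res.status(500).json({ error: "failed to create comment" });
  } finally {
    client.release();
  }
});

// app.listen(...) and error middleware omitted for brevity.

Bundling the queries this way keeps the client to a single round trip, which is usually cheaper than three separate API calls from React Native.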

How to get reposted tracks from my activities?

Using the SoundCloud API, I'd like to retrieve the reposted tracks from my activities. The /me/activities endpoint seems suited for this, and I tried the different types provided.
However, I didn't find out how to get that data. Does anyone know?
Replace the user ID, limit, and offset with what you need:
https://api-v2.soundcloud.com/profile/soundcloud:users:41691970?limit=50&offset=0
You could try the following approach (sketched in code further below):
Get the users that shared a track via the /tracks/{id}/shared-to/users endpoint.
Fetch the tracks posted by this user via the /tracks endpoint, since the user_id is included.
Compare the tracks' metadata with the one you originally posted.
I am not deeply familiar with the SoundCloud API, but taking a close look at it, this approach seems at least technically possible, though fetching all tracks won't be a production solution, of course. It's more of a hint. Perhaps reposting means something totally different in this context.
Also, the specified endpoint exists in the general API doc, so I don't know whether you would have to extend the java-api-wrapper to use it.
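A very rough sketch of that approach in code, with the caveat that the endpoints, response fields, and comparison criterion are taken straight from the suggestion above and are unverified assumptions about the SoundCloud API:

// Unverified sketch: endpoints and fields follow the suggested approach above and
// may not exist in this exact form in the real SoundCloud API.
const SOUNDCLOUD_API = "https://api.soundcloud.com";

async function findRepostsOfTrack(trackId: number, token: string): Promise<any[]> {
  const headers = { Authorization: `OAuth ${token}` };

  // The original track, so its metadata can be compared later.
  const original = await (await fetch(`${SOUNDCLOUD_API}/tracks/${trackId}`, { headers })).json();

  // Step 1: users the track was shared to.
  const users = (await (
    await fetch(`${SOUNDCLOUD_API}/tracks/${trackId}/shared-to/users`, { headers })
  ).json()) as any[];

  const reposts: any[] = [];
  for (const user of users) {
    // Step 2: tracks from the /tracks endpoint, narrowed down by the user_id they contain
    // (as noted above, fetching everything is not a production-ready approach).
    const tracks = (await (await fetch(`${SOUNDCLOUD_API}/tracks`, { headers })).json()) as any[];
    const userTracks = tracks.filter(t => t.user_id === user.id);

    // Step 3: compare metadata with the original track; the exact criterion is left open.
    reposts.push(...userTracks.filter(t => t.title === original.title));
  }
  return reposts;
}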

Why isn't an update reflected right away when posting changes to a Rally story via the REST API?

I've noticed when retrieving a story/defect after first updating it, sometimes the retrieve response returns the field values as if the update never happened. Retrying the retrieve after a short delay (~500ms) returns the updated field values as expected. Is this a known behaviour? Is there any way of avoiding this?
I'm using the Rally API 2.0 - https://rally1.rallydev.com/slm/webservice/v2.0/
The update is being performed using this URI:
POST /slm/webservice/v2.0/Defect/14173461229?key=<key> HTTP/1.1
I'm retrieving the story after update as follows:
GET /slm/webservice/v2.0/artifact?query=(ObjectId%20=%2014173461229)&start=1&pagesize=20&fetch=true HTTP/1.1
What is your integration doing that it needs to re-poll the artifact within less than a second of POSTing an update? Is there a second process doing polling that is revealing the latency of the updates? Does your integration run multiple threads? Does the response time vary at all depending on time of day, etc.? There are any number of factors that could be at play here, but 500 ms doesn't seem like an unreasonable refresh rate given factors such as latency over HTTP/S as well as server-side database and cache updates. That said, for an in-depth look you may wish to inquire with Rally Support (rallysupport@rallydev.com), as they have tools that can help evaluate server-side response time corresponding to requests by a specific UserID.
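If the integration genuinely needs to read its own write back immediately, one workaround is a short retry loop around the GET rather than trusting the first response. This sketch reuses the artifact query from the question, leaves authentication out for brevity, and assumes the standard QueryResult/Results response envelope plus a checkUpdated callback you supply:

// Sketch: poll the artifact a few times, with a short delay, until the update is visible.
// Authentication (API key header or session) is omitted; checkUpdated is supplied by the
// caller, e.g. comparing a field against the value that was just POSTed.
const RALLY_BASE = "https://rally1.rallydev.com/slm/webservice/v2.0";

async function getArtifactWhenUpdated(
  objectId: number,
  checkUpdated: (artifact: any) => boolean,
  retries = 5,
  delayMs = 500
): Promise<any> {
  const url =
    `${RALLY_BASE}/artifact?query=(ObjectId%20=%20${objectId})&start=1&pagesize=20&fetch=true`;
  for (let attempt = 0; attempt < retries; attempt++) {
    const res = await fetch(url);
    const body = await res.json();
    const artifact = body.QueryResult?.Results?.[0];
    if (artifact && checkUpdated(artifact)) return artifact;
    await new Promise(resolve => setTimeout(resolve, delayMs)); // wait before the next attempt
  }
  throw new Error("Artifact never reflected the update within the retry window");
}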