I am working with React Native/Redux and an express/postgresql backend.
I have an API call to create a comment that returns the created comment as well as some information about the user.
Those are currently two separate queries within that one call.
Now I also need the updated comment count to send to the feed reducer, so that the count is still correct when the user closes the comment tab.
I was wondering whether it is okay to have three queries in one API call, and, more generally, whether there is a better solution if it isn't.
Kind regards
It's perfectly fine to do multiple queries in a single API call. You can do as many as are required to complete a single coherent interaction between the server and client.
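For example, here is a minimal sketch of what that could look like in an Express handler using node-postgres, wrapped in a transaction so the count matches the insert; all table and column names here are assumptions:

const express = require('express');
const { Pool } = require('pg');

const app = express();
app.use(express.json());
const pool = new Pool(); // connection settings come from env vars

app.post('/api/posts/:postId/comments', async (req, res) => {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    // 1. Create the comment.
    const { rows: [comment] } = await client.query(
      'INSERT INTO comments (post_id, user_id, body) VALUES ($1, $2, $3) RETURNING *',
      [req.params.postId, req.body.userId, req.body.body]
    );
    // 2. Fetch the commenting user's public info.
    const { rows: [user] } = await client.query(
      'SELECT id, username, avatar_url FROM users WHERE id = $1',
      [req.body.userId]
    );
    // 3. Get the updated comment count for the feed reducer.
    const { rows: [{ count }] } = await client.query(
      'SELECT COUNT(*) AS count FROM comments WHERE post_id = $1',
      [req.params.postId]
    );
    await client.query('COMMIT');
    res.status(201).json({ comment, user, commentCount: Number(count) });
  } catch (err) {
    await client.query('ROLLBACK');
    res.status(500).json({ error: 'Could not create comment' });
  } finally {
    client.release();
  }
});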
I'm designing a website with a REST API using Django Rest Framework and Vue for the front end and I'm trying to work out what the proper way to do filtering is.
As far as I can see, I can either:
a) Allow filtering via the API by using URL parameters like /?foo=bar
or
b) Do all the filtering on the Vue side by only displaying the returned items that have foo=bar
Are there any strong reasons for doing one over the other?
The real answer to this question is "it depends".
Here are a few questions to ask yourself to help determine what the best approach is:
How much data will be returned if I don't filter at the API level?
If you're returning just a few records, there won't be a noticeable performance hit when the query runs. If you're returning thousands, you'll likely want to consider server side querying/paging.
If you're building an application where the amount of data will grow over time, it's best to build the server side querying from the get-go.
What do I want the front-end experience to be like?
For API calls that return small amounts of data, the user experience will be much more responsive if you return all records up front and do client-side filtering. That way if users change filters or click through paged data, the UI can update almost instantaneously.
Will any other applications be consuming my API?
If you plan to build other apps that consume the API, you may want to build the filtering at the API level so you don't need to recreate front-end filtering logic in every consuming application.
Hopefully these questions can help guide you to the best answer for your use case.
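For illustration, here is a sketch of what option (a) might look like from the front end, with the endpoint and parameter names being assumptions:

// Option (a): the API filters and pages.
async function fetchItems({ foo, page = 1, pageSize = 25 }) {
  const params = new URLSearchParams({ foo, page, page_size: pageSize });
  const res = await fetch(`/api/items/?${params}`);
  return res.json(); // only the matching page comes over the wire
}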
Whenever I come across this issue I ask myself just one question: how many items am I working with? If you're only returning a few items from the API, you can easily do the filtering on the front end and save yourself a bunch of requests whenever the results are filtered. If the result set is quite small, it's also a lot faster this way than sending off a request every time the filters change.
However, if you're working with a very large number of items, it's probably best to just filter them out in the API, or even via your database query if that's what you're working with. This will save you from returning a large number of results to the front-end. Also, filtering large numbers of items on the front-end can significantly impact performance since it usually involves looping over a collection.
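For completeness, here is a sketch of option (b), client-side filtering; again, the endpoint and field names are assumptions:

// Option (b): fetch everything once, filter locally. Fine for small
// result sets, costly for thousands of records.
let allItems = [];

async function loadAll() {
  const res = await fetch('/api/items/');
  allItems = await res.json();
}

function filterItems(foo) {
  // No network round-trip when the filter changes.
  return allItems.filter((item) => item.foo === foo);
}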
I want to develop a GitHub-like issue tracker.
For that I have been working with the API below.
https://api.github.com/repos/facebook/react/issues?per_page=100
But this API returns only 100 results per request, as per the docs.
Is there a way I can get all of the issues and not just 100? I could make multiple requests, but I don't think that is a feasible way of doing it.
The issue object itself contains the author, labels, and assignees, so I need all of the results at once.
Is there any way to do it?
No, there is no way to get all of the results without pagination. GitHub, like almost all major websites, limits the amount of time a request can take. If you have a repository with, say, 150,000 issues, then any reasonable operation on all of those issues will take longer than the timeout. It therefore doesn't make sense for GitHub to let you disable pagination, because the request would invariably fail anyway.
Even if you use the GraphQL API, you still get a limited number of results. If you want to fetch all of the issues, you'll need to make multiple requests.
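For example, a sketch of the multi-request approach using plain fetch (unauthenticated here, so rate limits will apply):

// Page through all issues, 100 at a time.
async function fetchAllIssues(owner, repo) {
  const issues = [];
  for (let page = 1; ; page++) {
    const res = await fetch(
      `https://api.github.com/repos/${owner}/${repo}/issues?per_page=100&page=${page}`
    );
    const batch = await res.json();
    issues.push(...batch);
    if (batch.length < 100) break; // last page reached
  }
  return issues;
}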
My company has a service-oriented architecture. My app's GraphQL server therefore has to call out to other services to fulfill the data requests from the frontend.
Let's imagine my GraphQL schema defines the type User. The data for this type comes from two sources:
A user account service that exposes a REST endpoint for fetching a user's username, age, and friends.
A SQL database used just by my app to store User-related data that is only relevant to my app: favoriteFood, favoriteSport.
Let's assume that the user account service's endpoint automatically returns the username and age, but you have to pass the query parameter friends=true in order to retrieve the friends data because that is an expensive operation.
Given that background, the following query presents a couple of optimization challenges in the getUser resolver:
query GetUser {
  getUser {
    username
    favoriteFood
  }
}
Challenge #1
When the getUser resolver makes the request to the user account service, how does it know whether or not it needs to ask for the friends data as well?
Challenge #2
When the resolver queries my app's database for additional user data, how does it know which fields to retrieve from the database?
The only solution I can find to both challenges is to inspect the query in the resolver via the fourth info argument that the resolver receives. This will allow it to find out whether friends should be requested in the REST call to the user account service, and it will be able to build the correct SELECT query to retrieve the needed data from my app's database.
Is this the correct approach? It seems like a use-case that GraphQL implementations must be running into all the time and therefore I'd expect to encounter a widely accepted solution. However, I haven't found many articles that address this, nor does a widely used NPM module appear to exist (graphql-parse-resolve-info is part of PostGraphile but only has ~12k weekly downloads, while graphql-fields has ~18.5k weekly downloads).
I'm therefore concerned that I'm missing something fundamental about how this should be done. Am I? Or is inspecting the info argument the correct way to solve these optimization challenges? In case it matters, I am using Apollo Server.
If you want to modify your resolver based on the requested selection set, there's really only one way to do that and that's to parse the AST of the requested query. In my experience, graphql-parse-resolve-info is the most complete solution for making that parsing less painful.
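As a rough sketch of that, based on graphql-parse-resolve-info's documented usage (the User type name is an assumption from your schema):

const { parseResolveInfo } = require('graphql-parse-resolve-info');

const resolvers = {
  Query: {
    getUser: async (parent, args, context, info) => {
      // Parse the query AST into a tree of requested fields.
      const parsed = parseResolveInfo(info);
      // fieldsByTypeName maps each GraphQL type to the fields
      // selected on it.
      const fields = parsed.fieldsByTypeName.User;
      const wantsFriends = 'friends' in fields;
      // ...decide what to fetch based on wantsFriends and fields
    },
  },
};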
I imagine this isn't as common an issue as you'd think, because most folks fall into one of two groups:
Users of frameworks or libraries like PostGraphile, Hasura, Prisma, Join Monster, etc., which take care of optimizations like these for you (at least on the database side).
Users who are not concerned about overfetching on the server side and just request all columns regardless of the selection set.
In the latter case, fields that represent associations are given their own resolvers, so those subsequent calls to the database won't be fired unless they are actually requested. DataLoader is then used to help batch all these extra calls to the database. Ditto for fields that end up calling some other data source, like a REST API.
In this particular case, DataLoader would not be much help to you. The best approach is to have a single resolver for getUser that fetches the user details from the database and the REST endpoint. You can then, as you're already planning, adjust those calls (or skip them altogether) based on the requested fields. This can be cumbersome, but it will work as expected.
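A sketch of that single-resolver approach, inspecting the AST directly via the info argument; the service URL, table, and column names are assumptions, and fragments/aliases are ignored for brevity:

// Columns that live in the app's own database.
const DB_FIELDS = new Set(['favoriteFood', 'favoriteSport']);

async function getUser(parent, args, context, info) {
  // Top-level field names requested in the query.
  const requested = new Set(
    info.fieldNodes[0].selectionSet.selections
      .filter((sel) => sel.kind === 'Field')
      .map((sel) => sel.name.value)
  );

  // Only ask the account service for friends if they were requested.
  const wantFriends = requested.has('friends');
  const account = await fetch(
    `https://accounts.internal/users/${context.userId}?friends=${wantFriends}`
  ).then((res) => res.json());

  // Only SELECT the app-specific columns that were requested; the
  // whitelist above keeps the interpolation safe.
  const columns = [...requested].filter((f) => DB_FIELDS.has(f));
  let appData = {};
  if (columns.length > 0) {
    const { rows } = await context.db.query(
      `SELECT ${columns.join(', ')} FROM app_users WHERE user_id = $1`,
      [context.userId]
    );
    appData = rows[0] || {};
  }

  return { ...account, ...appData };
}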
The alternative to this approach is to simply fetch everything, but use caching to reduce the number of calls to your database and REST API. This way, you'll fetch the complete user each time, but you'll do so from memory unless the cache is invalidated or expires. This is more memory-intensive, and cache invalidation is always tricky, but it does simplify your resolver logic significantly.
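For instance, a minimal sketch of that caching idea with an in-memory TTL cache; fetchFullUser is a stand-in for your combined REST + database fetch:

// userId -> { value, expires }
const cache = new Map();
const TTL_MS = 60 * 1000;

async function getFullUser(userId, fetchFullUser) {
  const hit = cache.get(userId);
  if (hit && hit.expires > Date.now()) return hit.value; // cache hit
  const value = await fetchFullUser(userId); // REST + DB, all fields
  cache.set(userId, { value, expires: Date.now() + TTL_MS });
  return value;
}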
I want to make a REST API that does spellchecking on text that is passed in, without storing any of the text on the server.
The call would probably look something like `example.com/api/v1/spelling/mistakes`, with an optional query parameter for the locale and a list of the mistakes as the return value.
What would be the best HTTP method to use, given that the text passed in would be too large for a GET? None of POST, PUT, or PATCH seems to map reasonably to the intended purpose, and there don't seem to be any other suitable matches among the less commonly used methods either.
What is the best HTTP method to use for a "translation"-like REST API service, taking and returning large amounts of data?
I would say this is a POST. But it could have been a GET if the data had been posted previously. The reason it is not a GET is that you are passing all the data in this API call, as you mentioned. For example, if the data had been 'posted' somewhere else previously, then a GET could be used, where the address (URI) or ID of that 'posted' data is passed to the API as a parameter. But because we are both 'posting' the data and retrieving information about it in the same call, I would say this is a POST.

Granted, the data being posted has a short lifespan, but it is still being posted. If the data being posted were instead a customer order, it would still be a POST; the only difference is that the order data would be persisted somewhere, while yours exists for just a short period of time. And in future iterations of your API, you might actually want to keep that data and refer back to it with some ID, so using POST allows for future enhancements as well.
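A sketch of such an endpoint in Express; the request/response shapes are assumptions, and the spellcheck function is a stand-in:

const express = require('express');
const app = express();
app.use(express.json({ limit: '1mb' })); // cap the request body size

// Stand-in: plug in a real spellchecker here.
function spellcheck(text, locale) {
  return [];
}

app.post('/api/v1/spelling/mistakes', (req, res) => {
  const { text } = req.body;
  const locale = req.query.locale || 'en-US';
  // The text lives only for the duration of the request;
  // nothing is written to disk or a database.
  res.json({ locale, mistakes: spellcheck(text, locale) });
});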
By the way, as a precaution, be careful with the memory footprint of these calls. I can see this becoming very memory-intensive if the data being passed grows large and the API becomes very popular. Not a show stopper, but something to consider when designing it.
Hope that helps alleviate what I call REST anxiety when designing an API.
Using the SoundCloud API, I'd like to retrieve the reposted tracks from my activities. The /me/activities endpoint seems suited for this, and I tried the different types provided.
However, I didn't find out how to get that data. Does anyone know?
Replace the user ID, limit, and offset with what you need:
https://api-v2.soundcloud.com/profile/soundcloud:users:41691970?limit=50&offset=0
You could try the following approach:
Get the users that shared a track via the /tracks/{id}/shared-to/users endpoint.
Fetch the tracks posted by this user via the /tracks endpoint, as the user_id is contained.
Compare the tracks' metadata with the one you originally posted.
I am not into the SoundCloud API, but a close look at it suggests this approach is at least technically possible, though fetching all tracks won't be a production solution, of course. It's more of a hint. Perhaps reposting means something totally different in this context.
Also, the specified endpoint exists in the general API docs, so I don't know whether you would have to extend the java-api-wrapper to use it.
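For what it's worth, a rough sketch of that approach in JavaScript; the endpoint paths come from the steps above and are unverified, and the response shapes are assumptions:

const BASE = 'https://api.soundcloud.com';

// 1. Users that shared the track.
async function sharers(trackId, token) {
  const res = await fetch(
    `${BASE}/tracks/${trackId}/shared-to/users?oauth_token=${token}`
  );
  return res.json();
}

// 2. Tracks posted by one of those users (user_id is contained).
async function tracksOf(userId, token) {
  const res = await fetch(`${BASE}/tracks?user_id=${userId}&oauth_token=${token}`);
  return res.json();
}

// 3. Compare metadata with the track you originally posted.
function looksLikeRepost(track, original) {
  return track.title === original.title && track.duration === original.duration;
}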