I am making an application based on the Google Maps API. This requires requesting the distance between two cities. Now I want distances between many cities.
So should I use a "for loop" and make many separate requests, or should I send all the city names in one request? Which one will work faster? And which one is better?
For sure you should avoid sending multiple requests, because each roundtrip to the server takes time.
However, grouping many requests together can also take a long time (both to send and to process on the server) and hurt the user experience (a long waiting time).
In your case I suspect that the "for loop" will not lead to a lot of data, and the server-side processing will not be too heavy either, so sending a single grouped request should be the way to go.
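For example, Google's Distance Matrix web service accepts several origins and destinations in a single call. A minimal Python sketch, assuming you have an API key (the city list is purely illustrative):

import requests

API_KEY = "YOUR_API_KEY"  # replace with your own key
cities = ["Berlin", "Paris", "Madrid", "Rome"]  # illustrative list

# One request covers every origin/destination pair; origins and
# destinations are pipe-separated in the query string.
resp = requests.get(
    "https://maps.googleapis.com/maps/api/distancematrix/json",
    params={
        "origins": "|".join(cities),
        "destinations": "|".join(cities),
        "key": API_KEY,
    },
)
resp.raise_for_status()

for origin, row in zip(cities, resp.json()["rows"]):
    for dest, element in zip(cities, row["elements"]):
        if element["status"] == "OK":
            print(origin, "->", dest, element["distance"]["text"])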
You can use the "DirectionService" sevices ,which is providing by Google i.e "api3".
You can find the distance between the Many cities ,it takes one origin point ,destination point and 8 way points (total 10 places) for one request and it provides a JSON file
in return ,which contains all the information (distance in KM,value meters,city names and lot more) .Please check this link, https://developers.google.com/maps/documentation/javascript/directions . i hope this answer will meet your requirement,otherwise don't mind.
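The same request shape also works against the Directions web service endpoint. A rough Python sketch (the city names and key are placeholders):

import requests

API_KEY = "YOUR_API_KEY"

# One origin, one destination, and up to 8 waypoints per request.
resp = requests.get(
    "https://maps.googleapis.com/maps/api/directions/json",
    params={
        "origin": "Berlin",
        "destination": "Rome",
        "waypoints": "|".join(["Paris", "Zurich", "Milan"]),
        "key": API_KEY,
    },
)
resp.raise_for_status()

routes = resp.json()["routes"]
if routes:
    # Each leg of the route reports its own distance.
    for leg in routes[0]["legs"]:
        print(leg["start_address"], "->", leg["end_address"], leg["distance"]["text"])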
I'm designing a website with a REST API, using Django Rest Framework and Vue for the front end, and I'm trying to work out the proper way to do filtering.
As far as I can see, I can either:
a) Allow filtering via the API by using URL parameters like /?foo=bar
or
b) Do all the filtering on the Vue side by only displaying the items that are returned that have foo=bar
Are there any strong reasons for doing one over the other?
The real answer to this question is "it depends".
Here are a few questions to ask yourself to help determine what the best approach is:
How much data will be returned if I don't filter at the API level?
If you're returning just a few records, there won't be a noticeable performance hit when the query runs. If you're returning thousands, you'll likely want to consider server side querying/paging.
If you're building an application where the amount of data will grow over time, it's best to build the server side querying from the get-go.
What do I want the front-end experience to be like?
For API calls that return small amounts of data, the user experience will be much more responsive if you return all records up front and do client-side filtering. That way if users change filters or click through paged data, the UI can update almost instantaneously.
Will any other applications be consuming my API?
If you plan to build other apps that consume the API, you may want to build the filtering at the API level so you don't need to recreate front-end filtering logic in every consuming application.
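Coming back to option (a): filtering at the API level in DRF can be as small as overriding get_queryset. A sketch, assuming a hypothetical Item model with a foo field:

# views.py -- illustrative only; Item and its foo field are assumed names
from rest_framework import serializers, viewsets
from myapp.models import Item

class ItemSerializer(serializers.ModelSerializer):
    class Meta:
        model = Item
        fields = "__all__"

class ItemViewSet(viewsets.ReadOnlyModelViewSet):
    serializer_class = ItemSerializer

    def get_queryset(self):
        queryset = Item.objects.all()
        foo = self.request.query_params.get("foo")
        if foo is not None:
            # /items/?foo=bar filters server-side, before serialization
            queryset = queryset.filter(foo=foo)
        return queryset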
Hopefully these questions can help guide you to the best answer for your use case.
Whenever I come across this issue I ask myself just one question: How many items are you working with? If you're only returning a few items from the API you can easily do the filtering on the front-end side and save yourself a bunch of requests whenever the results are filtered. Also, if the result set is quite small, it's a lot faster to do it this way rather than sending off a request every time the filters change.
However, if you're working with a very large number of items, it's probably best to just filter them out in the API, or even via your database query if that's what you're working with. This will save you from returning a large number of results to the front-end. Also, filtering large numbers of items on the front-end can significantly impact performance since it usually involves looping over a collection.
This is a dilemma I find myself facing very often when dealing with nested resources.
So suppose the target user has n photos. Does it make sense to define a GET /users/{id}/photos route or should I just send n GET /photos/{id} requests by first requesting the User object and then looping through that User's photo_ids attribute?
From a best-practices perspective, I would not send several requests for such similar resources. In that scenario you end up creating more work for yourself, because you have to rate-limit on the backend to keep your users with a lot of pictures from blowing up your server. That would hurt your UX as well.
I recommend you format your route as such:
GET /users/photos?id={{id}}
And return all associated photos with that user ID all at once. You can always limit that to X number of photos per call too, and paginate:
GET /users/photos?id=658&page={{1,2,3, etc.}}
My personal preference is always to try to keep variable data in the URL parameters. Having spent the afternoon with several unrelated APIs, I can tell you a whole slew of developers agree.
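A minimal sketch of such a paginated endpoint (Flask here purely for illustration; the route shape, page size, and the get_photos_for_user helper are all assumptions):

from flask import Flask, jsonify, request

app = Flask(__name__)
PAGE_SIZE = 20  # assumed page size

@app.route("/users/photos")
def user_photos():
    user_id = request.args.get("id", type=int)
    page = request.args.get("page", default=1, type=int)
    photos = get_photos_for_user(user_id)  # hypothetical data-access helper
    start = (page - 1) * PAGE_SIZE
    # Return one page of photos plus enough metadata for the client to keep paging.
    return jsonify({
        "photos": photos[start:start + PAGE_SIZE],
        "page": page,
        "total": len(photos),
    })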
I want to monitor live scores for soccer matches. I have two ways to do this:
official API from the website (free)
parse the website's source code myself and extract the data from it (I'd need to do it every second)
What is the difference? Is calling the API faster?
This can depend on a lot of things external to this specific scenario, but given the context, yes, the API would be much faster. The difference is in what data is being sent, received, and parsed.
In either scenario you'd need some timer to tick and parse the results (website or API), so there's no performance difference in the "wait code", but there is a big difference in the data itself that is parsed. When you call the API, chances are you will send a specific parameter or call a specific function that indicates exactly what you're looking for. Pseudo-code example:
SoccerSiteApi.GetValue(SCORE, team1, team2);
Or
SoccerSiteApi.GetCurrentScores(team1, team2);
By calling the API, you send and receive only a few hundred bytes (or more, depending on the data) and get back exactly what you want. That is, you don't need to parse the scores out of the response, because the response is the scores, so no extra processing time is spent on the data itself.
If, however, you were to parse the entire website, you would need to make an HTTP GET request (and all that entails) to fetch the entire page (which could be a couple of hundred KB or several MB depending on content) and then spend processing time extracting the exact data you were looking for, and then do all of this every second.
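To make the contrast concrete, here is a rough Python sketch of both approaches; the endpoint URL, parameters, and HTML structure are invented for illustration:

import re
import requests

def score_via_api(team1, team2):
    # Hypothetical API: returns a small JSON payload containing just the score.
    resp = requests.get(
        "https://api.example.com/scores",
        params={"team1": team1, "team2": team2},
    )
    return resp.json()["score"]  # no further parsing needed

def score_via_scraping():
    # Fetch the whole page (potentially hundreds of KB), then dig the
    # score out of the markup -- and repeat every second in this scenario.
    html = requests.get("https://www.example.com/live-scores").text
    match = re.search(r'<span class="score">([^<]+)</span>', html)
    return match.group(1) if match else None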
So the biggest difference is amount of data and time spent processing it.
Hope that helps.
I'm trying to write an application that creates mail accounts for thousands of users using the Google Directory API. Creating them one by one works, but is extremely slow. I tried to use the batch requests, which are supposed to support up to 1000 requests at once. However, with that only around 50 users are created successfully and the rest of the requests throw 403 errors. If I change the batch size to 40 instead, many requests after the first batch fail with 5xx errors.
If the batch requests are still limited by the same rate limits, they seem to be worthless, as I could just send those requests individually at that slow rate. Is there a better way to do this, or is there something else I should do instead?
Batching the requests will certainly save network roundtrips (which can be pretty expensive if you have thousands of users to process). However, the server will still have to execute the requests one by one even if they are batched. Take a look at the documentation for the Admin SDK:
https://developers.google.com/admin-sdk/directory/v1/guides/batch
The special note says: "A set of n requests batched together counts toward your usage limit as n requests, not as one request. The batch request is taken apart into a set of requests before processing."
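So batching helps with roundtrips, not with quota. A rough sketch with the Python client, assuming authorized admin creds, a users list of user-resource dicts, and an illustrative batch size and pause that you would tune against your own quota:

import time
from googleapiclient.discovery import build

service = build("admin", "directory_v1", credentials=creds)  # creds assumed

def handle_result(request_id, response, exception):
    if exception is not None:
        print("request", request_id, "failed:", exception)

BATCH_SIZE = 40  # each batched insert still counts against the quota individually

for start in range(0, len(users), BATCH_SIZE):
    batch = service.new_batch_http_request(callback=handle_result)
    for user in users[start:start + BATCH_SIZE]:
        batch.add(service.users().insert(body=user))
    batch.execute()
    time.sleep(1)  # pace the batches so you stay under the rate limit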
I'm reviewing some code where we've had some issues with return data from a WCF web service. Currently the service makes a list of objects, serializes them (as JSON for the record) and returns the entire serialized list down the wire. Obviously when there's a lot of data users run into quota limit problems.
I'm considering changing it so the service returns one item at a time; the client would then make a series of requests in a loop, adding one object at a time to the list until it's done.
Obviously in scenario one we're making one request to the service that has the potential to return a massive amount of data and run up against the quota. In the other scenario we never hit the quota but the requesting app will be requesting data item after data item in a stream of separate requests.
To illustrate: we have a list of items which come in a variety of item types, and those types come at a variety of price points. The app might want to aggregate a number of items, the customers who want each item, and the item types and prices requested by each customer. There could be maybe seventy items, with between five and eighty customers each requesting on average 2 types of product at 1 price each.
Taking averages at the extreme end, this could mean 7000 separate (very small) data requests in a single complete job. Is that a problem? It is possible to package things up a bit so that customer types and requested prices are bundled, but that's still potentially a couple of thousand requests at one time.
Am I better off with a single huge data stream? Or a couple of thousand smaller ones?
You're better off with the optimal-sized return for your scenario :) It kinda depends on the overhead of each request. Generally, the less chatter back and forth to a web service, the better.
Facetious answer, so here's the rub:
You're probably best off with some sort of paging system, wherein your request asks for a specific number of items and your response returns an "n of m" count alongside the results. That way you can tune the number of requests and the size of each response to perform best in your situation.
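In client terms, the pattern looks something like this (a Python sketch against a hypothetical paged endpoint; the URL and response fields are assumptions):

import requests

def fetch_all_items(base_url, page_size=100):
    # Pull the data down in fixed-size pages instead of one huge
    # response or thousands of one-item requests.
    items, page = [], 1
    while True:
        resp = requests.get(base_url, params={"page": page, "pageSize": page_size})
        resp.raise_for_status()
        data = resp.json()  # expected shape: {"items": [...], "total": m}
        items.extend(data["items"])
        if not data["items"] or len(items) >= data["total"]:
            break
        page += 1
    return items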