API call limits for a weather app with 100,000 users - react-native

I am developing an app in Flutter and React Native to display real-time weather for a list of cities. I am retrieving the weather data from OpenWeatherMap, and I am finding it hard to understand the concept of calls per minute. There is a limit of 60 calls/minute on the free plan, 600 on the Startup plan, and so on.
My question is: if by some miracle my app has 100,000 users in the future, and they all have 5 cities in their app, does this mean that to get real-time weather data the number of API calls would be 5 × 100,000 at any given moment for my single registered API key?
Even with an Enterprise plan (200,000 calls/minute), this does not seem achievable.
Am I missing something here? I am interested in real-time data, not historical. The analogy extends to a stock trading app as well, where I would loop fetchData() over an interval of 1 or 2 seconds. With thousands of users, I'm not sure how to handle the calls.
Please help me understand how I can achieve this or if I'm wrong somewhere.
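For context on how this is usually solved (it is the same pattern the answers to the related questions below keep returning to): your users should not call OpenWeatherMap directly at all. The weather for a given city is identical for every user who lists it, so a backend of your own can fetch it once per city and serve everyone from a cache. A minimal sketch, assuming a Node/TypeScript server and the standard /data/2.5/weather endpoint; the 10-minute TTL is an illustrative choice, not a recommendation:

```
import http from "node:http";

const OWM_KEY = process.env.OWM_KEY!; // your single registered API key (assumed in env)
const TTL_MS = 10 * 60 * 1000;        // illustrative: city weather rarely changes in 10 min

type Entry = { fetchedAt: number; body: string };
const cache = new Map<string, Entry>();

async function weatherFor(city: string): Promise<string> {
  const hit = cache.get(city);
  if (hit && Date.now() - hit.fetchedAt < TTL_MS) return hit.body; // no upstream call
  const res = await fetch(
    `https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(city)}&appid=${OWM_KEY}`
  );
  const body = await res.text();
  cache.set(city, { fetchedAt: Date.now(), body });
  return body;
}

// All app installs call this server; OpenWeatherMap only sees one call
// per distinct city per TTL window, no matter how many users there are.
http
  .createServer(async (req, res) => {
    const city = new URL(req.url ?? "/", "http://localhost").searchParams.get("city") ?? "";
    res.setHeader("content-type", "application/json");
    res.end(await weatherFor(city));
  })
  .listen(3000);
```

With this shape, upstream call volume scales with the number of distinct cities and the TTL, not with the number of users: 100,000 users × 5 cities may collapse to a few thousand distinct cities, i.e. a few hundred upstream calls per minute with a 10-minute TTL.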

Related

Ensure that API calls are efficient

I'm integrating with the Sport Radar API. This is a pretty great API, but I'm noticing it doesn't have a particular endpoint that I was hoping to leverage.
I noticed that the API does not have a Players index call. In order to get all the players in the league, you have to go through this process (confirmed by the API team):
1) Call an endpoint that lists the ids of each team in the league.
2) For each Team id call an endpoint that gets each team and lists each player for each team.
All in all, that is over 30 API requests each time I need to run a fairly common function.
MY CURRENT SOLUTION:
Store the players in the database, so instead of reaching out through the API every time I need to list the players, I can just make a DB query. This solution seems heavy-handed, though; I'd like my application to not have to keep track of the players.
MY QUESTION
What are some ways of solving this problem? Again, the question is: what are ways of avoiding many API calls in order to get common data? Is the best solution just to make the 30 API calls each time? Thank you!
I am working with the Sport Radar API and we had a similar problem. To solve it, we use a cache so we have this information faster and already filtered: basically, we connect a Redis instance and run a cron job in a service that updates the data (in our case, every 24 hours).
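A minimal sketch of that approach, assuming the node-redis client; fetchAllPlayers() is a stand-in for the ~30-request team walk described above, and setInterval stands in for the real cron job:

```
import { createClient } from "redis";

type Player = { id: string; name: string };

// Stand-in for the expensive ~30-request walk described in the question.
async function fetchAllPlayers(): Promise<Player[]> {
  // 1) call the endpoint that lists the ids of each team in the league
  // 2) for each team id, call the team endpoint and collect its players
  return []; // placeholder
}

const redis = createClient();
await redis.connect();

const KEY = "league:players";
const DAY_S = 24 * 60 * 60;

async function refreshPlayers(): Promise<void> {
  const players = await fetchAllPlayers();
  // keep the entry a little past the next refresh so reads never miss
  await redis.set(KEY, JSON.stringify(players), { EX: DAY_S + 3600 });
}

// Application code reads only the cache; the API is walked once per day.
async function getPlayers(): Promise<Player[]> {
  const cached = await redis.get(KEY);
  if (cached) return JSON.parse(cached);
  await refreshPlayers(); // cold start: fill the cache once
  return JSON.parse((await redis.get(KEY))!);
}

setInterval(refreshPlayers, DAY_S * 1000); // stand-in for the daily cron
```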

How to analyse historical Waze data?

I'm trying to find a way to get historical speed data for a certain road in the UK to calculate its average speed per time of day AND the maximum speed driven by any driver on the road within a period of time. Any pointers on how to do this from Waze? Thanks
I'm afraid Waze doesn't expose that data (understandably, as it is their core business). This excerpt from the help page should say enough:
Please note: Waze does not share any historical data with partners.
If you work for a local government or organisation, you could consider joining the Connected Citizens Program. As a partner you are able to get a data feed for a certain route and you're allowed to store that data to get historical data (as detailed on the Waze Partners Help site).
While I'm not certain about the legal status of doing this without being a partner, you could probably also start building your own historical dataset based on what Waze provides as the average speed on a segment, by periodically looking up the data returned when you plan a route on the Waze Live Map.
Routing requests are sent to https://www.waze.com/row-RoutingManager/routingRequest?... (see the network console of your browser for more details), but this requires some additional work managing CSRF and session cookies and providing the proper Referer header. While not impossible, it's not too easy to pull off.
The response of such a routing request contains the instructions you see on the live map, but also includes things like the length of each specific segment on the route (distance), its average speed without realtime data (crossTimeWithoutRealTime) and its average speed with realtime data (crossTime). It's also possible to request the average speed for a certain time in the day, but this tends to be somewhat unreliable data.
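To make the idea concrete, here is a very rough sketch of what a periodic sampling job could look like, assuming you have already solved the session/cookie/CSRF handshake mentioned above. The query parameters and the response shape beyond distance/crossTime/crossTimeWithoutRealTime are guesses, not a documented API:

```
const ROUTE_URL = "https://www.waze.com/row-RoutingManager/routingRequest";

type Segment = {
  distance: number;                 // metres (assumed)
  crossTime: number;                // seconds, with realtime data
  crossTimeWithoutRealTime: number; // seconds, without realtime data
};

async function sampleRoute(params: URLSearchParams, cookie: string): Promise<void> {
  const res = await fetch(`${ROUTE_URL}?${params}`, {
    headers: { Cookie: cookie, Referer: "https://www.waze.com/livemap" },
  });
  const body = await res.json();
  const segments: Segment[] = body.response.results; // field names assumed
  for (const s of segments) {
    const kmh = (s.distance / s.crossTime) * 3.6; // average speed on the segment
    console.log(new Date().toISOString(), s.distance, kmh.toFixed(1));
    // append to your own datastore; sampled over weeks this becomes your history
  }
}
```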
The maximum speed is something you won't be able to find in Waze's data though, I'm afraid. I'm not even certain Waze stores that information as those statistical outliers generally aren't that interesting for navigational instructions. You could try to contact Waze for more information if you're doing a scientific study, but don't get your hopes up too much in that case as they have a small team that is constantly overwhelmed by the amount of questions they receive.

Shopify API Request Limit, Multiple stores?

If I may ask, I was wondering how the API request limit handles calls across multiple stores.
Scenario:
We have one backend service "polling" a store every 5 seconds with one core request and 'x' number of requests for images by productId for each item in "LineItems" (I doubt this will be an extraordinary figure). I'm curious to know: if we had five stores and five background services polling the respective stores, would this all count toward one request limit? I.e., how is this tracked, by IP?
I'm hoping that it's on a per-store basis, thus other stores have their own "bucket". I have read through some documentation, but I'm not sure it gives me the knowledge I require.
There are a lot of older posts and articles, all with conflicting info.
I fully appreciate this is not a programming issue per se, but I was hoping this was the place for such a question. As ever, I appreciate everyone's time.
Regards,
All shops are provided with a limit of two API calls per second, once you've hammered your bucket with 40 burst calls. If you are polling a shop every 5 seconds, then you are in no danger of hitting any limits. Any one app can call N stores, and this limit is per store, not per app.
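A toy model of that leaky bucket makes the arithmetic visible: polling every 5 seconds adds 0.2 calls/second to a bucket that drains at 2 calls/second, so it never fills. This only illustrates the behaviour described above, not Shopify's actual implementation:

```
// Toy model of a per-store leaky bucket (size 40, leaking 2 calls/second).
class LeakyBucket {
  private level = 0;
  private last = Date.now();
  constructor(private capacity = 40, private leakPerSec = 2) {}

  tryCall(): boolean {
    const now = Date.now();
    // drain the bucket for the time elapsed since the last call
    this.level = Math.max(0, this.level - ((now - this.last) / 1000) * this.leakPerSec);
    this.last = now;
    if (this.level + 1 > this.capacity) return false; // would be throttled
    this.level += 1;
    return true;
  }
}

// One bucket per store: calls against store A never count against store B.
const buckets = new Map<string, LeakyBucket>();
function canCall(storeDomain: string): boolean {
  let b = buckets.get(storeDomain);
  if (!b) buckets.set(storeDomain, (b = new LeakyBucket()));
  return b.tryCall();
}
```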

Podio API limit

I am working on a product which fetches all the organization/workspace and app details of the customer. The customer can refresh them at any time.
So let's say I have one customer who has 100 applications across multiple workspaces; it then makes around 110 calls to get each application's details, the workspace details, and the organizations.
Now if that customer refreshes the applications multiple times, say 10 times in an hour, that alone is 1,000 API calls. If I have 50 such users active and doing this, it becomes something like 50,000.
AFAIK I cannot make that many API calls in an hour, so how do I handle this scenario? I know a lot of applications do such things, so I want to understand how everyone is handling this.
If you need a higher rate limit, I would encourage you to contact Podio support and ask specifically for what you need. We have internal guidelines for evaluating these kinds of requests and may increase the limit for your user and client ID if appropriate.
In general, though, I would expect your app to implement some kind of batching, transient storage, and/or caching layers, especially if your customers are interacting with Podio exclusively or primarily through your system.
Please see our official statement here: https://developers.podio.com/index/limits
Summary:
The general limit is 5,000 API calls per hour, but if an API call is marked as "Rate limited" in the API reference, the call is deemed resource-intensive and a lower limit of 1,000 calls per hour is enforced. If you hit the rate limits, the API will begin returning 420 HTTP error codes for all API calls. Rate limits are per user per API key.
Contacting support:
If you have a project that requires a higher rate limit, contact support@podio.com with a brief description of your project, your estimated usage, and the client_id of the API key you are using.
Usage tips:
Tips for reducing API usage
Avoid making API requests inside loops. Instead of fetching individual objects inside a loop, fetch a collection of objects in one API operation, e.g. filter items.
Cache results whenever possible. This is especially true when you are displaying data to the public (i.e. everyone sees the same output); see the sketch after this list.
Don't poll for changes. Instead of polling Podio to see if your content has changed, use webhooks or push to receive a notification. This might save you thousands of requests: https://developers.podio.com/doc/hooks
Use logging to see how many requests you're making
Bundle responses with "fields" parameter
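As a concrete illustration of the caching and webhook tips (the sketch referenced in the list above): serve app details from a local cache and let a Podio webhook invalidate it, so a customer "refresh" becomes a cache read instead of ~110 API calls. podioGetApp() is a hypothetical wrapper around GET /app/{app_id}, and the hook route is a placeholder you would register with Podio:

```
import http from "node:http";

// Hypothetical wrapper around GET /app/{app_id}; swap in your real client.
async function podioGetApp(appId: string): Promise<unknown> {
  const res = await fetch(`https://api.podio.com/app/${appId}`, {
    headers: { Authorization: `OAuth2 ${process.env.PODIO_TOKEN}` },
  });
  return res.json();
}

const appCache = new Map<string, unknown>();

// Reads hit the cache; Podio is called once per app until something changes.
async function getApp(appId: string): Promise<unknown> {
  if (!appCache.has(appId)) appCache.set(appId, await podioGetApp(appId));
  return appCache.get(appId);
}

// Webhook receiver: register a hook with Podio that POSTs here on app changes.
http
  .createServer((req, res) => {
    if (req.method === "POST" && req.url?.startsWith("/podio-hook/")) {
      appCache.delete(req.url.split("/").pop()!); // next read refetches
    }
    res.end("ok");
  })
  .listen(8080);
```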
You might want to build an API proxy app; you would need a messaging queue and a rate limiter. This would let you keep track of the API call consumption across apps and users.
Also worth noting: some API routes are more expensive than others if they are more resource-intensive on the Podio side. The term in use is "rate limited": rate-limited API routes are bound to 1,000 calls an hour, so in effect they cost five times as much as regular routes.
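A minimal sketch of the budget-tracking part of such a proxy, keeping the two buckets the limits page describes (5,000 regular and 1,000 rate-limited calls per hour, per user per API key); persistence and queueing of rejected calls are left out:

```
class PodioBudget {
  private counts = { regular: 0, limited: 0 };
  private windowStart = Date.now();

  take(kind: "regular" | "limited"): boolean {
    if (Date.now() - this.windowStart >= 3_600_000) { // new hour: reset
      this.counts = { regular: 0, limited: 0 };
      this.windowStart = Date.now();
    }
    const max = kind === "limited" ? 1000 : 5000;
    if (this.counts[kind] >= max) return false; // caller should queue or back off
    this.counts[kind] += 1;
    return true;
  }
}

const perUser = new Map<string, PodioBudget>();
function allow(userId: string, kind: "regular" | "limited"): boolean {
  let b = perUser.get(userId);
  if (!b) perUser.set(userId, (b = new PodioBudget()));
  return b.take(kind);
}
```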
Hope this helps!

Measure how hot a topic is on Twitter

What kind of service should I use to measure how hot a topic is on Twitter, and how hot it has been in the past?
I thought about:
The Twitter API (https://dev.twitter.com/rest/reference/get/search/tweets), which lets me run searches returning up to 100 tweets at a time. So in this case I have to make multiple calls to determine how many tweets there are. Is that correct?
TweetReach, which gives reports like this: https://tweetreach.com/reports/16000571, but the cheapest plan is $300/month.
With the Twitter API, you have a few options, but none of them may be exactly what you want, and none of them can go back very far into the past. You would have to either compile that information yourself, or use an external service like the one you mentioned.
Using the search API, you can only get results from the past 7 days, and are limited to 100 tweets per request. You can also set result_type to popular to get the most popular tweets about that search term. Twitter does have rate limits, but the ones for search are relatively high. You can use 180 requests every 15 minutes for any user you have authenticated, plus 450 requests every 15 minutes for the app itself (completely separate from the user requests). So if you only use app requests, you can get 45,000 tweets every 15 minutes.
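As a sketch of what "compiling it yourself" looks like with that search API: page backwards through results with max_id, 100 tweets per request, using the app-only bearer token for the larger 450-request budget. The endpoint and parameters are the documented v1.1 ones the answer refers to, but treat this as illustrative:

```
const BEARER = process.env.TWITTER_BEARER!; // app-only bearer token

async function countRecentTweets(query: string, maxRequests = 450): Promise<number> {
  let total = 0;
  let maxId: string | undefined;
  for (let i = 0; i < maxRequests; i++) {
    const params = new URLSearchParams({ q: query, count: "100" });
    if (maxId) params.set("max_id", maxId);
    const res = await fetch(
      `https://api.twitter.com/1.1/search/tweets.json?${params}`,
      { headers: { Authorization: `Bearer ${BEARER}` } }
    );
    const page = await res.json();
    const tweets: { id_str: string }[] = page.statuses;
    if (!tweets.length) break; // exhausted the ~7-day search window
    total += tweets.length;
    // next page: everything strictly older than the oldest tweet seen so far
    maxId = (BigInt(tweets[tweets.length - 1].id_str) - 1n).toString();
  }
  return total;
}
```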
If you don't need to search for specific terms, you can get trending topics in different areas using trends. The available areas can be retrieved using trends/available. Searching for trends also gives you the tweet_volume of each trend over the past 24 hours. If you check the trends every 24 hours and save the volumes, you can build up histories of trending topics.
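The volume-logging idea from that paragraph can be as small as this, assuming a daily scheduler around it; WOEID 1 is the worldwide place id, and tweet_volume is null for low-volume trends:

```
async function snapshotTrends(bearer: string): Promise<void> {
  const res = await fetch("https://api.twitter.com/1.1/trends/place.json?id=1", {
    headers: { Authorization: `Bearer ${bearer}` },
  });
  const [payload] = await res.json(); // response is a one-element array
  const trends: { name: string; tweet_volume: number | null }[] = payload.trends;
  for (const t of trends) {
    console.log(new Date().toISOString(), t.name, t.tweet_volume ?? 0);
    // append to a file or DB instead; run daily and this becomes the history
  }
}
```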
Another option is using the streaming API. This only gives you current tweets, but you can use the track parameter to only get results for a set of terms, which you can then analyze.
Any external service, like TweetReach, will probably either cost you money or strictly limit the amount you can do with it unless you pay.
I'm the Social Media Manager for Union Metrics (we make TweetReach and lots of other things) and I just wanted to let you know that our free snapshots are built on the Search API, which gives them the restrictions you've already discussed above, while our full snapshot reports can grab up to 1,500 tweets for $20.
We do have more comprehensive Twitter analytics, which I think you've already looked at, and those backfill 30 days before tracking forward from there. However, you might have missed our new product Echo, which allows for a full, interactive search of the entire Twitter archive (you can see it in action here: https://unionmetrics.com/product/echo-twitter-archive-search/) and is available through our Social Suite.
I understand if you don't have a large budget, and I completely understand the dilemma of weighing the cost of your time to build what you need against budget restrictions. Hope this helps at least let you know what else we offer!
Sarah A. Parker
Social Media Manager | Union Metrics
Fine Makers of TweetReach, The Union Metrics Social Suite, and more