Facebook graph API rate limit and batch requests

I've seen the 600 calls / 600 seconds rate limit mentioned by some (e.g. on quora).
What I want to know is whether I am allowed to do 600 batch requests in 600 secs (a batch request consists of up to 50 requests).

You should handle the rate limiting programmatically by checking for the following error message, and put in a time-wait loop before your next call if you encounter the error. One of my high-traffic application accounts watches for this error and slows down.
From: https://developers.facebook.com/docs/bestpractices/
Rate limited (API_EC_TOO_MANY_CALLS) If your application is making too many calls, the API server might rate limit you automatically,
returning an "API_EC_TOO_MANY_CALLS" error. Generally, this should not
happen. If it does, it is because your application has been determined
to be making too many API calls. Iterate on your code so you're making
as few calls as possible in order to maintain the user experience
needed. You should also avoid complicated FQL queries. To understand
if your application is being throttled, go to Insights and click
"Throttling".
Edit:
As reported by Igy in the comment thread, each request in the batch counts as 1. For your example of 600 being the maximum, that means you can fire off 12 batch requests containing 50 calls each.

According to FB docs, each element in a batch counts as a separate call.
We currently limit the number of requests which can be in a
batch to 50, but each call within the batch is counted separately for
the purposes of calculating API call limits and resource limits. For
example, a batch of 10 API calls will count as 10 calls and each call
within the batch contributes to CPU resource limits in the same
manner.
Quoted from: https://developers.facebook.com/docs/reference/api/batch/
I don't have empirical evidence however.

From my experience, they count individual requests regardless of the way they were made (batched or not).
For example, if I try to send 1 batch per second containing 10 requests each, I soon get 'TOO MANY CALLS'.
If I send 1 batch per 10 seconds, each batch containing 10 requests, I never see TOO MANY CALLS.
I personally do not see any reason to prefer batches over regular API calls.

I now have quite a lot of painful experience with the Facebook API and I can state that:
If a batch request contains 50 requests, then it counts as 50 requests on Facebook.
1 request != 1 call. Facebook has its own definition of what a call is. If your request is big, returns a lot of data or consumes a lot of CPU, then it will count as several calls.
The most frequent Graph API call I make contains a lot of nested fields, and I noticed that I reached the "600 calls / 600 seconds" limit after doing it only 200 times. So basically this call counts as 3 in my case...
There are a lot of other rate limits as well, but none of them are properly documented...

Batch calls definitely are counted per item in the batch. One batch call with 50 items is the equivalent of 50 API calls using the Graph API.
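To make the arithmetic concrete, here is a minimal sketch of splitting 600 calls into 12 batches of 50 against the Graph batch endpoint, assuming the Python requests library; the access token and relative URLs are placeholders. Each item still counts toward the 600 calls / 600 seconds budget.

import json
import requests  # assumes the requests package is available

ACCESS_TOKEN = "<access-token>"                                   # placeholder
relative_urls = [f"me/friends?offset={i}" for i in range(600)]    # hypothetical calls

# 600 individual calls -> 12 batches of 50; the batch wrapper does not
# reduce the count, since every item is billed as its own API call.
for start in range(0, len(relative_urls), 50):
    batch = [{"method": "GET", "relative_url": u} for u in relative_urls[start:start + 50]]
    resp = requests.post(
        "https://graph.facebook.com/",
        data={"access_token": ACCESS_TOKEN, "batch": json.dumps(batch)},
    )
    # Each element of the response array corresponds to one item in the batch.
    print(resp.status_code, len(resp.json()))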

Related

Rate Limit Pattern with Redis - Accuracy

Background
I have an application that sends HTTP requests to foreign servers. The application communicates with other services that have a strict rate-limit policy, for example 5 calls per second. Any call above the allowed rate will get a 429 error code.
The application is deployed in the cloud and run by multiple instances. The tasks come from a shared queue.
The allowed rate limit is synced using the Redis rate-limit pattern.
My current implementation
Assuming that the rate limit is 5 per second: I split the time into multiple "windows". Each window has a maximum rate of 5. Before each call I check whether the counter is less than 5. If yes, I fire the request. If no, I wait for the next window (after a second).
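A minimal sketch of the fixed-window pattern described above, assuming the redis-py client; the key name, limit and sleep interval are illustrative.

import time
import redis  # assumes the redis-py package is installed

r = redis.Redis()
LIMIT = 5  # allowed calls per one-second window

def try_acquire() -> bool:
    # Returns True if the current one-second window still has quota.
    window = int(time.time())          # current second acts as the window id
    key = f"ratelimit:{window}"        # illustrative key name
    count = r.incr(key)                # first round trip: INCR
    if count == 1:
        r.expire(key, 2)               # second round trip: EXPIRE, so stale keys disappear
    return count <= LIMIT

def send_with_limit(do_request):
    while not try_acquire():
        time.sleep(0.05)               # wait for the next window
    return do_request()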
The problem
In order to sync the application around Redis, I need two Redis calls: INCR and EXPIRE. Let's say that each call can take around 250 ms to return, so the whole check takes ~500 ms. Because of that, in some cases you end up checking an old window, since by the time you get the answer the current second has already changed. If the next second then gets another 5 quick calls, this will lead to a 429 from the server.
Question
As you can see, this pattern does not really ensure that the rate of my application stays at or below 5 calls/second.
How do you recommend to do it right?

Does batching lead to an increase in 429 throttling errors in the MSGraph API?

I am trying to sync OneDrive files (metadata and permissions) for a domain using the MSGraph API, using the list, children and permission endpoints.
I am using batching for the children and permission endpoints, sending 10-20 request URLs in a single batch request, concurrently for 10 users.
I am getting a lot of 429 errors by doing so, though I was also getting 429 errors when making single (non-batched) calls.
According to the documentation related to throttling, they ask you to:
1. Reduce the number of operations per request
2. Reduce the frequency of calls.
So, my question is:
Does a batch call of 10 GET URLs count as 10 different operations and 10 different calls?
Does a batch call of 10 GET URLs count as 10 different operations and 10 different calls?
Normally, N URLs will be treated as N+1 operations (or even more): N operations for the URLs inside the batch and one for the batch request itself.
Pay attention to the docs:
JSON batching allows you to optimize your application by combining
multiple requests into a single JSON object.
Because multiple requests have been combined into one request, the server side only needs to send back one response. But the underlying operation for each URL still needs to be handled, so the workload on the server side stays almost as high; it is only reduced a little, if at all.
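For illustration, a minimal sketch of how such a JSON batch is assembled against the $batch endpoint, assuming the Python requests library; the token and the (user, item) pairs are placeholders. Note that the batch POST can succeed while individual inner requests still come back as 429.

import requests  # assumes the requests package is available

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = "<access-token>"              # placeholder
item_refs = [                         # hypothetical (user, driveItem id) pairs
    ("user1@contoso.com", "ITEM-ID-1"),
    ("user2@contoso.com", "ITEM-ID-2"),
]

# Build one $batch body; every inner request is still a separate operation
# as far as throttling is concerned.
batch_body = {
    "requests": [
        {"id": str(i), "method": "GET",
         "url": f"/users/{user}/drive/items/{item}/children"}
        for i, (user, item) in enumerate(item_refs, start=1)
    ]
}

resp = requests.post(
    f"{GRAPH}/$batch",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=batch_body,
)

# Check per-request status codes and honor Retry-After where present.
for r in resp.json().get("responses", []):
    if r["status"] == 429:
        retry_after = r.get("headers", {}).get("Retry-After")
        print(f"request {r['id']} throttled, Retry-After={retry_after}")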
The answer lies somewhere in between.
Even though the documentation (I cannot find the actual page at this moment) says you can combine up to 20 requests, I found out by experimenting that the limit is currently set to 15. So if you reduce the number of calls in a single batch you should be good to go.
I'm not sure but it might also help to restrict the batches to a single user.
The throttling limit is set to 10000 items per 10 minutes per user resource, see this blog item

JMeter: how to get a large number of rps in JMeter

I'm load testing a web app using JMeter and I'm having a hard time working out how many threads, what ramp-up and how many loops to use in order to get a large number of rps. I want to check whether my server can keep up with 500 rps. Can anyone help me set this up properly? Thanks.
The number of requests per unit of time is called Throughput and mainly depends on two factors:
Number of active threads
Your application response time
The first one is obvious: more threads -> more requests per second. However, each JMeter thread waits for the response to its previous request before starting the next one, so application response time matters as well.
So the recommendations are:
Set number of threads in the Thread Group to the number of anticipated users of your system.
Set the ramp-up period according to the number of threads so the load increases (and decreases) gradually; this way you will be able to correlate the increasing/decreasing load with the changing response time and throughput.
Instead of loops it might be a better idea to set desired test duration using Scheduler section of the Thread Group.
Run your test and observe the actual throughput using e.g. the Server Hits Per Second listener or the Transactions Per Second chart of the HTML Reporting Dashboard. If it matches your expectations, you are done; if not, you will need to increase the number of virtual users.
You can use the Concurrency Thread Group plugin; specifically, see how to produce the desired RPS:
Thread pool size can be calculated as RPS * <max response time> / 1000. The higher the desired rate, the more threads you will need; the longer the service's response time, the more threads you will need.
For example, if your service response time may be 2.5sec and target
rps is 1230, you have to have 1230 * 2500 / 1000 = 3075 threads.
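As a rough sketch of the same sizing formula applied to the 500 rps target from the question, assuming a hypothetical worst-case response time of 800 ms (the real value has to come from your own measurements):

# Thread pool sizing per the Concurrency Thread Group guidance:
# threads = target_rps * max_response_time_ms / 1000
target_rps = 500           # desired requests per second
max_response_ms = 800      # assumed worst-case response time (hypothetical)

threads = target_rps * max_response_ms / 1000
print(threads)             # 400.0 -> size the pool at roughly 400 threads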

Marketo API - Maximum of 10 concurrent API calls

I'd like to know what Marketo means by 10 concurrent API calls. If, for example, 20 people use the API at the same time, is it going to crash? And if I make the script sleep for X seconds when I get that limit response and then try the API call again, will it work?
A maximum of 10 concurrent API calls means that Marketo will process at most 10 simultaneous API requests per subscription.
So, for example, if you have a service that directly queries the API every time it is used, and this very service gets called 11 or more times at the same time, then Marketo will respond with an error message for the eleventh call and the rest; the first 10 calls should be processed fine. According to the docs, the error message those requests receive will have an error code of 615.
If your script is single-threaded (like standard PHP), makes more than 10 API calls, and runs as a single instance, then you are fine, since the calls are performed one after another (so they are not concurrent). However, if your script can run in multiple instances, you can hit the limit easily. In that case a fixed sleep alone won't really help you, but you can always check the response code in your script and retry the call if it received an error, increasing the wait between retries. This retry process is often called exponential backoff. Here is a great article on this topic.
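A minimal sketch of that backoff-and-retry idea in Python, assuming the requests library and the usual Marketo REST response shape with an errors list; the URL, parameters and delays are illustrative.

import random
import time
import requests  # assumes the requests package is available

def call_with_backoff(url, params, max_retries=5):
    # Retry a Marketo REST call when the concurrency limit (error code 615) is hit.
    delay = 1.0
    for attempt in range(max_retries):
        resp = requests.get(url, params=params).json()
        errors = resp.get("errors") or []
        if not any(str(e.get("code")) == "615" for e in errors):
            return resp                             # success, or some other error: return as-is
        time.sleep(delay + random.uniform(0, 0.5))  # wait, with a little jitter
        delay *= 2                                  # exponentially growing delay
    raise RuntimeError("still throttled after retries")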

Error: "Calls to mailbox_fql have exceeded the rate of 300 calls per 600 seconds"

I receive Graph API error #613 (message: "Calls to mailbox_fql have exceeded the rate of 300 calls per 600 seconds", type:OAuthException) when testing my app. It's a desktop app, and the only copy is the one running on my machine (so there's only one access_token and one user - me).
I query the inbox endpoint once every 15 seconds or so. Combined, the app makes about 12 API calls (to various endpoints) per minute. It consistently fails on whichever call fetches the 300th thread (there are about 25 threads on the first page of the inbox endpoint, and I'm only fetching the first page). I am not batching any calls to the Graph API.
I'm developing on Mac OS X 10.7 using Objective-C. I use NSURLConnection to call the Graph API asynchronously. As far as I know, each request processed by NSURLConnection should only result in one request to Facebook's API.
Going on the above, I'm having trouble figuring out why I am receiving this error. I suspect that it is because a single call to the inbox endpoint (i.e. a call to the URI https://graph.facebook.com/me/inbox?access_token=...) is counted as more than one call to mailbox_fql. In particular, I think that a single call that returns <n> threads counts as <n> calls against mailbox_fql. If this is the case, is there a way to reduce the number of calls to mailbox_fql per API call (e.g. by fetching only the <n> most recent threads in the inbox, rather than the whole first page)?
The documentation appears to be pretty sparse on this topic, so I've had to get by mostly through trial and error. I'd be thrilled if anyone else knows how to tackle this issue.
Edit: It turns out that you can pass a limit GET parameter that, unsurprisingly, limits the number of results. However, the Developer blog notes some limitations with this approach (namely that fewer results than requested may be returned if some are not visible to your user).
The blog recommends using until and/or since as GET parameters when calling the standard Graph API. These parameters take any strtotime()-compliant string (or Unix epoch time) and limit your results accordingly.
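For example, a call along these lines (parameter values are purely illustrative) keeps the response small by combining limit with since:

https://graph.facebook.com/me/inbox?limit=5&since=2012-01-01&access_token=...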
Original answer follows:
After some further research, it looks like my options are to fetch less frequently or use custom FQL queries to limit the number of calls to mailbox_fql. I haven't been able to find any way to limit the response of the standard Graph API call to the inbox endpoint. In the present case, I'm using an FQL query of the following form:
https://graph.facebook.com/fql?q=SELECT <fields> FROM thread WHERE folder_id=1 LIMIT <n>&access_token=...
<fields> is a comma-separated list of fields (described in Facebook's thread FQL docs). thread is the literal name of the table corresponding to the inbox endpoint; the new thread endpoint corresponds to the unified_thread table, but it's not publicly available yet. folder_id=1 indicates that we want to use the inbox (as opposed to outbox or updates folders).
In practice, I'm setting <n> to 5, which results in a reasonable 200 calls to mailbox_fql in a 10-minute span when using 15-second call intervals. In my tests, I haven't been receiving error #613, so I guess it works.
I imagine that most people here were already familiar with the ins and outs of FQL, but it was new to me. I hope that this helps some other newbies dealing with similar issues!