I received the "JS error: Exceeded the total HTTP connection count limit!" error on the Voximplant Platform today, but I can't find the limits. Where do I find this information?
There are some restrictions in the Voximplant JavaScript engine. One of them limits the number of concurrent HTTP requests made from a call scenario: only 3 requests can be processed simultaneously.
You should consider adjusting the scenario flow to reduce the number of simultaneous requests. For example, you can use httpRequestAsync and chain the requests using await.
Full list of sandbox restrictions can be found here.
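A minimal sketch of the chaining idea. Note that httpRequestAsync here is a local stub standing in for the platform call (the stub's delay and the URLs are assumptions for illustration); in a real scenario the platform provides the function:

```javascript
// Serializing requests with await so at most one is in flight at a time.
let inFlight = 0;
let maxInFlight = 0;

// Stub: resolves after a short delay and tracks concurrency.
function httpRequestAsync(url) {
  inFlight++;
  maxInFlight = Math.max(maxInFlight, inFlight);
  return new Promise((resolve) =>
    setTimeout(() => {
      inFlight--;
      resolve(`response from ${url}`);
    }, 10)
  );
}

async function runScenario() {
  const urls = ['http://a.example', 'http://b.example', 'http://c.example', 'http://d.example'];
  const responses = [];
  for (const url of urls) {
    // Awaiting each call before issuing the next keeps concurrency at 1,
    // safely under the 3-request limit.
    responses.push(await httpRequestAsync(url));
  }
  return responses;
}
```

If some parallelism is still needed, the same pattern can be applied to groups of at most 3 requests at a time.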
Related
I have to call a third-party API in bulk (calling the API more than 1000 times), and the third-party API has a rate limit of 10 requests per second.
Here is the architecture to call the third-party API.
1. Scheduler - takes data from the primary database and publishes it into a message broker at the API rate limit.
2. Message broker - has a dynamic number of consumers.
3. Internal API - processes the data and invokes the third-party API.
Now, the above architecture handles the API rate limit, but a problem arises when the third party takes longer for any reason (network latency, etc.): messages can pile up in the message broker, and draining them can then exceed the API rate limit.
To overcome this, I have decided to use inter-process communication: when the internal API (#3) gets a 429, it tells the scheduler cron job (#1) to pause publishing messages for some amount of time and then continue afterwards.
Which framework or library will help to achieve this?
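Independent of the framework choice, the pause-on-429 loop itself can be simulated in a few lines. This is only a sketch: the third-party API is a stub that returns 429 on every 15th attempt, and all names (callThirdParty, publishAll) are illustrative:

```javascript
const PAUSE_MS = 50; // back-off after a 429 (kept short for the sketch)

let attempts = 0;
function callThirdParty() {
  attempts++;
  return attempts % 15 === 0 ? 429 : 200; // simulated rate-limit response
}

async function publishAll(messages) {
  let sent = 0;
  let retried = 0;
  for (let i = 0; i < messages.length; i++) {
    const status = callThirdParty();
    if (status === 429) {
      retried++;
      i--; // re-queue this message
      // Pause publishing, as the scheduler (#1) would on a signal from #3.
      await new Promise((r) => setTimeout(r, PAUSE_MS));
      continue;
    }
    sent++;
  }
  return { sent, retried };
}
```

In a real deployment the pause signal would travel over the broker or a shared store rather than an in-process flag, but the control flow is the same.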
I implemented the express-rate-limit npm module in my code (Node.js).
I also saw the ddos module.
Can anyone with good Node.js expertise suggest whether I should use the ddos module or not?
I installed the module, but it blocks requests. I read about express-rate-limit as well; that module works the same way as ddos.
Someone suggested I use ddos. I told them I already use express-rate-limit, but they said to use this one as well.
I am confused now. Please give me proper input on this. Any help is really appreciated.
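For reference, a typical express-rate-limit setup looks roughly like this (the window and max values are placeholders; check the module's docs for the option names in your installed version):

```javascript
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Allow at most 100 requests per IP in a 15-minute window; further
// requests receive a 429 response until the window rolls over.
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
});

app.use(limiter);
```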
It's fine as a basic shield against DDoS, or for handling external requests to your API methods that can go over the limit.
But if you want to prevent real DDoS attacks, you should look at debouncing and event throttling. Also think about per-machine custom firewall configurations ;)
Dig a bit more into the docs of this module ;)
burst
Burst is the number or amount of allowable burst requests before the client starts being penalized. When the client is penalized, the expiration is increased to twice the previous expiration.
bursts = the base request counter per unit of time, 1 second by default or a custom setting
limit
limit is the maximum count allowed (do not confuse this with maxcount). The count increments with each request. If the count exceeds the limit, the request is denied. The recommended limit is a multiple of the number of bursts.
Requests received => check against the limit. If the limit is reached, the requester gets a penalty.
When you see a lot of requests, multiple bursts are detected - that is how exceeding the request limit is actually recognized.
So with 5 bursts set and a limit of 20: once 5 bursts are detected, the 20-request counter is flagged as a fully recognized limit violation.
maxexpiry
maxexpiry is the maximum expiration time in seconds. In order for the user to use whatever service you are providing again, they have to wait through the expiration time.
And that's it. Just dive into testing this stuff;)
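The counter/penalty scheme described above can be sketched as a toy re-implementation. This illustrates the documented behaviour (count per request, denial above the limit, expiration doubling up to maxexpiry); it is not the module's actual code, and makeLimiter is an invented name:

```javascript
function makeLimiter({ limit = 20, expiry = 1, maxexpiry = 60 } = {}) {
  let count = 0;
  let currentExpiry = expiry; // seconds the client must wait once penalized

  return function handleRequest() {
    count++;
    if (count > limit) {
      // Each denial doubles the penalty, capped at maxexpiry.
      currentExpiry = Math.min(currentExpiry * 2, maxexpiry);
      return { allowed: false, waitSeconds: currentExpiry };
    }
    return { allowed: true };
  };
}
```

A real limiter would also decay the count over time per client; this sketch only shows the penalty arithmetic.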
I am trying to sync OneDrive files (metadata and permissions) for a domain using the MS Graph API via the list, children, and permissions endpoints.
I am using batching for the children and permissions endpoints, sending 10-20 request URLs in a single batch request, concurrently for 10 users.
I am getting a lot of 429 errors this way, though I was also getting 429 errors on single (non-batched) calls.
According to the documentation on throttling, they ask you to:
1. Reduce the number of operations per request
2. Reduce the frequency of calls.
So my question is:
Does a batch call of 10 GET URLs count as 10 different operations and 10 different calls?
Does a batch call of 10 GET URLs count as 10 different operations and 10 different calls?
Normally, N URLs will be treated as N+1 operations (or even more): N operations for the batched URLs and one for the batch request itself.
Pay attention to the docs:
JSON batching allows you to optimize your application by combining
multiple requests into a single JSON object.
Since multiple requests have been combined into one, the server only needs to send back one response. But the underlying operation for each URL still needs to be handled, so the workload on the server side remains nearly as high; it may only be reduced a little.
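As a sketch, a Graph JSON batch body bundles the sub-requests into a single object like this (the URLs are illustrative, and buildBatch is an invented helper name; the payload shape follows the documented JSON batching format of an array of { id, method, url } entries):

```javascript
function buildBatch(urls) {
  return {
    requests: urls.map((url, i) => ({
      id: String(i + 1), // each sub-request needs a unique id
      method: 'GET',
      url, // relative to the Graph API root
    })),
  };
}
```

The whole object is then POSTed to the $batch endpoint, and the response carries one entry per id, each with its own status code, so individual 429s inside a batch can be detected and retried.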
The answer lies somewhere in between.
Even though the documentation (I cannot find the actual page at this moment) says you can combine up to 20 requests, I found out by experimenting that the limit is currently set to 15. So if you reduce the number of calls in a single batch, you should be good to go.
I'm not sure, but it might also help to restrict each batch to a single user.
The throttling limit is set to 10,000 items per 10 minutes per user resource; see this blog item.
I'd like to know what Marketo means by 10 concurrent API calls. If, for example, 20 people use the API at the same time, is it going to crash? And if I make my script sleep for X seconds when I get the limit response and then retry the API call, will it work?
Thanks,
Best Regards,
Martin
A maximum of 10 concurrent API calls means that Marketo will process at most 10 simultaneous API requests per subscription.
So, for example, if you have a service that directly queries the API every time it is used, and this service gets called 11 or more times at the same time, then Marketo will respond with an error message for the eleventh call and beyond; the first 10 calls should be processed fine. According to the docs, the error message those requests receive will have the error code 615.
If your script is single-threaded (like standard PHP), makes more than 10 API calls, and runs in a single instance, then you are fine, since the calls are performed one after another (so they are not concurrent). However, if your script can run in multiple instances, you can hit the limit easily. A plain sleep alone won't necessarily help in that case, but you can always check the response code in your script and retry the call if it received an error, increasing the wait between retries. This retry process is often called exponential backoff. Here is a great article on this topic.
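The retry-with-exponential-backoff idea from the answer above can be sketched as follows. The callApi argument is a stand-in stub; a real client would issue the HTTP request and inspect the status code (e.g. Marketo's 615), and the retry counts and delays here are arbitrary example values:

```javascript
function sleep(ms) {
  return new Promise((r) => setTimeout(r, ms));
}

async function callWithBackoff(callApi, maxRetries = 5, baseDelayMs = 100) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const result = await callApi();
    if (result.ok) return result;
    // Wait baseDelayMs, then 2x, 4x, 8x, ... before the next attempt.
    await sleep(baseDelayMs * 2 ** attempt);
  }
  throw new Error('gave up after ' + (maxRetries + 1) + ' attempts');
}
```

Adding a small random jitter to each delay is a common refinement so that multiple instances do not retry in lockstep.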
We are sending push notifications to Android devices via the GCM API.
People are allowed to subscribe to different topics and receive alerts every couple of days.
There are between 100,000 and 1,000,000 users subscribed to a given topic, so we wanted to speed things up by using more than ten connections.
We see responses instructing us to retry, so we retry after the specified period of time as stated in the docs.
Can we get rid of the retries by using more connections and sending the requests more slowly?
Or is the quota set per API key, so that opening more connections will actually hurt us?
EDIT:
We are using the GCM HTTP interface, specifically the erlang-gcm library: https://github.com/pdincau/gcm-erlang We are sending a message to 1M users. We are not sending to a topic; we are performing a multicast send to a list of users. The gcm-erlang library allows us to pass 1000 users per request (which is also the limit of the GCM API). This means we have to perform at least 1000 requests.
It takes around 10 minutes to process all those 1000 requests, so we wanted to make them in parallel, but it doesn't get any faster. Here I've found information on throttling: https://stuff.mit.edu/afs/sipb/project/android/docs/google/gcm/adv.html#throttling
"Messages are throttled on a per application"
Does it mean that even though these are messages to different users, we are still throttled, because they all use the single API key of our mobile application?
Will the XMPP endpoint be faster?
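The fan-out described above (1M users, 1000 ids per request) can be sketched as chunking plus a bounded worker pool. Everything here is illustrative: sendMulticast is a stub standing in for the real gcm-erlang call, and the concurrency value is an assumption:

```javascript
// Split a recipient list into chunks of at most `size` ids.
function chunk(list, size) {
  const out = [];
  for (let i = 0; i < list.length; i += size) out.push(list.slice(i, i + size));
  return out;
}

// Send all chunks with at most `concurrency` requests in flight at once.
async function sendAll(userIds, sendMulticast, concurrency = 10) {
  const chunks = chunk(userIds, 1000); // GCM multicast limit mentioned above
  let next = 0;
  let sent = 0;
  async function worker() {
    while (next < chunks.length) {
      const batch = chunks[next++]; // safe: the read+increment is synchronous
      await sendMulticast(batch);
      sent += batch.length;
    }
  }
  await Promise.all(Array.from({ length: concurrency }, worker));
  return sent;
}
```

Whether this actually helps depends on where the bottleneck is; if the server throttles per application, more workers only move the waiting from your side to theirs.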
It is weird that parallelizing the requests didn't make them faster. How come? Are you sure the bottleneck is not on your side?
No, it doesn't look like you're throttled (you would receive errors if you were, instead of waiting in line).
I still don't understand why topics don't work for you. They seem like a good match.
Anyway, if you want to send messages individually, I would highly recommend switching to XMPP. You will be able to send one hundred messages at a time per connection and open up to 1000 connections (but you're not going to need that many, really).