Is there a way to check the Foursquare venue rate-limit without actually costing a request? - api

I am currently pinging the foursquare venue platform in order to harmonize my company's data with foursquare's. Information on the venue harmonization is here: https://developer.foursquare.com/overview/mapping
The rate limit is reported in the response headers as X-RateLimit-Remaining. I have a producer pushing out new requests and a consumer which touches the Foursquare API. I would like a way for the producer to check the rate limit without spending any of the remaining requests.
Does anyone know of a way to do this?

Unfortunately, there is no "dummy" endpoint to check your rate limit. They have a multi endpoint, but as far as I can remember they count each request inside the batch.
You can count the rate limit on your side. Here is an explanation:
The window should still be updating in real time.
To be clear: if your rate limit is 500 and you send 5 requests at 11:00, X-RateLimit-Remaining will be 495. If you wait a few minutes until 11:05 and send another request, X-RateLimit-Remaining will be 494; it won't have reset yet.
It's not until 12:01 that you'll get back the 5 requests you made at 11:00. So if you request again at 12:01, your limit would be 498 (-1 for the request you just made, -1 for the 11:05 request). Request again at 12:06 and you'll be back up to 499 (the full limit, minus what you just used).
(from this thread: API Quota exceeded)
If you reproduce this logic and count each request you make per endpoint, you can estimate your rate limit yourself. I'm sorry there isn't an easier way to do this :(
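Since there's no endpoint to query, the rolling-window behaviour described above can be mirrored client-side: timestamp every request you actually send, and drop timestamps older than the window before counting. A minimal sketch (the one-hour window and the limit of 500 are taken from the example above, not from Foursquare's docs; confirm both against your own response headers):

```python
import time
from collections import deque

class RollingWindowCounter:
    """Client-side mirror of a rolling-window rate limit.

    Timestamps every request sent; requests older than `window`
    seconds no longer count against the limit.
    """

    def __init__(self, limit, window=3600.0):
        self.limit = limit
        self.window = window
        self._stamps = deque()

    def _evict(self, now):
        # Drop timestamps that have aged out of the window.
        while self._stamps and now - self._stamps[0] >= self.window:
            self._stamps.popleft()

    def record(self, now=None):
        """Call once for each request actually sent."""
        now = time.time() if now is None else now
        self._evict(now)
        self._stamps.append(now)

    def remaining(self, now=None):
        """Estimated X-RateLimit-Remaining, without spending a request."""
        now = time.time() if now is None else now
        self._evict(now)
        return self.limit - len(self._stamps)
```

Replaying the example above: 5 requests at 11:00 and one at 11:05 leave an estimate of 494; one more at 12:01 leaves 498 (the 11:00 batch has aged out); by 12:06 the 11:05 request has aged out too, and the estimate climbs back toward the full limit. The producer can call `remaining()` as often as it likes, for free.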

Related

Efficient way to handle 429 error while processing request in batches on multiple instances

I have batches of 500 messages, and to send them I am using an external API which only allows sending 1 message at a time. They also have a rate limit of 10 requests/second.
If there were a single instance, I could respect the rate limit by adding a delay between API calls. But in my case the number of instances is not fixed; it depends entirely on the traffic I receive.
Let's say I have 10 instances running, each processing a batch of 500 messages, so I have 5,000 messages to process in total. The rate limit is 10/second, so if all the instances invoke the same API with the same credentials, the limit is exceeded after each instance makes a single call.
When each instance tries to send its second message, it gets a 429 error because the rate limit is exceeded.
Now I want to make sure that all 10 instances combined send only 10 messages per second. How can I implement that?
Any better suggestions or recommendations are appreciated!
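One standard pattern for this (a suggestion, not an answer quoted from the thread) is a token bucket that all senders draw from. The sketch below is single-process; for multiple instances the bucket state (token count and last-refill time) has to live in a shared store such as Redis, with the check-and-decrement done atomically (e.g. via a Lua script), so that all instances spend from the same budget:

```python
import time
import threading

class TokenBucket:
    """Token bucket allowing roughly `rate` requests per second.

    Single-process sketch: with multiple instances, keep `tokens` and
    `last` in shared storage and make try_acquire atomic there.
    """

    def __init__(self, rate, capacity=None, now=None):
        self.rate = float(rate)
        self.capacity = float(capacity if capacity is not None else rate)
        self.tokens = self.capacity
        self.last = time.monotonic() if now is None else now
        self._lock = threading.Lock()

    def try_acquire(self, now=None):
        """Consume one token if available; return True on success."""
        with self._lock:
            now = time.monotonic() if now is None else now
            # Refill proportionally to elapsed time, capped at capacity.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False
```

Each instance calls `try_acquire()` before sending a message; on `False` it sleeps briefly and retries, so the combined fleet never exceeds the shared rate.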

Limits to Telegram API get_entity requests

My app listens for requests containing Telegram message URLs and processes messages in a loop:
get the entity for the group
search +/- 2 messages from the target message
get the entity for each received message's author (in fact, find the entity for each unique from_id)
I use the client.get_messages() and client.get_entity() methods and sleep 10-15 seconds between each loop iteration.
And after 2-3 hours without any warnings (no flood waits of 10 seconds or 5 minutes), I get a flood-wait error with an insane timeout (~22 hours).
I am not trying to send spam; in fact, I don't send any messages at all.
Where can I find the limits on using the get_entity methods?
Or maybe using this method is overkill, and the user info can be found some other way?
I suggest you take a look at the Telethon documentation: there are a few tips in there that let you avoid the limits by using get_entity combined with get_input_entity.
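The relevant point from Telethon's docs is that get_input_entity can usually answer from the local session cache without a network call, while get_entity may hit the API. Independent of that, since the loop above resolves the same from_id values repeatedly, memoizing resolved entities yourself caps the cost at one lookup per unique id. A sketch of that caching idea, with a hypothetical `fetch` callback standing in for the expensive call (e.g. client.get_entity):

```python
class EntityCache:
    """Memoize entity lookups so each unique id costs at most one API call.

    `fetch` is any expensive resolver, e.g. a wrapper around
    Telethon's client.get_entity (assumed from the question).
    """

    def __init__(self, fetch):
        self._fetch = fetch
        self._cache = {}
        self.misses = 0  # number of actual API calls made

    def get(self, entity_id):
        if entity_id not in self._cache:
            self.misses += 1
            self._cache[entity_id] = self._fetch(entity_id)
        return self._cache[entity_id]
```

With this in front of the resolver, processing many messages from the same handful of authors no longer multiplies the API traffic, which is what tends to trigger the long flood waits.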

Twitch Api: How can I know that stream was finished?

I have a stream URL like https://www.twitch.tv/streams/26114851120/channel/31809543. The stream is online and I need to catch the moment when it finishes.
I searched the Twitch API documentation and didn't find any events. My first thought was to send requests every few minutes and, when the stream goes offline, handle that. There would be a little delay, but that isn't critical.
But there are many streams I must track, and I'm afraid Twitch might block me for this.
Are there any other ways to catch a stream's finish?
As best I can tell there's no way to directly listen for a stream going online or offline, but you can still monitor a large number of streams in spite of that.
There are a fair number of Q&A on the official Twitch developer site wanting this functionality, but all of them I could find are answered with the same "it's not currently possible."
Keep in mind that you can check the status of multiple channels simultaneously (up to 100 per request) using a comma separated list and the limit query parameter: Get-Live-Streams
https://api.twitch.tv/kraken/streams?channel=Channel1,Channel2&limit=100
That'll return an object containing an array of online streams (the streams property).
Rate Limits
Twitch's official stance regarding rate limiting is a recommendation of no more than "about 1 request per second". That said, they don't throttle you for making several requests in immediate succession; it's the cumulative amount that matters.
Note that there's a separate rate limit for IRC-related actions of 20 commands/messages per 30 seconds normally or 100 per 30 if a mod. Violating that will trigger a 30 minute lockout.
API-Side Caching
API results are also cached for 1-3 minutes which reduces load on their end. Given that, there's not much value in polling for anything more frequently than that (i.e. you should wait at least 1 minute before making the exact same request again since you'd just get the same response).
You Can Still Monitor ~6000 Streams
Given the ability to check 100 streams at a time, a need to wait for at least 1 minute per request to get new results, and an approximate rate limit of 1 request per second, you can theoretically check the status of about 6000 streams continuously (assuming you're not making other requests; 100 streams per second * 60 per minute).
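Under those numbers, the monitoring loop reduces to splitting your channel list into groups of 100 and cycling through one request per second. A sketch of the batching step (using the kraken streams endpoint from the answer above; note the v5 API also required a Client-ID header, omitted here):

```python
def batch_urls(channels, per_request=100,
               base="https://api.twitch.tv/kraken/streams"):
    """Split a channel list into groups of up to `per_request`
    and build one streams-endpoint URL per group."""
    urls = []
    for i in range(0, len(channels), per_request):
        chunk = channels[i:i + per_request]
        urls.append(f"{base}?channel={','.join(chunk)}&limit={per_request}")
    return urls
```

Issuing one of these URLs per second, round-robin, keeps you inside both the ~1 request/second guideline and the 1-minute cache window for up to roughly 6000 channels; any channel missing from the returned streams array is offline.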
PubSub For Monitoring Other Things
Currently the PubSub API doesn't have anything for monitoring streams going online or offline, but you may want to keep it in mind for other polling-type actions (it currently covers things like new subscriptions or donations).
Using The Embedded Player
One last thing worth noting is you can listen for a channel going online or offline when you're using the Twitch Embedded Player.
Might be a little late to reply, but now you can look into Twitch WebHooks.
They allow you to subscribe to a specific stream (or streams) and have Twitch notify your callback URL when the stream goes up or down.
This seems more accurate and bandwidth-saving than polling Twitch yourself.
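When you subscribe, Twitch's WebHooks verify your callback by sending it a GET with a hub.challenge query parameter, which you must echo back with a 200 to complete the handshake. The core of that handler, sketched as a pure function (the URL shape is illustrative; the framework plumbing around it is up to you):

```python
from urllib.parse import urlparse, parse_qs

def handle_callback_verification(url):
    """Answer Twitch's subscription-verification GET.

    Returns an (http_status, body) pair: echo hub.challenge with
    200 when present, otherwise 400 (not a verification request).
    """
    params = parse_qs(urlparse(url).query)
    challenge = params.get("hub.challenge", [None])[0]
    if challenge is None:
        return 400, ""
    return 200, challenge
```

After the handshake succeeds, Twitch POSTs stream-change notifications to the same callback URL for the lease duration, so your app reacts to a stream ending instead of discovering it by polling.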

In Ravendb track request count per session

Is there a way to get the request count per session in RavenDB, so I can use it for optimization (e.g., reducing the calls made)? I know RavenDB limits it to 30 per session. What I would like to know is the count of requests made at any given time (in code, at run time).
To get the number of requests for a session use session.Advanced.NumberOfRequests.

Error: "Calls to mailbox_fql have exceeded the rate of 300 calls per 600 seconds"

I receive Graph API error #613 (message: "Calls to mailbox_fql have exceeded the rate of 300 calls per 600 seconds", type:OAuthException) when testing my app. It's a desktop app, and the only copy is the one running on my machine (so there's only one access_token and one user - me).
I query the inbox endpoint once every 15 seconds or so. Combined, the app makes about 12 API calls (to various endpoints) per minute. It consistently fails on whichever call fetches the 300th thread (there are about 25 threads on the first page of the inbox endpoint, and I'm only fetching the first page). I am not batching any calls to the Graph API.
I'm developing on Mac OS X 10.7 using Objective-C. I use NSURLConnection to call the Graph API asynchronously. As far as I know, each request processed by NSURLConnection should only result in one request to Facebook's API.
Going on the above, I'm having trouble figuring out why I am receiving this error. I suspect that it is because a single call to the inbox endpoint (i.e. a call to the URI https://graph.facebook.com/me/inbox?access_token=...) is counted as more than one call to mailbox_fql. In particular, I think that a single call that returns <n> threads counts as <n> calls against mailbox_fql. If this is the case, is there a way to reduce the number of calls to mailbox_fql per API call (e.g. by fetching only the <n> most recent threads in the inbox, rather than the whole first page)?
The documentation appears to be pretty sparse on this topic, so I've had to get by mostly through trial and error. I'd be thrilled if anyone else knows how to tackle this issue.
Edit: It turns out that you can pass a limit GET parameter that, unsurprisingly, limits the number of results. However, the Developer blog notes some limitations with this approach (namely that fewer results than requested may be returned if some are not visible to your user).
The blog recommends using until and/or since as GET parameters when calling the standard Graph API. These parameters take any strtotime()-compliant string (or Unix epoch time) and limit your results accordingly.
Original answer follows:
After some further research, it looks like my options are to fetch less frequently or use custom FQL queries to limit the number of calls to mailbox_fql. I haven't been able to find any way to limit the response of the standard Graph API call to the inbox endpoint. In the present case, I'm using an FQL query of the following form:
https://graph.facebook.com/fql?q=SELECT <fields> FROM thread WHERE folder_id=1 LIMIT <n>&access_token=...
<fields> is a comma-separated list of fields (described in Facebook's thread FQL docs). thread is the literal name of the table corresponding to the inbox endpoint; the new thread endpoint corresponds to the unified_thread table, but it's not publicly available yet. folder_id=1 indicates that we want to use the inbox (as opposed to outbox or updates folders).
In practice, I'm setting <n> to 5, which results in a reasonable 200 calls to mailbox_fql in a 10-minute span when using 15-second call intervals. In my tests, I haven't been receiving error #613, so I guess it works.
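That figure of 200 follows directly from the call interval and page size, under the answer's hypothesis that each returned thread counts as one mailbox_fql call:

```python
# Sanity-check the budget above: one inbox fetch every 15 s, each
# returning 5 threads, against a 300-calls-per-600-s mailbox_fql limit.
WINDOW_SECONDS = 600
LIMIT = 300
INTERVAL = 15          # seconds between inbox fetches
THREADS_PER_FETCH = 5  # the LIMIT <n> in the FQL query

fetches_per_window = WINDOW_SECONDS // INTERVAL     # 40 API calls
fql_calls = fetches_per_window * THREADS_PER_FETCH  # 200 mailbox_fql calls
assert fql_calls <= LIMIT
print(fetches_per_window, fql_calls)  # 40 200
```

The same arithmetic shows the original behaviour failing: at ~25 threads per unrestricted page, 40 fetches would cost 1000 mailbox_fql calls per window, hitting the 300-call ceiling around the 12th fetch, which matches the error consistently appearing at the 300th thread.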
I imagine that most people here were already familiar with the ins and outs of FQL, but it was new to me. I hope that this helps some other newbies dealing with similar issues!