Limits to Telegram API get_entity requests - telethon

My app listens for requests that contain Telegram message URLs and processes the messages in a loop:
get the entity for the group
search +/- 2 messages around the target message
get the entity for the author of each received message (in fact, find the entity for each unique from_id)
I use the client.get_messages() and client.get_entity() methods and sleep 10-15 seconds between loop iterations.
After 2-3 hours without any warnings (no flood waits of 10 seconds or 5 minutes), I get a FloodWaitError with an insane timeout (~22 hours).
I'm not trying to send spam; in fact, I don't send any messages at all.
Where can I find the limits for the get_entity method?
Or maybe using this method is overkill and the user info can be found some other way?

I suggest you take a look at the Telethon documentation: there are a few tips in there that let you avoid hitting the limits by combining get_entity with get_input_entity.
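For illustration, a minimal sketch of that approach, assuming an already-connected TelegramClient and the +/- 2 window from the question (the function and variable names are made up):

    from telethon import TelegramClient

    async def process_link(client: TelegramClient, chat, msg_id: int):
        # get_input_entity resolves from the local session cache when possible,
        # so it usually costs no API request at all (unlike get_entity).
        peer = await client.get_input_entity(chat)

        # Fetch the target message and its +/- 2 neighbours in a single call.
        messages = await client.get_messages(peer, ids=list(range(msg_id - 2, msg_id + 3)))

        authors = {}
        for msg in messages:
            if msg is None or msg.from_id is None:
                continue
            # get_sender() reuses the entities Telethon already returned with
            # the messages, avoiding extra resolve requests per author.
            sender = await msg.get_sender()
            if sender is not None:
                authors[sender.id] = sender
        return authors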

Related

Duplicate request on bot messaging endpoint

I'm using the MS-provided bot sample with the Teams messaging extensions feature. I only added my Azure AD credentials, no other changes. I'm running it locally...
When a user clicks the messaging extension button in Teams, a request arrives at the Microsoft.BotBuilderSamples.Controllers.BotController.PostAsync() method. If this method runs longer than 25 seconds, Teams shows the user an error message. The docs say it should be only 15 seconds, but it seems Teams has become more tolerant these days, okay.
But in this case a second request arrives at this method after the first one (this happens even if the method runs for 16 seconds, not 26)! It has the same body and headers except for the Authorization header (which contains a new token).
So... what does this mean? What is this behavior for? How can I prevent it?
And who makes this second request, anyway? I looked in Fiddler and saw only one request to the MS server from my desktop Teams client. When I make a similar request from Postman, it arrives only once.
Copying the answer from the comments for better understanding:
Ideally, the bot should respond within 5 seconds. Teams waits longer, but that is not something we should rely on. Also, as #subba reddi said, if there is no response from the bot controller within 15 seconds, the Teams service retries once. That is why you see double calls in your controller. So make sure your bot responds within 15 seconds.
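As a rough illustration of that advice (a generic Python asyncio sketch, not the C# Bot Framework sample; the handler and helper names are made up): acknowledge the request right away and push the slow work into a background task, then deliver the real result later, e.g. as a proactive message.

    import asyncio

    async def handle_teams_request(activity: dict) -> dict:
        # Start the long-running work without awaiting it...
        asyncio.create_task(do_slow_work(activity))
        # ...and return well inside the 15-second window so Teams does not retry.
        return {"status": 202}

    async def do_slow_work(activity: dict) -> None:
        await asyncio.sleep(20)  # placeholder for the real processing
        # send the actual reply here, e.g. as a proactive message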

Marketo API - Maximum of 10 concurrent API calls

I'd like to know what Marketo means by 10 concurrent API calls. If, for example, 20 people use the API at the same time, is it going to crash? And if I make the script sleep for X seconds when I get that limit response and then try the API call again, will it work?
Thanks,
Best Regards,
Martin
A maximum of 10 concurrent API calls means that Marketo will process at most 10 simultaneous API requests per subscription.
So, for example, if you have a service that directly queries the API every time it is used, and this service gets called 11 or more times at once, then Marketo will respond with an error message for the eleventh call and any beyond it. The first 10 calls should be processed fine. According to the docs, the error message those extra requests receive will carry error code 615.
If your script is single-threaded (like standard PHP), makes more than 10 API calls, and runs as a single instance, then you are fine, since the calls are performed one after another (so they are not concurrent). However, if your script can run in multiple instances, you can hit the limit easily. In that case a fixed sleep won't help you, but you can always check the response code in your script and retry the call if it received an error, waiting longer after each failed attempt. This retry pattern is often called exponential backoff. Here is a great article on this topic.
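A minimal sketch of that retry loop (the make_request callable is a placeholder for one Marketo API call, and the exact shape of the error payload may differ):

    import time

    def call_with_backoff(make_request, max_retries=5):
        delay = 1.0
        for _ in range(max_retries):
            response = make_request()  # performs one Marketo API call
            errors = response.get("errors") or []
            if not any(err.get("code") == "615" for err in errors):
                return response  # no concurrency-limit error, we're done
            time.sleep(delay)  # back off before retrying
            delay *= 2         # double the wait after each failed attempt
        raise RuntimeError("Still hitting the concurrency limit after retries")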

RabbitMQ X-delayed plugin : How to access processing data in exchanges?

I've been searching for a few hours now for a way to retrieve information from a RabbitMQ exchange.
Let me explain my goal:
I designed a system to avoid burning through the Gmail API call limits (per second) in my application. To do so I set up a cron job which spreads the sends over an hour: basically I define a delay in my cron job and then push my data into a delayed queue, which is itself bound to the x-delayed exchange (type direct). This part is working pretty well.
In addition, I have a consumer which consumes the queue and sends the emails. It's working perfectly too.
Here is my problem: some manual actions coming from my users need to be sent ASAP. So I want to look at the next few delayed messages that are going to be sent from my delayed exchange to the queue and slot this new message between the next two delayed messages.
As an example:
my-delayed-exchange has [message1: will be published in 3000 ms, message2: will be published in 6000 ms]; I want to insert [messageToSendAsap: will be published in 4500 ms]. That way I can be sure I stay in control of my API limits.
Has anyone heard of a method to achieve this?
Thank you in advance.
PS: I code in NodeJS with the amqp lib.
Based on the example on the GitHub page of the plugin, you can simply set the x-delay value to 1 (I think it cannot be zero). That is, if you are sending message M1 with a delay of X and message M2 with a delay of Y, where Y < X, then message M2 will be delivered to the queue(s) before M1.
Also, if you want the message to be sent right away (so not in between the next two as you wrote in your example), you can simply have another "classical" direct exchange (without any delays).
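A minimal sketch of the first suggestion (in Python with pika rather than the Node amqp lib from the question; the exchange and queue names are made up). The per-message x-delay header is what the plugin reads, so the urgent message simply gets a very small delay:

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()

    # Delayed exchange provided by the x-delayed-message plugin.
    channel.exchange_declare(
        exchange="mail-delayed",
        exchange_type="x-delayed-message",
        arguments={"x-delayed-type": "direct"},
    )
    channel.queue_declare(queue="mail")
    channel.queue_bind(queue="mail", exchange="mail-delayed", routing_key="mail")

    def publish(body: bytes, delay_ms: int) -> None:
        channel.basic_publish(
            exchange="mail-delayed",
            routing_key="mail",
            body=body,
            properties=pika.BasicProperties(headers={"x-delay": delay_ms}),
        )

    publish(b"scheduled mail", 6000)  # delivered to the queue in ~6 s
    publish(b"urgent mail", 1)        # effectively immediate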

Number of concurrent connections to GCM service

We are sending push notifications to Android devices via the GCM API.
People are allowed to subscribe to different topics and receive alerts every couple of days.
There are between 100,000 and 1,000,000 users subscribed to a given topic, so we wanted to speed things up by using more than ten connections.
We see responses asking us to retry, so we retry after the specified period of time as stated in the docs.
Can we get rid of the retries by using more connections and sending the requests more slowly?
Or is the quota set per API key, so that opening more connections will actually hurt us?
EDIT:
We are using the GCM HTTP interface, to be precise the gcm-erlang library (https://github.com/pdincau/gcm-erlang). We are sending a message to 1M users. We are not sending to a topic; we are performing a multicast send to a list of users. The gcm-erlang library allows us to pass 1000 users per request (which is also the limit of the GCM API). This means we have to perform at least 1000 requests.
It takes around 10 minutes to process all those 1000 requests, so we wanted to make them in parallel, but that doesn't make it faster. Here I found information on throttling: https://stuff.mit.edu/afs/sipb/project/android/docs/google/gcm/adv.html#throttling
"Messages are throttled on a per application"
Does it mean that even though these are messages to different users, we are still throttled because they all use the single API key of our mobile application?
Will the XMPP endpoint be faster?
It is weird that parallelizing requests didn't make them faster. How come? Are you sure that the bottleneck is not on your side?
No, it doesn't look like you're throttled (you would receive errors if you were, instead of waiting in line).
I still don't understand why topics don't work for you. They seem like a good match.
Anyway, if you want to send messages individually, I would highly recommend switching to XMPP. You will be able to send one hundred messages at a time per connection and open up to 1000 connections (but you're really not going to need that many).
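For reference, the HTTP-side batching described in the question looks roughly like this (a Python sketch instead of gcm-erlang; the endpoint and payload shape follow the legacy GCM HTTP docs, and Retry-After/error handling is omitted):

    import concurrent.futures
    import requests

    GCM_URL = "https://gcm-http.googleapis.com/gcm/send"  # legacy GCM HTTP endpoint

    def send_batch(api_key: str, registration_ids: list, payload: dict) -> dict:
        response = requests.post(
            GCM_URL,
            json={"registration_ids": registration_ids, "data": payload},
            headers={"Authorization": f"key={api_key}"},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()

    def send_to_all(api_key: str, all_ids: list, payload: dict, workers: int = 10):
        # GCM accepts at most 1000 registration IDs per multicast request.
        batches = [all_ids[i:i + 1000] for i in range(0, len(all_ids), 1000)]
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(lambda b: send_batch(api_key, b, payload), batches))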

Work Manager thread constraints and page cannot be displayed

We have some memory-intensive processing for certain functionality and we would like to limit the number of parallel requests to it. We are able to configure this by using "Work Managers" in WebLogic and putting a limit on the number of threads for that servlet.
For example, if we set the maximum thread limit to 3 and there are 10 parallel requests, 7 requests sit in the queue. There could be situations where the requests waiting in the queue take up to 30-40 minutes to be processed. We did some simple testing: the browser showed "page cannot be displayed" due to a timeout after 15 minutes, and we received the message after 1 hour.
Does anyone know if there is a setting in WebLogic to increase/decrease the timeout and avoid "page cannot be displayed"?
I'd appreciate it if anyone has any thoughts on this.
Does anyone know if there is a setting in WebLogic to increase/decrease the timeout and avoid "page cannot be displayed"?
There might be something, but I didn't actually check, since it would be bad advice anyway. By looking for this, you are trying to solve the wrong problem. A browser is just not made for a long-running process like the one you are describing (>30 min), even if you don't mind the user waiting (not to mention that they could refresh the page and queue more and more jobs).
So, in my opinion, the right answer here is: use asynchronous processing; this is the perfect use case for it. When the user clicks the button, send a JMS message to a queue (or create a Quartz job) and return a page with a request ID telling the user to come back later. When the processing is done, update the status somewhere and make the status/result available to the user. Really, the user experience will be better this way and you'll face fewer problems than with a browser.
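A minimal sketch of that flow (plain Python with an in-memory queue standing in for JMS/Quartz; every name here is made up):

    import queue
    import threading
    import uuid

    jobs = queue.Queue()
    status = {}  # request_id -> "queued" | "done"

    def submit(payload) -> str:
        # Enqueue the job and hand back a ticket immediately.
        request_id = str(uuid.uuid4())
        status[request_id] = "queued"
        jobs.put((request_id, payload))
        return request_id  # show this to the user so they can check back later

    def worker() -> None:
        while True:
            request_id, payload = jobs.get()
            heavy_processing(payload)       # the 30-40 minute job
            status[request_id] = "done"     # the user polls this status

    def heavy_processing(payload) -> None:
        ...  # placeholder for the memory-intensive work

    threading.Thread(target=worker, daemon=True).start()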
1) Use some other tool (not a browser), like wget, where you can control the timeout parameter (--timeout).
2) Why use HTTP at all? Use message-driven beans, send a JMS message to them, and stop worrying about timeouts.
Perhaps Quartz can do what you need? Start a job and check in on it as needed?