Telethon updates arriving late - telethon

I added a new event handler to listen for NewMessage events and it works as expected, but some updates arrive late. For example, my logs show the event was received at 01:38:25, while the message was sent at 01:38:13:
INFO 2023-01-31 01:38:25,165 | telegram.client | New message: NewMessage.Event(original_update=UpdateNewChannelMessage(message=Message(id=199558, peer_id=PeerChannel(channel_id=1768526690), date=datetime.datetime(2023, 2, 1, 1, 38, 13, tzinfo=datetime.timezone.utc), message=...)
Most messages arrive on time, so my question is: what causes this to happen?
Even though late updates are the minority, they occur quite frequently.
This is a problem for me because I need to receive messages promptly in order to perform certain operations.
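For what it's worth, you can measure the lag yourself inside the handler by comparing the message's date to the wall clock. A minimal sketch; the handler registration is commented out because it needs a connected TelegramClient, and the 5-second threshold is an arbitrary choice:

```python
from datetime import datetime, timezone

def message_age_seconds(message_date, received_at=None):
    """Seconds between when the message was sent and when we handled it."""
    if received_at is None:
        received_at = datetime.now(timezone.utc)
    return (received_at - message_date).total_seconds()

# Sketch of the handler (needs a connected telethon.TelegramClient):
# from telethon import events
#
# @client.on(events.NewMessage)
# async def handler(event):
#     age = message_age_seconds(event.message.date)
#     if age > 5:  # arbitrary threshold
#         print(f"Update arrived {age:.1f}s late: {event.message.id}")
```

Logging this for every event would at least tell you whether the delay is on Telegram's side (old `date`) or in your own processing.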


Redis stream XReadGroup not reading new messages even if `BLOCK` parameter is 0

I am using Redis Streams and XReadGroup to read messages from a stream, with the Block parameter set to 0.
Currently my code looks like this:
data, err := w.rdb.XReadGroup(ctx, &redis.XReadGroupArgs{
    Group:    w.opts.group,
    Consumer: w.opts.consumer,
    Streams:  []string{w.opts.streamName, ">"},
    Count:    1,
    Block:    0,
}).Result()
The problem I am facing is that if I keep the application idle for 10-12 hours, XReadGroup no longer reads new messages; if I restart the application, all the new messages are consumed at once. Is there a solution for this?
You can use a block time of, say, 10s; it does not change the behavior (I assume the code you provided runs in a while(true) loop).
In my experience you can then keep the app idle for days and it still works.
I don't know the exact cause, but I suspect the constant back and forth effectively resets the connection and keeps it from going stale.
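The polling shape described above, sketched in Python with redis-py for brevity (the same idea applies to go-redis; the client, group, consumer, and stream names are placeholders). With a finite block, each iteration returns and re-exercises the connection instead of parking forever on a socket that may silently die:

```python
def read_loop(client, group, consumer, stream, block_ms=10_000):
    """Poll a consumer group with a finite BLOCK (here 10 s).

    `client` is assumed to be a redis-py redis.Redis instance, or any
    object with the same xreadgroup(group, consumer, streams, count,
    block) signature. Yields (message_id, fields) pairs.
    """
    while True:
        # '>' asks for messages never delivered to this group before.
        entries = client.xreadgroup(group, consumer, {stream: ">"},
                                    count=1, block=block_ms)
        for _stream_name, messages in entries or []:
            for msg_id, fields in messages:
                yield msg_id, fields
```

When the block times out with no data, `xreadgroup` simply returns empty and the loop immediately issues a fresh read, so a broken connection surfaces as an error on the next call rather than as an indefinite hang.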

php-amqplib: how to get the number of messages in prefetched cache?

Related question: Get queue size from rabbitmq consumer's callback with PhpAmqpLib
In the above question the message count is obtained via queue_declare. However, that count only includes messages still in the queue, not the prefetched ones (which is exactly what the poster of that question is experiencing).
If I set prefetch_count (in basic_qos) to 1 and ack every single message, the message count works perfectly; but if I set prefetch_count to 10 and ack every 5 messages, the reported count looks like 100, 100, 100, 100, 100, 95, 95, 95, 95, 95, ... as each message is handled.
What I want is the number of prefetched messages as well, so I can add them up and get the correct total, including messages that are prefetched but not yet processed, as each message is handled.
Is there a way to obtain this number of cached messages in php-amqplib?
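As far as I know php-amqplib does not expose the prefetched-message count directly, but since every prefetched message passes through your consumer callback, you can keep the count yourself and add it to the queue_declare figure. The bookkeeping, sketched in Python for brevity (the class and method names are made up; in PHP the same counters would live alongside your callback and ack logic):

```python
class PrefetchTracker:
    """Count messages delivered to this client but not yet acked,
    so queue_declare's message_count can be corrected upward."""

    def __init__(self):
        self.unacked = 0

    def on_deliver(self):
        """Call once per message handed to the consumer callback."""
        self.unacked += 1

    def on_ack(self, count=1):
        """Call when acking; count > 1 models a multiple-ack."""
        self.unacked -= count

    def total(self, queue_count):
        """queue_count is what queue_declare reports for the queue."""
        return queue_count + self.unacked
```

With prefetch_count = 10 and an ack sent every 5 messages, the queue may report 95 while 5 deliveries are still unacked locally, so the corrected total is back to 100, which is the behavior described in the question.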

Azure Queue Send Message Method Expiry

I'm new to Azure and testing Azure Queues. I attempted to send a message to the queue with the Python SDK. Here is the method I'm calling:
from azure.storage.queue import QueueServiceClient, QueueClient, QueueMessage
connectionstring=os.environ.get("connection_string")
queue_client = QueueClient.from_connection_string(connectionstring,queue_name)
msg_content={"MessageID":"AQ2","MessageContext":"This is a test Message"}
#set the visibility timeout to 10 seconds and time-to-live to 1 day (3600 minutes)
#The documentation seems to say it's an integer. Is it days, minutes, hours, or seconds?
queue_client.send_message(msg_content,visibility_timeout=10,time_to_live=3600)
and the output I get as a response from the queue is
{'id': '90208a43-15d9-461e-a0ba-b12e02624d34',
'inserted_on': datetime.datetime(2020, 6, 9, 12, 17, 57, tzinfo=<FixedOffset 0.0>),
'expires_on': datetime.datetime(2020, 6, 9, 13, 17, 57, tzinfo=<FixedOffset 0.0>),
'dequeue_count': None,
'content': {'MessageID': 'AQ2',
'MessageContext': 'This is a test Message'},
'pop_receipt': '<hidingthistoavoidanydisclosures>',
'next_visible_on': datetime.datetime(2020, 6, 9, 12, 18, 7, tzinfo=<FixedOffset 0.0>)}
Now if you observe expires_on, it's clearly an hour after the insert date, which is fine. But for some reason the message instantly moved to the poison queue (which should normally happen only after an hour, if the message is untouched). I don't see where I'm going wrong. How do I set the expiry time correctly, and why is the message instantly moving to the poison queue?
The time_to_live is in seconds, so 3600 gives one hour, which matches the expires_on in your output; for one day you would pass 86400.
Here's the documentation for queue send_message.
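To make the units explicit: both visibility_timeout and time_to_live are plain integers in seconds. A sketch of the corrected call; the send_message line is commented out because it needs a real QueueClient, and serializing the dict with json.dumps is my own choice to keep the payload an explicit string:

```python
import json

ONE_HOUR = 60 * 60      # 3600 s -- what the question actually set
ONE_DAY = 24 * 60 * 60  # 86400 s -- one day, in seconds

msg_content = {"MessageID": "AQ2", "MessageContext": "This is a test Message"}
body = json.dumps(msg_content)

# Requires an azure.storage.queue.QueueClient:
# queue_client.send_message(body,
#                           visibility_timeout=10,   # hidden for 10 s
#                           time_to_live=ONE_DAY)    # expires after 1 day
```

Note that visibility_timeout only hides the message temporarily; it reappears after 10 seconds, which is separate from the time_to_live expiry.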

Timeout Exception while retrieving from Redis Cache at the same place always

We are receiving following timeout exception while retrieving data from Redis cache.
'Timeout performing GET inst: 2, mgr: Inactive, err: never, queue: 3, qu: 0, qs: 3, qc: 0, wr: 0, wq: 0, in: 18955,
IOCP: (Busy=4,Free=996,Min=2,Max=1000), WORKER: (Busy=0,Free=1023,Min=2,Max=1023),
Please note: every timeout exception shows different values for the fields above. queue is sometimes 2, 1, or 3, and qs varies along with it.
The in: value also keeps changing, e.g. 18955, 65536, 36829.
Even IOCP changes like
IOCP: (Busy=6,Free=994,Min=2,Max=1000), WORKER: (Busy=0,Free=1023,Min=2,Max=1023).
Please note:
There are many similar questions in stack overflow and tried all of them. But, no luck.
We recently updated the NuGet package to the latest stable version (v1.2.1) of the StackExchange.Redis library.
This exception seems to occur at the same place every time, even though we use the Redis cache in various places; we found this with the help of the stack trace.
Also, we never faced this issue earlier: we have been using the same solution for the last 3 years without encountering it, but for the last 3 months this exception has been occurring frequently, at least 3-4 times daily.
It looks like you are experiencing threadpool throttling (from the Busy and Min numbers in your error message). You will need to increase the MIN values for IOCP and Worker pool threads.
https://gist.github.com/JonCole/e65411214030f0d823cb#file-threadpool-md has more information.
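The diagnosis can be checked mechanically: a pool is being throttled when its Busy count has reached or passed its Min count, since beyond Min the runtime only injects new threads slowly. A small sketch that applies that rule to the stats strings from the exception (the regex assumes the exact Busy=/Free=/Min= layout shown above):

```python
import re

def pool_throttled(stats: str) -> bool:
    """True when Busy >= Min for a threadpool stats string, i.e. new
    work has to wait for the pool to grow -- the throttling signature."""
    m = re.search(r"Busy=(\d+),Free=\d+,Min=(\d+)", stats)
    if m is None:
        raise ValueError("unrecognized stats format")
    busy, minimum = int(m.group(1)), int(m.group(2))
    return busy >= minimum
```

Applied to the error above, the IOCP pool (Busy=4, Min=2) is throttled while the WORKER pool (Busy=0, Min=2) is not, which is why raising the IOCP minimum is the suggested fix.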

Spring Batch JMSItemReader giving duplicate data in session transacted mode

I have a Spring Batch job with a single step. I am using JmsItemReader with a session-transacted JmsTemplate, and my writer just performs some business logic. By default, whenever an exception occurs and the retries are exhausted, the chunk size drops to 1 and every item is retried one by one.
I am defining step like this.
stepBuilderFactory.get("step")
        .<String, String>chunk(10)
        .reader(reader())
        .processor(processor)
        .writer(writer)
        .faultTolerant()
        .processorNonTransactional()
        .retry(SomeException.class)
        .retryLimit(2)
        .backOffPolicy(backOffPolicy)
        .skip(SomeException.class)
        .skipLimit(Integer.MAX_VALUE)
        .build();
The issue I am facing looks like this:
Input is : 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Items in batch 1, 2, 3, 4, 5
An exception occurs in the writer.
It retries 2 times and the retries are exhausted.
Now it tries the items one by one:
item - 1 - Error
item - 2 - Success
item - 3 - Error
item - 4 - Error
item - 5 - Success
Because errors occurred, items 1, 3, 4 are skipped and 2, 5 are processed successfully.
Here is the issue: next I should get 6, 7, 8, 9, 10 as the batch for processing, but I am getting 1, 2, 3, 4, 5 again, and this repeats indefinitely.
Note: it works fine when sessionTransacted is false, but in that case messages are not rolled back to the ActiveMQ queue when an exception occurs.
Any help is appreciated.
I think this is valid behavior: since the transaction is rolled back, the messages are not removed from the queue, so they are available again to the next read. And with a skip limit of Integer.MAX_VALUE it will effectively retry forever. I believe you need to configure a dead-letter queue for the queue you are reading from, so that after a certain number of redeliveries a corrupt/invalid message is moved to the DLQ for manual intervention instead of being redelivered to the listener.
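The dead-letter decision described above is normally configured on the broker (for ActiveMQ, via its redelivery policy and dead-letter strategy) rather than in application code, but the logic itself is just a threshold on the broker-maintained JMSXDeliveryCount header. A sketch of that decision in Python for illustration; the limit of 5 is an arbitrary choice:

```python
MAX_REDELIVERIES = 5  # assumption: tune per queue / broker policy

def route(headers, max_redeliveries=MAX_REDELIVERIES):
    """Decide where a rolled-back message should go next.

    `headers` stands in for the JMS message headers; JMSXDeliveryCount
    is the broker-maintained count of delivery attempts (1 = first).
    """
    if headers.get("JMSXDeliveryCount", 1) > max_redeliveries:
        return "DLQ"      # park for manual intervention
    return "requeue"      # let the listener try again
```

With such a cap in place, the five poison items from the example would stop cycling through the chunk after a bounded number of redeliveries, and items 6-10 would finally be read.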