New to Azure and testing Azure Queues. I attempted to send a message to the queue with the Python SDK. Here is the code I'm running:
import os

from azure.storage.queue import QueueClient

connection_string = os.environ.get("connection_string")
queue_client = QueueClient.from_connection_string(connection_string, queue_name)
msg_content = {"MessageID": "AQ2", "MessageContext": "This is a test Message"}
# Set the visibility timeout to 10 seconds and the time-to-live to 1 day (3600 minutes)
# The documentation seems to say it's an integer. Is it days, minutes, hours, or seconds?
queue_client.send_message(msg_content, visibility_timeout=10, time_to_live=3600)
and the output I get as a response from the queue is:
{'id': '90208a43-15d9-461e-a0ba-b12e02624d34',
'inserted_on': datetime.datetime(2020, 6, 9, 12, 17, 57, tzinfo=<FixedOffset 0.0>),
'expires_on': datetime.datetime(2020, 6, 9, 13, 17, 57, tzinfo=<FixedOffset 0.0>),
'dequeue_count': None,
'content': {'MessageID': 'AQ2',
'MessageContext': 'This is a test Message'},
'pop_receipt': '<hidingthistoavoidanydisclosures>',
'next_visible_on': datetime.datetime(2020, 6, 9, 12, 18, 7, tzinfo=<FixedOffset 0.0>)}
Now if you observe expires_on, it's clearly an hour after the insert time, which is fine. But for some reason the message instantly moved to the poison queue (which should normally happen only after an hour, if the message is untouched). I don't see where I'm going wrong. I'd appreciate help on how to set the expiry time correctly, and on why the message instantly moves to the poison queue.
The time-to-live is in seconds, and so is the visibility timeout.
Here's the doc for queue send_message.
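To make the units concrete, here is a minimal sketch of the corrected call. It assumes a QueueClient set up as in the question; the helper name and the JSON serialization are my own additions, not part of the SDK:

```python
import json

# Both time_to_live and visibility_timeout are integers in SECONDS.
ONE_DAY_SECONDS = 24 * 60 * 60  # 86400

def send_with_one_day_ttl(queue_client, content: dict):
    """Send `content` with a 10-second visibility timeout and a 1-day TTL."""
    return queue_client.send_message(
        json.dumps(content),           # serialize the dict to a string explicitly
        visibility_timeout=10,         # seconds before the message becomes visible
        time_to_live=ONE_DAY_SECONDS,  # seconds until the message expires
    )
```

With time_to_live=3600 the message expires after one hour, which matches the expires_on timestamp in your output.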
I added a new event handler to listen for NewMessage events and it's working as expected, but sometimes some updates arrive late. For example, in my logs I received the event at 01:38:25, but the message was sent at 01:38:13:
INFO 2023-01-31 01:38:25,165 | telegram.client | New message: NewMessage.Event(original_update=UpdateNewChannelMessage(message=Message(id=199558, peer_id=PeerChannel(channel_id=1768526690), date=datetime.datetime(2023, 2, 1, 1, 38, 13, tzinfo=datetime.timezone.utc), message=...)
Most messages arrive on time, so my question is: what's the reason for this happening?
Even though it's a minority of messages, it happens quite frequently.
The problem for me is that I need to receive the messages on time to perform certain operations.
Related question: Get queue size from rabbitmq consumer's callback with PhpAmqpLib
In the above question the message count is obtained with queue_declare. However, this message count only includes messages that are in the queue, not the prefetched messages (which is exactly what the poster of that question is experiencing).
If I set the prefetch_count (in basic_qos) to 1 and ack every single message, then the message count works perfectly. But if I set the prefetch_count to 10 and ack every 5 messages, then the message count will be something like 100, 100, 100, 100, 100, 95, 95, 95, 95, 95, ... as each message is handled.
What I want is to also obtain the number of prefetched messages and add them up, so that I have the correct message count, including prefetched but not yet processed messages, when each message is handled.
Is there a way to obtain this number of cached messages in php-amqplib?
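As far as I know, AMQP clients don't report their own unacked/prefetched count back to you, so one workaround is to track it yourself. Here is a minimal Python sketch of the bookkeeping (the class and method names are illustrative, not part of any AMQP library); the same counter logic can be ported to php-amqplib callbacks:

```python
class QueueDepthTracker:
    """Approximate total backlog as:
        server-side message count + messages delivered to this consumer
                                    but not yet acked.

    The server-side count (e.g. message_count from a passive
    queue_declare) excludes unacked/prefetched deliveries, so we keep
    our own in-flight counter and add it back.
    """

    def __init__(self):
        self.in_flight = 0  # delivered to us, not yet acked

    def on_deliver(self):
        """Call from the consume callback when a message arrives."""
        self.in_flight += 1

    def on_ack(self, count: int = 1):
        """Call after acking; pass count > 1 for a multiple-ack."""
        self.in_flight -= count

    def total(self, server_message_count: int) -> int:
        return server_message_count + self.in_flight
```

For example, with 100 messages total and prefetch_count=10, the server reports 90 while 10 are in flight; the tracker restores the true figure of 100.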
I want to get a message's view count but I don't know which method I should use.
Here is the telegram API. I have the channel ID and the message_id (I got them from my Telegram bot). I know that the Telegram Bot API doesn't have access to views, so I want to use the main Telegram API, but I don't know which method I should use.
You can follow these steps:
Create a post link (https://t.me/channel_username/post_id)
Example: https://t.me/tehrandb/93
Open the link with PHP, Python, or another language
Extract the value of the element with the tgme_widget_message_views class
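The extraction step could look like the sketch below. It assumes the public t.me embed page contains a `tgme_widget_message_views` element as described above; the URL shape, the `?embed=1` query, and the regex are assumptions that may break if Telegram changes its markup:

```python
import re
import urllib.request

def parse_views(html: str):
    """Extract the text inside the element with class
    'tgme_widget_message_views' (e.g. '6.2K'), or None if absent."""
    m = re.search(r'class="tgme_widget_message_views"[^>]*>([^<]+)<', html)
    return m.group(1) if m else None

def fetch_post_views(channel: str, post_id: int):
    """Download the public embed page for a post and parse the views."""
    url = f"https://t.me/{channel}/{post_id}?embed=1"
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return parse_views(html)
```

Note that the widget shows an abbreviated string ("6.2K"), not an exact integer.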
With Python and Telethon you can access a given message,
and that Message object has an attribute 'views':
m = Message(id=4864, to_id=PeerUser(user_id=818906659), date=datetime.datetime(2019, 6, 25, 4, 47, 57, tzinfo=datetime.timezone.utc), message=':reminder_ribbon:تک فیلم آموزش پروژه محور \n:man:\u200d:computer:پیاده سازی Responsive Menu \n:point_left: 0 تا 100\n:round_pushpin:با css & html\n#web\n#stepbysteplearn', out=False, mentioned=False, media_unread=False, silent=False, post=False, from_scheduled=False, legacy=False, from_id=818906659, fwd_from=MessageFwdHeader(date=datetime.datetime(2019, 6, 24, 20, 29, 53, tzinfo=datetime.timezone.utc), from_id=None, from_name=None, channel_id=1023032463, channel_post=11711, post_author=None, saved_from_peer=PeerChannel(channel_id=1023032463), saved_from_msg_id=11711), via_bot_id=None, reply_to_msg_id=None, media=MessageMediaDocument(document=Document(id=5803386688260540004, access_hash=5193338638774407914, file_reference=b'\x01\x00\x00\x13\x00]\x18l:\xb7\xd5\r&\xe8\xb5j\xa65*\xea\x01\xdc\xe2Py', date=datetime.datetime(2019, 6, 24, 20, 29, 52, tzinfo=datetime.timezone.utc), mime_type='video/mp4', size=16955767, dc_id=4, attributes=[DocumentAttributeVideo(duration=668, w=1280, h=720, round_message=False, supports_streaming=True), DocumentAttributeFilename(file_name='Responsive_Menu_With_Media_Queries.mp4')], thumbs=[PhotoStrippedSize(type='i', bytes=b'\x01\x16(\xc5\xa2\x8a(\x00\xa2\x8a(\x00\xa2\x8a(\x00\xa2\x8a(\x00\xa2\x8a(\x00\xa2\x8a(\x03'), PhotoSize(type='m', location=FileLocationToBeDeprecated(volume_id=455132553, local_id=24511), w=320, h=180, size=644)]), ttl_seconds=None), reply_markup=None, entities=[MessageEntityHashtag(offset=90, length=4), MessageEntityMention(offset=95, length=16)], views=6276, edit_date=None, post_author=None, grouped_id=None)
m.views will return that specific message's view count.
Full information about the Message object is in the Telethon documentation.
I have a Spring Batch job which has a single step. I am using JmsItemReader where the JmsTemplate is session transacted, and my writer just performs some business logic. Whenever any exception occurs and the retry is exhausted, the batch size automatically becomes 1 and it retries all the items one by one.
I am defining the step like this:
stepBuilderFactory.get("step")
.<String, String> chunk(10)
.reader(reader())
.processor(processor)
.writer(writer)
.faultTolerant()
.processorNonTransactional()
.retry(SomeException.class)
.retryLimit(2)
.backOffPolicy(backOffPolicy)
.skip(SomeException.class)
.skipLimit(Integer.MAX_VALUE)
.build();
The issue I am facing is something like this:
Input is: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Items in batch: 1, 2, 3, 4, 5
An exception occurs in the writer
It retries 2 times and the retries are exhausted
Now it tries the items one by one, like this:
item - 1 - Error
item - 2 - Success
item - 3 - Error
item - 4 - Error
item - 5 - Success
As errors occurred, items 1, 3, 4 are skipped and 2, 5 are successfully processed.
Here is the issue: next I should get 6, 7, 8, 9, 10 as the batch for processing, but I am getting 1, 2, 3, 4, 5 as the batch again, and it keeps executing infinitely.
Note: It works fine when sessionTransacted is false, but in that case the messages are not rolled back to the ActiveMQ queue on exception.
Any help is appreciated.
I think this is valid behavior. Since the transaction rolls back and the message is not removed from the queue, the message is available for the next listener thread to read. And you have a skip limit of Integer.MAX_VALUE, so it retries (nearly) indefinitely because of the large skip limit. I believe you need to configure a dead-letter queue for the queue you are reading from, so that after a certain number of redeliveries, if the message is corrupt/invalid, it is moved to the DLQ for manual intervention. That way the same message is not redelivered to the listener again.
I am curious if I could get a report of messages sent and received that includes time stamps and email addresses.
I looked at the Gmail API documentation and I did not see anything that directly mentioned anything like that.
Thank you.
Here's the relevant function; maybe you can use it: http://imapclient.readthedocs.org/en/latest/index.html#imapclient.IMAPClient.fetch
>>> c.fetch([3293, 3230], ['INTERNALDATE', 'FLAGS'])
{3230: {b'FLAGS': (b'\Seen',),
b'INTERNALDATE': datetime.datetime(2011, 1, 30, 13, 32, 9),
b'SEQ': 84},
3293: {b'FLAGS': (),
b'INTERNALDATE': datetime.datetime(2011, 2, 24, 19, 30, 36),
b'SEQ': 110}}