php-amqplib: how to get the number of messages in prefetched cache? - rabbitmq

Related question: Get queue size from RabbitMQ consumer's callback with PhpAmqpLib
In the question above the message count is obtained via queue_declare. However, this count only includes messages that are still in the queue, not the prefetched ones (which is exactly what the poster of that question is experiencing).
If I set prefetch_count (in basic_qos) to 1 and ack every single message, the message count works perfectly. But if I set prefetch_count to 10 and ack every 5 messages, the message count as each message is handled will be something like 100, 100, 100, 100, 100, 95, 95, 95, 95, 95, ...
What I want is to also get the number of prefetched messages and add them in, so that I have the correct message count, including messages that are prefetched but not yet processed, when each message is handled.
Is there a way to obtain this number of cached messages in php-amqplib?
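As far as I know, neither AMQP nor php-amqplib exposes the depth of the consumer's local prefetch buffer. But the consumer itself sees every delivery and every ack it sends, so it can track the difference. Below is a minimal sketch of that bookkeeping in Python (the class and method names are illustrative, not a php-amqplib API):

```python
# Sketch: track prefetched-but-unacked messages client-side.
# The broker's queue_declare count covers only "ready" messages, so the
# consumer adds back the messages sitting in its own prefetch buffer.

class PrefetchTracker:
    def __init__(self):
        self.delivered = 0   # messages pushed to this consumer so far
        self.acked = 0       # messages this consumer has acknowledged

    def on_deliver(self):
        self.delivered += 1

    def on_ack(self, count=1):
        self.acked += count

    def unacked(self):
        # messages in the local prefetch buffer or currently being processed
        return self.delivered - self.acked

    def total_outstanding(self, queue_ready_count):
        # queue_ready_count is what a passive queue_declare reports;
        # add our unacked messages back in for the true total.
        return queue_ready_count + self.unacked()

tracker = PrefetchTracker()
for _ in range(10):      # broker delivers 10 prefetched messages
    tracker.on_deliver()
tracker.on_ack(5)        # we ack a batch of 5

print(tracker.unacked())              # → 5
print(tracker.total_outstanding(90))  # → 95
```

With prefetch_count = 10 and batched acks of 5, this would report 100, 100, ..., 95, 95, ... plus the local unacked count, giving a steady total instead of the stepped one described above.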

Related

Telethon updates arriving late

I added a new event handler to listen for NewMessage events and it works as expected, but sometimes some updates arrive late. For example:
In my logs I received the event at 01:38:25, but the message was sent at 01:38:13:
INFO 2023-01-31 01:38:25,165 | telegram.client | New message: NewMessage.Event(original_update=UpdateNewChannelMessage(message=Message(id=199558, peer_id=PeerChannel(channel_id=1768526690), date=datetime.datetime(2023, 2, 1, 1, 38, 13, tzinfo=datetime.timezone.utc), message=...)
Most messages arrive on time, so my question is: what causes this?
Even though late updates are a minority, they occur quite frequently.
The problem for me is that I need to receive messages on time in order to perform certain operations.
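For diagnosing this, it can help to log the lag explicitly rather than eyeballing timestamps. A minimal sketch using plain datetimes (not Telethon-specific; Telethon's Message.date is timezone-aware UTC, and the receipt time here is illustrative):

```python
# Sketch: compute how late an update arrived by comparing the message's
# server-side date with the local receipt time. Both datetimes must be
# timezone-aware, or the subtraction raises a TypeError.
from datetime import datetime, timezone

def update_lag_seconds(message_date, received_at):
    """Seconds between when the message was sent and when we saw it."""
    return (received_at - message_date).total_seconds()

sent = datetime(2023, 2, 1, 1, 38, 13, tzinfo=timezone.utc)
received = datetime(2023, 2, 1, 1, 38, 25, tzinfo=timezone.utc)
print(update_lag_seconds(sent, received))  # → 12.0
```

Logging this per update makes it easy to see whether the lag is constant (clock skew, slow handler blocking the event loop) or bursty (network/Telegram-side delays).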

rabbitmq prefetch with multiple consumers

I'm trying to understand how RabbitMQ works with multiple consumers and prefetch_count.
I have three consumers consuming from the same queue, and all of them are configured with QoS prefetch_count = 200.
Now, assuming at a certain point I have an unlimited backlog of messages in the queue and consumers A, B, C connect to it, would A get messages 1-200, B get 201-400, and C get 401-600 simultaneously? That would mean messages 1, 201, and 401 get processed first, ahead of the rest. I don't want that; I'd like these messages to be processed sequentially.
If that's the case, I guess it implies that messages may be processed out of order depending on how the consumers are set up, even though the queue itself is FIFO.
Or should I set prefetch_count = 1 to guarantee strict FIFO?
Edited:
Just set up a local RabbitMQ instance and experimented a bit. I used a producer to bombard a queue with the numbers 0 to 100000 in sequence to accumulate a backlog. Then I had two consumers, A and B, consume from that queue with prefetch_count = 200.
From what I observed, A got 0-199 and B got 200-399 at the very beginning. After that, however, A started getting {401, 403, 405, 406, ...} and B got {400, 402, 404, ...}.
I guess A and B got contiguous, non-interleaved blocks at the beginning because I wasn't spinning the two consumers up at exactly the same time. But the pattern that follows shows well how prefetch_count works: it doesn't necessarily send consumers consecutive messages. (I knew deliveries are made in round-robin fashion, but the experiment makes it more intuitive.) There's no guarantee of the order in which messages will be processed when using prefetch_count.
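The two phases seen in the experiment can be reproduced with a toy model: an initial burst that fills each consumer's prefetch window in connection order, followed by one-at-a-time round-robin deliveries as acks free up credit. This is a simplified sketch, not the broker's actual scheduler:

```python
# Toy simulation of prefetch windows plus round-robin redelivery.
from collections import deque

def deliver(messages, consumers, prefetch):
    queue = deque(messages)
    buffers = {c: [] for c in consumers}
    # Phase 1: consumers connect one after another, so each one's full
    # prefetch window is filled before the next consumer gets anything.
    for c in consumers:
        for _ in range(prefetch):
            if queue:
                buffers[c].append(queue.popleft())
    # Phase 2: consumers ack as they process, freeing credit one message
    # at a time, so the broker alternates between them.
    i = 0
    while queue:
        c = consumers[i % len(consumers)]
        buffers[c].append(queue.popleft())
        i += 1
    return buffers

out = deliver(range(600), ["A", "B"], prefetch=200)
print(out["A"][:3])    # → [0, 1, 2]   (A's contiguous prefetch window)
print(out["B"][0])     # → 200         (B's window starts where A's ended)
print(out["A"][200])   # → 400         (after the windows, deliveries alternate)
print(out["B"][200])   # → 401
```

The contiguous blocks followed by interleaving match what was observed; in the real broker the exact even/odd split depends on ack timing, which is why A happened to get the odd numbers.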

Why do my tasks match every RabbitMQ queue in a headers exchange?

I'm trying to implement exponential backoff with a RabbitMQ headers exchange. I bound each queue with x-match: "all" and x-retry-count: [RETRY COUNT FOR THIS LEVEL]. However, I found that with backoff queues for 100, 200, 400, and 800 millisecond wait times, each task I send to the retry exchange somehow matches every queue.
As you can see in the picture below, the 200ms backoff queue is bound with the header x-retry-count: 2, yet a task with the header x-retry-count: 1 matches it (and the x-retry-count values of all the other queues in the backoff exchange too). Why would that be?
Found what was going on: x-retry-count doesn't count as a header that can be matched on, because it starts with x-. Renaming the header to retry-count makes matching work.

How to make RabbitMQ refuse messages when a queue is full?

I have an HTTP server which receives messages and must reply 200 when a message is successfully stored in a queue, and 500 if the message is not added to the queue.
I would like RabbitMQ to refuse my messages when the queue reaches a size limit.
How can I do it?
Actually, you can't configure RabbitMQ that way, but you can programmatically check the queue size before publishing, for example (MAX_QUEUE_SIZE being your own limit):
`AMQP.Queue.DeclareOk queueOkStatus = channel.queueDeclare(queueOutputName, true, false, false, null);`
`if (queueOkStatus.getMessageCount() >= MAX_QUEUE_SIZE) { /* queue full: reply 500 */ }`
But be careful: this count covers only messages in the ready state and does not include unacknowledged messages.
If you want to stay aware of the queue length, you can check the count before inserting. The check is a request on the same channel: redeclaring the queue returns messageCount, the number of messages in the 'ready' state. Note: this does not include messages in the unacknowledged state.
If you do not wish to track the queue length yourself, then, as specified in the first comment on the question, use x-max-length:
x-max-length :
How many (ready) messages a queue can contain before it starts to drop them from its head.
(Sets the "x-max-length" argument.)
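The default x-max-length behaviour is drop-from-head, which can be modelled with a bounded deque. A minimal sketch (note that with this setting RabbitMQ still accepts the publish, it just discards the oldest message; to actually get a 500 back you either check the count yourself as above, or set the queue's overflow behaviour to reject-publish and use publisher confirms):

```python
# Sketch: x-max-length's default drop-from-head behaviour,
# modelled with collections.deque's maxlen.
from collections import deque

queue = deque(maxlen=3)   # like declaring the queue with x-max-length=3
for msg in [1, 2, 3, 4, 5]:
    queue.append(msg)     # when full, the oldest (head) message is dropped

print(list(queue))        # → [3, 4, 5]: messages 1 and 2 were silently dropped
```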

Difference between Pending Messages and Enqueue Counter in Active MQ?

In the ActiveMQ admin console, what is the difference between "Number Of Pending Messages" and "Messages Enqueued"? When a message is placed on the queue, should both of these values match?
Pending messages = number of messages CURRENTLY waiting for delivery in the destination (the current size of the queue).
Enqueued messages = number of messages that were enqueued in the destination since the last statistics reset. This number can only rise.
Dequeued messages = messages delivered from the destination to consumers. This number can be higher than the number of enqueued messages if a message was delivered to multiple consumers (topics).
Messages Enqueued = number of messages sent to the queue since the server started.
Messages Dequeued = number of messages received and deleted since the server started.
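For a plain queue (not a topic), the three counters are related by a simple invariant: pending is the difference between the two cumulative counters. A minimal sketch of that relationship:

```python
# Sketch: relationship between ActiveMQ's queue counters.
# Enqueued and dequeued are cumulative and only ever grow;
# pending is the current depth and is the only one that can shrink.

class QueueStats:
    def __init__(self):
        self.enqueued = 0   # total messages sent since the last stats reset
        self.dequeued = 0   # total messages delivered to consumers since reset

    def send(self, n=1):
        self.enqueued += n

    def consume(self, n=1):
        self.dequeued += n

    @property
    def pending(self):
        # current queue depth = enqueued - dequeued (queues only;
        # for topics one enqueue can fan out to many dequeues)
        return self.enqueued - self.dequeued

stats = QueueStats()
stats.send(10)      # producer enqueues 10 messages
stats.consume(4)    # consumers receive 4 of them
print(stats.enqueued, stats.dequeued, stats.pending)  # → 10 4 6
```

So the two console values only match while nothing has been consumed yet; as soon as consumers drain messages, pending drops while enqueued keeps its historical total.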