I put records from a file into RabbitMQ, read the records from the queue, and call a service for each one. For rejected records, I am sending a negative acknowledgement and requeuing with the channel.basicNack method. But the requirement is that we should make only about 3 attempts at the service call. After that we have to remove the message from the queue rather than keep calling the service again and again.
On the last attempt, set the requeue argument in basicNack to false.
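A minimal sketch of that idea with the RabbitMQ Java client, assuming the consumer keeps an in-memory attempt counter keyed by a message id set by the producer (MAX_ATTEMPTS, the counter, and the service call are illustrative, not from the question):

import com.rabbitmq.client.*;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RetryingConsumer {

    private static final int MAX_ATTEMPTS = 3;
    private final Map<String, Integer> attempts = new ConcurrentHashMap<>();

    public void consume(Channel channel, String queue) throws Exception {
        channel.basicConsume(queue, false, (consumerTag, delivery) -> {
            String id = delivery.getProperties().getMessageId(); // assumes the producer sets a message id
            long tag = delivery.getEnvelope().getDeliveryTag();
            try {
                callService(delivery.getBody());
            } catch (Exception e) {
                int n = attempts.merge(id, 1, Integer::sum);
                // requeue while under the limit; on the 3rd failure requeue=false removes the message
                boolean requeue = n < MAX_ATTEMPTS;
                channel.basicNack(tag, false, requeue);
                if (!requeue) {
                    attempts.remove(id);
                }
                return;
            }
            channel.basicAck(tag, false);
            attempts.remove(id);
        }, consumerTag -> { });
    }

    private void callService(byte[] body) {
        // service call goes here
    }
}

An alternative is to nack with requeue=false immediately and let a dead-letter setup handle the retry loop, but the counter above keeps the sketch self-contained.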
When using StreamMessageListenerContainer a subscription for a consumer group can be created by calling:
receive(consumer, readOffset, streamListener)
Is there a way to configure the container/subscription so that it will always attempt to re-process any PENDING messages before moving on to polling for new messages?
The goal would be to keep retrying any message that wasn't acknowledged until it succeeds, to ensure that the stream of events is always processed in exactly the order it was produced.
My understanding is that if we specify the readOffset as '>', then on every poll it will use '>' and will never see any messages from the PENDING list.
If we provide a specific message id, then it can see messages from the PENDING list, but the way the subscription updates the lastMessageId is like this:
pollState.updateReadOffset(raw.getId().getValue());
V record = convertRecord(raw);
listener.onMessage(record);
So even if the listener throws an exception, or simply doesn't acknowledge the message id, the lastMessageId in pollState is still updated to this message id, and the message won't be seen again on the next poll.
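For reference, these are the two kinds of subscription being compared; a sketch with Spring Data Redis (the stream, group, and consumer names are placeholders, and connectionFactory and process(...) are assumed to exist; in practice you would register only one of the two options):

StreamMessageListenerContainer<String, MapRecord<String, String, String>> container =
        StreamMessageListenerContainer.create(connectionFactory);

// Option 1: '>' (ReadOffset.lastConsumed()) -- only never-delivered messages,
// so entries left in the PENDING list are not re-read by this subscription.
container.receive(Consumer.from("my-group", "consumer-1"),
        StreamOffset.create("my-stream", ReadOffset.lastConsumed()),
        record -> process(record));

// Option 2: a concrete id -- PENDING entries become visible, but as described above
// the container still advances its internal lastMessageId even when onMessage fails.
container.receive(Consumer.from("my-group", "consumer-1"),
        StreamOffset.create("my-stream", ReadOffset.from("0-0")),
        record -> process(record));

container.start();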
I'm trying to understand how RabbitMQ works with multiple consumers and prefetch_count.
I have three consumers consuming from the same queue, and all of them are configured with QoS prefetch_count = 200.
Now, assuming that at a certain point I have an unlimited backlog of messages in the queue and consumers A, B, C are connected to it, would A get messages 1-200, B get 201-400, and C get 401-600 simultaneously? That would mean messages 1, 201, and 401 get processed first, ahead of the rest. I don't want that; I'd like these messages to be processed sequentially.
If that's the case, I guess this implies that messages may be processed out of order depending on how the consumers are set up, even though the queue itself is FIFO.
Or should I set prefetch_count = 1 to ensure real FIFO?
Edited:
I just set up a local RabbitMQ environment and experimented a bit. I used a producer to flood a queue with the numbers 0 to 100000 in sequence to accumulate a backlog. Then I had two consumers A and B consuming messages from that queue with prefetch_count = 200.
From what I observed, A got 0-199 and B got 200-399 at the very beginning. However, after that A started getting {401, 403, 405, 406, ...} and B got {400, 402, 404, ...}.
I guess A and B got consecutive blocks at the beginning because I wasn't spinning the two consumers up at exactly the same time. But the later pattern shows nicely how prefetch_count works: it doesn't necessarily send consumers consecutive messages (I knew delivery is round-robin, but it's more intuitive with an experiment). There is no guarantee of the order in which messages will be processed when using prefetch_count.
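If strict ordering matters more than throughput, a prefetch of 1 (and realistically a single consumer) is the usual answer; a minimal sketch with the RabbitMQ Java client (the queue name is a placeholder):

import com.rabbitmq.client.*;

public class SequentialConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection()) {
            Channel channel = connection.createChannel();
            channel.basicQos(1); // broker sends at most one unacked message to this consumer
            channel.basicConsume("numbers", false, (tag, delivery) -> {
                System.out.println(new String(delivery.getBody()));
                // ack only after processing, so the broker releases the next message
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            }, tag -> { });
            Thread.sleep(60_000); // keep the demo consumer alive for a while
        }
    }
}

Note that even with prefetch_count = 1, two competing consumers still receive alternating messages, so global processing order is only guaranteed with a single consumer per queue.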
I am following this guide- https://spring.io/guides/gs/messaging-jms/
I have a few messages with higher priority that need to be sent before any other messages.
I have already tried the following:
jmsTemplate.execute(new ProducerCallback<Object>() {
    public Object doInJms(Session session, MessageProducer producer) throws JMSException {
        Message hello1 = session.createTextMessage("Hello1");
        producer.send(hello1, DeliveryMode.PERSISTENT, 0, 0); // <- low priority
        Message hello2 = session.createTextMessage("Hello2");
        producer.send(hello2, DeliveryMode.PERSISTENT, 9, 0); // <- high priority
        return null;
    }
});
But the messages are sent in the order they appear in the code. What am I missing here?
Thank you.
There are a number of factors that can influence the arrival order of messages when using priority. The first question would be: did you enable priority support on the broker? The second: was there a consumer online at the time you sent the messages?
There are many good resources on using prioritized messages with ActiveMQ; here is one. Keep in mind that if there is an active consumer online when you send those messages, the broker will simply dispatch them as they arrive, and the consumer will of course process them in that order.
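As an illustration of the client side, here is a sketch of sending through a Spring JmsTemplate with explicit QoS enabled, which is required before the template's priority setting is applied at all (connectionFactory is assumed to exist, and "mailbox" is the queue name from the linked guide); the broker still needs priority support turned on, e.g. ActiveMQ's prioritizedMessages destination policy:

import javax.jms.DeliveryMode;
import org.springframework.jms.core.JmsTemplate;

// assumes an existing javax.jms.ConnectionFactory named connectionFactory
JmsTemplate highPriorityTemplate = new JmsTemplate(connectionFactory);
highPriorityTemplate.setExplicitQosEnabled(true);   // without this, the priority below is ignored
highPriorityTemplate.setDeliveryMode(DeliveryMode.PERSISTENT);
highPriorityTemplate.setPriority(9);                // JMS priority range is 0 (lowest) to 9 (highest)

highPriorityTemplate.convertAndSend("mailbox", "Hello2");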
I'm trying to achieve a reject/delay loop using Rabbit's features, i.e.:
I have:
A Main Queue with a Main Exchange bound to it and a DLX pointing to a StandBy Exchange.
A StandBy Queue with the StandBy Exchange bound to it, a 60s TTL, and a DLX pointing back to the Main Exchange.
Basically I want to:
Consume from Main Queue
Reject the message (under certain circumstances)
Have it redirected to the StandBy Queue because of the rejection
When the TTL expires, re-queue the message to the Main Queue.
Steps 1, 2 and 3 work, but the last one drops the message instead of re-queuing it.
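For reference, declaring the topology described above with the Java client might look like this (the exchange, queue, and routing-key names are illustrative, not from the question):

import com.rabbitmq.client.Channel;
import java.util.Map;

public class RejectDelayTopology {
    public static void declare(Channel channel) throws Exception {
        channel.exchangeDeclare("main.exchange", "direct");
        channel.exchangeDeclare("standby.exchange", "direct");

        // Main queue: rejected messages are dead-lettered to the standby exchange
        channel.queueDeclare("main.queue", true, false, false, Map.<String, Object>of(
                "x-dead-letter-exchange", "standby.exchange",
                "x-dead-letter-routing-key", "standby"));
        channel.queueBind("main.queue", "main.exchange", "main");

        // Standby queue: messages sit here for 60s, then dead-letter back to the main exchange
        channel.queueDeclare("standby.queue", true, false, false, Map.<String, Object>of(
                "x-message-ttl", 60000,
                "x-dead-letter-exchange", "main.exchange",
                "x-dead-letter-routing-key", "main"));
        channel.queueBind("standby.queue", "standby.exchange", "standby");
    }
}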
Some theory from RabbitMQ's docs that I used to design this:
Messages from a queue can be 'dead-lettered'; that is, republished to another exchange when any of the following events occur:
The message is rejected (basic.reject or basic.nack) with requeue=false,
The TTL for the message expires; or
The queue length limit is exceeded.
...
It is possible to form a cycle of message dead-lettering. For instance, this can happen when a queue dead-letters messages to the default exchange without specifying a dead-letter routing key. Messages in such cycles (i.e. messages that reach the same queue twice) will be dropped if there was no rejections in the entire cycle.
The theory says that it should be re-queued, because there is a rejection in the cycle (from step #2). Can you help me figure out why it drops the message instead of re-queuing it?
UPDATE:
The version I was targeting was 2.8.4, and it seems that at that time the "if there was no rejections in the entire cycle" behaviour did not exist yet; you can check this yourselves: RabbitMQ 2.8.x Docs.
I'll accept #george's answer, as the original objective can be achieved with his code.
Rafael, I am not sure which client you are using, but with the Pika client in Python you could implement something like this. For simplicity I only use one exchange. Are you sure you are setting the exchange and the routing key properly?
sender.py
import sys
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='cycle', type='direct')

channel.queue_declare(queue='standby_queue',
                      arguments={
                          'x-message-ttl': 10000,
                          'x-dead-letter-exchange': 'cycle',
                          'x-dead-letter-routing-key': 'main_queue'})

channel.queue_declare(queue='main_queue',
                      arguments={
                          'x-dead-letter-exchange': 'cycle',
                          'x-dead-letter-routing-key': 'standby_queue'})

channel.queue_bind(queue='main_queue', exchange='cycle')
channel.queue_bind(queue='standby_queue', exchange='cycle')

channel.basic_publish(exchange='cycle',
                      routing_key='main_queue',
                      body="message body")
connection.close()
receiver.py
import sys
import pika


def callback(ch, method, properties, body):
    print "Processing message: {}".format(body)
    # replace with condition for rejection
    if True:
        print "Rejecting message"
        ch.basic_nack(method.delivery_tag, False, False)


connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.basic_consume(callback, queue='main_queue')
channel.start_consuming()
I have an HTTP server which receives some messages and must reply 200 when a message is successfully stored in a queue, and 500 if the message is not added to the queue.
I would like RabbitMQ to refuse my messages when the queue reaches a size limit.
How can I do it?
Actually you can't configure RabbitMQ in such a way, but you can programmatically check the queue size before publishing, for example:

AMQP.Queue.DeclareOk queueOkStatus = channel.queueDeclare(queueOutputName, true, false, false, null);
if (queueOkStatus.getMessageCount() < maxQueueSize) { // maxQueueSize: your own limit
    // your logic here
}

But be careful: this count is the number of 'ready' messages in the queue; it does not include unacknowledged messages.
If you want to track the queue length yourself, you can check the queue's message count before inserting. This sends a request on the same channel; declaring the queue returns messageCount, which is the number of 'Ready' messages. Note: this does not include messages in the unacknowledged state.
If you do not want to track the queue length yourself, then, as mentioned in the first comment on the question:
x-max-length :
How many (ready) messages a queue can contain before it starts to drop them from its head.
(Sets the "x-max-length" argument.)
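For example, a sketch of declaring such a capped queue with the Java client (the queue name and limit are placeholders; channel is assumed to exist). Note that with x-max-length alone the broker drops the oldest messages from the head once the limit is reached, rather than refusing the publish:

import java.util.HashMap;
import java.util.Map;

// cap the queue at 10000 ready messages
Map<String, Object> args = new HashMap<>();
args.put("x-max-length", 10000);
channel.queueDeclare("my_bounded_queue", true, false, false, args);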