After using Dequeue() on my queue, I want to restore the retrieved message to the queue. Is this possible?
If you aren't auto-acknowledging messages, then the message will be re-queued in the absence of an explicit acknowledgement.
If you are auto-acknowledging, then you should just manually enqueue it again.
So, if you are doing something like:
BasicDeliverEventArgs e = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
then you can do something like:
consumer.Queue.Enqueue(e);
Is that the sort of thing you were after?
When using StreamMessageListenerContainer a subscription for a consumer group can be created by calling:
receive(consumer, readOffset, streamListener)
Is there a way to configure the container/subscription so that it will always attempt to re-process any PENDING messages before moving on to polling for new messages?
The goal would be to keep retrying any message that wasn't acknowledged until it succeeds, to ensure that the stream of events is always processed in exactly the order it was produced.
My understanding is that if we specify the readOffset as '>', then on every poll it will use '>' and it will never see any messages from the PENDING list.
If we provide a specific message id, then it can see messages from the PENDING list, but the way the subscription updates the lastMessageId is like this:
pollState.updateReadOffset(raw.getId().getValue());
V record = convertRecord(raw);
listener.onMessage(record);
So even if the listener throws an exception, or simply doesn't acknowledge the message id, the lastMessageId in pollState is still updated to this message id, and the message won't be seen again on the next poll.
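For reference, a rough sketch of one way to approximate this outside the container: manually drain the consumer's PENDING entries (XREADGROUP with ID "0" returns that consumer's pending records) and only acknowledge the ones that succeed, before polling for new messages with '>'. It assumes a StringRedisTemplate; the class, stream, group, consumer and process() names are placeholders, not part of the question's code:

import java.util.List;

import org.springframework.data.redis.connection.stream.Consumer;
import org.springframework.data.redis.connection.stream.MapRecord;
import org.springframework.data.redis.connection.stream.ReadOffset;
import org.springframework.data.redis.connection.stream.StreamOffset;
import org.springframework.data.redis.core.StringRedisTemplate;

public class PendingDrainer {

    // Re-reads this consumer's PENDING entries (XREADGROUP with ID "0") and ACKs them
    // only after successful processing, so a failed record is retried on the next run.
    static void drainPending(StringRedisTemplate redis, String stream, String group, String consumerName) {
        List<MapRecord<String, Object, Object>> pending = redis.opsForStream()
                .read(Consumer.from(group, consumerName), StreamOffset.create(stream, ReadOffset.from("0")));
        if (pending == null) {
            return;
        }
        for (MapRecord<String, Object, Object> record : pending) {
            try {
                process(record);                                  // hypothetical handler
                redis.opsForStream().acknowledge(group, record);  // XACK only on success
            } catch (Exception e) {
                // Stop here: the record stays in the PENDING list and keeps its position,
                // preserving the original ordering for the next attempt.
                break;
            }
        }
    }

    private static void process(MapRecord<String, Object, Object> record) {
        // placeholder for the real StreamListener logic
    }
}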
I have an HTTP server which receives some messages and must reply 200 when a message is successfully stored in a queue and 500 if the message is not added to the queue.
I would like RabbitMQ to refuse my messages when the queue reaches a size limit.
How can I do it?
Actually, you can't configure RabbitMQ that way, but you can programmatically check the queue size, for example:
DeclareOk queueOkStatus = channel.queueDeclare(queueOutputName, true, false, false, null);
if (queueOkStatus.getMessageCount() < maxQueueSize) { // maxQueueSize is your own limit
    // your logic here, e.g. it is safe to publish
}
But be careful: getMessageCount() returns only the number of 'Ready' messages in the queue; it does not include unacknowledged messages.
If you want to be aware of this, you can check the queue count before inserting. It sends a request on the same channel. Declaring the queue returns messageCount, which is the number of 'Ready' messages. Note: this does not include messages in the unacknowledged state.
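For illustration, a minimal sketch of that check (a passive declare is used here so the check cannot accidentally re-declare the queue with different arguments; the method name and the maxQueueSize threshold are placeholders defined by your application, not RabbitMQ settings):

import java.io.IOException;

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;

// Returns true while the number of 'Ready' messages is below the application's own limit.
// Unacknowledged messages are not included in getMessageCount().
static boolean underLimit(Channel channel, String queueName, int maxQueueSize) throws IOException {
    AMQP.Queue.DeclareOk ok = channel.queueDeclarePassive(queueName); // does not modify the queue
    return ok.getMessageCount() < maxQueueSize;
}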
If you do not wish to track the queue length yourself, then, as specified in the first comment on the question:
x-max-length:
How many (ready) messages a queue can contain before it starts to drop them from its head.
(Sets the "x-max-length" argument.)
I declared a queue like below:
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-max-length-bytes", 2 * 1024 * 1024); // Max length is 2G
channel.queueDeclare("queueName", true, false, false, args);
When the total size of the messages in the queue exceeds this limit (2 MB here), it automatically removes messages from the head of the queue.
But what I expected is that it would reject the newly published message and return an exception to the producer.
How can I achieve that?
A possible workaround is to check the queue size before sending your message, using the HTTP API.
For example, suppose you have a queue called myqueuetest with max size = 20.
Before sending the message you can call the HTTP API in this way:
http://localhost:15672/api/queues/
The result is JSON like this:
"message_bytes":10,
"message_bytes_ready":10,
"message_bytes_unacknowledged":0,
"message_bytes_ram":10,
"message_bytes_persistent":0,
..
"name":"myqueuetest",
"vhost":"test",
"durable":true,
"auto_delete":false,
"arguments":{
"x-max-length-bytes":20
},
Then you could read the message_bytes field before sending your message and decide whether to send it or not.
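For example, a minimal Java sketch of that check; it assumes the management plugin on localhost:15672, the default guest/guest credentials, and the vhost/queue from the example above, and it uses a crude regex instead of a JSON library just to stay dependency-free:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QueueSizeCheck {

    // Asks the management API for the queue and returns true if message_bytes is below maxBytes.
    static boolean underLimit(long maxBytes) throws Exception {
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:15672/api/queues/test/myqueuetest"))
                .header("Authorization", "Basic " + auth)
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // Crude extraction of "message_bytes"; a real application would parse the JSON properly.
        Matcher m = Pattern.compile("\"message_bytes\":(\\d+)").matcher(response.body());
        return m.find() && Long.parseLong(m.group(1)) < maxBytes;
    }
}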
Hope it helps
EDIT
This workaround could kill your application performance
This workaround is not safe if you have multi-threading/multiple publishers
This workaround is not really a "best practice"
Just try to see if it is ok for your application.
As explained in the official docs:
Messages will be dropped or dead-lettered from the front of the queue to make room for new messages once the limit is reached.
https://www.rabbitmq.com/maxlength.html
If you think RabbitMQ should drop messages from the end of the queue, feel free to open an issue here so we can discuss it: https://github.com/rabbitmq/rabbitmq-server/issues
I have an MSMQ application set up where data was being pushed into one queue. Initially I had only one process reading from it and processing the messages. Since the volume has increased, I started multiple processes to read from it, each of which is basically a new instance of my original process. I do not see any errors, but the performance has really dropped. My understanding is that each process will read from the queue, receive a new message that has not yet been processed, and continue with that. Is this correct, or is it possible that multiple processes could end up processing the same message?
Dim q As MessageQueue
If MessageQueue.Exists(".\private$\MsgsIQueue") Then
q = New MessageQueue(".\private$\MsgsIQueue")
Else
'GS - If there is no queue then we're done here
Console.WriteLine("Queue has not been created!")
Return
End If
While True
Dim message As Message
counter += 1
Try
If q.Transactional = True Then
Thread.Sleep(2000)
End If
q.MessageReadPropertyFilter.ArrivedTime = True
message = q.Peek(TimeSpan.FromSeconds(20.0))
message.UseJournalQueue = True
message = q.Receive(New TimeSpan(0, 0, 60))
message.Formatter = New XmlMessageFormatter(New [String]() {"System.String"})
ProcessMessage(message)
....
Ok, are you sure that it is the queue reading that is actually causing the performance degradation? I would suspect that there is some other bottleneck in your pipeline as MSMQ is really good at handling reading from multiple processes/threads.
If I take a look at your code I would suggest the following changes:
Why sleep for 2 secs if it is a transactional queue? Always use transactional queues and move the call to Sleep to the catch block to have a wait interval if the queue is empty.
Move the setting of the filter outside of the loop.
Remove the call to Peek as it performs nothing of value.
UseJournalQueue is only of use when sending messages, so remove it.
Set the formatter on the queue instead and it will be used for all reads.
You should also wrap the call to Receive and ProcessMessage within a TransactionScope, where you also wrap ProcessMessage in another try/catch block. This way you can commit the read if everything went OK in ProcessMessage, or otherwise choose to abort the read or move the message to a dead letter queue.
In a Mule flow, I would like to add an Exception Handler that forwards messages to a "retry queue" when there is an exception. However, I don't want this retry logic to run automatically. Instead, I'd rather receive a notification so I can review the errors and then decide whether to retry all messages in the queue or not.
I don't want to receive a notification for every exception. I'd rather have a scheduled job that runs every 15 minutes and checks to see if there are messages in this retry queue and then only send the notification if there are.
Is there any way to determine how many messages are currently in a persistent VM queue?
Assuming you use the default VM queue persistence mechanism and that the VM connector is named vmConnector, you can do this:
final String queueName = "retryQueue";
int messageCount = 0;
final VMConnector vmConnector = (VMConnector) muleContext.getRegistry()
    .lookupConnector("vmConnector");

// Each persisted VM message is stored under a QueueKey; count the keys that belong to our queue.
for (final Serializable key : vmConnector.getQueueProfile().getObjectStore().allKeys())
{
    final QueueKey queueKey = (QueueKey) key;
    if (queueName.equals(queueKey.queueName))
    {
        messageCount++;
    }
}
System.out.printf("Queue %s has %d pending messages%n", queueName, messageCount);