I started using Anypoint MQ Subscribe with Max Redelivery Count set to 2.
The application should throw an ANYPOINT-MQ:RETRY_EXHAUSTED error after 2 failed deliveries, but the message was returned to the main queue and picked up again in the next batch.
I am trying to put the messages into the DLQ manually after 2 failed deliveries using a Try scope.
Any idea how to put the messages into the DLQ manually?
Errors such as ANYPOINT-MQ:RETRY_EXHAUSTED or HTTP:RETRY_EXHAUSTED occur when the connector fails to connect to Anypoint MQ, or when an HTTP request to another service fails.
When you set a reconnection strategy on the connector, for example retry 2 times, the connector tries to connect twice; if there is still no connection after that, you get the retry exhausted error.
To catch that error and send the message to the DLQ, categorise the error in an On Error Propagate scope: use type ANYPOINT-MQ:RETRY_EXHAUSTED or HTTP:RETRY_EXHAUSTED, depending on which connector you are using.
It will then catch that error, and inside the On Error Propagate you can add whatever logic you need, such as writing the message to a file or publishing it to the DLQ. If even that fails, add a Logger with enough detail to track the message so it is not lost.
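A minimal Mule 4 sketch of that error handling, assuming hypothetical names (Anypoint_MQ_Config, processOrder, orders-queue, orders-dlq) and that the processing step is what raises the RETRY_EXHAUSTED error:

<flow name="ordersFlow">
    <!-- Subscriber on the main queue; config and destination names are placeholders -->
    <anypoint-mq:subscriber config-ref="Anypoint_MQ_Config" destination="orders-queue"/>

    <try>
        <!-- Processing that may raise RETRY_EXHAUSTED (e.g. an outbound HTTP request or MQ publish) -->
        <flow-ref name="processOrder"/>

        <error-handler>
            <!-- Catch the exhausted-retries error and park the message on a DLQ -->
            <on-error-propagate type="ANYPOINT-MQ:RETRY_EXHAUSTED, HTTP:RETRY_EXHAUSTED">
                <try>
                    <anypoint-mq:publish config-ref="Anypoint_MQ_Config" destination="orders-dlq"/>
                    <error-handler>
                        <!-- If even the DLQ publish fails, log enough detail to trace the message -->
                        <on-error-continue>
                            <logger level="ERROR"
                                    message="#['Could not publish to DLQ: ' ++ error.description]"/>
                        </on-error-continue>
                    </error-handler>
                </try>
            </on-error-propagate>
        </error-handler>
    </try>
</flow>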
Thanks
RabbitMQ version: 3.11.8, MassTransit: 8.0.1.
I have a queue with this config:
x-queue-type:quorum, x-single-active-consumer:true, durable:true
Sometimes I need to use the Get Message(s) action in the Management panel.
But now, with this queue, I get this exception:
405 RESOURCE_LOCKED - cannot obtain access to locked queue 'myQueue' in vhost 'xxx'. basic.get operations are not supported by quorum queues with single active consumer
Usually I need to read messages from the error_queue that MassTransit created.
I've searched for this and found only some solutions for exclusive queues, for example issue 1 and issue 2.
But I couldn't find any solution for 'cannot obtain access to locked queue'.
So, you've requested a single active consumer on the queue. And when you try to get messages in the console, it reports that the queue is locked.
Seems like that would be expected behavior, and it's telling you as much in the error message.
I have a flow like this, where the first file endpoint from the left has its redelivery policy set to 5. To make the flow fail, I configured an unknown file location on the second file connector from the left. If I configure the redelivery policy to 5 on the first file connector, what happens exactly, and why do we use a redelivery policy? I am not asking what happens to this specific flow, but, in general terms, what exactly does a redelivery policy do on an inbound file endpoint connector?
The redelivery policy is a filter that can be applied to any source component. When you add a redelivery policy, you are essentially adding a check at the source itself to catch/identify certain errors, or to fulfil certain conditions, before the actual Mule message gets passed on to the next components in the flow.
If you set the redelivery policy to 5, the connector will try to redeliver the message 5 times; if it encounters the "bad message" 5 times, then after the 5th try it throws a MULE:REDELIVERY_EXHAUSTED error.
The actual process works in the following manner:
Each time the source receives a new message, Mule identifies the message by generating its key. If the flow encounters an error while processing, Mule increments the counter associated with that message key, and when the specified limit is reached it throws the error.
With respect to the File connector, an example would be how many times you want to retry accessing a file before you want the connector to give up.
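Purely as an illustration (config names and paths below are made up), a Mule 4 File listener with a redelivery policy of 5 and an error handler for the exhausted case might look roughly like this:

<flow name="filePollingFlow">
    <!-- Poll an inbound directory; config name and directories are placeholders -->
    <file:listener config-ref="File_Config" directory="/data/inbox" autoDelete="true">
        <scheduling-strategy>
            <fixed-frequency frequency="10000"/>
        </scheduling-strategy>
        <!-- Give up on a given message after 5 failed redelivery attempts -->
        <redelivery-policy maxRedeliveryCount="5"/>
    </file:listener>

    <!-- This write fails (unknown location), so the same message keeps failing -->
    <file:write config-ref="File_Config" path="/does/not/exist/out.txt"/>

    <error-handler>
        <!-- Raised by the source once the 5 redelivery attempts are used up -->
        <on-error-propagate type="MULE:REDELIVERY_EXHAUSTED">
            <logger level="ERROR" message="#['Redelivery exhausted: ' ++ error.description]"/>
        </on-error-propagate>
    </error-handler>
</flow>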
How can I find out when a Camel route's redelivery error handler successfully recovered an error case?
I would like to be able to get metrics around successful redeliveries performed by a Camel error handler retry.
I would like to know how many message exchanges that hit a network error while performing a file transfer were successfully recovered after a retry.
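No answer was posted here, but for illustration only, one possible approach (an assumption, not a confirmed solution) is to check, in an onCompletion block, the redelivery headers that Camel sets while the error handler is retrying: an exchange that completes successfully with CamelRedelivered set was recovered by a retry. The endpoint URIs below are hypothetical, and the log endpoint could be swapped for a metrics endpoint such as camel-micrometer:

<camelContext xmlns="http://camel.apache.org/schema/spring">
    <!-- Retry failed exchanges up to 3 times before giving up -->
    <errorHandler id="retryHandler" type="DefaultErrorHandler">
        <redeliveryPolicy maximumRedeliveries="3" redeliveryDelay="2000"
                          retryAttemptedLogLevel="WARN"/>
    </errorHandler>

    <route errorHandlerRef="retryHandler">
        <from uri="file:inbox"/>
        <!-- Runs only when the exchange completes successfully -->
        <onCompletion onCompleteOnly="true">
            <filter>
                <!-- CamelRedelivered is set by the error handler, so a successful
                     completion with this header means a retry recovered the exchange -->
                <simple>${header.CamelRedelivered} == true</simple>
                <to uri="log:redelivery.recovered?level=INFO"/>
            </filter>
        </onCompletion>
        <!-- The step that occasionally fails with a network error -->
        <to uri="sftp://user@remotehost/upload"/>
    </route>
</camelContext>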
I'm running an NServiceBus endpoint on an Azure worker role. I send all diagnostics to table storage at the moment. I was getting messages in my DLQ, and I couldn't figure out why I wasn't getting any exceptions logged in my table storage.
It turns out that NSB logs the exceptions as INFO, which is why I couldn't easily spot them among all the other verbose logging.
In my case, a command handler's dependencies couldn't be resolved, so Autofac threw an exception. I totally get why the exception is thrown; I just don't understand why it's logged as INFO. The message ends up in my DLQ, and I only have an INFO trace to understand why.
Is there a reason why exceptions are handled this way in NSB?
NServiceBus is not logging the container issue as an error because it happens during an attempt to process a message. First Level Retries (FLR) and Second Level Retries (SLR) will be attempted. When SLR is executed, it logs a WARN about the retry. Ultimately, the message will fail processing and an ERROR will be logged. The NSB and Autofac sample can be used to reproduce this.
When the endpoint is running on a scaled-out role and MaxDeliveryCount is not big enough to accommodate all the role instances and the retry count each instance would hold, DeliveryCount can reach its maximum while an NServiceBus endpoint instance still thinks it has attempts left before sending the message to an error queue and logging an error. Similar to the question here, I'd recommend increasing MaxDeliveryCount.
There is an open NServiceBus issue to add native support for an SLR counter; you can add your voice to it. The next version of NServiceBus (V6) will log the message ID along with the exception, so you can at least correlate the message in the DLQ with the log file.
I have a receive port with a WCF-CustomIsolated receive location.
On the receive port I checked "Enable routing for failed messages".
In pipeline settings I have set ValidateDocument to true.
When a client sends me a message with an incorrect schema, it receives a validation error (which happened in the pipeline), and that's OK.
But the message is not routed as a fault message to the MessageBox.
Could you help me understand why this happens?
Why does "routing for failed messages" not work in this case, and in what cases should it work?
Thank you!
On the Receive Location, go to the Transport Properties, Messages, Error Handling, and check "Suspend request message on failure".
Even though it says "Suspend", checking this in combination with "Enable routing for failed messages" on the receive port will actually create the failed message you are after. (If routing for failed messages isn't enabled, it will suspend.)
This applies to all the WCF adapters, not just the CustomIsolated one.
You need to subscribe to the error message. You could use a send port or an orchestration with a filter set on the receive port, message type, and/or message error (for example, the ErrorReport.ReceivePortName, ErrorReport.MessageType, or ErrorReport.FailureCode context properties).