How to log recovery by redelivery in the Apache Camel error handler? - error-handling

How can I find out when a Camel route's redelivery error handler has successfully recovered from an error case?
I would like to be able to gather metrics on successful redeliveries by a Camel error handler retry.
Specifically, I would like to know how many message exchanges that hit a network error during a file transfer were successfully recovered after a retry.
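One way to get that number: Camel marks an exchange that has been redelivered via the `Exchange.REDELIVERED` (`CamelRedelivered`) and `Exchange.REDELIVERY_COUNTER` (`CamelRedeliveryCounter`) headers, so a step placed after the previously failing endpoint can increment a counter whenever those headers indicate a retry happened. The bookkeeping itself can be sketched framework-free in plain Java; this is a minimal sketch, and the class and method names below are illustrative, not Camel API:

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Framework-free sketch of the metric in question: count exchanges that
 * failed at least once but eventually succeeded after a retry.
 */
class RedeliveryMetrics {
    final AtomicLong firstTrySuccess = new AtomicLong();
    final AtomicLong recoveredAfterRetry = new AtomicLong(); // the number we want
    final AtomicLong exhausted = new AtomicLong();

    /** Run the task with up to maxRedeliveries retries, recording the outcome. */
    boolean process(Runnable task, int maxRedeliveries) {
        for (int attempt = 0; attempt <= maxRedeliveries; attempt++) {
            try {
                task.run();
                if (attempt == 0) {
                    firstTrySuccess.incrementAndGet();
                } else {
                    recoveredAfterRetry.incrementAndGet(); // succeeded after a redelivery
                }
                return true;
            } catch (RuntimeException e) {
                // failed attempt: fall through to the next redelivery
            }
        }
        exhausted.incrementAndGet();
        return false;
    }
}
```

In a real route you would not hand-roll the loop: the error handler performs the retries, and a processor after the failing step checks the redelivery headers and increments a counter (or a metrics-registry counter) when they show a prior failure.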

Related

RabbitMQ 405 RESOURCE_LOCKED - cannot obtain access to locked queue

RabbitMQ version: 3.11.8, MassTransit: 8.0.1.
I have a queue with this config:
x-queue-type:quorum, x-single-active-consumer:true, durable:true
Sometimes I need to perform the Get Message(s) action in the Management panel,
but with this queue I now get this exception:
405 RESOURCE_LOCKED - cannot obtain access to locked queue 'myQueue' in vhost 'xxx'. basic.get operations are not supported by quorum queues with single active consumer
Usually I need to read messages from the error_queue that MassTransit created.
I've searched for this and found only solutions for exclusive queues - for example issue 1 and issue 2 - but I couldn't find any solution for 'cannot obtain access to locked queue'.
So, you've requested a single active consumer on the queue. And when you try to get messages in the console, it reports that the queue is locked.
Seems like that would be expected behavior, and it's telling you as much in the error message.

Mule 4 - Anypoint MQ Retry Exhausted Exception and dead letter queue

I started using Anypoint MQ Subscribe with Max Redelivery Count set to 2.
The application should throw an ANYPOINT-MQ:RETRY_EXHAUSTED exception after 2 failed deliveries, but the message was returned to the main queue and picked up again in the next batch.
I am trying to put the messages in the DLQ manually after 2 failed deliveries using a Try scope.
Any idea how to put the messages in the DLQ manually?
Errors of type ANYPOINT-MQ:RETRY_EXHAUSTED or HTTP:RETRY_EXHAUSTED occur when the application fails to connect to Anypoint MQ, or when an HTTP request to some other service fails.
When you set a retry connection strategy on the connector (for example, retry 2 times), the connector tries to connect 2 times; if there is still no connection after that, you get the retry-exhausted error.
To catch that error and send the message to a DLQ, categorize the error in an on-error-propagate scope, using type ANYPOINT-MQ:RETRY_EXHAUSTED or HTTP:RETRY_EXHAUSTED depending on which connector you are using.
The scope will then catch that error, and inside the on-error-propagate you can apply whatever logic you need, such as sending the message to a file or to the DLQ. If even that fails, add a logger with enough detail to track the message without losing it.
Thanks
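As a rough Mule 4 sketch of that approach (the flow name, config-ref, and queue names here are placeholders I made up, and the element names should be checked against your connector version):

```xml
<flow name="orders-subscriber">
    <anypoint-mq:subscriber config-ref="Anypoint_MQ_Config" destination="orders-queue"/>
    <try>
        <flow-ref name="process-order"/>
        <error-handler>
            <on-error-propagate type="ANYPOINT-MQ:RETRY_EXHAUSTED">
                <!-- manually publish the failed message to the DLQ -->
                <anypoint-mq:publish config-ref="Anypoint_MQ_Config" destination="orders-dlq"/>
                <logger level="ERROR" message="Message sent to DLQ"/>
            </on-error-propagate>
        </error-handler>
    </try>
</flow>
```

If the publish to the DLQ can itself fail, wrap it as described above and fall back to a logger carrying enough detail to reconstruct the message.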

Want to know what a redeliver policy configuration in a mule does to file endpoint connector

I have a flow where the first file endpoint from the left has its redelivery policy set to 5. To make the flow fail, I configured an unknown file location on the second file connector from the left. If I configure a redelivery policy of 5 on the first file connector, what happens exactly? Why do we use a redelivery policy? I am not asking what happens in this specific flow, but in a generalized manner: what exactly does a redelivery policy do on an inbound file endpoint connector?
The redelivery policy is a filter that can be applied to any source component. When you add a redelivery policy, you are essentially adding a check at the source itself to catch or identify certain errors, or to fulfill certain conditions, before the actual Mule message is passed on to the next components in the flow.
If you set the redelivery policy to 5, the connector will try to redeliver the message 5 times; if it encounters the "bad message" on every one of those attempts, then after the 5th try it throws a MULE:REDELIVERY_EXHAUSTED error.
The actual process works in the following manner:
Each time the source receives a new message, Mule identifies the message by generating a key for it. If the flow encounters an error during processing, Mule increments the counter associated with that message key, and when the specified limit is reached it throws the error.
With respect to the File connector, an example would be how many times you want to retry accessing a file before the connector gives up.
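As a sketch, in Mule 4 the policy is declared inside the source element itself; a file listener version might look like the following (the directory, config-ref, and flow names are placeholders, and the child-element ordering should be verified against the connector schema):

```xml
<flow name="inbound-files">
    <file:listener directory="/data/inbox" config-ref="File_Config">
        <scheduling-strategy>
            <fixed-frequency frequency="1000"/>
        </scheduling-strategy>
        <!-- give up and raise MULE:REDELIVERY_EXHAUSTED after 5 failed attempts -->
        <redelivery-policy maxRedeliveryCount="5"/>
    </file:listener>
    <flow-ref name="process-file"/>
</flow>
```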

Apache Camel: File component moveFailed redelivery strategy

When a certain endpoint is not available (returns a 500, for instance), my queue file is moved to the .error directory. I am using the moveFailed parameter for this.
<from uri="file:inbox?autoCreate=true&amp;readLockTimeout=2000&amp;charset=utf-8&amp;preMove=.processing&amp;delete=true&amp;moveFailed=.error&amp;maxMessagesPerPoll=50&amp;delay=1000"/>
According to: http://camel.apache.org/file2.html
When moving the files to the “fail” location Camel will handle the
error and will not pick up the file again.
What is the best approach to implement a redelivery policy/strategy so that the files get picked up again when failed?
Set up a retry that redelivers to that particular endpoint, rather than re-running the whole route.
You can do this with an error handler, specifying the number of retries, a delay between retries, and, if you wish, a backoff multiplier.
onException(RestException.class)
    .maximumRedeliveries(3)
    .redeliveryDelay(100L)
    .useExponentialBackOff()
    .backOffMultiplier(1.5);
(Note that backOffMultiplier only takes effect when exponential back-off is enabled.)
Or setting this in your camel context:
<errorHandler id="errorhandler" redeliveryPolicyRef="redeliveryPolicy"/>
<redeliveryPolicyProfile id="redeliveryPolicy" maximumRedeliveries="3" redeliveryDelay="100" useExponentialBackOff="true" backOffMultiplier="1.5" retryAttemptedLogLevel="WARN"/>
This way, the file is only delivered to the error folder once it has run out of redelivery attempts.
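With exponential back-off enabled on top of the settings above (initial delay 100 ms, multiplier 1.5, 3 redeliveries), the waits between attempts are 100 ms, 150 ms, and 225 ms. A framework-free sketch of that schedule (the class and method names are mine, not Camel's):

```java
/** Compute the delay schedule for an exponential-backoff redelivery policy. */
class BackoffSchedule {
    /** Delay before the (i+1)-th redelivery: initialDelay * multiplier^i, in ms. */
    static long[] delays(long initialDelay, double multiplier, int maxRedeliveries) {
        long[] out = new long[maxRedeliveries];
        double d = initialDelay;
        for (int i = 0; i < maxRedeliveries; i++) {
            out[i] = (long) d;
            d *= multiplier;   // grow the delay geometrically for the next attempt
        }
        return out;
    }
}
```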
You could also look at using the Dead Letter Channel error handler, putting the file onto a queue to be processed later.
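If you go the dead-letter route, a Camel Spring XML sketch might look like this (the JMS queue URI is a placeholder, and the redelivery profile id is assumed to match an existing redeliveryPolicyProfile):

```xml
<errorHandler id="dlcHandler" type="DeadLetterChannel"
              deadLetterUri="jms:queue:failedFiles"
              redeliveryPolicyRef="redeliveryPolicy"/>
```

The route (or the camelContext) then references it via errorHandlerRef="dlcHandler", and exchanges that exhaust their redeliveries end up on the dead-letter endpoint instead of in the .error directory.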

NServiceBus exceptions logged as INFO messages

I'm running an NServiceBus endpoint on an Azure Worker Role and currently send all diagnostics to table storage. I was getting messages in my DLQ, and I couldn't figure out why no exceptions were being logged to table storage.
It turns out that NSB logs these exceptions as INFO, which is why I couldn't easily spot them among all the actual verbose logging.
In my case, a command handler's dependencies couldn't be resolved, so Autofac threw an exception. I totally get why the exception is thrown; I just don't understand why it's logged as INFO. The message ends up in my DLQ, and I only have an INFO-level trace to understand why.
Is there a reason why exceptions are handled this way in NSB?
NServiceBus does not log the container issue as an error because it happens during an attempt to process a message. First-Level Retry (FLR) and Second-Level Retry (SLR) will be attempted. When SLR is executed, it logs a WARN about the retry. If the message ultimately fails processing, an ERROR message is logged. The NSB and Autofac sample can be used to reproduce this.
When the endpoint is running as a scaled-out role and MaxDeliveryCount is not big enough to accommodate all the role instances and the retry count each instance would perform, DeliveryCount can reach its maximum while an NServiceBus endpoint instance still thinks it has attempts left before sending the message to an error queue and logging an error. Similar to the question here, I'd recommend increasing MaxDeliveryCount.
There is an open NServiceBus issue requesting native support for an SLR counter; you can add your voice to it. The next version of NServiceBus (V6) will log the message id along with the exception, so that you can at least correlate a message in the DLQ with the log file.