When a certain endpoint is not available (a 500 response, for instance), my file is moved to the .error directory. I am using the moveFailed parameter for this:
<from uri="file:inbox?autoCreate=true&readLockTimeout=2000&charset=utf-8&preMove=.processing&delete=true&moveFailed=.error&maxMessagesPerPoll=50&delay=1000"/>
According to: http://camel.apache.org/file2.html
When moving the files to the “fail” location Camel will handle the
error and will not pick up the file again.
What is the best approach to implement a redelivery policy/strategy so that the files get picked up again when failed?
Set up a retry that redelivers to that particular endpoint, not to the whole route.
You can do this with an error handler, specifying the number of retries, a delay between retries, and, if you wish, a backoff multiplier:
onException(RestException.class)
    .maximumRedeliveries(3)
    .redeliveryDelay(100L)
    .backOffMultiplier(1.5);
Or set this in your Camel context:
<errorHandler id="errorhandler" redeliveryPolicyRef="redeliveryPolicy"/>
<redeliveryPolicyProfile id="redeliveryPolicy" maximumRedeliveries="3" redeliveryDelay="100" backOffMultiplier="1.5" retryAttemptedLogLevel="WARN"/>
This way, the file is only moved to the error folder once it has run out of redelivery attempts.
You could also look at using a dead letter channel, putting the file onto a queue to be processed later.
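For example, a minimal sketch of such a dead letter channel (the ActiveMQ queue name, the target endpoint and the retry settings here are placeholders, not taken from your setup):

import org.apache.camel.LoggingLevel;
import org.apache.camel.builder.RouteBuilder;

public class FileRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        // Retry a few times, then park the failed exchange on a queue for later processing.
        errorHandler(deadLetterChannel("activemq:queue:failedFiles")
            .maximumRedeliveries(3)
            .redeliveryDelay(100L)
            .backOffMultiplier(1.5)
            .retryAttemptedLogLevel(LoggingLevel.WARN));

        from("file:inbox")
            .to("http://the-unreliable-endpoint"); // placeholder for the endpoint that returns 500
    }
}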
I have the following Camel route, which tries to read the list of files from an S3 bucket:
from("direct:my-route").
.from("aws-s3://my.bucket?useIAMCredentials=true&useAwsKMS=true&awsKMSKeyId=my-key-id&deleteAfterRead=false&operation=listObjects&includeBody=false&prefix=test1/test.xml")
.log(" File detected: ${header.CamelAwsS3Key}")
.end();
However, this route is called by an external scheduler that runs every minute. It looks like the default behaviour of the Camel S3 component is to run with its own scheduler, and this causes the same files to be processed again and again.
I have tried to turn the Camel S3 scheduler off with startScheduler=false, but then the 'aws-s3' part is not executed when the external scheduler kicks in, and I get null values for '${header.CamelAwsS3Key}'.
Is it possible to run this component without the internal scheduler?
Camel version being used - 2.22.0
Dependency used for aws:
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-aws</artifactId>
    <version>${camel.version}</version>
</dependency>
Don't have 2 x from; that basically gives you two independent consumers. Instead, use a content enricher (pollEnrich) to consume from S3 when the other from is called:
from
pollEnrich
log
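For instance, a rough sketch (the 5000 ms poll timeout is an arbitrary choice; the S3 endpoint options are the ones from the question):

from("direct:my-route")
    .pollEnrich("aws-s3://my.bucket?useIAMCredentials=true&useAwsKMS=true&awsKMSKeyId=my-key-id"
            + "&deleteAfterRead=false&operation=listObjects&includeBody=false&prefix=test1/test.xml", 5000)
    .log("File detected: ${header.CamelAwsS3Key}");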
Read the docs about the content enricher and pollEnrich / enrich (especially around timeouts with pollEnrich):
https://camel.apache.org/manual/latest/content-enricher.html
I have a flow like this, where the first file endpoint from the left has a configuration like this, with its redelivery policy set to 5. To make this flow fail, I configured an unknown file location on the second file connector from the left. If I configure a redelivery policy of 5 on the first file connector, what happens exactly? Why do we use a redelivery policy? I am not asking what exactly happens to this flow, but, in a generalized manner, what exactly does a redelivery policy do on an inbound file endpoint connector?
The redelivery policy is a filter which can be applied to any source component. When you add a redelivery policy, you are basically doing a check at the source itself to catch/identify certain errors, or to fulfil certain conditions, before the actual Mule message gets passed on to the next components in the flow.
If you set the redelivery policy to 5, the connector will try to redeliver the message 5 times; if it encounters the "bad message" on all 5 tries, it will throw a MULE:REDELIVERY_EXHAUSTED error after the 5th try.
The actual process works in the following manner: each time the source receives a new message, Mule identifies the message by generating a key for it. If the flow encounters an error during processing, Mule increments the counter associated with the message key, and when the specified limit is reached it throws the error.
With respect to the File connector, an example would be how many times you want to retry accessing a file before the connector gives up.
How can I find out when a Camel route's redelivery error handler has successfully recovered an error case?
I would like to be able to get metrics around successful redeliveries by a Camel error handler retry.
I would like to know how many message exchange instances that hit a network error while performing a file transfer were successfully recovered after a retry.
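One possible starting point is Camel's CamelRedeliveryCounter header, which the error handler sets on an exchange that has been redelivered; if the route still completes with that counter greater than zero, a retry has recovered the exchange and you can count it. A sketch, with placeholder endpoints:

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class RedeliveryMetricsRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Retry the flaky step a few times before giving up.
        onException(Exception.class)
            .maximumRedeliveries(3)
            .redeliveryDelay(100L);

        from("file:inbox")
            .to("sftp://remote-host/upload") // placeholder for the transfer that may hit network errors
            // Reaching this point with a redelivery counter > 0 means a retry recovered the exchange.
            .filter(header(Exchange.REDELIVERY_COUNTER).isGreaterThan(0))
                .log("Recovered ${header.CamelFileName} after ${header.CamelRedeliveryCounter} redelivery attempt(s)")
                // a metrics counter (JMX, Micrometer, etc.) could be incremented here
            .end();
    }
}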
I am just doing some testing on my local machine and would like somewhere to inspect messages that are published and persisted by RabbitMQ (deliveryMode = 2), or at least to see the time when a message was actually persisted. My first try was the RabbitMQ admin management interface; I went through all the options and the closest thing I found is the following:
Database directory: /usr/local/var/lib/rabbitmq/mnesia/rabbit#localhost
There I can find many files with the .rdq extension and many log files, but I can't actually see anything in them.
You can't; RabbitMQ uses a custom database and it is not possible to browse it.
You can only browse the RabbitMQ definitions, such as queues, users, exchanges etc., but not the messages.
By default, the messages index is inside:
/usr/local/var/lib/rabbitmq/mnesia/rabbit#localhost/queues/HASHQUEUE
The only way is the one suggested by @Johansson:
It's possible to manually inspect your message in the queue via the Management interface. Press on the queue that has the message, and then "Get message". If you mark it as "requeue", RabbitMQ puts it back to the queue in the same order.
https://www.cloudamqp.com/blog/2015-05-27-part3-rabbitmq-for-beginners_the-management-interface.html
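The same "Get message" operation is also exposed over the management plugin's HTTP API, so it can be scripted; a minimal sketch assuming the default guest/guest credentials, the default vhost (%2F) and a queue named my-queue:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class PeekQueue {
    public static void main(String[] args) throws Exception {
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());
        // ack_requeue_true reads the message and puts it back on the queue afterwards
        String body = "{\"count\":1,\"ackmode\":\"ack_requeue_true\",\"encoding\":\"auto\"}";
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:15672/api/queues/%2F/my-queue/get"))
            .header("Authorization", "Basic " + auth)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}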
I'm having major issues with Mule 3 and files that are read and should later be put on a standard queue on ActiveMQ.
Basically it's a really simple service: the inbound endpoint starts off by reading a file from an SFTP area.
The file is read correctly from the SFTP area, and the Mule log for the reading application states that the file is written to the specified archiveDir.
After this it goes silent and nothing else happens... the file is just placed in the archiveDir, and neither ActiveMQ nor Mule 3 gives any indication that something has gone wrong...
The queue names etc. are all correct.
Basically the same environment is running on a second server with no disturbance.
Are there any commonly known issues that could make Mule not continue with its processing and put the file on the queue?
Thx in advance!