Exception found trying to logout from FTP in Mule 4

I am getting an "Exception found trying to logout from ftp" warning while deleting a file with the FTP operation. After applying a Reconnection strategy it works fine, but I have to handle this exception because it degrades performance.

Setting an Expiration Policy with Max Idle Time of 2 minutes auto-closes the idle FTP connection, thus avoiding the logout error.

Related

Mule 4 - Anypoint MQ Retry Exhausted Exception and dead letter queue

I started using Anypoint MQ Subscribe with Max Redelivery Count set to 2.
The application should throw an ANYPOINT-MQ:RETRY_EXHAUSTED exception after 2 failed deliveries, but the message was returned to the main queue and picked up again in the next batch.
I am trying to put the messages in the DLQ manually after 2 failed deliveries using a Try scope.
Any idea how to put the messages in the DLQ manually?
ANYPOINT-MQ:RETRY_EXHAUSTED or HTTP:RETRY_EXHAUSTED errors always occur when the connector fails to connect to Anypoint MQ or to the service behind an HTTP request.
When you set a reconnection strategy on the connector, for example retry 2 times, the connector tries to connect twice; if there is still no connection after that, you get the retry exhausted error.
To catch that error and send the message to the DLQ, categorise the error in an On Error Propagate scope using type ANYPOINT-MQ:RETRY_EXHAUSTED or HTTP:RETRY_EXHAUSTED, depending on which connector you are using.
It will then catch that error, and inside the On Error Propagate you can add whatever logic you need, such as sending the message to a file or to the DLQ. If that also fails, add a logger with enough detail to track the message without losing it.
Thanks

Unable to execute HTTP request: Timeout waiting for connection from pool in Flink

I'm working on an app which uploads some files to an S3 bucket and, at a later point, reads those files from the S3 bucket and pushes them to my database.
I'm using Flink 1.4.2 and the fs.s3a API for reading and writing files from the S3 bucket.
Uploading files to the S3 bucket works fine, but when the second phase of my app starts, reading those uploaded files from S3, it throws the following error:
Caused by: java.io.InterruptedIOException: Reopen at position 0 on s3a://myfilepath/a/b/d/4: org.apache.flink.fs.s3hadoop.shaded.com.amazonaws.SdkClientException: Unable to execute HTTP request: Timeout waiting for connection from pool
at org.apache.flink.fs.s3hadoop.shaded.org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:125)
at org.apache.flink.fs.s3hadoop.shaded.org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:155)
at org.apache.flink.fs.s3hadoop.shaded.org.apache.hadoop.fs.s3a.S3AInputStream.lazySeek(S3AInputStream.java:281)
at org.apache.flink.fs.s3hadoop.shaded.org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:364)
at java.io.DataInputStream.read(DataInputStream.java:149)
at org.apache.flink.fs.s3hadoop.shaded.org.apache.flink.runtime.fs.hdfs.HadoopDataInputStream.read(HadoopDataInputStream.java:94)
at org.apache.flink.api.common.io.DelimitedInputFormat.fillBuffer(DelimitedInputFormat.java:702)
at org.apache.flink.api.common.io.DelimitedInputFormat.open(DelimitedInputFormat.java:490)
at org.apache.flink.api.common.io.GenericCsvInputFormat.open(GenericCsvInputFormat.java:301)
at org.apache.flink.api.java.io.CsvInputFormat.open(CsvInputFormat.java:53)
at org.apache.flink.api.java.io.PojoCsvInputFormat.open(PojoCsvInputFormat.java:160)
at org.apache.flink.api.java.io.PojoCsvInputFormat.open(PojoCsvInputFormat.java:37)
at org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:145)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
at java.lang.Thread.run(Thread.java:748)
I was able to control this error by increasing the max connections parameter for the s3a API.
As of now, I have around 1000 files in the S3 bucket that are pushed and pulled by my app, and my max connections setting is 3000. I'm using Flink's parallelism to upload/download these files from the S3 bucket. My task manager count is 14.
This is an intermittent failure; I also have successful runs for this scenario.
My questions are:
Why am I getting an intermittent failure? If the max connections value I set were too low, my app should throw this error every time I run.
Is there any way to calculate the optimal number of max connections required for my app to work without hitting the connection pool timeout error? Or is this error related to something else that I'm not aware of?
Thanks in advance.
Some comments, based on my experience with processing lots of files from S3 via Flink (batch) workflows:
When you are reading the files, Flink will calculate "splits" based on the number of files, and each file's size. Each split is read separately, so the theoretical max # of simultaneous connections isn't based on the # of files, but a combination of files and file sizes.
The connection pool used by the HTTP client releases connections after some amount of time, as being able to reuse an existing connection is a win (server/client handshake doesn't have to happen). So that introduces a degree of randomness into how many available connections are in the pool.
The size of the connection pool doesn't impact memory much, so I typically set it pretty high (e.g. 4096 for a recent workflow).
When using AWS connection code, the setting to bump is fs.s3.maxConnections, which isn't the same setting as in a pure Hadoop configuration.
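For reference, a minimal sketch of where these knobs live when you go through the Hadoop FileSystem API directly; the bucket name and values are placeholders, and in a Flink deployment the same keys would normally be set in the cluster's Hadoop/Flink configuration files rather than in job code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

import java.net.URI;

public class S3AConnectionPoolConfig {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hadoop's S3A connector reads this key for the maximum number of pooled HTTP connections.
        conf.set("fs.s3a.connection.maximum", "4096");
        // On EMR's own S3 connector the equivalent knob is fs.s3.maxConnections instead:
        // conf.set("fs.s3.maxConnections", "4096");

        // "my-bucket" is a placeholder bucket name.
        FileSystem fs = FileSystem.get(new URI("s3a://my-bucket/"), conf);
        System.out.println("Opened " + fs.getUri() + " with an enlarged connection pool");
        fs.close();
    }
}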

Amazon S3 File Read Timeout. Trying to download a file using JAVA

I'm new to Amazon S3. I get the following error when trying to access a file from Amazon S3 using a simple Java method.
2016-08-23 09:46:48 INFO request:450 - Received successful response:200, AWS Request ID: F5EA01DB74D0D0F5
Caught an AmazonClientException, which means the client encountered an
internal error while trying to communicate with S3, such as not being
able to access the network.
Error Message: Unable to store object contents to disk: Read timed out
The exact same lines of code worked yesterday. I was able to download 100% of a 5 GB file in 12 minutes. Today I'm in a better-connected environment, but only 2% or 3% of the file is downloaded and then the program fails.
Code that I'm using to download:
s3Client.getObject(new GetObjectRequest("mybucket", file.getKey()), localFile);
You need to set the connection timeout and the socket timeout in your client configuration.
There is a reference article on AWS client configuration; here is an excerpt from it:
Several HTTP transport options can be configured through the com.amazonaws.ClientConfiguration object. Default values will suffice for the majority of users, but users who want more control can configure:
Socket timeout
Connection timeout
Maximum retry attempts for retry-able errors
Maximum open HTTP connections
Here is an example of how to do it:
Downloading files >3Gb from S3 fails with "SocketTimeoutException: Read timed out"
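In case the linked example isn't available, here is a minimal sketch of setting those options with the AWS SDK for Java v1; the timeout and pool values are illustrative, not recommendations:

import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3TimeoutConfig {
    public static void main(String[] args) {
        ClientConfiguration clientConfig = new ClientConfiguration();
        clientConfig.setConnectionTimeout(30_000); // ms allowed to establish the TCP connection
        clientConfig.setSocketTimeout(300_000);    // ms to wait for data on an already open socket
        clientConfig.setMaxErrorRetry(5);          // maximum retry attempts for retry-able errors
        clientConfig.setMaxConnections(100);       // maximum open HTTP connections

        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withClientConfiguration(clientConfig)
                .build();

        // The download call from the question stays the same, e.g.
        // s3Client.getObject(new GetObjectRequest("mybucket", file.getKey()), localFile);
    }
}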

Flowgear Change default workflow execution time

My problem is that when I retrieve too much data from the database with a select using the ODBC node, running the workflow fails with the exception: The timeout (30 secs) was exceeded while waiting for a response from DropPoint transactionRequest.
How can I change the default time limit of the workflow?
The timeout you mention is the timeout on the connection that is using a DropPoint and not a timeout on the workflow as a whole.
From within the connections pane, open the connection you are using on the workflow and modify the timeout setting there.
Separately, if you're calling the workflow via the REST API, you can set a timeout there as well. To override the default, add the query string _timeout=300 to the URL in your consuming app (i.e. not in the endpoint URL setting in Flowgear).
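A rough sketch of what that looks like from a Java consumer; the endpoint URL below is a placeholder, and only the _timeout=300 query string comes from the answer above:

import java.net.HttpURLConnection;
import java.net.URL;

public class FlowgearCallWithTimeout {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; use your workflow's real REST endpoint here.
        String endpoint = "https://example.flowgear.net/my-workflow";
        URL url = new URL(endpoint + "?_timeout=300"); // override the default timeout, as described above

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Also raise the client-side timeouts so the consuming app doesn't give up first.
        conn.setConnectTimeout(10_000);
        conn.setReadTimeout(300_000);

        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}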

MCOErrorAuthentication performing background fetch

I'm fetching messages on background from IMAP server using performFetchWithCompletionHandler.
The first thing I do in the method implementation is check for network reachability, discarding the operation if the network is not available.
Then I start the refresh process, executing fetchMessagesOperationWithFolder on an IMAP session.
As a result of this operation I usually receive no error, but sometimes I receive MCOErrorParse (unable to parse the response from the server), MCOErrorConnection (a stable connection to the server could not be established) or MCOErrorAuthentication (unable to authenticate with the current session's credentials).
Can all of these errors be produced by a connection failure?
I want to handle MCOErrorAuthentication to notify the user about a credentials error, but in this scenario the credentials are fine, so when I perform any operation back in the foreground with network reachability it succeeds.
Should I do an extra network connection check before proceeding on those errors? Should I create a new IMAP session for every operation?
Thanks!
Edit
I'm adding the ConnectionLog; it took me a while to reproduce the error. It's now very clear why I receive an MCOErrorAuthentication: I'm not loading the password correctly.
2014-12-18 21:00:41.212 * OK Gimap ready for requests from 85.58.177.133 et58mb78762219web
2014-12-18 21:00:41.222 1 CAPABILITY
2014-12-18 21:00:41.302 * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 XYZZY SASL-IR AUTH=XOAUTH AUTH=XOAUTH2 AUTH=PLAIN AUTH=PLAIN-CLIENTTOKEN
1 OK Thats all she wrote! et58mb78762219web
2014-12-18 21:00:41.304 2 LOGIN "polferresamon#gmail.com" ""
2014-12-18 21:00:41.378 2 NO Empty username or password. et58mb78762219web
2014-12-18 21:00:41.380 Error fetching messages: Unable to authenticate with the current session's credentials.
So, to really solve the MCOErrorAuthentication I have to check the IMAP session initialisation and the process where I load the email and password when returning from the background.
I guess parse and connection errors are due to connection issues.
Thanks for your help.