Why doesn't SFTP inbound (deployed on CloudHub) delete files after the transfer is completed - mule

This question is very closely related to this and this. With the SFTP connector, streaming is on by default and cannot be turned off (version 3.5.2). If I have a flow like the one below:
1) Inbound SFTP connector with a large polling frequency (in hours, for example)
2) File outbound endpoint that writes the stream to a file in local storage
I would expect the SFTP inbound endpoint to delete the source file after the transfer is complete, but this does not happen.
Reading the documentation, I find the following:
Auto Delete (Applies to inbound SFTP endpoint only). Click this box to delete the file after it has been read. Note: If an error occurs when Mule is writing to the outbound endpoint, the file is not deleted. Auto Delete only works if both the inbound and outbound endpoints use the SFTP transport.
Is there a workaround for this? Basically, how can I delete the source file once I have downloaded it off the SFTP server?
The above is mostly an issue with the documentation; SFTP auto-delete does work even if the outbound endpoint is not an SFTP connector. I have logged a JIRA about it here
Update:
I have determined the root cause of the issue: when the application is deployed on CloudHub, the SFTP connector returns an instance of java.io.ByteArrayInputStream, whereas when it is deployed via Anypoint Studio the output is an instance of org.mule.transport.sftp.SftpInputStream.
To reproduce, make sure the file size is less than 2 MB.
This causes the file not to be deleted when the application is deployed on CloudHub.
Logs from CloudHub
Local deployment logs
2015-04-28 15:37:50 DEBUG SftpConnectionFactory:118 - Successfully connected to: sftp://decision_1:#####XXXXXXX:22/To_DI_Local
2015-04-28 15:37:50 DEBUG SftpClient:121 - Attempting to cwd to: /To_DI_Local
2015-04-28 15:37:50 DEBUG SftpConnector:121 - Successfully changed working directory to: /To_DI_Local
2015-04-28 15:37:50 DEBUG SftpMessageReceiver:121 - Routing file: ZCCR_EXTRACT_FINAL.CSV
2015-04-28 15:37:50 INFO LoggerMessageProcessor:193 - InputSFTPEndpoint org.mule.transport.sftp.SftpInputStream
2015-04-28 15:37:50 DEBUG SftpMessageReceiver:121 - Routed file: ZCCR_EXTRACT_FINAL.CSV
2015-04-28 15:37:50 DEBUG SftpMessageReceiver:121 - Polling. Routed all 1 files found at sftp://decision_1:#####XXXXXX:22/To_DI_Local
2015-04-28 15:37:50 INFO LoggerMessageProcessor:193 - InputSFTPEndpoint org.mule.transport.sftp.SftpInputStream
My flow looks like the following:
Update 2:
However, if the file is a big one (I think anything around 10 MB or larger), then the return type is com.google.common.io.MultiInputStream and the file does get deleted.
Any ideas why CloudHub would be behaving like this?

I am new to MuleSoft and I encountered the same issue. What I found was that Mule keeps a lock on the file it is reading. In my particular scenario, I was reading a file and then uploading it to Fusion (cloud). Mule would not auto-delete the file, and when it polled the folder again it would see the same file there and try to read it again. What we had to do to force the release of the file was add a Byte Array to String transformer. That seems to force Mule to read the file in its entirety and transform the payload, which severs the lock on the file.
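For illustration only, here is a minimal Java sketch (the class and method names are mine, not Mule's) of what that Byte Array to String step effectively does: it reads the payload stream to the end and then closes it, which is what lets the transport release and delete the source file.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical helper that mimics the effect of the Byte Array to String transformer:
// consume the stream fully, then close it so the transport can release the file.
public final class DrainStream {

    static byte[] drain(InputStream payload) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        int read;
        try {
            while ((read = payload.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        } finally {
            payload.close();
        }
        return out.toByteArray();
    }
}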

Related

Apache ActiveMQ console not starting

I have connected to the Apache ActiveMQ broker service through Java only, by adding the ActiveMQ jars as dependencies, but the ActiveMQ web console is not connecting at http://localhost/8161/admin . Please tell me, is there any setting or configuration needed to connect?
Verify the activemq.xml file in /conf. It must contain the correct IP for your transport connectors (a quick broker connectivity check is sketched below).
Check log4j.properties in /conf and update the logging options to at least WARN, DEBUG if possible. Also update the file logging path.
Run ./activemq start
Let the system throw an exception.
Check the logs at the path you specified.
Look up the logged exception on Stack Overflow or Google to get a first analysis of the error.
More details
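As a quick sanity check for the first point, a tiny JMS client can confirm that the broker itself (as opposed to the web console) is reachable on the configured transport connector. This is only a sketch; the broker URL below is an assumption, so use the openwire URI from your activemq.xml.

import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

// Minimal broker connectivity check; the URL is a placeholder.
public class BrokerPing {

    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        try {
            connection.start();   // throws a JMSException if the broker is unreachable
            System.out.println("Broker is reachable");
        } finally {
            connection.close();
        }
    }
}

If this connects but the console still does not load, the problem is likely with the web console configuration rather than the broker itself.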

Amazon S3 file read timeout. Trying to download a file using Java

New to Amazon S3 usage. I get the following error when trying to access the file from Amazon S3 using a simple Java method.
2016-08-23 09:46:48 INFO request:450 - Received successful response:200, AWS Request ID: F5EA01DB74D0D0F5
Caught an AmazonClientException, which means the client encountered an
internal error while trying to communicate with S3, such as not being
able to access the network.
Error Message: Unable to store object contents to disk: Read timed out
The exact lines of code worked yesterday. I was able to download 100% of a 5 GB file in 12 minutes. Today I'm in a better-connected environment, but only 2% or 3% of the file is downloaded and then the program fails.
Code that I'm using to download:
s3Client.getObject(new GetObjectRequest("mybucket", file.getKey()), localFile);
You need to set the connection timeout and the socket timeout in your client configuration.
Click here for a reference article
Here is an excerpt from the article:
Several HTTP transport options can be configured through the com.amazonaws.ClientConfiguration object. Default values will suffice for the majority of users, but users who want more control can configure:
Socket timeout
Connection timeout
Maximum retry attempts for retry-able errors
Maximum open HTTP connections
Here is an example of how to do it:
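A sketch along those lines, assuming the AWS SDK for Java v1; the bucket name, key, local file path, and timeout values are placeholders:

import java.io.File;

import com.amazonaws.ClientConfiguration;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GetObjectRequest;

public class S3DownloadWithTimeouts {

    public static void main(String[] args) {
        // Raise the HTTP timeouts and retries beyond the SDK defaults.
        ClientConfiguration clientConfig = new ClientConfiguration();
        clientConfig.setConnectionTimeout(60 * 1000);   // 60 s to establish the connection
        clientConfig.setSocketTimeout(5 * 60 * 1000);   // allow 5 min of socket inactivity
        clientConfig.setMaxErrorRetry(5);               // retry transient failures a few times

        AmazonS3Client s3Client =
                new AmazonS3Client(new ProfileCredentialsProvider(), clientConfig);

        // Download straight to disk, as in the original snippet.
        File localFile = new File("/tmp/large-file.dat");
        s3Client.getObject(new GetObjectRequest("mybucket", "path/to/large-file"), localFile);
    }
}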
Downloading files >3Gb from S3 fails with "SocketTimeoutException: Read timed out"

Could not connect to ActiveMQ Server - activemq for mcollective failing

We are continuously getting this error:
2014-11-06 07:05:34,460 [main ] INFO SharedFileLocker - Database activemq-data/localhost/KahaDB/lock is locked... waiting 10 seconds for the database to be unlocked. Reason: java.io.IOException: Failed to create directory 'activemq-data/localhost/KahaDB'
We have verified that ActiveMQ is running as the activemq user, and that the owner of the directories is activemq. It will not create the directories automatically, and if we create them ourselves it still gives the same error. The service starts fine, but it just continuously spits out the same error. There is no lock file, as it will not generate any files or directories.
Another way to fix this problem, in one step, is to create the missing symbolic link in /usr/share/activemq/. The permissions are already set properly on /var/cache/activemq/data/, but it seems the activemq RPM is not creating the symbolic link to that location as it should. The symbolic link should be as follows: /usr/share/activemq/activemq-data -> /var/cache/activemq/data/. After creating the symbolic link, restart the activemq service and the issue will be resolved.
I was able to resolve this by doing the following:
Ensure activemq is the owner of and has access to /var/log/activemq and all subdirectories.
Ensure /etc/init.d/activemq has: ACTIVEMQ_CONFIGS="/etc/sysconfig/activemq"
Create the file activemq in /etc/sysconfig if it doesn't exist.
Add this line: ACTIVEMQ_DATA="/var/log/activemq/activemq-data/localhost/KahaDB"
The problem was that ActiveMQ 5.9.x was using /usr/share/activemq as its KahaDB location, so the relative activemq-data path from the error resolves against that directory (see the sketch below).
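For illustration, here is a small stand-alone Java sketch (not part of ActiveMQ) showing how a relative path like the one in the error resolves against the process working directory and whether the current user can create it. Running it as the activemq user from /usr/share/activemq reproduces the situation the broker is in.

import java.io.File;

// Hypothetical diagnostic: where does a relative KahaDB path land, and can we create it?
public class KahaDbPathCheck {

    public static void main(String[] args) {
        // Same relative path that appears in the "Failed to create directory" error.
        File kahaDb = new File("activemq-data/localhost/KahaDB");

        System.out.println("Working directory : " + System.getProperty("user.dir"));
        System.out.println("Resolves to       : " + kahaDb.getAbsolutePath());

        // mkdirs() returns false when the directory cannot be created,
        // for example because the parent is not writable by this user.
        boolean usable = kahaDb.exists() || kahaDb.mkdirs();
        System.out.println("Exists or created : " + usable);
    }
}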

How to force file-queue-store to keep messages on disk

For asynchronous processing of a large number of files, it would be nice to store messages in persistent storage to relieve the JVM heap and avoid data loss in case of a system failure.
I configured a file-queue-store but, unfortunately, I cannot see .msg files in the .mule/queuestore/myqueuename folder.
I feed the flow with files from an smb: endpoint and send them to a CXF endpoint.
When I stop Mule ESB (version 3.2.0) properly during file processing, it writes a lot of .msg files to the queuestore. After a restart, it processes them one by one.
But when I kill the JVM (to simulate a system failure, an OutOfMemoryError, etc.), there are no files in the queuestore, so all of the messages are lost.
My question: is it possible to force the queuestore to store the messages on disk and delete them only when they are fully processed?
Please advise. Thanks in advance.
Mule 3.2.0 was affected by this issue
You should consider upgrading.

Mule SFTP archive not continuing flow

I'm having major issues with Mule 3 and files that are read and should later be put on a standard queue on ActiveMQ.
Basically it's a really simple service: on the inbound side it starts off by reading files from the SFTP area.
The file is read correctly from SFTP, and the Mule log for the reading application states that the file is written to the specified archiveDir.
After this it is silent and nothing else happens: the file is just placed in the archiveDir, and neither ActiveMQ nor Mule 3 gives any indication that something has gone wrong.
The queue names etc. are all correct.
Basically the same environment is running on a second server without any problems.
Are there any commonly known issues that could make Mule not continue with its processing and put the file on the queue?
Thanks in advance!