Hi, I have the below queries about the SFTP component; if you guys can help me out, that would be a great help:
1) Can we get the file size of the file picked up by the SFTP component? I need to restrict the transfer based on the size of the file.
2) Can I get the number of files and the file names picked up by the SFTP component?
3) Is my understanding correct that the SFTP component picks up all the files from the server, keeps them in memory, and processes them one by one until it finishes all the files?
4) If the server has 5 files, can the SFTP component process all 5 files in parallel rather than one by one?
1-Mule does not populate the file-size field for SFTP as it does with FILE. There are Jira tickets open on this matter, but MuleSoft has called it an enhancement and not given it a priority. I disagree. Perhaps ping MuleSoft; if enough users do, maybe they will raise the priority and address it. They use the size internally; they simply do not expose it outside as is done with the FILE connector.
2-No, not really. It gives them back one at a time, not as a list.
3 & 4-It only loads the entire file into memory if you tell it not to stream or do something else, like an object-to-string transformer, which forces a memory load. The files or file streams are passed back one by one, but unless you restrict threading and make your flow synchronous, it will go asynchronous and multi-threaded and process multiple files in parallel. Flows default to asynchronous; subflows are synchronous.
You can use the SFTP endpoint to retrieve files, and then use a Java or script call to get the file's attributes and filter to only process the files you are actually interested in, such as ones larger than your minimum size. This would seem more in line with what you are looking for in point 1. There are other options, but this would be more straightforward than the others I can quickly think of.
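For point 1, a minimal sketch of that Java call, assuming the file has already been retrieved to a local path and that the 1 MB threshold and path are just placeholders:

```java
import java.io.File;

public class MinSizeFilter {

    // Hypothetical threshold: skip anything smaller than 1 MB.
    private static final long MIN_SIZE_BYTES = 1024L * 1024L;

    /**
     * Returns true if the locally retrieved file exists and is at least
     * MIN_SIZE_BYTES long, so the flow can decide whether to process it.
     */
    public static boolean isLargeEnough(String path) {
        File file = new File(path);
        return file.isFile() && file.length() >= MIN_SIZE_BYTES;
    }

    public static void main(String[] args) {
        // Placeholder path for a file the SFTP endpoint has already pulled down.
        String path = "/tmp/incoming/orders.csv";
        System.out.println(path + " large enough? " + isLargeEnough(path));
    }
}
```

In a flow, the boolean result could feed a Choice router so that files below the threshold are skipped.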
I found one way to get the file size: if we provide transformer-refs="Object_to_Byte_Array" and then do #[payload.size()], we get the size of the file in bytes. Will this cause any issue?
I have a task to do. I need to build a WCF service that allows a client to import a file into a database using the server backend. To do this, I need to communicate to the server the settings, the events needed to start and configure the import, and most importantly the file to import. The problem is that these files can be extremely large (much bigger than 2 GB), so it's not possible to send them via the browser as they are. The only thing that comes to mind is to split these files and send them to the server one by one.
I also have another requirement: I need to be 100% sure that these files are not corrupted, so I also need to implement some sort of policy for error detection and, if possible, recovery.
Do you know if there is an API or DLL that can help me achieve my goals, or is it better to write the code myself? And in that case, what would be the optimal size of the packets?
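To make the split-and-verify idea concrete, here is a minimal sketch, assuming a fixed 4 MB chunk size and a SHA-256 checksum per chunk (both arbitrary choices); the actual WCF transport and the retransmission of bad chunks are left out:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ChunkedUpload {

    // Arbitrary chunk size; tune it against your network and memory limits.
    private static final int CHUNK_SIZE = 4 * 1024 * 1024;

    public static void main(String[] args) throws IOException, NoSuchAlgorithmException {
        Path source = Paths.get("/data/huge-import.dat"); // placeholder path

        try (InputStream in = Files.newInputStream(source)) {
            byte[] buffer = new byte[CHUNK_SIZE];
            int chunkIndex = 0;
            int read;
            // Note: read() may return fewer bytes than CHUNK_SIZE, so chunk sizes can vary.
            while ((read = in.read(buffer)) != -1) {
                // Hash exactly the bytes read; the server recomputes the hash on
                // arrival and requests the chunk again if it does not match.
                MessageDigest digest = MessageDigest.getInstance("SHA-256");
                digest.update(buffer, 0, read);
                String checksum = toHex(digest.digest());

                // A real upload call to the service would go here; printing stands in for it.
                System.out.printf("chunk %d: %d bytes, sha256=%s%n", chunkIndex, read, checksum);
                chunkIndex++;
            }
        }
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 2);
        for (byte b : bytes) {
            sb.append(String.format("%02x", b));
        }
        return sb.toString();
    }
}
```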
I want to develop a flow in Mule which would poll a folder, pick up 3 files, and transfer them to a separate folder. The flow should log an error or send an email if one of the files is not present, and do the processing only if all 3 files are present.
I developed a flow with a File endpoint which picks up all the files in the folder and transfers them to the destination folder. But I am not aware of how to keep count of the received files (i.e. 3) or read the file names in this case, and then direct the flow with the help of a Choice component.
Any help would be much appreciated.
Hi, get the files and send them to a VM queue, then use a Mule Requester to fetch the files after the polling time; use a For Each and set the counter value to 3.
Using a File inbound endpoint might not help, as it will create 3 separate threads (I mean it will execute your flow once for each file in the folder).
You can try it like this:
1) Use a Quartz scheduler at the beginning, which triggers at a specific interval
2) Use a Java component with Java IO to poll the folder and read the 3 file names (a sketch follows this answer)
3) Do your business logic on whether all 3 files are available; if yes, read and process them (move them to a different folder, etc.)
This is a cleaner approach than dealing with multiple flows.
Another option might be to override the File endpoint's message dispatcher but that is more complicated than using Quartz for a simple use case.
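A minimal sketch of the Java component from step 2, assuming the three expected file names are known up front (the folder and names below are placeholders):

```java
import java.io.File;
import java.util.Arrays;
import java.util.List;

public class ThreeFileCheck {

    // Placeholder drop folder and expected file names; adjust to your setup.
    private static final File INBOX = new File("/data/inbox");
    private static final List<String> EXPECTED =
            Arrays.asList("customers.csv", "orders.csv", "products.csv");

    /** Returns true only if all three expected files are present in the folder. */
    public static boolean allFilesPresent() {
        String[] names = INBOX.list();
        if (names == null) {
            return false; // folder missing or not readable
        }
        return Arrays.asList(names).containsAll(EXPECTED);
    }

    public static void main(String[] args) {
        if (allFilesPresent()) {
            System.out.println("All 3 files present - proceed with processing/move.");
        } else {
            System.out.println("Missing file(s) - log an error or send the email.");
        }
    }
}
```

In a Mule Java component the boolean would be returned as the payload so a Choice router can decide between processing and the error/email branch.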
Could you please suggest an approach? I have two files, each with 80k to 90k products, and the two files are interlinked (one file has information about the other), and I need to generate one single file by looking up the other file. The files will probably come in at the same time, with different names.
Both files are CSV, and I need to generate a new CSV.
Is the only option to keep one of these files in memory and look values up by iterating over it?
I planned to use Batch together with DataMapper. Is there any way to keep the first file in a DataMapper user-defined table or something like that, and then have the new file do a lookup against it? (I'm not provided with an external DB.)
If either of the files had only 5,000 or 10k lines, I could keep that one in memory and have the 80k file look it up, but I'm not comfortable keeping an 80k or 90k-line file in memory.
I have referenced this link: Mule ESB - design a multi file processing flow when files are dependent on each other.
Could you please suggest the best solution?
Also, any idea how long processing the files would take? Thanks in advance.
Mule Studio: 5.3.1 and Runtime: 3.7.2
I would think of the problem as two distinct events from Mule's perspective, and plan to keep state from the first one in a "database" of some kind. This doesn't have to be an Oracle cluster or anything, you can run H2 in process or Redis on the same server as Mule for example.
I think you're on the right track with the Batch idea. When the first file is received, I'd create a database record for each of its rows in a batch job. Then when the second file is received, I'd run a second batch job that looks up the relevant information from the database and generates the CSV file you need. It could also remove the records that have been matched from the database in a subsequent batch step.
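A minimal sketch of that lookup-table idea with in-process H2 (the H2 driver must be on the classpath; the table, columns, and sample values are made up, since the real keys depend on how your two CSVs are linked):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ProductLookupStore {

    // File-based H2 database living next to the Mule runtime (path is arbitrary).
    private static final String URL = "jdbc:h2:./product-lookup";

    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(URL, "sa", "")) {
            try (Statement ddl = conn.createStatement()) {
                ddl.execute("CREATE TABLE IF NOT EXISTS product_info "
                        + "(product_id VARCHAR(64) PRIMARY KEY, extra_data VARCHAR(1024))");
            }

            // Batch job 1: store each row of the first CSV (values are placeholders).
            try (PreparedStatement upsert = conn.prepareStatement(
                    "MERGE INTO product_info (product_id, extra_data) KEY (product_id) VALUES (?, ?)")) {
                upsert.setString(1, "P-0001");
                upsert.setString(2, "data from file 1");
                upsert.executeUpdate();
            }

            // Batch job 2: while streaming the second CSV, look up each product
            // and emit the merged line to the output CSV.
            try (PreparedStatement lookup = conn.prepareStatement(
                    "SELECT extra_data FROM product_info WHERE product_id = ?")) {
                lookup.setString(1, "P-0001");
                try (ResultSet rs = lookup.executeQuery()) {
                    if (rs.next()) {
                        System.out.println("P-0001," + rs.getString("extra_data"));
                    }
                }
            }
        }
    }
}
```

This keeps only one row of either file in memory at a time, which is the point of pushing the 80k-90k records out to the database.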
For the transformations, I'd recommend trying DataWeave instead of DataMapper. It's a better way to write transformation logic, and MuleSoft has deprecated DataMapper, to be removed as of Mule 4.0.
My app is monitoring a "hot" folder somewhere on the local filesystem for newly added files to push to a network location. I'm running into a problem when very large files are being written into the hot folder: the file system event notifying me of changes in the hot folder will fire well before the file completes writing. When my app tries to upload the file, it mis-reads the file size as the current number of copied bytes, not the eventual total number of bytes.
Things I've tried:
NSURL getResourceValue:forKey:error: to read NSURLAllocatedFileSizeKey (same value as NSURLFileSizeKey while the file is being written).
NSFileManager attributesOfItemAtPath:error: to look at NSFileBusy (always NO).
I can't seem to find any mechanism short of repeatedly polling a file for its size to determine if the file is finished copying and can be uploaded.
There aren't great ways to do this.
If you can be certain that the writer is using NSFileCoordinator, then you can also use that to coordinate your access to the file.
Likewise, if you're sure that the writer has opted in to advisory locking, you could try to open the file for shared access by calling open() with the O_SHLOCK and O_NONBLOCK flags. If you succeed, then there are no other descriptors open for exclusive access. You can either use the file descriptor you've got or close it and then use some other API to access the file.
However, if you can't be sure of any of those, then your best bet may be to set a timer to repeatedly check the file's metadata (size, date modified, etc.). Only when you see that it has stopped changing over a reasonable time interval (2 seconds, maybe) would you attempt to access it (and cancel the timer).
You might want to do all three. Wait for the file's metadata to settle down, then use an NSFileCoordinator to read from the file. When it calls your reader block, use open() with O_SHLOCK | O_NONBLOCK to make sure there are no other processes which have exclusive access to it.
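The settle-check above is simple enough to sketch. Here it is in plain Java rather than Cocoa, purely to illustrate the idea; the quiet window and poll interval are arbitrary:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class WaitForStableFile {

    /**
     * Blocks until the file's size and last-modified time have not changed
     * for the given quiet period. Purely illustrative of the settle-check.
     */
    public static void waitUntilStable(Path file, long quietMillis, long pollMillis)
            throws IOException, InterruptedException {
        long lastSize = -1;
        long lastModified = -1;
        long stableSince = System.currentTimeMillis();

        while (true) {
            long size = Files.size(file);
            long modified = Files.getLastModifiedTime(file).toMillis();

            if (size != lastSize || modified != lastModified) {
                // Something changed; restart the quiet-period clock.
                lastSize = size;
                lastModified = modified;
                stableSince = System.currentTimeMillis();
            } else if (System.currentTimeMillis() - stableSince >= quietMillis) {
                return; // metadata has settled for the whole quiet period
            }
            Thread.sleep(pollMillis);
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder file; a 2-second quiet window polled every 500 ms.
        waitUntilStable(Paths.get("/hotfolder/big-upload.mov"), 2000, 500);
        System.out.println("File looks finished; safe to start the upload.");
    }
}
```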
You need some form of coordinated file locking.
fcntl() and flock() are common functions for this.
Read up on it first.
Then see what options you have.
If you can control the code base of those other processes, all the better.
The problem with really large files is that what's changed or changing inside them is opaque and isn't always at the end.
Good processes should generally be doing atomic writes (write to a temp file, then swap it out), but if these files are actually databases, then you will want to look at using the DB's server app for this sort of thing.
If the files are wrappers containing other files then it gets extra messy as those contents might have dependencies on one another to be in a usable state.
It's a very common scenario: some process wants to drop a file on a server every 30 minutes or so. Simple, right? Well, I can think of a bunch of ways this could go wrong.
For instance, processing a file may take more or less than 30 minutes, so it's possible for a new file to arrive before I'm done with the previous one. I don't want the source system to overwrite a file that I'm still processing.
On the other hand, the files are large, so it takes a few minutes to finish uploading them. I don't want to start processing a partial file. The files are just transferred with FTP or SFTP (my preference), so OS-level locking isn't an option.
Finally, I do need to keep the files around for a while, in case I need to manually inspect one of them (for debugging) or reprocess one.
I've seen a lot of ad-hoc approaches to shuffling upload files around, swapping filenames, using datestamps, touching "indicator" files to assist in synchronization, and so on. What I haven't seen yet is a comprehensive "algorithm" for processing files that addresses concurrency, consistency, and completeness.
So, I'd like to tap into the wisdom of crowds here. Has anyone seen a really bulletproof way to juggle batch data files so they're never processed too early, never overwritten before done, and safely kept after processing?
The key is to do the initial juggling at the sending end. All the sender needs to do is:
Store the file with a unique filename.
As soon as the file has been sent, move it to a subdirectory called e.g. completed.
Assuming there is only a single receiver process, all the receiver needs to do is:
Periodically scan the completed directory for any files.
As soon as a file appears in completed, move it to a subdirectory called e.g. processed, and start working on it from there.
Optionally delete it when finished.
On any sane filesystem, file moves are atomic provided they occur within the same filesystem/volume. So there are no race conditions.
Multiple Receivers
If processing could take longer than the period between files being delivered, you'll build up a backlog unless you have multiple receiver processes. So, how to handle the multiple-receiver case?
Simple: Each receiver process operates exactly as before. The key is that we attempt to move a file to processed before working on it: that, and the fact that same-filesystem file moves are atomic, means that even if multiple receivers see the same file in completed and try to move it, only one will succeed. All you need to do is make sure you check the return value of rename(), or whatever OS call you use to perform the move, and only proceed with processing if it succeeded. If the move failed, some other receiver got there first, so just go back and scan the completed directory again.
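In Java, the claim-by-move step might look like the sketch below (directory names follow the example above; both directories must be on the same volume for the move to be atomic):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class Receiver {

    // Placeholder directories on the same filesystem.
    private static final Path COMPLETED = Paths.get("/data/completed");
    private static final Path PROCESSED = Paths.get("/data/processed");

    public static void main(String[] args) throws IOException {
        try (DirectoryStream<Path> files = Files.newDirectoryStream(COMPLETED)) {
            for (Path file : files) {
                Path claimed = PROCESSED.resolve(file.getFileName());
                try {
                    // Same-volume atomic move: only one receiver can win this race.
                    Files.move(file, claimed, StandardCopyOption.ATOMIC_MOVE);
                } catch (IOException alreadyClaimed) {
                    // Another receiver moved it first (or it vanished); skip it.
                    continue;
                }
                process(claimed);
            }
        }
    }

    private static void process(Path file) {
        // Placeholder for the real processing logic.
        System.out.println("Processing " + file);
    }
}
```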
If the OS supports it, use file system hooks to intercept open and close file operations. Something like Dazuko. Other operating systems may let you know about file operations in another way; for example, Novell Open Enterprise Server lets you define epochs and read the list of files modified during an epoch.
Just realized that on Linux, you can use the inotify subsystem, or the utilities from the inotify-tools package.
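On the JVM, the same facility is exposed through java.nio.file.WatchService, which is implemented on top of inotify on Linux. A minimal sketch, with a placeholder directory:

```java
import java.io.IOException;
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class DropFolderWatcher {

    public static void main(String[] args) throws IOException, InterruptedException {
        Path dropDir = Paths.get("/data/completed"); // placeholder directory

        try (WatchService watcher = FileSystems.getDefault().newWatchService()) {
            dropDir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

            while (true) {
                WatchKey key = watcher.take(); // blocks until something happens
                for (WatchEvent<?> event : key.pollEvents()) {
                    if (event.kind() == StandardWatchEventKinds.OVERFLOW) {
                        continue; // events were lost; rescan the directory if needed
                    }
                    // The context is the new file's name relative to the watched directory.
                    System.out.println("New file: " + dropDir.resolve((Path) event.context()));
                }
                if (!key.reset()) {
                    break; // directory no longer accessible
                }
            }
        }
    }
}
```

Note that ENTRY_CREATE fires as soon as the name appears, so this still pairs best with the atomic-move convention described above rather than watching the upload directory directly.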
File transfer is one of the classics of system integration. I'd recommend you get the Enterprise Integration Patterns book to build your own answer to these questions -- to some extent, the answer depends on the technologies and platforms you are using for endpoint implementation and for file transfer. It's quite a comprehensive collection of workable patterns, and fairly well written.