How to set schedule and merge file in Anypoint Studio (MuleSoft)

I would like to set a time schedule and merge files using MuleSoft Anypoint Studio.
Example:
The input file input/a.txt contains
1, 2, 3
and the other file, output/b.txt, contains
4, 5, 6
I would like to append the contents of a.txt to b.txt at 0 AM the next day, so that b.txt looks like this:
b.txt
4, 5, 6
1, 2, 3
I thought I could solve this with just the Scheduler and Write components, but I couldn't.
Can anyone please help me resolve this?

It is correct that you can achieve that with the Scheduler component and the File connector's read and write operations; I wrote an example below. For the output file you have to set the write mode to APPEND so the existing contents are not overwritten. Note that for this to work, the input file needs an end-of-line character after its last record; otherwise the appended records will run together on one line.
Note that Anypoint Studio is an IDE; it will not implement anything by itself. However, you can use it to develop a Mule application that implements your requirements. The application is then executed by the Mule runtime, inside Studio or in some other environment.
<file:config name="File_Config" doc:name="File Config" />

<flow name="test-file-appendFlow" >
    <scheduler doc:name="Scheduler" >
        <scheduling-strategy >
            <fixed-frequency frequency="1" timeUnit="DAYS"/>
        </scheduling-strategy>
    </scheduler>
    <file:read doc:name="Read" config-ref="File_Config" path="/tmp/input/a.txt"/>
    <file:write doc:name="Write" config-ref="File_Config" path="/tmp/output/b.txt" mode="APPEND"/>
</flow>
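Note that a fixed-frequency strategy fires once per day counted from application startup, not necessarily at midnight. If the flow must run exactly at 0 AM, the Scheduler also accepts a cron scheduling strategy; a minimal sketch, assuming the server's default time zone:

<scheduler doc:name="Scheduler" >
    <scheduling-strategy >
        <!-- Fires every day at 00:00:00 -->
        <cron expression="0 0 0 * * ?"/>
    </scheduling-strategy>
</scheduler>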

Related

How to write a large CSV file to SFTP in Mule 4

I am trying to write a large CSV file to SFTP.
I used a For Each scope to split the records and write them using the SFTP connector, but the file never reaches the SFTP server.
What am I doing wrong here?
Below is the flow:
<flow name="sftp-Flow" doc:id="294e7265-0bb3-466b-add4-5819088bd33c">
<file:listener doc:name="File picked from path" directory="${processing.folder}" config-ref="File-Inbound" autoDelete="true" matcher="filename-regex-filter" doc:id="bbfb12df-96a4-443f-a137-ef90c74e7de1" outputMimeType="application/csv" primaryNodeOnly="true" timeBetweenSizeCheck="1" timeBetweenSizeCheckUnit="SECONDS">
<repeatable-in-memory-stream initialBufferSize="1" bufferSizeIncrement="1" maxBufferSize="500" bufferUnit="MB"/>
<scheduling-strategy>
<fixed-frequency frequency="${file.connector.polling.frequency}"/>
</scheduling-strategy>
</file:listener>
<set-variable value="#[attributes.fileName]" doc:name="fileName - Variable" doc:id="5f064507-be62-4484-86ea-62d6cfb547fc" variableName="fileName"/>
<foreach doc:name="For Each" doc:id="87b79f6d-1321-4231-bc6d-cffbb859d94b" batchSize="500" collection="#[payload]">
<sftp:write doc:name="Push file to SFTP" doc:id="d1562478-5276-4a6f-a7fa-4a912bb44b8c" config-ref="SFTP-Connector" path='#["${sftp.remote.folder}" ++ "/" ++ vars.fileName]' mode="APPEND">
<reconnect frequency="${destination.sftp.connection.retries.delay}" count="${destination.sftp.connection.retries}"/>
</sftp:write>
</foreach>
<error-handler ref="catch-exception-strategy"/>
I have found the solution. The foreach scope only supports collections in Java, JSON, or XML formats. I just placed a transformer that converts the CSV to JSON before the foreach, and now the file is properly saved in batches.
Instead of splitting the payload into records, try setting the CSV reader to streaming mode:
outputMimeType="application/csv; streaming=true"
Update: the best solution might be to simply remove both the foreach and the outputMimeType attribute from the File listener. The file will then be read and written as binary, streaming straight to the SFTP write operation. Removing outputMimeType prevents Mule from trying to parse the big file as CSV, which is not needed: the only CSV-level processing in the flow was the foreach, which is no longer required. This method is faster and consumes fewer resources.
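A minimal sketch of that simplified flow, assuming the same configuration names and property placeholders as the original (illustrative, not tested):

<flow name="sftp-Flow">
    <file:listener doc:name="File picked from path" directory="${processing.folder}" config-ref="File-Inbound"
                   autoDelete="true" matcher="filename-regex-filter" primaryNodeOnly="true"
                   timeBetweenSizeCheck="1" timeBetweenSizeCheckUnit="SECONDS">
        <scheduling-strategy>
            <fixed-frequency frequency="${file.connector.polling.frequency}"/>
        </scheduling-strategy>
    </file:listener>
    <!-- No outputMimeType and no foreach: the payload stays a binary stream -->
    <sftp:write doc:name="Push file to SFTP" config-ref="SFTP-Connector"
                path='#["${sftp.remote.folder}" ++ "/" ++ attributes.fileName]'/>
</flow>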

Pipe Console App to file from batch file

I have a console app (Visual Studio, VB); it runs and does its job.
I also have a batch file that runs the program, but I want the output of my console app to also go to a text file.
This is my current batch file, which creates the text file, but nothing ends up in it:
Start "" "C:\Users\wrossi\Desktop\NetLogOnSysInfo Solution\NetLogOnSysInfo Project\bin\Debug\NetLogOnSysInfo Project.exe" -all >>%ComputerName%.txt
Exit
Not sure if the batch is wrong, or if I should look at my program. Also, how do I define a path so I can put the output file exactly where I want it?
"C:\Users\wrossi\Desktop\
NetLogOnSysInfo Solution\NetLogOnSysInfo Project\bin\Debug\NetLogOnSysInfo Project.exe"
-all > %ComputerName%.txt
No need to use Start.
You are pretty close to what I think you want to do:
c:\console.exe > c:\output.txt
Here c:\console.exe is the location of your console app, > redirects stdout (use >> to append instead of overwrite), and c:\output.txt is wherever you want to create the text file.
Hope this helped.
Also just in case:
type stdin.txt | console.exe > stdout.txt
In the above example, input is redirected from stdin.txt and piped into console.exe, and the output of console.exe is then redirected to stdout.txt.
I found this approach recently after a lot of searching. If you want to run a .bat file from VB or C#, or simply from the command line, write it in the same manner:
code > C:\Users\Lenovo\Desktop\output.txt
Just replace the word "code" with your .bat file's command, followed by the path of the output file.

Can MSBuild ItemGroup's be chunked?

I've got an ItemGroup that includes source files from my project:
<ItemGroup>
<SourceFiles Include=".\**\*.h;.\**\*.cpp"/>
</ItemGroup>
There are a few hundred source files. I want to pass them to a command line tool in an Exec task.
If I call the command line tool individually for each file:
<Exec Command="tool.exe %(SourceFiles.FullPath)" WorkingDirectory="."/>
Then, it runs very slowly.
If I call the command line tool and pass all of the files in one go:
<Exec Command="tool.exe #(SourceFiles -> '"%(FullPath)"', ' ')" WorkingDirectory="."/>
Then, I get an error if there are too many files (I'm guessing the command line length exceeds some maximum).
Is there a way I can chunk the items so that the tool can be called a number of times, each time passing up to a maximum number of source file names to the tool?
I'm not aware of any mechanism to do that with well-known item metadata. What you could do is load all those paths into their own item group and write a custom task that calls the Exec task. Writing a custom task is pretty simple; it can even be done inline:
http://msdn.microsoft.com/en-us/library/vstudio/dd722601(v=vs.100).aspx
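As a rough illustration of that idea, here is an untested sketch that uses an inline task to stamp each item with a chunk index and then relies on MSBuild task batching to invoke the tool once per chunk. The AssignChunks task name, the Chunk metadata name, and the chunk size of 50 are made up for the example:

<UsingTask TaskName="AssignChunks" TaskFactory="CodeTaskFactory"
           AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
    <ParameterGroup>
        <Items ParameterType="Microsoft.Build.Framework.ITaskItem[]" Required="true"/>
        <ChunkSize ParameterType="System.Int32" Required="true"/>
        <ChunkedItems ParameterType="Microsoft.Build.Framework.ITaskItem[]" Output="true"/>
    </ParameterGroup>
    <Task>
        <Code Type="Fragment" Language="cs"><![CDATA[
            // Tag each item with the zero-based index of the chunk it belongs to.
            for (int i = 0; i < Items.Length; i++)
            {
                Items[i].SetMetadata("Chunk", (i / ChunkSize).ToString());
            }
            ChunkedItems = Items;
        ]]></Code>
    </Task>
</UsingTask>

<Target Name="RunToolChunked">
    <AssignChunks Items="@(SourceFiles)" ChunkSize="50">
        <Output TaskParameter="ChunkedItems" ItemName="ChunkedSourceFiles"/>
    </AssignChunks>
    <!-- Referencing %(ChunkedSourceFiles.Chunk) batches the Exec: one run per distinct chunk index -->
    <Exec Command="tool.exe @(ChunkedSourceFiles -> '&quot;%(FullPath)&quot;', ' ')"
          WorkingDirectory="."
          Condition="'%(ChunkedSourceFiles.Chunk)' != ''"/>
</Target>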

How to write a file object to outbound endpoint in Mule

I have a <file:inbound-endpoint> which reads a large file and passes it to a Java component that splits the large file into multiple smaller files. I add all these smaller files to a list and return the list from the Java component to the Mule flow.
Now, in mule flow, I am using <collection-splitter> or <foreach> to output those files to the <file:outbound-endpoint>.
The problems are:
1. It outputs only a single file (the file is overwritten each time; the original filename is not used for the output file).
2. The content of the file is the filename, not the file content.
1. You need to add a file:file-to-byte-array-transformer after you've split the List<File> and before the file:outbound-endpoint, so Mule will read the actual content of the java.io.File.
2. You need to define an outputPattern on the file:outbound-endpoint, using a MEL expression that builds a unique file name from the properties of the in-flight message or from other expressions, like a timestamp or a UUID, whatever fits your needs.
For the 1st point, I did as @David suggested and added a file:file-to-byte-array-transformer.
For the 2nd point, to make the name of the file written to <file:outbound-endpoint> match the file name assigned when the file was created, I did the following:
<foreach>
    <set-variable variableName="fname" value="#[payload.path]"/>
    <logger level="INFO" message="fname is: #[fname]" />
    <file:file-to-byte-array-transformer />
    <file:outbound-endpoint path="${file.csv.path}" outputPattern="#[fname]"/>
</foreach>
Get the file name before converting the file to a byte array: after the conversion it is no longer available in #[payload], though you may still get it from #[originalPayload].

DB Script generation using fluentmigrator

I'm using FluentMigrator and I'm stuck with a problem. I need to create a DB script with FluentMigrator each time I run the build script, and that works, but I want the script to be rewritten only if the DB has been altered. How can I achieve that? My current code is given below:
<Target Name="Migrate" >
<MakeDir Directories="$(OutputFolder)\DBScripts"></MakeDir>
<Migrate Database="sqlserver2008"
Connection="Data Source=ALen-PC;Initial Catalog=TestMigrator;User ID=user;Password=password"
Target="$(OutputFolder)\Release\bin\MigratorTest.dll"
Output="True"
OutputFilename="$(OutputFolder)\DBScripts\DBScript.sql">
</Migrate>
</Target>
At the moment, there is no support for this process in FluentMigrator. You could add a timestamp to the filename and then check the size of the file: if it is very small, less than 200 bytes, throw it away; if it is larger than 200 bytes, the schema has changed, so rename the file to DBScript.sql and replace the previous version.
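A sketch of that workaround as an MSBuild target, reusing the Migrate task from the question; the timestamped temporary file name, the ScriptSize property, and the exact threshold are illustrative (character count stands in for byte size):

<Target Name="Migrate" >
    <MakeDir Directories="$(OutputFolder)\DBScripts"/>
    <PropertyGroup>
        <!-- Generate into a temporary, timestamped file first -->
        <TempScript>$(OutputFolder)\DBScripts\DBScript_$([System.DateTime]::Now.ToString('yyyyMMddHHmmss')).sql</TempScript>
    </PropertyGroup>
    <Migrate Database="sqlserver2008"
             Connection="Data Source=ALen-PC;Initial Catalog=TestMigrator;User ID=user;Password=password"
             Target="$(OutputFolder)\Release\bin\MigratorTest.dll"
             Output="True"
             OutputFilename="$(TempScript)"/>
    <PropertyGroup>
        <ScriptSize>$([System.IO.File]::ReadAllText('$(TempScript)').Length)</ScriptSize>
    </PropertyGroup>
    <!-- Only a script above the threshold contains real schema changes -->
    <Copy SourceFiles="$(TempScript)"
          DestinationFiles="$(OutputFolder)\DBScripts\DBScript.sql"
          Condition="$(ScriptSize) &gt; 200"/>
    <Delete Files="$(TempScript)"/>
</Target>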
I would recommend submitting this as a feature request for FluentMigrator here.