Mule - Send SFTP outbound message after Collection Split

I am using a collection-splitter to split my List. Now, how should I set the payload on the SFTP outbound-endpoint?
<sftp:inbound-endpoint connector-ref="sftp-inbound" host="${SFTP_HOST}" port="${SFTP_PORT}"
    path="/files/" user="${SFTP_USER}" password="${SFTP_PASS}"
    responseTimeout="10000" pollingFrequency="30000" fileAge="20000" sizeCheckWaitTime="5000"
    archiveDir="/files/archive/" doc:name="SFTP">
    <file:filename-regex-filter pattern="Test(.*).zip" caseSensitive="true"/>
</sftp:inbound-endpoint>
<set-variable variableName="regexVal" value="${REGEX}" doc:name="Variable"/>
<set-variable variableName="sourceFileName" value="#[flowVars.originalFilename]" doc:name="Variable"/>
<custom-transformer name="zipTxt" class="com.mst.transform.UnzipTransformer" doc:name="Java" mimeType="image/gif">
    <spring:property name="filenamePattern" value="*.csv,*.txt" />
</custom-transformer>
<set-variable variableName="fileContents" value="#[payload]" />
<collection-splitter enableCorrelation="IF_NOT_SET" />
<logger message="#[payload]" level="INFO" doc:name="Logger"/>
<sftp:outbound-endpoint connector-ref="sftp-inbound"
    host="${SFTP_HOST}" port="${SFTP_PORT}"
    path="/files/" user="${SFTP_USER}" password="${SFTP_PASS}"
    responseTimeout="10000" doc:name="SFTP"
    exchange-pattern="one-way"/>
</flow>

If your payload before the collection-splitter is a list of objects that the SFTP outbound endpoint can consume (e.g. InputStream), then after the splitter you can wrap the logger and SFTP endpoint inside a processor-chain. The splitter will send each object one by one to the processor chain, and SFTP should be able to write it if it's an InputStream.
<collection-splitter enableCorrelation="IF_NOT_SET" />
<processor-chain doc:name="Processor Chain">
    <logger message="#[payload]" level="INFO" doc:name="Logger"/>
    <sftp:outbound-endpoint connector-ref="sftp-inbound"
        host="${SFTP_HOST}" port="${SFTP_PORT}"
        path="/files/" user="${SFTP_USER}" password="${SFTP_PASS}"
        responseTimeout="10000" doc:name="SFTP"
        exchange-pattern="one-way"/>
</processor-chain>
You wouldn't need the processor-chain if you just want to put one processor (e.g. the SFTP endpoint) after the splitter, as sketched below.
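For instance, a minimal sketch of that single-processor case, reusing the endpoint attributes from the question:
<collection-splitter enableCorrelation="IF_NOT_SET" />
<!-- each split element goes straight to the SFTP outbound endpoint -->
<sftp:outbound-endpoint connector-ref="sftp-inbound"
    host="${SFTP_HOST}" port="${SFTP_PORT}"
    path="/files/" user="${SFTP_USER}" password="${SFTP_PASS}"
    responseTimeout="10000" doc:name="SFTP"
    exchange-pattern="one-way"/>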
If this doesn't work, please add the error details to the question.

Related

"Read multiple file from different location simultaneously and merge them into one payload"

I am reading multiple files from different folders and merging them, but I am not able to merge them into one file.
I am using a composite source with two file connectors, then logging the payload; the payloads arrive one by one. How can I get a single payload that combines the two (or more) file inputs?
<flow name="file2Flow">
<composite-source doc:name="Copy_of_Composite Source">
<file:inbound-endpoint path="src/main/resources/input1" responseTimeout="10000" doc:name="File"/>
<file:inbound-endpoint path="src/main/resources/input2" responseTimeout="10000" doc:name="File"/>
</composite-source>
<file:file-to-string-transformer doc:name="File to String"/>
<logger message="#[payload]" level="INFO" doc:name="Logger"/>
</flow>
I am also trying this, but not getting any output:
<flow name="file2file2Flow">
<http:listener config-ref="HTTP_Listener_Configuration" path="/files" doc:name="HTTP"/>
<scatter-gather doc:name="Scatter-Gather">
<file:outbound-endpoint path="src/main/resources/input1" responseTimeout="10000" doc:name="File"/>
<file:outbound-endpoint path="src/main/resources/input1" responseTimeout="10000" doc:name="File"/>
</scatter-gather>
<dw:transform-message doc:name="Transform Message">
<dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
{
post1: payload[0],
post2: payload[1]
}]]>
</dw:set-payload>
</dw:transform-message>
<logger message="#[payload]" level="INFO" doc:name="Logger"/>
</flow>
file:inbound-endpoint polls a single directory, so if you need different directories that alone won't work.
composite-source allows multiple inbound endpoints, but the files won't arrive in the same payload.
file:outbound-endpoint is for writing files only.
In Mule 3 you can achieve this through a combination of a poll to trigger the flow, a scatter-gather to route to multiple processors, and the Mule Requester module to read files mid-flow.
Mule Requester Module: https://www.mulesoft.com/exchange/68ef9520-24e9-4cf2-b2f5-620025690913/requester-module/
Rough example:
<flow name="dw-testFlow">
<poll doc:name="Poll" frequency="10000">
<logger level="INFO" doc:name="Logger" />
</poll>
<scatter-gather doc:name="Scatter-Gather">
<mulerequester:request config-ref="muleRequesterConfig" resource="myFileEndpoint" doc:name="Mule Requester" />
<mulerequester:request config-ref="muleRequesterConfig" resource="myFileEndpoint" doc:name="Mule Requester" />
</scatter-gather>
</flow>
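In Mule 3 the scatter-gather aggregates the route results into a list, so you can merge them right after it. A minimal sketch, assuming both requesters return the file contents (the DataWeave shape mirrors the one in the question):
<dw:transform-message doc:name="Merge route results">
    <dw:set-payload><![CDATA[%dw 1.0
%output application/json
---
{
    file1: payload[0],
    file2: payload[1]
}]]></dw:set-payload>
</dw:transform-message>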

Mule flow design to read and process large files from a remote location

I have a flow with the following steps:
1) Pick up a file from the source SFTP server
2) Copy it into local storage
3) Process the file using the copy in local storage
4) Place the processed (transformed) file onto a destination SFTP server
5) Move the file on the source SFTP server into a different folder on that same server (I could not find a way to do this, hence I'm copying from the temp location back into the SFTP processed folder)
This seems to be a standard workflow, however I could not find any advice on how to specifically implement this in Mule.
My current implementation is described below:
<file:connector name="tempFile" workDirectory="${temp.file.location}/work"
    workFileNamePattern="#[message.inboundProperties.originalFilename]"
    autoDelete="true" streaming="false" validateConnections="true"
    doc:name="File" />
<sftp:connector name="InputSFTP" validateConnections="true" keepFileOnError="true" doc:name="SFTP">
    <reconnect frequency="${reconnectfrequency}" count="5"/>
</sftp:connector>
<sftp:connector name="DestinationSFTP" validateConnections="true" pollingFrequency="30000" doc:name="SFTP">
    <reconnect frequency="${reconnectfrequency}" count="5"/>
</sftp:connector>
<smtp:gmail-connector name="Gmail" contentType="text/plain" validateConnections="true" doc:name="Gmail"/>
<flow name="DownloadFTPFileIntoLocalFlow" processingStrategy="synchronous" tracking:enable-default-events="true">
    <sftp:inbound-endpoint connector-ref="InputSFTP" host="${source.host}" port="22" path="${source.path}" user="${source.username}"
        password="${source.password}" responseTimeout="90000" pollingFrequency="120000" sizeCheckWaitTime="1000" doc:name="InputSFTP" autoDelete="true">
        <file:filename-regex-filter pattern="[Z].*\.csv" caseSensitive="false" />
    </sftp:inbound-endpoint>
    <file:outbound-endpoint path="${temp.file.location}" responseTimeout="10000" doc:name="Templocation" outputPattern="#[message.inboundProperties.originalFilename]" connector-ref="tempFile" />
    <exception-strategy ref="Default_Exception_Strategy" doc:name="Reference Exception Strategy"/>
</flow>
<flow name="ProcessCSVFlow" processingStrategy="synchronous" tracking:enable-default-events="true">
    <file:inbound-endpoint path="${temp.file.location}" connector-ref="tempFile" pollingFrequency="180000" fileAge="10000" responseTimeout="10000" doc:name="TempFileLocation"/>
    <transformer ref="enrichWithHeaderAndEndOfFileTransformer" doc:name="headerAndEOFEnricher" />
    <set-variable variableName="outputfilename" value="#['Mercury'+server.dateTime.year+server.dateTime.month+server.dateTime.dayOfMonth+server.dateTime.hours+server.dateTime.minutes+server.dateTime.seconds+'.csv']" doc:name="outputfilename"/>
    <sftp:outbound-endpoint exchange-pattern="one-way" connector-ref="DestinationSFTP" host="${destination.host}" port="22" responseTimeout="10000" doc:name="DestinationSFTP"
        outputPattern="#[outputfilename]" path="${destination.path}" user="${destination.username}" password="${destination.password}"/>
    <gzip-compress-transformer/>
    <sftp:outbound-endpoint exchange-pattern="one-way" connector-ref="InputSFTP" host="${source.host}" port="22" responseTimeout="10000" doc:name="SourceArchiveSFTP"
        outputPattern="#[outputfilename].gzip" path="Archive" user="${source.username}" password="${source.password}"/>
    <set-payload value="Hello world" doc:name="Set Payload"/>
    <smtp:outbound-endpoint host="${smtp.host}" port="${smtp.port}" user="${smtp.from.address}" password="${smtp.from.password}"
        to="${smtp.to.address}" from="${smtp.from.address}" subject="${mail.success.subject}" responseTimeout="10000"
        doc:name="SuccessEmail" connector-ref="Gmail"/>
    <logger message="Process completed successfully" level="INFO" doc:name="Logger"/>
    <tracking:transaction id="#[server.dateTime]"/>
    <exception-strategy ref="Default_Exception_Strategy" doc:name="Reference Exception Strategy"/>
</flow>
<catch-exception-strategy name="Default_Exception_Strategy">
    <logger message="Exception has occurred. Payload is #[payload] and message is #[message]" level="ERROR" doc:name="Logger"/>
    <!-- <smtp:outbound-endpoint host="localhost" responseTimeout="10000" doc:name="Failure Email"/> -->
</catch-exception-strategy>
Have you tried enabling autoDelete="true" on the source SFTP connector to force deletion of the input file?
Also, is it not possible to split this into two flows: flow 1: SFTP-in -> transform -> file-out; flow 2: file-in -> SFTP-out?
HTH
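A minimal sketch of that two-flow split, reusing the placeholders, connectors, and transformer from the question (the file paths and polling attributes are assumptions):
<flow name="SftpInToFileFlow" processingStrategy="synchronous">
    <sftp:inbound-endpoint connector-ref="InputSFTP" host="${source.host}" port="22" path="${source.path}"
        user="${source.username}" password="${source.password}" autoDelete="true" doc:name="InputSFTP"/>
    <transformer ref="enrichWithHeaderAndEndOfFileTransformer" doc:name="Transform"/>
    <!-- write the transformed file locally; flow 2 picks it up from here -->
    <file:outbound-endpoint path="${temp.file.location}" outputPattern="#[message.inboundProperties.originalFilename]" doc:name="LocalOut"/>
</flow>
<flow name="FileToSftpOutFlow" processingStrategy="synchronous">
    <file:inbound-endpoint path="${temp.file.location}" fileAge="10000" doc:name="LocalIn"/>
    <sftp:outbound-endpoint connector-ref="DestinationSFTP" host="${destination.host}" port="22" path="${destination.path}"
        user="${destination.username}" password="${destination.password}" doc:name="DestinationSFTP"/>
</flow>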

Mule: move multiple files from one folder to another inside a foreach loop

Below is my Mule flow. I want to move the corresponding files from the JDBC query result set.
<jdbc-ee:postgresql-data-source name="PostgreSQL_Data_Source"
    user="postgres" password="postgres" url="jdbc:postgresql://localhost:5432/postgres"
    transactionIsolation="UNSPECIFIED" doc:name="PostgreSQL Data Source">
</jdbc-ee:postgresql-data-source>
<jdbc-ee:connector name="JDBCConnector"
    dataSource-ref="PostgreSQL_Data_Source" validateConnections="true"
    doc:name="JDBCConnector">
    <jdbc-ee:query key="emprec" value="select * from emp where salary > 50000"/>
</jdbc-ee:connector>
<flow name="empflow">
    <quartz:inbound-endpoint responseTimeout="10000"
        doc:name="Quartz" jobName="CronJobSchedule" repeatInterval="0"
        cronExpression="0 0/1 * ? * MON-FRI" repeatCount="1">
        <quartz:event-generator-job>
            <quartz:payload>quartzSchedular started</quartz:payload>
        </quartz:event-generator-job>
    </quartz:inbound-endpoint>
    <jdbc-ee:outbound-endpoint queryKey="emprec"
        queryTimeout="-1" connector-ref="JDBCConnector" exchange-pattern="request-response"
        doc:name="Database" />
    <logger message="Size of payload is ::: #[message.payload.size()]" level="INFO" doc:name="Logger"/>
    <foreach doc:name="Foreach" counterVariableName="#[message.payload.size()]">
        <logger message="#[payload.filepath] - #[payload.name] - #[payload.filename]" level="INFO" doc:name="Logger" />
    </foreach>
</flow>
Please suggest a way to move each file named in the query result to another location.
Inside the foreach loop I tried file inbound and outbound endpoints, but that did not work.
1 - You need to load the file; for that I suggest using the Mule Requester module. You can find more on it in this blog post.
2 - Right after that you can move it using a file outbound endpoint.
Here's an example:
<mulerequester:request config-ref="Mule_Requester" resource="file:///Users/anafelisatti/test.txt" returnClass="java.lang.String" doc:name="Mule Requester"/>
<file:outbound-endpoint responseTimeout="10000" doc:name="File" outputPattern="test.txt" path="/Users/anafelisatti/Documents"/>
Hope that helps.
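Applied to the question's flow, a hedged sketch inside the foreach, assuming each result row carries filepath and filename columns, that the requester's resource attribute accepts a MEL expression, and that /path/to/target is your destination folder:
<foreach doc:name="Foreach">
    <!-- remember the filename before the requester replaces the payload with file contents -->
    <set-variable variableName="targetName" value="#[payload.filename]" doc:name="Remember filename"/>
    <!-- load the file named by the current result row (dynamic resource is an assumption) -->
    <mulerequester:request config-ref="Mule_Requester"
        resource="#['file://' + payload.filepath + '/' + payload.filename]"
        returnClass="java.lang.String" doc:name="Mule Requester"/>
    <file:outbound-endpoint path="/path/to/target" outputPattern="#[flowVars.targetName]" responseTimeout="10000" doc:name="File"/>
</foreach>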

Validate fields in DataMapper (CSV to JSON)

I am converting from CSV to JSON using the Mule DataMapper. I want to check whether a required field is empty; if it is, log that field and discard the row from further processing.
I know the script option gives us if (input.data.length > 0).
But how do I discard the whole row if this check fails?
You can do this within Mule DataMapper simply by encapsulating the whole conversion within the if statement's opening and closing braces. Something like this:
if ( input.Quantity > 0 ) {
output.id = input.id;
output.Customer = input.Customer;
output.Quantity = input.Quantity;
output.Price = input.Price;
}
However, a different and perhaps better approach would be to let DataMapper transform every row into JSON and then split and filter as separate steps in the flow:
<flow name="filterindatamapperFlow2" doc:name="filterindatamapperFlow2">
<file:inbound-endpoint path="/tmp/inbox" doc:name="Inbound file"/>
<data-mapper:transform config-ref="CSV_To_UnfilteredJSON" doc:name="CSV To Unfiltered JSON"/>
<request-reply>
<vm:outbound-endpoint path="splittandprocess" exchange-pattern="one-way"/>
<vm:inbound-endpoint path="result"/>
</request-reply>
<json:object-to-json-transformer doc:name="Object to JSON"/>
<file:outbound-endpoint path="/tmp/outbox" doc:name="Outbound file"/>
</flow>
<flow name="splittandprocess">
<vm:inbound-endpoint path="splittandprocess" exchange-pattern="one-way"/>
<json:json-to-object-transformer returnClass="java.util.List" doc:name="JSON to Object"/>
<splitter expression="#[payload]" doc:name="Splitter"/>
<json:json-to-object-transformer returnClass="java.util.Map" doc:name="JSON to Object"/>
<message-filter doc:name="Filter Out Orders With No Quantity" onUnaccepted="handleFilteredMessages">
<expression-filter expression="#[payload['Quantity'] > 0]" />
</message-filter>
<collection-aggregator failOnTimeout="false" timeout="1000"/>
<vm:outbound-endpoint path="result" exchange-pattern="one-way"/>
</flow>
<flow name="handleFilteredMessages">
<logger message="Payload filtered #[payload]" level="ERROR" doc:name="Logger"/>
</flow>

Using an enricher on a Mule outbound endpoint so that the message properties context is not lost

When I make a call to a SOAP web service using the SOAP component in Mule, the message properties context is lost. I understand a Mule enricher component can be used, but I am not sure of the usage. Below you will find my test Mule code:
<spring:beans>
    <spring:bean id="myWebServiceImpl" class="com.xxx.xxx.service.MyWebServiceImpl">
    </spring:bean>
</spring:beans>
<custom-transformer class="com.xxx.xxx.service.TestTransformer" name="Java" doc:name="Java"/>
<flow name="testwebserviceFlow1" doc:name="testwebserviceFlow1">
    <file:inbound-endpoint path="c:\landing" responseTimeout="10000" doc:name="File"/>
    <object-to-string-transformer doc:name="Object to String"/>
    <http:outbound-endpoint exchange-pattern="request-response" method="POST" address="http://localhost:28081/MyWebService" responseTimeout="100000" doc:name="HTTP">
        <cxf:jaxws-client operation="helloWorld" serviceClass="com.xxx.xxx.service.MyWebService" enableMuleSoapHeaders="true" doc:name="SOAP"/>
    </http:outbound-endpoint>
    <transformer ref="Java" doc:name="Transformer Reference"/>
    <logger level="INFO" doc:name="Logger"/>
</flow>
<flow name="MyWebServiceFlow" doc:name="MyWebServiceFlow">
    <http:inbound-endpoint exchange-pattern="request-response" address="http://localhost:28081/MyWebService?wsdl" doc:name="HTTP" responseTimeout="100000">
        <cxf:jaxws-service serviceClass="com.xxx.xxx.service.MyWebService" doc:name="SOAP"/>
    </http:inbound-endpoint>
    <component doc:name="MyWebService">
        <spring-object bean="myWebServiceImpl"/>
    </component>
</flow>
Yes, you can use an enricher to preserve your original message and put the return value of the web service into a variable. It works like this:
<enricher source="#[payload]" target="#[variable:myVal]">
    <http:outbound-endpoint exchange-pattern="request-response" method="POST" address="http://localhost:28081/MyWebService" responseTimeout="100000" doc:name="HTTP">
        <cxf:jaxws-client operation="helloWorld" serviceClass="com.xxx.xxx.service.MyWebService" enableMuleSoapHeaders="true" doc:name="SOAP"/>
    </http:outbound-endpoint>
</enricher>
You can then later access the variable like this:
<logger message="#[variable:myVal]" level="INFO"/>
If you just want to call the web service and ignore any return value, you can also do that asynchronously by putting the HTTP outbound endpoint inside <async></async> tags instead of the enricher.
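A minimal sketch of that asynchronous variant, reusing the endpoint from the question:
<async>
    <!-- fire-and-forget: the response is discarded -->
    <http:outbound-endpoint exchange-pattern="request-response" method="POST" address="http://localhost:28081/MyWebService" responseTimeout="100000" doc:name="HTTP">
        <cxf:jaxws-client operation="helloWorld" serviceClass="com.xxx.xxx.service.MyWebService" enableMuleSoapHeaders="true" doc:name="SOAP"/>
    </http:outbound-endpoint>
</async>
<!-- the original payload continues down the main flow untouched -->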