I have a JMS inbound endpoint that subscribes to a topic. Once a message arrives, a transformer splits the payload into a list of records, which are then inserted into a database using batch commit. If there is any error while inserting into the database, I want to roll back the entire payload to JMS. How can I achieve this using transactions?
<batch:job name="lockboxBatch" max-failed-records="-1">
<batch:input>
<jms:inbound-endpoint topic="lockbox" connector-ref="Active_MQ1" doc:name="JMS">
</jms:inbound-endpoint>
<custom-transformer class="transformers.PaymentsTransformer" doc:name="Java"/>
<logger level="INFO" doc:name="Logger"/>
</batch:input>
<batch:process-records>
<batch:step name="Batch_Step">
<expression-component doc:name="Expression"><![CDATA[payload[2].batchAmount='hghghfghfhgf']]></expression-component>
<batch:commit size="4" doc:name="Batch Commit">
<db:insert config-ref="Oracle_Configuration" doc:name="Database" bulkMode="true" >
<db:parameterized-query><![CDATA[INSERT INTO TIB_INT_AR_PAYMENT_IFACE (TRANSMISSION_REQUEST_ID,DESTINATION_ACCOUNT,ORIGINATION,TRANSMISSION_RECORD_COUNT,TRANSMISSION_AMOUNT,LOCKBOX_NUMBER,LOCKBOX_BATCH_COUNT,LOCKBOX_RECORD_COUNT,LOCKBOX_AMOUNT,BATCH_NAME,BATCH_AMOUNT,BATCH_RECORD_COUNT,ITEM_NUMBER,CURRENCY_CODE,REMITTANCE_AMOUNT,TRANSIT_ROUTING_NUMBER,ACCOUNT,CHECK_NUMBER,CUSTOMER_NUMBER,OVERFLOW_INDICATOR,OVERFLOW_SEQUENCE,INVOICE1,AMOUNT_APPLIED1) VALUES (#[payload.?transmissiosnRequestID],#[payload.?destinastionAccount],#[payload.?origination],#[payload.?transmissionSrecordCount],#[payload.?transmisssionAmount],#[payload.lockboxNumber],#[payload.lockboxBatchCount],#[payload.lockboxRecordCount],#[payload.lockboxAmount],#[payload.batchName],#[payload.batchAmount],#[payload.batchRecordCount],#[payload.itemNumber],#[payload.currencyCode],#[payload.remittanceAmount],#[payload.transitRoutingNumber],#[payload.account],#[payload.checkNumber],#[payload.customerNumber],#[payload.overflowIndicator],#[payload.overflowSequence],#[payload.invoice1],#[payload.amountApplied1])]]></db:parameterized-query>
</db:insert>
</batch:commit>
</batch:step>
<batch:step name="Batch_Step1" accept-policy="ONLY_FAILURES">
<set-payload value="#[getStepExceptions()]" doc:name="Set Payload"/>
<foreach collection="#[payload.values()]" doc:name="For Each">
<jms:outbound-endpoint queue="Invalid_Transmission" connector-ref="Active_MQ" doc:name="JMS"/>
</foreach>
</batch:step>
</batch:process-records>
<batch:on-complete>
<logger message="Completed the insert" level="INFO" doc:name="Logger"/>
</batch:on-complete>
</batch:job>
You can try two things:
1. Set max-failed-records="0"; this will roll back the batch in case of any failure in a batch step.
2. Wrap the DB connector inside a transactional scope and handle the exception scenario as required:
<transactional action="ALWAYS_BEGIN" doc:name="Transactional">
<db:insert ...> ... </db:insert>
</transactional>
Please consider the updated code below; you can make changes to suit your requirements.
<batch:job name="lockboxBatch" max-failed-records="0">
<batch:input>
<jms:inbound-endpoint topic="lockbox" connector-ref="Active_MQ1" doc:name="JMS">
</jms:inbound-endpoint>
<custom-transformer class="transformers.PaymentsTransformer" doc:name="Java"/>
<logger level="INFO" doc:name="Logger"/>
</batch:input>
<batch:process-records>
<batch:step name="Batch_Step">
<expression-component doc:name="Expression"><![CDATA[payload[2].batchAmount='hghghfghfhgf']]></expression-component>
<batch:commit size="4" doc:name="Batch Commit">
<transactional action="ALWAYS_BEGIN" doc:name="Transactional">
<db:insert config-ref="Oracle_Configuration" bulkMode="true" doc:name="Database">
<db:parameterized-query><![CDATA[INSERT INTO TIB_INT_AR_PAYMENT_IFACE (TRANSMISSION_REQUEST_ID,DESTINATION_ACCOUNT,ORIGINATION,TRANSMISSION_RECORD_COUNT,TRANSMISSION_AMOUNT,LOCKBOX_NUMBER,LOCKBOX_BATCH_COUNT,LOCKBOX_RECORD_COUNT,LOCKBOX_AMOUNT,BATCH_NAME,BATCH_AMOUNT,BATCH_RECORD_COUNT,ITEM_NUMBER,CURRENCY_CODE,REMITTANCE_AMOUNT,TRANSIT_ROUTING_NUMBER,ACCOUNT,CHECK_NUMBER,CUSTOMER_NUMBER,OVERFLOW_INDICATOR,OVERFLOW_SEQUENCE,INVOICE1,AMOUNT_APPLIED1) VALUES (#[payload.?transmissiosnRequestID],#[payload.?destinastionAccount],#[payload.?origination],#[payload.?transmissionSrecordCount],#[payload.?transmisssionAmount],#[payload.lockboxNumber],#[payload.lockboxBatchCount],#[payload.lockboxRecordCount],#[payload.lockboxAmount],#[payload.batchName],#[payload.batchAmount],#[payload.batchRecordCount],#[payload.itemNumber],#[payload.currencyCode],#[payload.remittanceAmount],#[payload.transitRoutingNumber],#[payload.account],#[payload.checkNumber],#[payload.customerNumber],#[payload.overflowIndicator],#[payload.overflowSequence],#[payload.invoice1],#[payload.amountApplied1])]]></db:parameterized-query>
</db:insert>
</transactional>
</batch:commit>
</batch:step>
<batch:step name="Batch_Step1" accept-policy="ONLY_FAILURES">
<set-payload value="#[getStepExceptions()]" doc:name="Set Payload"/>
<foreach collection="#[payload.values()]" doc:name="For Each">
<jms:outbound-endpoint queue="Invalid_Transmission" connector-ref="Active_MQ" doc:name="JMS"/>
</foreach>
</batch:step>
</batch:process-records>
<batch:on-complete>
<logger message="Completed the insert" level="INFO" doc:name="Logger"/>
</batch:on-complete>
</batch:job>
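A note on the JMS side: the batch process-records phase runs asynchronously from the thread that received the JMS message, so a transaction on the inbound endpoint cannot roll back failures that happen inside a batch step; the max-failed-records="0" plus transactional scope shown above is what prevents partially committed records. If you also want ActiveMQ to redeliver the message when something fails before the batch job accepts it, a minimal sketch (assuming your ActiveMQ connector supports transactions) is to make the receive transacted:
<jms:inbound-endpoint topic="lockbox" connector-ref="Active_MQ1" doc:name="JMS">
    <!-- Sketch: begin a JMS transaction for the receive; a rollback returns the
         message to the broker. This covers only the synchronous input phase,
         not the asynchronous batch steps. -->
    <jms:transaction action="ALWAYS_BEGIN"/>
</jms:inbound-endpoint>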
The problem statement is to do multiple things in parallel, aggregate the responses, and store the result in a file.
[Mule flow image as shown in Studio]
In this flow, I set two constant strings in the two branches of the scatter-gather, aggregate them, and store the result in a file. I tried overwriting the payload with a Set Payload of "my response", and I expect "my response" to be the content of the file. But instead the file content is:
¨Ìsr)java.util.concurrent.CopyOnWriteArrayListx]ü’F´ê√xpwtmsg 1tmsg 2x
I debugged, and the payload at the file endpoint was "my response". How and why is the collection getting written to the file?
Can anyone help me get this working?
Following is the XML:
<flow name="mule-assignFlow21123">
<quartz:inbound-endpoint jobName="dummyflow" repeatInterval="10000" responseTimeout="10000" doc:name="Quartz">
<quartz:event-generator-job/>
</quartz:inbound-endpoint>
<scatter-gather doc:name="Scatter-Gather1" >
<threading-profile maxThreadsActive="1" poolExhaustedAction="RUN"/>
<processor-chain>
<set-payload value="msg 1" doc:name="Set Payload"/>
<logger level="INFO" doc:name="Logger"/>
</processor-chain>
<processor-chain>
<set-payload value="msg 2" doc:name="Set Payload"/>
<logger level="INFO" doc:name="Logger"/>
</processor-chain>
</scatter-gather>
<set-payload value="my response" doc:name="Set Payload"/>
<file:outbound-endpoint path="/Users/premkumar/Desktop" outputPattern="Results.txt" responseTimeout="10000" mimeType="text/plain" doc:name="Save 2 File"/>
</flow>
The flow automatically determines its processing strategy from the in-flight event, which will be asynchronous because of the Quartz endpoint, so the file endpoint will also fire asynchronously.
Instead, explicitly set the flow's processingStrategy to synchronous:
<flow name="mule-assignFlow21123" processingStrategy="synchronous">
<quartz:inbound-endpoint jobName="dummyflow" repeatInterval="10000" responseTimeout="10000" doc:name="Quartz">
<quartz:event-generator-job/>
</quartz:inbound-endpoint>
<scatter-gather doc:name="Scatter-Gather1" >
<threading-profile maxThreadsActive="1" poolExhaustedAction="RUN"/>
<processor-chain>
<set-payload value="msg 1" doc:name="Set Payload"/>
<logger level="INFO" doc:name="Logger"/>
</processor-chain>
<processor-chain>
<set-payload value="msg 2" doc:name="Set Payload"/>
<logger level="INFO" doc:name="Logger"/>
</processor-chain>
</scatter-gather>
<set-payload value="my response" doc:name="Set Payload"/>
<file:outbound-endpoint path="/Users/premkumar/Desktop" outputPattern="Results.txt" responseTimeout="10000" mimeType="text/plain" doc:name="Save 2 File"/>
I want to move files to the location D:\target dynamically, based on the query select filepath from emp where status='y':
This is my table:
emp_Name   File Path   File Name   Status
ABC        D:\emp      abc.txt     y
xyz        D:\emp      xyz.txt     y
bcs        D:\emp      bcs.txt     n
Following is my source code:
<flow name="testdbFlow1">
<http:listener config-ref="HTTP_Listener_Configuration"
path="/" doc:name="HTTP" />
<jdbc-ee:outbound-endpoint queryKey="allEmps"
queryTimeout="-1" connector-ref="JDBCConnector" exchange-pattern="request-response"
doc:name="Database" />
<foreach doc:name="Foreach">
<choice doc:name="Choice">
<when expression="#[payload.status == 'Y']">
<processor-chain doc:name="Processor Chain">
<set-variable variableName="filePath" value="#[payload.filepath]"
doc:name="Variable" />
<set-variable variableName="filename" value="#[payload.filename]"
doc:name="Variable" />
<logger message="#[filePath]" level="INFO"
doc:name="Logger" />
<logger message="#[filename]"
level="INFO" doc:name="Logger" />
<file:inbound-endpoint path="#[filePath]" name="input" doc:name="File"
pollingFrequency="12000" responseTimeout="10000"> <file:filename-wildcard-filter
pattern="#[filename]" /> </file:inbound-endpoint>
<file:outbound-endpoint name="output" path="D:\target" doc:name="File"/>
</processor-chain>
</when>
<otherwise>
<processor-chain doc:name="Processor Chain">
<set-variable variableName="filePath" value="#[payload.filepath]"
doc:name="Variable" />
<set-variable variableName="filename" value="#[payload.filename]"
doc:name="Variable" />
<logger message="#[filePath]" level="INFO"
doc:name="Logger" />
<logger message="#[filename]"
level="INFO" doc:name="Logger" />
</processor-chain>
</otherwise>
</choice>
</foreach>
</flow>
But it's not working.
An inbound endpoint doesn't work out of the box in the middle of a flow. I would suggest a simpler solution using Groovy, because you know exactly which file you need to consume.
Instead of using file:inbound, create an input stream manually:
<set-payload value="#[groovy: new java.io.FileInputStream(new java.io.File(filename))]"/>
You will have to delete the file with another Groovy script after it is consumed.
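A minimal sketch of such a script (assuming the same filePath and filename flow variables set earlier in your flow):
<scripting:component doc:name="Delete consumed file">
    <scripting:script engine="Groovy"><![CDATA[
        // Delete the file once it has been consumed; filePath and filename
        // are the flow variables set earlier in the flow (assumed names).
        new File(flowVars.filePath as String, flowVars.filename as String).delete()
        return payload
    ]]></scripting:script>
</scripting:component>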
<flow name="listobjects">
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" path="listobjects" contentType="text/plain" doc:name="HTTP"/>
<s3:list-objects config-ref="Amazon_S3" bucketName="demo" doc:name="Amazon S3" maxKeys="5" />
<!-- <payload-type-filter expectedType="java.util.List" doc:name="Payload"/> -->
<foreach collection="#[payload]" doc:name="For Each">
<!-- <foreach doc:name="For Each file"> -->
<logger message=" inside foreach...... #[payload.getKey()] ...." level="INFO" doc:name="Logger" />
<s3:get-object-content config-ref="Amazon_S3" bucketName="demo" key="#[payload.getKey()]" doc:name="Amazon S3"/>
<object-to-byte-array-transformer/>
<file:outbound-endpoint path="C:\output" responseTimeout="10000" doc:name="File" outputPattern="#[payload.getKey()] "></file:outbound-endpoint>
</foreach>
</flow>
I have a bucket called demo.
In that bucket I have 3 PDF files. I want to download all of them and put them in the C:\output folder.
I hit my URL, http://localhost:8081/listobjects.
But I got this error:
Could not find a transformer to transform "CollectionDataType{type=org.mule.module.s3.simpleapi.SimpleAmazonS3AmazonDevKitImpl$S3ObjectSummaryIterable, itemType=com.amazonaws.services.s3.model.S3ObjectSummary, mimeType='/'}" to "SimpleDataType{type=org.mule.api.transport.OutputHandler, mimeType='/'}". (org.mule.api.transformer.TransformerException) (org.mule.api.transformer.TransformerException). Message payload is of type: SimpleAmazonS3AmazonDevKitImpl$S3ObjectSummaryIterable
The error occurs because after the foreach processor the payload is an instance of an S3 class, and you haven't specified any Content-Type to return. So Mule tries to transform the S3 instance to the default SimpleDataType and fails.
One way to solve it is simply to add something like
<set-property propertyName="Content-Type" value="application/json" doc:name="Content-Type" />
<set-payload value="{'result': 'ok'}"/>
at the end to make it explicit.
Also note that in your flow after running:
<object-to-byte-array-transformer/>
the S3 payload is gone, so #[payload.getKey()] will fail in the next processor:
<file:outbound-endpoint path="C:\output" responseTimeout="10000" doc:name="File" outputPattern="#[payload.getKey()] "></file:outbound-endpoint>
I've run this without problems:
<flow name="listobjects">
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8083" path="listobjects" contentType="text/plain" doc:name="HTTP"/>
<s3:list-objects config-ref="Amazon_S3" bucketName="mule_test" doc:name="Amazon S3" maxKeys="5" />
<foreach collection="#[payload]" doc:name="For Each">
<logger message=" inside foreach...... #[payload.getKey()] ...." level="INFO" doc:name="Logger" />
<set-variable variableName="fileKey" value="#[payload.getKey()]" doc:name="Variable" />
<s3:get-object-content config-ref="Amazon_S3" bucketName="#[payload.getBucketName()]" key="#[payload.getKey()]" doc:name="Amazon S3"/>
<object-to-byte-array-transformer/>
<file:outbound-endpoint path="/tmp" responseTimeout="10000" doc:name="File" outputPattern="#[flowVars.fileKey] "></file:outbound-endpoint>
</foreach>
<set-property propertyName="Content-Type" value="application/json" doc:name="Content-Type" />
<set-payload value="{'result': 'ok'}"/>
</flow>
I was exploring Mule scatter-gather for parallel processing of flows in a fork-and-join pattern. My Mule config is as follows:
<flow name="fork" doc:name="fork">
<http:inbound-endpoint host="localhost" port="8090" path="mainPath" exchange-pattern="request-response" doc:name="HTTP"/>
<set-property propertyName="MULE_CORRELATION_GROUP_SIZE" value="2" doc:name="Property"/>
<!-- <all enableCorrelation="IF_NOT_SET" doc:name="All"> -->
<scatter-gather timeout="6000">
<processor-chain>
<async doc:name="Async">
<set-property propertyName="MULE_CORRELATION_SEQUENCE" value="1" doc:name="Property"/>
<flow-ref name="Flow1" doc:name="Flow Reference"/>
</async>
</processor-chain>
<async doc:name="Async">
<set-property propertyName="MULE_CORRELATION_SEQUENCE" value="2" doc:name="Property"/>
<flow-ref name="Flow2" doc:name="Flow Reference"/>
</async>
<!-- </all> -->
</scatter-gather>
<logger message="Main Flow" level="INFO" doc:name="Logger"/>
</flow>
<sub-flow name="Flow1" doc:name="Flow1">
<logger level="INFO" message="Flow1: processing started" doc:name="Logger"/>
<set-payload value="Flow1 Payload" doc:name="Set Payload"/>
<!-- Transformation payload -->
<logger level="INFO" message="Flow1: processing finished" doc:name="Logger"/>
<flow-ref name="Join-Flow" doc:name="Flow Reference"/>
</sub-flow>
<sub-flow name="Flow2" doc:name="Flow2">
<logger level="INFO" message="Flow2: processing started" doc:name="Logger"/>
<set-payload value="Flow2 Payload" doc:name="Set Payload"/>
<scripting:component doc:name="Groovy">
<scripting:script engine="Groovy"><![CDATA[sleep(2000); return message.payload;]]></scripting:script>
</scripting:component>
<!-- Transformation payload -->
<logger level="INFO" message="Flow2: processing finished" doc:name="Logger"/>
<flow-ref name="Join-Flow" doc:name="Flow Reference"/>
</sub-flow>
<sub-flow name="Join-Flow" doc:name="Join-Flow">
<collection-aggregator timeout="6000" failOnTimeout="true" doc:name="Collection Aggregator"/>
<combine-collections-transformer doc:name="Combine Collections"/>
<logger level="INFO" message="Combined Payload: #[message.payload]" doc:name="Logger"/>
<set-payload value="Soap XML Response" doc:name="Set Payload"/>
</sub-flow>
The scatter-gather is throwing the following exception, even though I am getting the aggregated payload in the logger:
Exception stack is:
1. null (java.lang.UnsupportedOperationException)
org.mule.VoidMuleEvent:50 (null)
2. null (java.lang.UnsupportedOperationException). Message payload is of type: String (org.mule.api.MessagingException)
org.mule.execution.ExceptionToMessagingExceptionExecutionInterceptor:32 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/MessagingException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.lang.UnsupportedOperationException
at org.mule.VoidMuleEvent.getMessage(VoidMuleEvent.java:50)
at org.mule.api.routing.AggregationContext$1.evaluate(AggregationContext.java:41)
at org.apache.commons.collections.CollectionUtils.select(CollectionUtils.java:517)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
INFO 2014-08-17 18:04:52,008 [[try1].fork.2.02] org.mule.api.processor.LoggerMessageProcessor: Flow2: processing started
INFO 2014-08-17 18:04:52,016 [[try1].fork.1.02] org.mule.api.processor.LoggerMessageProcessor: Flow1: processing started
INFO 2014-08-17 18:04:52,017 [[try1].fork.1.02] org.mule.api.processor.LoggerMessageProcessor: Flow1: processing finished
INFO 2014-08-17 18:04:54,309 [[try1].fork.2.02] org.mule.api.processor.LoggerMessageProcessor: Flow2: processing finished
INFO 2014-08-17 18:04:54,347 [[try1].fork.2.02] org.mule.api.processor.LoggerMessageProcessor: Combined Payload: [Flow1 Payload, Flow2 Payload]
One interesting fact: if I use the <all> router instead of scatter-gather, I don't get this exception. But per the Mule documentation, the <all> router processes the routes sequentially, not in parallel, which is why I chose scatter-gather over it. Still, I am not sure how to handle the above exception thrown by the scatter-gather. Is there any way out?
UPDATED FLOW:
<flow name="fork" doc:name="fork">
<http:inbound-endpoint host="localhost" port="8090" path="mainPath" exchange-pattern="request-response" doc:name="HTTP"/>
<scatter-gather timeout="6000">
<processor-chain>
<flow-ref name="Flow1" doc:name="Flow Reference"/>
</processor-chain>
<flow-ref name="Flow2" doc:name="Flow Reference"/>
</scatter-gather>
<logger message="Main Flow" level="INFO" doc:name="Logger"/>
</flow>
<sub-flow name="Flow1" doc:name="Flow1">
<logger level="INFO" message="Flow1: processing started" doc:name="Logger"/>
<set-payload value="Flow1 Payload" doc:name="Set Payload"/>
<!-- Transformation payload -->
<logger level="INFO" message="Flow1: processing finished" doc:name="Logger"/>
<flow-ref name="Join-Flow" doc:name="Flow Reference"/>
</sub-flow>
<sub-flow name="Flow2" doc:name="Flow2">
<logger level="INFO" message="Flow2: processing started" doc:name="Logger"/>
<set-payload value="Flow2 Payload" doc:name="Set Payload"/>
<scripting:component doc:name="Groovy">
<scripting:script engine="Groovy"><![CDATA[sleep(2000); return message.payload;]]>
</scripting:script>
</scripting:component>
<!-- Transformation payload -->
<logger level="INFO" message="Flow2: processing finished" doc:name="Logger"/>
<flow-ref name="Join-Flow" doc:name="Flow Reference"/>
</sub-flow>
<sub-flow name="Join-Flow" doc:name="Join-Flow">
<collection-aggregator timeout="6000" failOnTimeout="true" doc:name="Collection Aggregator"/>
<combine-collections-transformer doc:name="Combine Collections"/>
<logger level="INFO" message="Combined Payload: #[message.payload]" doc:name="Logger"/>
<set-payload value="Soap XML Response" doc:name="Set Payload"/>
</sub-flow>
EXCEPTION:
[try1].ScatterGatherWorkManager.02] org.mule.api.processor.LoggerMessageProcessor: Flow2: processing started
INFO 2014-08-18 13:34:24,361 [[try1].ScatterGatherWorkManager.01] org.mule.api.processor.LoggerMessageProcessor: Flow1: processing started
INFO 2014-08-18 13:34:24,364 [[try1].ScatterGatherWorkManager.01] org.mule.api.processor.LoggerMessageProcessor: Flow1: processing finished
WARN 2014-08-18 13:34:24,366 [[try1].ScatterGatherWorkManager.01] org.mule.routing.correlation.CollectionCorrelatorCallback: Correlation Group Size not set, but correlation aggregator is being used. Message is being forwarded as is
INFO 2014-08-18 13:34:24,401 [[try1].ScatterGatherWorkManager.01] org.mule.api.processor.LoggerMessageProcessor: Combined Payload: [Flow1 Payload]
INFO 2014-08-18 13:34:26,615 [[try1].ScatterGatherWorkManager.02] org.mule.api.processor.LoggerMessageProcessor: Flow2: processing finished
ERROR 2014-08-18 13:34:26,625 [[try1].connector.http.mule.default.receiver.03] org.mule.exception.DefaultMessagingExceptionStrategy:
********************************************************************************
Message : null (java.lang.NullPointerException). Message payload is of type: String
Code : MULE_ERROR--2
--------------------------------------------------------------------------------
Exception stack is:
1. null (java.lang.NullPointerException)
org.mule.api.routing.AggregationContext$1:41 (null)
2. null (java.lang.NullPointerException). Message payload is of type: String (org.mule.api.MessagingException)
org.mule.execution.ExceptionToMessagingExceptionExecutionInterceptor:32 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/MessagingException.html)
--------------------------------------------------------------------------------
Root Exception stack trace:
java.lang.NullPointerException
at org.mule.api.routing.AggregationContext$1.evaluate(AggregationContext.java:41)
at org.apache.commons.collections.CollectionUtils.select(CollectionUtils.java:517)
at org.apache.commons.collections.CollectionUtils.select(CollectionUtils.java:498)
+ 3 more (set debug level logging or '-Dmule.verbose.exceptions=true' for everything)
********************************************************************************
Now, as you can see, after removing the set-property processors it is not combining the payloads of both flows; it only gets the payload of the first flow: Combined Payload: [Flow1 Payload].
Remove the async scopes around the message processors in the scatter-gather, which already does the parallelization for you, so you don't have to.
EDIT: Also remove the set-property message processors that deal with the MULE_CORRELATION properties. The scatter-gather is supposed to do that for you.
EDIT2: You can remove the processor-chain around the single flow-ref: it's useless.
Also, there seems to be a deep misunderstanding about what scatter-gather does: you forcibly make its sub-flow responses converge on a single Join-Flow where you aggregate them, effectively bypassing and re-implementing everything scatter-gather already does for you.
Remove the flow-refs towards Join-Flow, remove Join-Flow and just put its processing logic (not the aggregator) after the scatter-gather.
So, as per David's suggestion, the final working solution is:
<flow name="fork" doc:name="fork">
<http:inbound-endpoint host="localhost" port="8090" path="scattergather" exchange-pattern="request-response" doc:name="HTTP"/>
<scatter-gather timeout="6000">
<!-- Calling Flow1 -->
<flow-ref name="Flow1" doc:name="Flow Reference"/>
<!-- Calling Flow2 -->
<flow-ref name="Flow2" doc:name="Flow Reference"/>
</scatter-gather>
<!-- <collection-aggregator timeout="6000" failOnTimeout="true" doc:name="Collection Aggregator"/>
<combine-collections-transformer doc:name="Combine Collections"/> -->
<logger level="INFO" message="Combined Payload: #[message.payload]" doc:name="Logger"/>
<logger level="INFO" message="Payload1: #[message.payload[0]] and Payload2: #[message.payload[1]] " doc:name="Logger"/>
<set-payload value="Done Merging ...!!!" doc:name="Set Payload"/>
<logger message="Back to Main Flow" level="INFO" doc:name="Logger"/>
</flow>
<sub-flow name="Flow1" doc:name="Flow1">
<logger level="INFO" message="Flow1: processing started" doc:name="Logger"/>
<set-payload value="Flow1 Payload" doc:name="Set Payload"/>
<logger level="INFO" message="Flow1: processing finished" doc:name="Logger"/>
</sub-flow>
<sub-flow name="Flow2" doc:name="Flow2">
<logger level="INFO" message="Flow2: processing started" doc:name="Logger"/>
<!-- Sleep function to delay the flow2 payload -->
<set-payload value="Flow2 Payload" doc:name="Set Payload"/>
<scripting:component doc:name="Groovy">
<scripting:script engine="Groovy"><![CDATA[sleep(3000); return message.payload;]]></scripting:script>
</scripting:component>
<logger level="INFO" message="Flow2: processing finished" doc:name="Logger"/>
</sub-flow>