I have a JMS inbound endpoint that subscribes to a topic. Once a message arrives, a transformer splits the payload into a list of records, which are then inserted into a database using batch commit. If there is any error while inserting into the database, I want to roll back the entire payload to JMS. How can I achieve this using transactions?
<batch:job name="lockboxBatch" max-failed-records="-1">
<batch:input>
<jms:inbound-endpoint topic="lockbox" connector-ref="Active_MQ1" doc:name="JMS">
</jms:inbound-endpoint>
<custom-transformer class="transformers.PaymentsTransformer" doc:name="Java"/>
<logger level="INFO" doc:name="Logger"/>
</batch:input>
<batch:process-records>
<batch:step name="Batch_Step">
<expression-component doc:name="Expression"><![CDATA[payload[2].batchAmount='hghghfghfhgf']]></expression-component>
<batch:commit size="4" doc:name="Batch Commit">
<db:insert config-ref="Oracle_Configuration" doc:name="Database" bulkMode="true" >
<db:parameterized-query><![CDATA[INSERT INTO TIB_INT_AR_PAYMENT_IFACE (TRANSMISSION_REQUEST_ID,DESTINATION_ACCOUNT,ORIGINATION,TRANSMISSION_RECORD_COUNT,TRANSMISSION_AMOUNT,LOCKBOX_NUMBER,LOCKBOX_BATCH_COUNT,LOCKBOX_RECORD_COUNT,LOCKBOX_AMOUNT,BATCH_NAME,BATCH_AMOUNT,BATCH_RECORD_COUNT,ITEM_NUMBER,CURRENCY_CODE,REMITTANCE_AMOUNT,TRANSIT_ROUTING_NUMBER,ACCOUNT,CHECK_NUMBER,CUSTOMER_NUMBER,OVERFLOW_INDICATOR,OVERFLOW_SEQUENCE,INVOICE1,AMOUNT_APPLIED1) VALUES (#[payload.?transmissiosnRequestID],#[payload.?destinastionAccount],#[payload.?origination],#[payload.?transmissionSrecordCount],#[payload.?transmisssionAmount],#[payload.lockboxNumber],#[payload.lockboxBatchCount],#[payload.lockboxRecordCount],#[payload.lockboxAmount],#[payload.batchName],#[payload.batchAmount],#[payload.batchRecordCount],#[payload.itemNumber],#[payload.currencyCode],#[payload.remittanceAmount],#[payload.transitRoutingNumber],#[payload.account],#[payload.checkNumber],#[payload.customerNumber],#[payload.overflowIndicator],#[payload.overflowSequence],#[payload.invoice1],#[payload.amountApplied1])]]></db:parameterized-query>
</db:insert>
</batch:commit>
</batch:step>
<batch:step name="Batch_Step1" accept-policy="ONLY_FAILURES">
<set-payload value="#[getStepExceptions()]" doc:name="Set Payload"/>
<foreach collection="#[payload.values()]" doc:name="For Each">
<jms:outbound-endpoint queue="Invalid_Transmission" connector-ref="Active_MQ" doc:name="JMS"/>
</foreach>
</batch:step>
</batch:process-records>
<batch:on-complete>
<logger message="Completed the insert" level="INFO" doc:name="Logger"/>
</batch:on-complete>
</batch:job>
You can try two things.
1. Set max-failed-records="0"; this will roll back in case of any failure in the batch step.
2. Enclose the DB connector inside a transactional scope, and handle the exception scenario as required.
<transactional action="ALWAYS_BEGIN" doc:name="Transactional">
<db:insert...>........</db:insert>
</transactional>
Please consider the updated code below; you can adjust it to suit your requirements.
<batch:job name="lockboxBatch" max-failed-records="0">
<batch:input>
<jms:inbound-endpoint topic="lockbox" connector-ref="Active_MQ1" doc:name="JMS">
</jms:inbound-endpoint>
<custom-transformer class="transformers.PaymentsTransformer" doc:name="Java"/>
<logger level="INFO" doc:name="Logger"/>
</batch:input>
<batch:process-records>
<batch:step name="Batch_Step">
<expression-component doc:name="Expression"><![CDATA[payload[2].batchAmount='hghghfghfhgf']]></expression-component>
<batch:commit size="4" doc:name="Batch Commit">
<transactional action="ALWAYS_BEGIN" doc:name="Transactional">
<db:insert config-ref="Oracle_Configuration" bulkMode="true" doc:name="Database">
<db:parameterized-query><![CDATA[INSERT INTO TIB_INT_AR_PAYMENT_IFACE (TRANSMISSION_REQUEST_ID,DESTINATION_ACCOUNT,ORIGINATION,TRANSMISSION_RECORD_COUNT,TRANSMISSION_AMOUNT,LOCKBOX_NUMBER,LOCKBOX_BATCH_COUNT,LOCKBOX_RECORD_COUNT,LOCKBOX_AMOUNT,BATCH_NAME,BATCH_AMOUNT,BATCH_RECORD_COUNT,ITEM_NUMBER,CURRENCY_CODE,REMITTANCE_AMOUNT,TRANSIT_ROUTING_NUMBER,ACCOUNT,CHECK_NUMBER,CUSTOMER_NUMBER,OVERFLOW_INDICATOR,OVERFLOW_SEQUENCE,INVOICE1,AMOUNT_APPLIED1) VALUES (#[payload.?transmissiosnRequestID],#[payload.?destinastionAccount],#[payload.?origination],#[payload.?transmissionSrecordCount],#[payload.?transmisssionAmount],#[payload.lockboxNumber],#[payload.lockboxBatchCount],#[payload.lockboxRecordCount],#[payload.lockboxAmount],#[payload.batchName],#[payload.batchAmount],#[payload.batchRecordCount],#[payload.itemNumber],#[payload.currencyCode],#[payload.remittanceAmount],#[payload.transitRoutingNumber],#[payload.account],#[payload.checkNumber],#[payload.customerNumber],#[payload.overflowIndicator],#[payload.overflowSequence],#[payload.invoice1],#[payload.amountApplied1])]]></db:parameterized-query>
</db:insert>
</transactional>
</batch:commit>
</batch:step>
<batch:step name="Batch_Step1" accept-policy="ONLY_FAILURES">
<set-payload value="#[getStepExceptions()]" doc:name="Set Payload"/>
<foreach collection="#[payload.values()]" doc:name="For Each">
<jms:outbound-endpoint queue="Invalid_Transmission" connector-ref="Active_MQ" doc:name="JMS"/>
</foreach>
</batch:step>
</batch:process-records>
<batch:on-complete>
<logger message="Completed the insert" level="INFO" doc:name="Logger"/>
</batch:on-complete>
</batch:job>
I have this Mule flow, which polls a source folder to read a text file that I add as an attachment and send through a REST call. I am trying to read the same attachment in a different flow, but the inbound attachment comes back as null. Please have a look at the code and help me with this.
<flow name="createAttachment" doc:description="Reading file and sending as attachment.">
<file:inbound-endpoint path="src/test/resources/in/attachment/" responseTimeout="10000" doc:name="File"/>
<file:file-to-byte-array-transformer doc:name="File to Byte Array"/>
<set-attachment attachmentName="#[originalFilename]" value="#[payload]" contentType="multipart/form-data" doc:name="Attachment" />
<http:request config-ref="HTTP_Request_Configuration" path="attachment/excel" method="POST" doc:name="HTTP"/>
</flow>
<flow name="readAttachment">
<http:listener config-ref="HTTP_Listener_Configuration" path="attachment/excel" allowedMethods="POST" parseRequest="false" />
<set-payload value="#[message.inboundAttachments['myattachment.txt']]" doc:name="Retrieve Attachments"/>
<set-payload value="#[payload.getInputStream() ]" doc:name="Get Inputstream from Payload"/>
<file:outbound-endpoint path="src/test/resources/out/attachment" responseTimeout="10000" doc:name="File" outputPattern="#[server.dateTime.toString()].pdf"/>
</flow>
I used the following:
<flow name="readAttachment">
<http:listener config-ref="HTTP_Listener_Configuration"
path="/" allowedMethods="POST" parseRequest="false" doc:name="HTTP" />
<byte-array-to-string-transformer doc:name="Byte Array to String"/>
<logger message="#[payload]" level="INFO" doc:name="Logger" /><file:outbound-endpoint path="src/test/resources" connector-ref="File" responseTimeout="10000" doc:name="File"/>
</flow>
When the attachment was received, it was automatically parsed into the payload, so it was just a case of turning the byte array into a string.
I hope this helps.
The problem statement is to do multiple things in parallel, aggregate the responses, and store the result in a file.
Link to the Mule flow image as in Studio: [image]
In this flow, I was trying to set two constant strings in the two branches of the scatter-gather, aggregate them, and store the result in a file. I tried overwriting the payload with a Set Payload containing "my response", and I am expecting "my response" as the content of the file. But instead the file content is:
¨Ìsr)java.util.concurrent.CopyOnWriteArrayListx]ü’F´ê√xpwtmsg 1tmsg 2x
I debugged, and the payload at the file endpoint was "my response". How and why is the collection getting written into the file?
Can anyone help me get this working?
Following is the XML:
<flow name="mule-assignFlow21123">
<quartz:inbound-endpoint jobName="dummyflow" repeatInterval="10000" responseTimeout="10000" doc:name="Quartz">
<quartz:event-generator-job/>
</quartz:inbound-endpoint>
<scatter-gather doc:name="Scatter-Gather1" >
<threading-profile maxThreadsActive="1" poolExhaustedAction="RUN"/>
<processor-chain>
<set-payload value="msg 1" doc:name="Set Payload"/>
<logger level="INFO" doc:name="Logger"/>
</processor-chain>
<processor-chain>
<set-payload value="msg 2" doc:name="Set Payload"/>
<logger level="INFO" doc:name="Logger"/>
</processor-chain>
</scatter-gather>
<set-payload value="my response" doc:name="Set Payload"/>
<file:outbound-endpoint path="/Users/premkumar/Desktop" outputPattern="Results.txt" responseTimeout="10000" mimeType="text/plain" doc:name="Save 2 File"/>
</flow>
The flow will automatically determine its processingStrategy from the in-flight event, which will be asynchronous because of the Quartz endpoint, so the file endpoint will also dispatch asynchronously.
Instead, explicitly set the flow's processingStrategy to synchronous:
<flow name="mule-assignFlow21123" processingStrategy="synchronous">
<quartz:inbound-endpoint jobName="dummyflow" repeatInterval="10000" responseTimeout="10000" doc:name="Quartz">
<quartz:event-generator-job/>
</quartz:inbound-endpoint>
<scatter-gather doc:name="Scatter-Gather1" >
<threading-profile maxThreadsActive="1" poolExhaustedAction="RUN"/>
<processor-chain>
<set-payload value="msg 1" doc:name="Set Payload"/>
<logger level="INFO" doc:name="Logger"/>
</processor-chain>
<processor-chain>
<set-payload value="msg 2" doc:name="Set Payload"/>
<logger level="INFO" doc:name="Logger"/>
</processor-chain>
</scatter-gather>
<set-payload value="my response" doc:name="Set Payload"/>
<file:outbound-endpoint path="/Users/premkumar/Desktop" outputPattern="Results.txt" responseTimeout="10000" mimeType="text/plain" doc:name="Save 2 File"/>
<flow name="listobjects">
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8081" path="listobjects" contentType="text/plain" doc:name="HTTP"/>
<s3:list-objects config-ref="Amazon_S3" bucketName="demo" doc:name="Amazon S3" maxKeys="5" />
<!-- <payload-type-filter expectedType="java.util.List" doc:name="Payload"/> -->
<foreach collection="#[payload]" doc:name="For Each">
<!-- <foreach doc:name="For Each file"> -->
<logger message=" inside foreach...... #[payload.getKey()] ...." level="INFO" doc:name="Logger" />
<s3:get-object-content config-ref="Amazon_S3" bucketName="demo" key="#[payload.getKey()]" doc:name="Amazon S3"/>
<object-to-byte-array-transformer/>
<file:outbound-endpoint path="C:\output" responseTimeout="10000" doc:name="File" outputPattern="#[payload.getKey()] "></file:outbound-endpoint>
</foreach>
</flow>
I have a bucket named demo. In that bucket I have 3 PDF files. I want to download all of the files and put them in the c:\output folder.
I hit my URL, http://localhost:8081/listobjects, but I got the error:
Could not find a transformer to transform "CollectionDataType{type=org.mule.module.s3.simpleapi.SimpleAmazonS3AmazonDevKitImpl$S3ObjectSummaryIterable, itemType=com.amazonaws.services.s3.model.S3ObjectSummary, mimeType='/'}" to "SimpleDataType{type=org.mule.api.transport.OutputHandler, mimeType='/'}". (org.mule.api.transformer.TransformerException) (org.mule.api.transformer.TransformerException). Message payload is of type: SimpleAmazonS3AmazonDevKitImpl$S3ObjectSummaryIterable
The error occurs because after the foreach processor the payload is an instance of an S3 class, and you haven't specified any Content-Type to return. So Mule tries to transform the S3 instance to the default SimpleDataType and fails.
One way to solve it is simply to add something like
<set-property propertyName="Content-Type" value="application/json" doc:name="Content-Type" />
<set-payload value="{'result': 'ok'}"/>
at the end to make it explicit.
Also note that in your flow after running:
<object-to-byte-array-transformer/>
the S3 payload is gone, so #[payload.getKey()] will fail in the next processor:
<file:outbound-endpoint path="C:\output" responseTimeout="10000" doc:name="File" outputPattern="#[payload.getKey()] "></file:outbound-endpoint>
I've run this without problems:
<flow name="listobjects">
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8083" path="listobjects" contentType="text/plain" doc:name="HTTP"/>
<s3:list-objects config-ref="Amazon_S3" bucketName="mule_test" doc:name="Amazon S3" maxKeys="5" />
<foreach collection="#[payload]" doc:name="For Each">
<logger message=" inside foreach...... #[payload.getKey()] ...." level="INFO" doc:name="Logger" />
<set-variable variableName="fileKey" value="#[payload.getKey()]" doc:name="Variable" />
<s3:get-object-content config-ref="Amazon_S3" bucketName="#[payload.getBucketName()]" key="#[payload.getKey()]" doc:name="Amazon S3"/>
<object-to-byte-array-transformer/>
<file:outbound-endpoint path="/tmp" responseTimeout="10000" doc:name="File" outputPattern="#[flowVars.fileKey] "></file:outbound-endpoint>
</foreach>
<set-property propertyName="Content-Type" value="application/json" doc:name="Content-Type" />
<set-payload value="{'result': 'ok'}"/>
</flow>
How can I get Mule to download just attachments using pop3? I tried following the example at http://www.mulesoft.org/documentation/display/current/POP3+Transport+Reference as closely as possible, but I keep getting two files: one with the e-mail body and one with the actual attachment. Here's the flow I'm using:
<pop3:connector name="pop3Connector" checkFrequency="5000" doc:name="POP3"/>
<expression-transformer name="returnAttachments" doc:name="Expression">
<return-argument evaluator="attachments-list" expression="*" />
</expression-transformer>
<file:connector name="fileName" doc:name="File">
<file:expression-filename-parser/>
</file:connector>
<flow name="incoming-orders" doc:name="incoming-orders">
<pop3s:inbound-endpoint host="pop.gmail.com" port="995" user="myuser%40mydomain" password="mypassword" responseTimeout="10000" doc:name="POP3" transformer-refs="returnAttachments" />
<collection-splitter doc:name="Collection Splitter"/>
<file:outbound-endpoint path="C:/popthreetest" outputPattern="#[function:datestamp].dat" doc:name="File">
<expression-transformer>
<return-argument expression="payload.inputStream" evaluator="groovy" />
</expression-transformer>
</file:outbound-endpoint>
</flow>
Thanks!
Edit:
Here's the final flow based on David Dossot's answer. I have an added complexity in that I'm reading in a JSON file that specifies attachment names and arbitrary destinations for each attachment. I included the replaceAll because I was getting an error about an invalid character in the path file:///C:\.
<pop3:connector name="pop3Connector" checkFrequency="5000" doc:name="POP3"/>
<expression-transformer name="returnAttachments" doc:name="Expression">
<return-argument evaluator="attachments-list" expression="*" />
</expression-transformer>
<file:connector name="DestinationsFileConnector" doc:name="File" autoDelete="false" streaming="true" validateConnections="true">
<file:expression-filename-parser/>
</file:connector>
<file:endpoint path="C:/popthreetest/" name="DestinationsFileEndpoint" responseTimeout="10000" doc:name="File" connector-ref="DestinationsFileConnector">
<file:filename-regex-filter pattern="destinations\.json" caseSensitive="true"/>
</file:endpoint>
<mulerequester:config name="DestinationsMuleRequestorConnector" doc:name="Mule Requester"/>
<flow name="incoming-orders" doc:name="incoming-orders">
<pop3s:inbound-endpoint host="pop.gmail.com" port="995" user="myusername%40mydomain" password="mypassword" responseTimeout="10000" doc:name="POP3" transformer-refs="returnAttachments" />
<collection-splitter doc:name="Collection Splitter"/>
<set-variable variableName="MessagePart" value="#[message.payload]" doc:name="MessagePart"/>
<logger message="Got #[message.payload.dataSource.name]." level="INFO" doc:name="Logger"/>
<mulerequester:request config-ref="DestinationsMuleRequestorConnector" resource="DestinationsFileEndpoint" doc:name="GetDestinations"/>
<json:json-to-object-transformer returnClass="java.util.HashMap" doc:name="JSON to Object"/>
<choice doc:name="Choice">
<when expression="#[message.payload.get(MessagePart.dataSource.name) != null]">
<set-payload value="#[message.payload.get(MessagePart.dataSource.name)]" doc:name="Destination List"/>
<foreach doc:name="For Each">
<logger message="Saving to #[message.payload]." level="INFO" doc:name="Logger"/>
<set-variable variableName="DestinationPath" value="#[java.nio.file.Paths.get(message.payload).getParent().toString().replaceAll('\\\\', '/')]" doc:name="DestinationPath"/>
<set-variable variableName="DestinationPattern" value="#[java.nio.file.Paths.get(message.payload).getFileName()]" doc:name="DestinationPattern"/>
<logger message="Saving to #[DestinationPattern] in #[DestinationPath]." level="INFO" doc:name="Logger"/>
<set-payload value="#[MessagePart]" doc:name="MessagePart"/>
<file:outbound-endpoint path="#[DestinationPath]" outputPattern="#[DestinationPattern]" doc:name="File">
<expression-transformer>
<return-argument expression="payload.inputStream" evaluator="groovy"/>
</expression-transformer>
</file:outbound-endpoint>
</foreach>
</when>
<otherwise>
<logger message="Did not find destination(s) for #[MessagePart.dataSource.name]." level="INFO" doc:name="Logger"/>
</otherwise>
</choice>
</flow>
For completeness, here's the JSON file:
{
"attachment-name.txt": [
"C:/popthreetest/firstDestination.txt"
, "C:/path/to/secondDestination.txt"
, "C:\\popthreetest\\destination\\using-backslashes.txt"
]
}
An email with an attachment is a multi-part email in which the attachments are parts, but so is the body. Hence Mule can only download "the whole package" and hand the different parts to you.
You should be able to filter out the body part after the collection-splitter based on the name of the part.
Alternatively, you could use a MEL expression to drop the first element of the collection, which is typically the body (Mule uses this technique internally to set the message payload: https://github.com/mulesoft/mule/blob/mule-3.x/transports/email/src/main/java/org/mule/transport/email/transformers/EmailMessageToString.java#L50).
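As a minimal sketch of the second approach (untested, and assuming the attachments-list payload is a java.util.List whose first element is the body part), you could drop that element right before the collection-splitter in the incoming-orders flow:
<!-- Sketch only: drop the first part (typically the body) so that only the
     actual attachments reach the collection-splitter. Assumes the payload
     implements java.util.List. -->
<expression-transformer expression="#[message.payload.subList(1, message.payload.size())]" doc:name="Drop body part"/>
<collection-splitter doc:name="Collection Splitter"/>
For the name-based variant, an expression-filter placed after the collection-splitter could keep only parts whose dataSource reports a file name; whether the body part really arrives without a name is something to verify against your mail server.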