I am fairly new to Mule (using 3.3.0), but I am trying what I think should be a fairly stock example.
I have a Mule config that reads a CSV file and attempts to process its lines and columns in different flows asynchronously. However, we are seeing a ConcurrentModificationException when the message is handed off to one of the async flows. I was wondering if anyone else has seen this issue and what they may have done to work around it.
java.util.ConcurrentModificationException
at org.apache.commons.collections.map.AbstractHashedMap$HashIterator.nextEntry(AbstractHashedMap.java:1113)
at org.apache.commons.collections.map.AbstractHashedMap$KeySetIterator.next(AbstractHashedMap.java:938)
at org.mule.DefaultMuleEvent.setMessage(DefaultMuleEvent.java:933)
at org.mule.DefaultMuleEvent.<init>(DefaultMuleEvent.java:318)
at org.mule.DefaultMuleEvent.<init>(DefaultMuleEvent.java:290)
at org.mule.DefaultMuleEvent.copy(DefaultMuleEvent.java:948)
<queued-asynchronous-processing-strategy poolExhaustedAction="RUN" name="commonProcessingStrategy" maxQueueSize="1000" doc:name="Queued Asynchronous Processing Strategy"/>
<file:connector name="inboundFileConnector" fileAge="1000" autoDelete="true" pollingFrequency="1000" workDirectory="C:/mule/orca/dataprovider/work"/>
<file:endpoint name="dataProviderInbound" path="C:\mule\orca\dataprovider\inbound" moveToPattern="#[function:datestamp]-#[header:originalFilename]" moveToDirectory="C:\mule\orca\dataprovider\history" connector-ref="inboundFileConnector" doc:name="Data Feed File" doc:description="new files are processed in 'work' folder, then moved to 'archive' folder"/>
<flow name="dataProviderFeedFlow">
<inbound-endpoint ref="dataProviderInbound"/>
<file:file-to-string-transformer />
<flow-ref name="dataSub"/>
</flow>
<sub-flow name="dataSub" >
<splitter expression="#[rows=org.mule.util.StringUtils.split(message.payload, '\n\r')]" />
<expression-transformer expression="#[org.mule.util.StringUtils.split(message.payload, ',')]" />
<foreach>
<flow-ref name="storageFlow" />
<flow-ref name="id" />
</foreach>
</sub-flow>
<flow name="storageFlow" processingStrategy="commonProcessingStrategy">
<logger level="INFO" message="calling the 'storageFlow' sub flow."/>
</flow>
<flow name="id" processingStrategy="commonProcessingStrategy">
<logger level="INFO" message="calling the 'id' sub flow."/>
</flow>
Here is a fixed version of the dataSub sub-flow that works fine:
<sub-flow name="dataSub">
<splitter expression="#[org.mule.util.StringUtils.split(message.payload, '\n\r')]" />
<splitter expression="#[org.mule.util.StringUtils.split(message.payload, ',')]" />
<flow-ref name="storageFlow" />
<all>
<async>
<flow-ref name="storageFlow" />
</async>
<async>
<flow-ref name="id" />
</async>
</all>
</sub-flow>
Notice that:
I use two splitter expressions instead of a splitter followed by an expression-transformer,
I use an all message processor to ensure the same payload is sent to both private flows,
I wrap the flow-refs in async message processors, otherwise the invocation fails because the private flows use an asynchronous processing strategy while all forces synchronous processing. Since all sends each route its own copy of the message, the two flows should no longer be mutating the same message concurrently, which is the likely cause of the ConcurrentModificationException.
I have a Mule flow (mock-flow) that currently makes HTTP calls to a couple of microservices. If one of the service calls fails due to a connection exception, I have a rollback exception strategy configured to reprocess the message (by sending it to Kafka, which in turn invokes mock-flow), but the retry seems to happen indefinitely in spite of specifying a maxRedeliveryAttempts attribute. How do I limit the number of retries? Any help would be greatly appreciated.
<flow name="mock-flow">
<logger level="INFO" message="CRD::: Calling Selldown settle Micro Service***"/>
<logger message="CRD::: Response received : #[message.payload]" level="INFO" />
<http:request config-ref="tp-ins-selldown-msConfig" path="/settle"
method="POST" doc:name="HTTP">
<http:success-status-code-validator
values="200,201" />
</http:request>
<http:request parseResponse="false" config-ref="tp-ins-limits-msConfig" path="booksdlimit"
method="POST" doc:name="HTTP">
<http:success-status-code-validator values="200,201"/>
</http:request>
<choice-exception-strategy doc:name="Choice Exception Strategy">
<rollback-exception-strategy when="exception.causedBy(java.net.ConnectException)" maxRedeliveryAttempts="2" doc:name="Rollback Exception Strategy">
<logger message="Will attempt redelivery" level="INFO" doc:name="Logger" />
<vm:outbound-endpoint exchange-pattern="one-way" path="kafka.inpath" doc:name="VM" />
<on-redelivery-attempts-exceeded>
<logger message="redelivery attempt exceeded" level="INFO" doc:name="Logger" />
<logger message="Retry exhausted" level="INFO" doc:name="Logger" />
</on-redelivery-attempts-exceeded>
</rollback-exception-strategy>
</choice-exception-strategy>
</flow>
Since the maxRedeliveryAttempts attribute wasn't working (for whatever weird reason), I had to do it on my own, manually, by configuring a Mule object store with the message request as the key and a retry counter as the value.
<rollback-exception-strategy when="exception.causedBy(java.net.ConnectException)" doc:name="Rollback Exception Strategy">
<objectstore:retrieve config-ref="ObjectStore" key="#[uuid]" targetProperty="retryCounter" doc:name="Get value from ObjectStore" />
<logger level="INFO" doc:name="Logger" message="Retry counter --------------> #[retryCounter]"/>
<choice doc:name="Choice">
<when expression="#[retryCounter > 3]">
<logger message="Retry exhausted" level="INFO" doc:name="Logger" />
<objectstore:remove key="#[uuid]" config-ref="ObjectStore" ignoreNotExists="true" doc:name="Remove the key after retry exhaust" />
</when>
<otherwise>
<expression-component>
Thread.sleep(3000);
</expression-component>
<logger message="Will attempt redelivery" level="INFO" doc:name="Logger" />
<transformer ref="ObjectToString"/>
<vm:outbound-endpoint exchange-pattern="one-way" path="kafka.inpath" doc:name="VM" />
<objectstore:remove key="#[uuid]" config-ref="ObjectStore" ignoreNotExists="true" doc:name="Remove if exists" />
<objectstore:store config-ref="ObjectStore" key="#[uuid]" value-ref="#[retryCounter + 1]" doc:name="Store new value" />
</otherwise>
</choice>
</rollback-exception-strategy>
It seems that even if the maxRedeliveryAttempts attribute had worked, it might not have helped my use case: when multiple requests are triggered, there is no way to ensure that every request is retriggered only 3 times by relying on this attribute alone (unless Mule internally uses a hash code for every request to determine whether it has already been retriggered or not).
Also, flow variables and session variables didn't work in my case, since the retrigger point was a Kafka queue (which wasn't invoked directly by flow-ref), hence I had to use an object store.
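To make the "message request as the key" idea concrete, here is a minimal sketch of the same counter pattern keyed by the request payload itself rather than by a freshly generated value. It assumes the payload is a String at this point and that the ObjectStore connector in use supports defaultValue-ref on retrieve; neither assumption comes from the original flow.
<!-- Sketch only: key the retry counter by the request payload (assumed to be a String),
     seeding it with 0 on the first failure via defaultValue-ref -->
<objectstore:retrieve config-ref="ObjectStore" key="#[message.payload]"
                      defaultValue-ref="#[0]" targetProperty="retryCounter"/>
<choice>
    <when expression="#[retryCounter > 3]">
        <logger message="Retry exhausted" level="INFO"/>
        <objectstore:remove config-ref="ObjectStore" key="#[message.payload]" ignoreNotExists="true"/>
    </when>
    <otherwise>
        <objectstore:remove config-ref="ObjectStore" key="#[message.payload]" ignoreNotExists="true"/>
        <objectstore:store config-ref="ObjectStore" key="#[message.payload]" value-ref="#[retryCounter + 1]"/>
        <vm:outbound-endpoint exchange-pattern="one-way" path="kafka.inpath"/>
    </otherwise>
</choice>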
I want to develop a flow that allows me to make queries to an external system that can take a long time to return. I may have to make queries for multiple values in a list. I am using an until-successful scope to solve the problem. Unfortunately, even though the request is run several times, the failed records never get put in the dead letter queue. Here is my attempt at solving the problem:
<!-- Dead Letter Queue for exhausted attempts-->
<vm:endpoint name="DLQ" path="DLQ_VM" doc:name="VM"/>
<flow name="StartFlow" processingStrategy="synchronous">
<!--Place a list of String errors to query for on this vm -->
<vm:inbound-endpoint path="request-processing-queue" "
exchange-pattern="one-way" doc:name="VM"/>
<vm:outbound-endpoint path="reprocessing-queue"
exchange-pattern="request-response" doc:name="VM"/>
<logger level="INFO" message="Data returned is #[payload]"/>
<catch-exception-strategy>
<logger level="ERROR" message="Failure During Processing"/>
</catch-exception-strategy>
</flow>
<flow name="RetryingProcess">
<vm:inbound-endpoint name="reprocessing-vm" exchange-
pattern="request-response"
path="reprocessing-queue" doc:name="VM"/>
<foreach collection="#[payload]" doc:name="For Each">
<vm:outbound-endpoint path="by-singles-vm" exchange-
pattern="request-response"/>
</foreach>
</flow>
<flow name="query-retry">
<vm:inbound-endpoint path="by-singles-vm" exchange-
pattern="request-response" doc:name="VM"/>
<until-successful objectStore-ref="objectStore"
                  failureExpression="#[groovy:(exception && exception in com.trion.CustomException) || !(payload instanceof com.trion.QueryResult)]"
                  maxRetries="5"
                  millisBetweenRetries="300000"
                  deadLetterQueue-ref="DLQ_VM" doc:name="Until Successful">
    <vm:outbound-endpoint path="try-again-vm" exchange-pattern="request-response" doc:name="VM"/>
</until-successful>
</flow>
<flow name="GetQueryValue" >
<vm:inbound-endpoint path="try-again-vm" exchange-
pattern="request-response" doc:name="VM"/>
<flow-ref name="QueryRequest" />
</flow>
<!-- This never happens, i.e. the results are never put here even after retrying -->
<flow name="AttemptsExceededProcessing">
<inbound-endpoint ref="DLQ_VM" doc:name="Generic"/>
<logger level="DEBUG" message="Entering Final Destination Queue with
payload is #[payload]"/>
</flow>
<!-- Here I have a query to the external system... -->
<flow name="QueryRequest">
<!-- ... makes the long-running query here; returns com.trion.QueryResult -->
</flow>
</mule>
Please help!
There was no problem with the configuration. I had the millisBetweenRetries value set so small that I wasn't seeing the log messages and assumed it wasn't working.
I am trying to implement a dead letter queue on until-successful for a JDBC connector. I would like to send the payload to a queue (the dead letter queue) when until-successful fails after retrying the configured number of times. I referred to the following links:
http://blogs.mulesoft.org/meet-until-successful-store-and-forward-for-mule/
Where in the application would you define the vm:endpoint for a dlqEndpoint-ref defined in an until-successful scope?
Below is my code snippet
<vm:endpoint exchange-pattern="one-way" path="dlqChannel" name="VM" doc:name="VM"/>
The line above is my global element.
<flow...> .... <until-successful objectStore-ref="objectStore" deadLetterQueue-ref="dlqChannel" maxRetries="5" secondsBetweenRetries="60" doc:name="Until Successful" failureExpression="exception-type:java.sql.SQLException">
<jdbc-ee:outbound-endpoint exchange-pattern="request-response" queryKey="Insert Query" queryTimeout="-1" connector-ref="Database" doc:name="Database"/>
</until-successful>....</flow>
<flow name="Flow2" doc:name="Flow2">
<endpoint ref="dlqChannel" />
<logger message="DEAD DEAD DEAD LETTER LETTER LETTER #[message]" level="INFO" doc:name="Logger"/>
</flow>
At the line <endpoint ref="dlqChannel" /> I am getting a compile error that says "Reference to unknown global element: dlqChannel".
Can anyone suggest the best way to achieve this scenario?
Thanks,
Kalyan
Your endpoint is named 'VM', not 'dlqChannel'. Either change its name to dlqChannel or point your references at VM.
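For instance, a minimal sketch of the first option, keeping everything else as posted:
<!-- renamed so the existing deadLetterQueue-ref="dlqChannel" and <endpoint ref="dlqChannel"/> resolve -->
<vm:endpoint exchange-pattern="one-way" path="dlqChannel" name="dlqChannel" doc:name="VM"/>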
This issue is resolved.
Below is my code snippet.
<vm:endpoint exchange-pattern="one-way" path="dlq" name="dlqChannel" doc:name="VM"/>
The line above is the VM global endpoint.
<flow...> ... <until-successful objectStore-ref="objectStore" deadLetterQueue-ref="dlqChannel" maxRetries="2" secondsBetweenRetries="10" doc:name="Until Successful" failureExpression="exception-type:java.sql.SQLException">
<jdbc-ee:outbound-endpoint exchange-pattern="request-response" queryKey="Insert Query" queryTimeout="-1" connector-ref="Database" doc:name="Database"/>
</until-successful>
....
</flow>
<flow name="Flow2" doc:name="Flow2">
<vm:inbound-endpoint exchange-pattern="one-way" path="dlq" doc:name="VM"/>
<logger message="DEAD DEAD DEAD LETTER LETTER LETTER #[message.payload]" level="INFO" doc:name="Logger"/>
</flow>
Based on "deadLetterQueue-ref" in UntilSuccessful, payload goes to vm:inbound-endpoint(vm://dlq) as defined in the vm global endpoint.
As Seba correctly points out, your error is due to a wrong name/ref. As for how to implement the dead letter queue, you need an inbound endpoint, so in Flow2 change the endpoint to <inbound-endpoint ref="dlqChannel" />, as sketched below.
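A minimal sketch of Flow2 with that change, assuming the global endpoint has been renamed to dlqChannel as in the resolution above:
<flow name="Flow2" doc:name="Flow2">
<inbound-endpoint ref="dlqChannel" />
<logger message="DEAD DEAD DEAD LETTER LETTER LETTER #[message.payload]" level="INFO" doc:name="Logger"/>
</flow>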
I'm trying to follow the example in this blog post (http://blogs.mulesoft.org/meet-until-successful-store-and-forward-for-mule/) for defining a dead letter queue for an until-successful scope element. This is the snippet from the blog post that doesn't quite make sense:
<vm:endpoint name="dlqChannel" path="dlq" />
<until-successful objectStore-ref="objectStore"
dlqEndpoint-ref="dlqChannel"
maxRetries="3"
secondsBetweenRetries="10">
...
</until-successful>
I don't quite understand where the vm endpoint lives in the app. I don't think it goes in the same flow as the until-successful element. I've tried putting it in its own flow but I get a NoSuchBeanDefinitionException.
Here is my relevant code:
<flow ....>
....
<until-successful objectStore-ref="objectStore" failureExpression="#[header:INBOUND:http.status != 201]" maxRetries="1" secondsBetweenRetries="5" doc:name="Until Successful" deadLetterQueue-ref="aggieFeedDestinedDeadLetterQueue">
....
</flow>
<flow name="edus-pubFlow1" doc:name="edus-pubFlow1">
<vm:inbound-endpoint exchange-pattern="one-way" path="aggieFeedDestinedDeadLetterQueue" doc:name="aggieFeedDestinedDeadLetterQueue"/>
<logger message="DEAD DEAD DEAD LETTER LETTER LETTER #[message]" level="INFO" doc:name="Logger"/>
</flow>
The dlqChannel in the blog post is a global endpoint, and deadLetterQueue-ref must point to one by name. In your case, define this global endpoint:
<vm:endpoint name="aggieFeedDestinedDeadLetterQueue" exchange-pattern="one-way" path="aggieFeedDestinedDeadLetterQueue" />
<flow ....>
....
<until-successful objectStore-ref="objectStore"
failureExpression="#[header:INBOUND:http.status != 201]"
maxRetries="1" secondsBetweenRetries="5"
deadLetterQueue-ref="aggieFeedDestinedDeadLetterQueue">
....
</flow>
<flow name="edus-pubFlow1">
<endpoint ref="aggieFeedDestinedDeadLetterQueue" />
<logger message="DEAD DEAD DEAD LETTER LETTER LETTER #[message]" level="INFO" doc:name="Logger"/>
</flow>
Also if you're using Mule 3.3.0 or above, you can use MEL:
failureExpression="#[message.inboundProperties['http.status'] != 201]"
I basically have two flows:
Flow 1: an HTTP inbound endpoint receives batch XML, splits it into individual pieces, and stages them on a JMS queue.
Flow 2: reads the staged XMLs from the JMS queue and processes the messages.
I need to control the execution of flow 2 above using a REST call, i.e. flow 2 should run only when an HTTP inbound call is received. I am using Mule version 3.2.2.
Here are the flows:
<flow name="flow-stage-input">
<http:inbound-endpoint host="localhost" port="8082" path="test/order" exchange-pattern="request-response"/>
<object-to-string-transformer></object-to-string-transformer>
<splitter evaluator="xpath" expression="//Test/TestNode" enableCorrelation="ALWAYS"/>
<custom-transformer class="org.testing.transformers.DocumentToString"></custom-transformer>
<pooled-component>
<spring-object bean="receiver"></spring-object>
</pooled-component>
<!-- DECIDE SUCCESS OR FAILURE -->
<choice>
<when expression="//Test/TestNode" evaluator="xpath">
<jms:outbound-endpoint queue="stagingQueue" exchange-pattern="one-way" connector-ref="jmsConnector" />
</when>
<otherwise>
<logger message="Skipped staging message due to errors" level="ERROR" />
</otherwise>
</choice>
<collection-aggregator></collection-aggregator>
<custom-transformer class="org.testing.transformers.ListOfStringsToString"></custom-transformer>
<!-- RESPONSE SENT BACK TO CALLER -->
</flow>
<flow name="flow-process-jms-input" >
<jms:inbound-endpoint queue="stagingQueue" exchange-pattern="one-way" connector-ref="jmsConnector" />
<pooled-component>
<spring-object bean="processor"></spring-object>
</pooled-component>
<!-- DECIDE SUCCESS OR FAILURE -->
<choice>
<when expression="//ErrorCondition/Path" evaluator="xpath">
<jms:outbound-endpoint queue="errorQueue" exchange-pattern="one-way" connector-ref="jmsConnector" />
</when>
<otherwise>
<logger message="Message processed successfully" level="ERROR" />
</otherwise>
</choice>
</flow>
Use a Groovy script in flow 2 to request one JMS message from the queue using:
muleContext.client.request("jms://stagingQueue", 0)
This returns null if the queue is empty, otherwise a Mule message containing the JMS message.
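A minimal sketch of how that could be wired up, replacing the JMS inbound in flow 2 with an HTTP-triggered flow. The flow name, port, and path are placeholders, the scripting namespace is assumed to be declared, and null-payload handling is omitted for brevity:
<flow name="flow-process-on-demand">
    <!-- REST-style trigger: processing happens only when this endpoint is called -->
    <http:inbound-endpoint host="localhost" port="8083" path="test/process" exchange-pattern="request-response"/>
    <scripting:component>
        <scripting:script engine="groovy">
            // Pull one message from the staging queue; returns null if the queue is empty
            return muleContext.client.request("jms://stagingQueue", 0)
        </scripting:script>
    </scripting:component>
</flow>
The message returned by the script becomes the flow's payload, which can then be handed to the existing processor component; you would still want to guard against the empty-queue case.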