VM endpoints in Mule: pros and cons

I want to use VM endpoints to achieve parallel processing in Mule flows. Being a beginner with Mule, I am not quite sure about the implications of doing so. I have read about private flows in Mule 3, but I am not sure whether I can replace the VM endpoints with private flows in this case, or whether that would give me any advantage. Can someone please explain the pros and cons of using VM endpoints? Here is the example I wanted to use for parallel processing:
<flow name="forkAndJoinFlow">
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="81" path="lowestprice" />
<not-filter>
<wildcard-filter pattern="*favicon*" />
</not-filter>
<request-reply>
<all>
<vm:outbound-endpoint path="shop1"/>
<vm:outbound-endpoint path="shop2"/>
</all>
<vm:inbound-endpoint path="response">
<message-properties-transformer>
<add-message-property key="MULE_CORRELATION_GROUP_SIZE" value="2" />
</message-properties-transformer>
<collection-aggregator />
</vm:inbound-endpoint>
</request-reply>
<expression-transformer evaluator="groovy" expression="java.util.Collections.min(payload)" />
<object-to-string-transformer/>
<logger level="WARN" message="#[string:Lowest price: #[payload]]" />
</flow>
<flow name="shop1Flow">
<vm:inbound-endpoint path="shop1"/>
<logger level="INFO" message="SHOP1 Flow..." />
<expression-transformer evaluator="groovy" expression="new java.lang.Double(1000.0 * Math.random()).intValue()" />
<logger level="WARN" message="#[string:Price from shop 1: #[payload]]" />
</flow>
<flow name="shop2Flow">
<vm:inbound-endpoint path="shop2" />
<logger level="INFO" message="SHOP2 Flow..." />
<expression-transformer evaluator="groovy" expression="new java.lang.Double(1000.0 * Math.random()).intValue()" />
<logger level="WARN" message="#[string:Price from shop 2: #[payload]]" />
</flow>

One-way vm: endpoints, as used here, are asynchronous, which means you will not receive a response back from them.
Private flows can be synchronous as well as asynchronous, meaning they can also return a response when invoked with a request-response exchange pattern.
Also, when you call a private flow you keep all of the flow's variables, headers and properties, whereas when you push a message onto a VM (or any JMS) queue, flow variables and inbound properties are not propagated, and the outbound properties of the caller flow become inbound properties in the called flow.
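To make that last point concrete, here is a minimal sketch (the flow and property names are made up for illustration): an outbound property set before a one-way vm:outbound-endpoint shows up as an inbound property in the receiving flow, while flow variables set in the caller would not be available there.
<flow name="callerFlow">
<!-- inbound endpoint omitted -->
<set-property propertyName="orderId" value="12345" />
<vm:outbound-endpoint path="orders" />
</flow>
<flow name="receiverFlow">
<vm:inbound-endpoint path="orders" />
<!-- the caller's outbound property arrives here as an inbound property -->
<logger level="INFO" message="#[message.inboundProperties['orderId']]" />
</flow>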

Private flows can be used as a replacement for your VM endpoints.
As explained, private flows can be synchronous or asynchronous depending on the processingStrategy of the flow.
If you use private flows asynchronously to achieve parallel processing, make sure that the response is posted back onto the response VM queue at the end of each private flow.
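For example, here is a sketch only (reusing the fork-join example above): if shop1 were invoked as a private flow via flow-ref instead of the shop1 VM endpoint, the private flow would need to end with an explicit vm:outbound-endpoint to the response queue, because there is no longer a VM reply-to to route the result back to the aggregator.
<!-- inside the <all> of forkAndJoinFlow, instead of <vm:outbound-endpoint path="shop1"/> -->
<async>
<flow-ref name="shop1PrivateFlow" />
</async>

<flow name="shop1PrivateFlow">
<logger level="INFO" message="SHOP1 private flow..." />
<expression-transformer evaluator="groovy" expression="new java.lang.Double(1000.0 * Math.random()).intValue()" />
<!-- post the price back so the collection-aggregator on the response queue can pick it up -->
<vm:outbound-endpoint path="response" />
</flow>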
I have also implemented the fork-join pattern of parallel processing using sub-flows. Try this:
<set-property propertyName="MULE_CORRELATION_GROUP_SIZE" value="2" /> <!-- one entry per parallel branch -->
<all enableCorrelation="IF_NOT_SET">
<async>
<set-property propertyName="MULE_CORRELATION_SEQUENCE" value="1" />
<flow-ref name="parallel-flow1"></flow-ref>
</async>
<async>
<set-property propertyName="MULE_CORRELATION_SEQUENCE" value="2" />
<flow-ref name="parallel-flow2"></flow-ref>
</async>
</all>
And the sub-flows are as follows:
<sub-flow name="parallel-flow1">
....
....
<flow-ref name="join-flow" />
</sub-flow>
<sub-flow name="parallel-flow2">
....
....
<flow-ref name="join-flow" />
</sub-flow>
<sub-flow name="join-flow">
<collection-aggregator timeout="10000" failOnTimeout="true" />
<combine-collections-transformer />
....
....
</sub-flow>
You can try this with private flows.
Hope this helps.


Does an async block make all flow-refs inside it async?

Consider the following case:
I have a webservice that starts importing stuff from a source. I'd like to return 200 OK to the caller as soon as the call is accepted.
There are a few different flows that need to be executed in the right order.
<flow name="startImport" >
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8080" path="startImport"/>
<async>
<flow-ref name="Country" />
<flow-ref name="Account" />
<flow-ref name="Contact" />
</async>
</flow>
Well, this doesn't work, as all of the flows are executed at once.
How about wrapping it in another flow, like this?
<flow name="startImport" >
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8080" path="startImport"/>
<async>
<flow-ref name="startImport2" />
</async>
</flow>
<flow name="startImport2" processingStrategy="synchronous" >
<processor-chain>
<flow-ref name="Country" />
<flow-ref name="Account" />
<flow-ref name="Contact" />
</processor-chain>
</flow>
Nope, still the same result!
How can I stop the async block from making all the other flow-refs async as well? I only want the initial call to be asynchronous!
Private flows invoked via flow-ref will pick up the synchronicity of the in-flight event.
To change this you can change the processingStrategy of the flows you want to execute synchronously. Example:
<flow name="Country" processingStrategy="synchronous">
</flow>
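Applied to the flows in the question, that would look roughly like this (only the processingStrategy attributes are new; the import logic inside each flow is omitted). The flow-refs inside the async block then run one after the other, just on a separate thread from the HTTP receiver:
<flow name="startImport" >
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8080" path="startImport"/>
<async>
<flow-ref name="Country" />
<flow-ref name="Account" />
<flow-ref name="Contact" />
</async>
</flow>
<flow name="Country" processingStrategy="synchronous">
<!-- country import steps -->
</flow>
<flow name="Account" processingStrategy="synchronous">
<!-- account import steps -->
</flow>
<flow name="Contact" processingStrategy="synchronous">
<!-- contact import steps -->
</flow>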
Well, I ended up removing the flow-refs and using VM endpoints instead, like this:
<flow name="startImport" >
<http:inbound-endpoint exchange-pattern="request-response" host="localhost" port="8080" path="startImport"/>
<async>
<vm:outbound-endpoint exchange-pattern="one-way" doc:name="VM" path="import-all.service.in"/>
</async>
</flow>
<flow name="startImport2" processingStrategy="synchronous" >
<vm:inbound-endpoint exchange-pattern="one-way" doc:name="VM" path="import-all.service.in"/>
<flow-ref name="Country" />
<flow-ref name="Account" />
<flow-ref name="Contact" />
</flow>
Now it works the way I expect it to: the call returns quickly and the async flow does the heavy lifting.

Mule throttling on a backend service

I have a back end service that I need to throttle access to. I'm trying to use the approach described here: http://blogs.mulesoft.org/synchronous-and-asynchronous-throttling-2/
I started with a simple pass through flow that receives a SOAP request and forwards it. When I hit this using the SOAPUI utility, I get the expected response in a second or two.
<http:connector name="httpConnector" doc:name="HTTP\HTTPS">
<receiver-threading-profile maxThreadsActive="1" maxBufferSize="100" />
</http:connector>
<jms:activemq-connector name="amqConnector" brokerURL="tcp://localhost:61616" specification="1.1" doc:name="AMQ" />
<flow name="Flow1" processingStrategy="synchronous" doc:name="Flow1">
<http:inbound-endpoint exchange-pattern="request-response"
host="localhost" port="8088" path="test" doc:name="HTTP"
mimeType="text/xml" encoding="UTF-8" connector-ref="httpConnector"/>
<http:outbound-endpoint
address="http://dnbdirect-api.dnb.com/DnBAPI-11"
exchange-pattern="request-response" doc:name="HTTP" mimeType="text/xml"/>
</flow>
If I then move the outbound call to a separate flow and add in the request-reply block, the behavior changes. I get no response back (nor do I get the "After queue" message from the logger) and SOAPUI eventually times out.
<http:connector name="httpConnector" doc:name="HTTP\HTTPS">
<receiver-threading-profile maxThreadsActive="1" maxBufferSize="100" />
</http:connector>
<jms:activemq-connector name="amqConnector" brokerURL="tcp://localhost:61616" specification="1.1" doc:name="AMQ" />
<flow name="Flow1" processingStrategy="synchronous" doc:name="Flow1">
<http:inbound-endpoint exchange-pattern="request-response"
host="localhost" port="8088" path="test" doc:name="HTTP"
mimeType="text/xml" encoding="UTF-8" connector-ref="httpConnector"/>
<message-properties-transformer doc:name="Message Properties">
<add-message-property key="AMQ_SCHEDULED_DELAY" value="5000"/>
</message-properties-transformer>
<logger message="Before queue" level="INFO"/>
<request-reply>
<jms:outbound-endpoint queue="request" connector-ref="amqConnector"></jms:outbound-endpoint>
<jms:inbound-endpoint queue="response" connector-ref="amqConnector"></jms:inbound-endpoint>
</request-reply>
<logger message="After queue" level="INFO"/>
</flow>
<flow name="flow2" doc:name="Flow2">
<jms:inbound-endpoint queue="request" connector-ref="amqConnector" doc:name="JMS"/>
<http:outbound-endpoint
address="http://dnbdirect-api.dnb.com/DnBAPI-11"
exchange-pattern="request-response" doc:name="HTTP" mimeType="text/xml" />
</flow>
The throttling behavior works: I see the delays if I take out the call to the back-end service. But I can't get it to work with the service call in place.
What am I missing?
I found that the message's payload is an ArrayList after the request-reply, so I added a Java component to unwrap it; after that the result is correct.
package com.neusoft.fx;

import java.util.ArrayList;

import org.mule.api.MuleEventContext;
import org.mule.api.MuleMessage;
import org.mule.api.lifecycle.Callable;

public class JmsMessageTransformer implements Callable {

    @Override
    public Object onCall(MuleEventContext eventContext) throws Exception {
        MuleMessage message = eventContext.getMessage();
        Object payload = message.getPayload();
        // the aggregator behind request-reply delivers an ArrayList; unwrap the single element
        if (payload instanceof ArrayList) {
            return ((ArrayList<?>) payload).get(0);
        }
        return payload;
    }
}
The complete flow is:
<message-properties-transformer doc:name="Message Properties">
<add-message-property key="AMQ_SCHEDULED_DELAY" value="10000"/>
</message-properties-transformer>
<request-reply storePrefix="mainFlow">
<jms:outbound-endpoint queue="request" connector-ref="amqConnector" doc:name="JMS"></jms:outbound-endpoint>
<jms:inbound-endpoint queue="response" connector-ref="amqConnector" doc:name="JMS"></jms:inbound-endpoint>
</request-reply>
<component class="com.neusoft.fx.JmsMessageTransformer" doc:name="Java"/>
<message-properties-transformer doc:name="Set Content Type">
<delete-message-property key="Content-type" />
<add-message-property key="Content-Type" value="text/xml"/>
</message-properties-transformer>
<logger message="----- LOGGER ----- after #[groovy:message.toString()]" level="INFO" doc:name="Logger" />
</flow>
Try adding the following before the HTTP outbound endpoint in flow2:
<copy-properties propertyName="MULE_*"/>
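In other words, flow2 would become something like the sketch below. The idea is that copying the inbound MULE_* properties (notably MULE_REPLYTO and MULE_CORRELATION_ID) to the outbound scope keeps them alive across the HTTP request-response call, so the flow's result can be routed back to the response queue and correlated by the request-reply router:
<flow name="flow2" doc:name="Flow2">
<jms:inbound-endpoint queue="request" connector-ref="amqConnector" doc:name="JMS"/>
<!-- propagate reply-to and correlation properties across the HTTP call -->
<copy-properties propertyName="MULE_*"/>
<http:outbound-endpoint
address="http://dnbdirect-api.dnb.com/DnBAPI-11"
exchange-pattern="request-response" doc:name="HTTP" mimeType="text/xml" />
</flow>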

Parallel Processing in Mule: issue with getting the right response

The requirement is to develop a Mule flow which calls 3 different synchronous services in parallel, aggregates their responses, and sends the combined result back to the caller.
I have followed the fork-join approach as described in the docs and in How to make parallel outbound calls.
My config file looks like this:
<flow name="fork">
<http:inbound-endpoint host="localhost" port="8090" path="mainPath" exchange-pattern="request-response" />
<set-property propertyName="MULE_CORRELATION_GROUP_SIZE"
value="2" />
<all enableCorrelation="IF_NOT_SET">
<async>
<set-property propertyName="MULE_CORRELATION_SEQUENCE"
value="1" />
<flow-ref name="parallel1" />
</async>
<async>
<set-property propertyName="MULE_CORRELATION_SEQUENCE"
value="2" />
<flow-ref name="parallel2" />
</async>
</all>
</flow>
<sub-flow name="parallel1">
<logger level="INFO" message="parallel1: processing started" />
<!-- Transformation payload -->
<http:outbound-endpoint address="..."
exchange-pattern="request-response" />
<logger level="INFO" message="parallel1: processing finished" />
<flow-ref name="join" />
</sub-flow>
<sub-flow name="parallel2">
<logger level="INFO" message="parallel2: processing started" />
<!-- Transformation payload -->
<http:outbound-endpoint address="..."
exchange-pattern="request-response" />
<logger level="INFO" message="parallel2: processing finished" />
<flow-ref name="join" />
</sub-flow>
<sub-flow name="join">
<collection-aggregator timeout="6000"
failOnTimeout="true" />
<combine-collections-transformer />
<logger level="INFO" message="Continuing processing of: #[message.payloadAs(java.lang.String)]" />
<set-payload value="Soap XML Response"/>
</sub-flow>
I am able to verify that everything up to the "join" sub-flow works fine, but the response is not coming back as "Soap XML Response".
The response is the same initial SOAP request.
How can I make this thread wait until the sub-flow processing is complete, so that it sends back whatever the "join" sub-flow returns?
The fork-join in the above post looks good. The issue is that there is no way to capture the payload after the join and bring it back to the main flow.
Because the calls to the parallel sub-flows are made async, the main flow continues without waiting for the join output.
I have modified the flow to address this issue. The main flow now has a request-reply processor that waits for the reply and reads the joined output, which is then returned over the HTTP inbound endpoint.
<flow name="fork">
<http:inbound-endpoint host="localhost" port="8090" path="mainPath" exchange-pattern="request-response" />
<!-- To get back the response after the fork-join -->
<request-reply timeout="60000">
<jms:outbound-endpoint queue="parallel.processor.queue">
<message-properties-transformer scope="outbound">
<delete-message-property key="MULE_REPLYTO" />
</message-properties-transformer>
</jms:outbound-endpoint>
<jms:inbound-endpoint queue="join.queue" />
</request-reply>
</flow>
<flow name="fork_join_flow" >
<jms:inbound-endpoint queue="parallel.processor.queue" exchange-pattern="one-way" />
<set-property propertyName="MULE_CORRELATION_GROUP_SIZE"
value="2" />
<all enableCorrelation="IF_NOT_SET">
<async>
<set-property propertyName="MULE_CORRELATION_SEQUENCE"
value="1" />
<flow-ref name="parallel1" />
</async>
<async>
<set-property propertyName="MULE_CORRELATION_SEQUENCE"
value="2" />
<flow-ref name="parallel2" />
</async>
</all>
</flow>
<sub-flow name="parallel1">
<logger level="INFO" message="parallel1: processing started" />
<!-- Transformation payload -->
<http:outbound-endpoint address="..."
exchange-pattern="request-response" />
<logger level="INFO" message="parallel1: processing finished" />
<flow-ref name="join" />
</sub-flow>
<sub-flow name="parallel2">
<logger level="INFO" message="parallel2: processing started" />
<!-- Transformation payload -->
<http:outbound-endpoint address="..."
exchange-pattern="request-response" />
<logger level="INFO" message="parallel2: processing finished" />
<flow-ref name="join" />
</sub-flow>
<sub-flow name="join">
<collection-aggregator timeout="6000"
failOnTimeout="true" />
<combine-collections-transformer />
<logger level="INFO" message="Continuing processing of: #[message.payloadAs(java.lang.String)]" />
<set-payload value="Soap XML Response"/>
<jms:outbound-endpoint queue="join.queue" />
</sub-flow>
Hope this helps.

Why is Mule payload getting lost in <catch-exception-strategy> for java.net.ConnectException

I'm testing my Mule (3.3.1) flow which sends a web service call to an external vendor. My aim is to catch java.net.ConnectException, apply the appropriate XSLT to the original payload, and send it back to the caller.
But the payload received in <catch-exception-strategy> is of type org.apache.commons.httpclient.methods.PostMethod#12b13004 and not the original XML. I tried using <object-to-string-transformer> but it didn't help.
Any suggestions on how to retrieve the original payload in the catch block?
Part of mule-config.xml is below:
<flow name="orderRequirementsToVendor">
<jms:inbound-endpoint queue="order.vendor" />
<set-property propertyName="SOAPAction" value="http://vendor.com/services/InterfacePoint/Call" />
<cxf:proxy-client payload="body" enableMuleSoapHeaders="false">
<cxf:inInterceptors>
<spring:bean class="org.apache.cxf.interceptor.LoggingInInterceptor" />
</cxf:inInterceptors>
<cxf:outInterceptors>
<spring:bean class="org.apache.cxf.interceptor.LoggingOutInterceptor" />
</cxf:outInterceptors>
</cxf:proxy-client>
<outbound-endpoint address="${vendor.ws.url}" mimeType="text/xml" connector-ref="https.connector" />
<byte-array-to-string-transformer />
<choice-exception-strategy>
<catch-exception-strategy when="#[exception.causedBy(java.net.ConnectException)]">
<logger message="#[exception.causeException]" level="ERROR" />
<object-to-string-transformer/>
<transformer ref="vendorConnectExceptionTransformer" />
</catch-exception-strategy>
<catch-exception-strategy>
<logger message="#[exception.causeException]" level="ERROR" />
<transformer ref="generalErrorTransformer" />
</catch-exception-strategy>
</choice-exception-strategy>
</flow>
Store the original payload in a flow variable right after the jms:inbound-endpoint with:
<set-variable variableName="originalPayload" value="#[message.payload]" />
Then access it in your exception strategy with a MEL expression like #[flowVars.originalPayload].
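In the flow above that would look roughly like this (only the set-variable and the set-payload in the catch block are additions):
<flow name="orderRequirementsToVendor">
<jms:inbound-endpoint queue="order.vendor" />
<!-- keep a copy of the original XML before the CXF proxy client replaces it -->
<set-variable variableName="originalPayload" value="#[message.payload]" />
...
<choice-exception-strategy>
<catch-exception-strategy when="#[exception.causedBy(java.net.ConnectException)]">
<logger message="#[exception.causeException]" level="ERROR" />
<!-- restore the original payload before applying the XSLT -->
<set-payload value="#[flowVars.originalPayload]" />
<transformer ref="vendorConnectExceptionTransformer" />
</catch-exception-strategy>
...
</choice-exception-strategy>
</flow>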

Mule - Unable to find an StreamCloser for the stream type: class java.lang.String

After the JMS message is delivered to the queue, I see a log statement related to the stream closer. It doesn't look right to me. Why do I see this message?
2013-04-22 19:08:29,385 [DEBUG] org.mule.transport.jms.activemq.ActiveMQJmsConnector - Returning dispatcher for endpoint: jms://retry.queue = EeJmsMessageDispatcher{this=5c5801d7, endpoint=jms://retry.queue, disposed=false}
2013-04-22 19:08:29,433 [DEBUG] org.mule.util.DefaultStreamCloserService - Unable to find an StreamCloser for the stream type: class java.lang.String, the stream: <?xml version="1.0" encoding="UTF-8"?> < ....... rest of the XML ....... /> will not be closed.
What does it mean by "the stream ... will not be closed"?
What should I do to fix this?
==== EDIT ====
There is an error happening. The JMS message has XML as its payload. Mule version: 3.3.2.
Here's my flow:
<flow name="sendToHost">
<jms:inbound-endpoint queue="host.queue" exchange-pattern="one-way" />
<copy-properties propertyName="*" />
<file:outbound-endpoint path="/hostmessages" outputPattern="outgoing-xml-[function:dateStamp].log" />
<set-variable variableName="hostXML" value="#[payload]" />
<flow-ref name="webServiceCall" />
<flow-ref name="inspectWSResponse" />
<exception-strategy ref="retryExceptionStrategy" />
</flow>
<flow name="resendFailedMessages">
<description>
"*/15 07-18 * * ?" run every 15 minutes from 7 am to 6 pm every day -->
</description>
<quartz:inbound-endpoint jobName="hostRedeliveryJob" cronExpression="0 0/1 * * * ?">
<quartz:endpoint-polling-job>
<quartz:job-endpoint ref="redeliverToHost" />
</quartz:endpoint-polling-job>
</quartz:inbound-endpoint>
<set-variable variableName="hostXML" value="#[payload]" />
<logger message="QUARTZ found message for host" level="INFO" />
<flow-ref name="webServiceCall" />
<flow-ref name="inspectWSResponse" />
<exception-strategy ref="retryExceptionStrategy" />
</flow>
<choice-exception-strategy name="retryExceptionStrategy">
<catch-exception-strategy when="#[exception.causedBy(java.io.IOException)]">
<logger message="In retryExceptionStrategy IO exception strategy. " level="ERROR" />
<logger message="retryExceptionStrategy exception is #[exception.causeException]" level="ERROR" />
<set-property propertyName="exception" value="#[exception.summaryMessage]" />
<set-payload value="#[hostXML]" />
<logger message="retryExceptionStrategy payload is #[payload]" level="ERROR" />
<jms:outbound-endpoint queue="retry.queue" />
</catch-exception-strategy>
<catch-exception-strategy>
<logger message="Other error in sending result to host in retryExceptionStrategy flow." level="INFO" />
<set-property propertyName="exception" value="#[exception.summaryMessage]" />
<set-payload value="#[hostXML]" />
<jms:outbound-endpoint queue="declined.queue" />
</catch-exception-strategy>
</choice-exception-strategy>
<sub-flow name="webServiceCall">
<cxf:proxy-client payload="body" enableMuleSoapHeaders="false">
<cxf:inInterceptors>
<spring:bean class="org.apache.cxf.interceptor.LoggingInInterceptor" />
</cxf:inInterceptors>
<cxf:outInterceptors>
<spring:bean class="org.apache.cxf.interceptor.LoggingOutInterceptor" />
</cxf:outInterceptors>
</cxf:proxy-client>
<outbound-endpoint address="${host.ws.url}" mimeType="text/xml" connector-ref="http.connector" />
<byte-array-to-string-transformer />
</sub-flow>
<sub-flow name="inspectWSResponse">
<choice>
<when expression="#[xpath('//acord:TestResult/acord:TestCode/acord:Name/@tc').value == '1']">
<logger message="Message Delivered Successfully to host" level="INFO" />
</when>
<otherwise>
<set-payload value="#[hostXML]" />
<jms:outbound-endpoint queue="declined.queue" />
</otherwise>
</choice>
</sub-flow>
Log entries at DEBUG level can typically be safely ignored.
In this particular case, it seems Mule is applying the StreamCloserService to a message payload that is not a stream but a string.
Looking at the source code, this appears to happen only when an exception is processed and Mule attempts to forcefully close a streaming payload without first checking whether it is actually streaming. This is benign and cannot trigger any side effects, so you can safely ignore this DEBUG statement.
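If the message is noisy, you can also raise the level for that particular logger in your application's log4j configuration (assuming the standard log4j setup used by Mule 3.3.x), for example:
# log4j.properties
log4j.logger.org.mule.util.DefaultStreamCloserService=INFO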