How to make until-successful synchronous in Mule 3.4 - Mule

Below is a part of my Mule flow:
<until-successful objectStore-ref="ObjStreuntil" maxRetries="60" secondsBetweenRetries="60" doc:name="Until Successful" failureExpression="#[payload.state == 'Queued' || payload.state == 'InProgress']">
    <processor-chain doc:name="Processor Chain">
        <sfdc:batch-info config-ref="Salesforce" doc:name="Salesforce">
            <sfdc:batch-info ref="#[payload]"/>
        </sfdc:batch-info>
        <logger message="#[payload]" level="INFO" doc:name="Logger"/>
    </processor-chain>
</until-successful>
I would like my flow to wait until my batch is completed and then proceed to the next processor. I believed using a processor chain would achieve this, but the flow doesn't work.
I'm aware that until-successful was made synchronous in 3.5. Is there any way to achieve this in 3.4.0?
Any suggestions would be of great help.
Thank you in advance.

To achieve your goal in 3.4, add a flow-ref or vm:outbound-endpoint after the batch call so the subsequent logic can be executed when the batch is done.
This is preferable to blocking the main flow thread anyway, since batch processing can take a while.
Note that you may need to add a filter after sfdc:batch-info if you want to process the subsequent logic only for certain batch states.
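One way that could look (a rough sketch only; the flow names, the VM path and the 'Completed' check are illustrative assumptions, not taken from your flow):
<flow name="batchStatusFlow">
    <sfdc:batch-info config-ref="Salesforce" doc:name="Salesforce">
        <sfdc:batch-info ref="#[payload]"/>
    </sfdc:batch-info>
    <!-- let the message through only once the batch has finished -->
    <expression-filter expression="#[payload.state == 'Completed']" doc:name="Filter"/>
    <vm:outbound-endpoint path="batchDone" doc:name="VM"/>
</flow>
<flow name="afterBatchFlow">
    <vm:inbound-endpoint path="batchDone" doc:name="VM"/>
    <!-- the processors that must run after the batch completes go here -->
    <logger message="Batch finished: #[payload]" level="INFO" doc:name="Logger"/>
</flow>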

Related

Mule 4 - Until Successful - increase wait time for each iteration

In my application I am making an HTTP call to read data from an external system. I want to make repeated attempts, up to 3 times with an interval, until I get a success response. I am using Mule's until-successful component to do this. Below is the code:
<until-successful maxRetries="3" millisBetweenRetries="10000">
    <http:request method="GET"></http:request>
</until-successful>
This code makes 3 attempts with an interval of 10 seconds.
However, I want to increase the wait time on each iteration,
i.e. I want the component to wait 10s on the first iteration, 20s on the second and 30s on the third.
Is there any option to do this with the until-successful component?
Please suggest. Thanks.
I don't think there is a way to do this out of the box. The wait time can be an expression, but I don't think you can change the value on each iteration.
You can make this happen by nesting <until-successful> scopes with different delays. Not the most elegant, but at least it uses out-of-the-box components!
<flow name="pavanFlow">
<http:listener doc:name="go" config-ref="HTTP_Listener_config" path="go"/>
<until-successful maxRetries="1" doc:name="Third Retry" millisBetweenRetries="30000">
<until-successful maxRetries="1" doc:name="Second Retry" millisBetweenRetries="20000">
<until-successful maxRetries="1" doc:name="First Retry" millisBetweenRetries="10000">
<flow-ref doc:name="attemptFlow" name="attemptFlow" />
</until-successful>
</until-successful>
</until-successful>
</flow>
<flow name="attemptFlow">
<logger level="INFO" doc:name="Trying" message="Trying"/>
<raise-error doc:name="SOMEFAILURE" type="X:SOMEFAILURE"/>
</flow>

How to process a list in parallel in Mule?

I have a list of objects, which right now I am processing in a foreach. The list is nothing but a set of string ids that kick off other stuff internally.
<flow name="flow1" processingStrategy="synchronous">
<quartz:inbound-endpoint jobName="integration" repeatInterval="86400000" responseTimeout="10000" doc:name="Quartz" >
<quartz:event-generator-job/>
</quartz:inbound-endpoint>
<component class="RequestFeeder" doc:name="RequestFeeder"/>
<foreach collection="#[payload]" doc:name="For Each">
<flow-ref name="createFlow" doc:name="createFlow"/>
<flow-ref name="queueFlow" doc:name="queueFlow"/>
<flow-ref name="statusCheckFlow" doc:name="statusCheckFlow"/>
<flow-ref name="resultsFlow" doc:name="resultsFlow"/>
<flow-ref name="sftpFlow" doc:name="sftpFlow"/>
<logger message="RequestType #[flowVars['rqstType']] complete" level="INFO" doc:name="Done"/>
</foreach>
<logger message="ALL 15 REQUESTS HAVE BEEN PROCESSED" level="INFO" doc:name="Logger"/>
</flow>
I want to process them in parallel, i.e. execute the same 4 flow-refs in parallel for all 15 requests coming in the list. This seems simple, but I haven't been able to figure it out yet. Any help appreciated.
An alternative to the scatter-gather approach is to simply split the collection and use a VM queue for the items in the list. This method can be simpler if you don't need to wait and collect all 15 results, and will still work if you do.
Try something like this. Mule automatically uses a thread pool to run your flow, so the requestProcessor flow below will process your requests in parallel.
<flow name="scheduleRequests">
<quartz:inbound-endpoint jobName="integration" repeatInterval="86400000" responseTimeout="10000" doc:name="Quartz" >
<quartz:event-generator-job/>
</quartz:inbound-endpoint>
<component class="RequestFeeder" doc:name="RequestFeeder"/>
<collection-splitter />
<vm:outbound-endpoint path="requests" />
</flow>
<flow name="requestProcessor">
<vm:inbound-endpoint path="requests" />
<flow-ref name="createFlow" doc:name="createFlow"/>
<flow-ref name="queueFlow" doc:name="queueFlow"/>
<flow-ref name="statusCheckFlow" doc:name="statusCheckFlow"/>
<flow-ref name="resultsFlow" doc:name="resultsFlow"/>
<flow-ref name="sftpFlow" doc:name="sftpFlow"/>
</flow>
I reckon you still want those four flows to run sequentially, right?
If that were not the case, you could always change the threading profile.
Another thing you could do is to wrap the four flows in an async scope, although you may need a processor change, as sketched below.
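A rough sketch of the async idea (note that the per-item "Done" logger from your flow would then fire before the work actually finishes, since the loop no longer waits):
<foreach collection="#[payload]" doc:name="For Each">
    <async doc:name="Async">
        <flow-ref name="createFlow" doc:name="createFlow"/>
        <flow-ref name="queueFlow" doc:name="queueFlow"/>
        <flow-ref name="statusCheckFlow" doc:name="statusCheckFlow"/>
        <flow-ref name="resultsFlow" doc:name="resultsFlow"/>
        <flow-ref name="sftpFlow" doc:name="sftpFlow"/>
    </async>
</foreach>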
In any event I think you'll be better off using the scatter-gather component:
https://developer.mulesoft.com/docs/display/current/Scatter-Gather
https://www.mulesoft.com/exchange#!/scatter-gather-flow-control
This splits the list and executes each item in a different thread, without needing the for-each scope. You can define how many threads you want to run in parallel (so you don't just spin off a new thread each time; you use a pool).
One final note though: scatter-gather is meant to aggregate the results of all the processed items. I reckon you could change that with a custom aggregation strategy, but I'm not really sure; please take a look at the docs for that.
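If I remember correctly, the pool can be tuned with a nested threading profile, along these lines (the route names and values here are purely illustrative; please verify against the docs):
<scatter-gather doc:name="Scatter-Gather">
    <threading-profile maxThreadsActive="4" poolExhaustedAction="WAIT"/>
    <!-- each route receives the same input message and runs in parallel -->
    <flow-ref name="route1" doc:name="route1"/>
    <flow-ref name="route2" doc:name="route2"/>
</scatter-gather>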
HTH
You say 4 flows, but the list contains 5 flows. If you want all flows executed in sequence, but each item in the collection executed in parallel, you will want a splitter followed by a separate vm flow containing all (4/5) flows, as explained here: https://support.mulesoft.com/s/article/Concurrently-processing-Collection-and-getting-the-results.
If you want the flows inside the loop to execute in parallel then you choose a Scatter-Gather component.
It is important to be clear which of the two things you want to achieve, as the solutions are very different. The basic difference is: in Scatter-Gather a single message is sent to multiple recipients for processing in parallel, whereas in Splitter-Aggregator a single message is split into multiple sub-messages which are processed individually and then aggregated. See: http://muthurajud.blogspot.com/2016/07/eai-patterns-scattergather-versus.html
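To make the second pattern concrete, here is a bare-bones splitter/aggregator sketch (processOneItem is a hypothetical flow handling a single id; actual parallelism still depends on the processing strategy or on VM queues, as in the earlier answer):
<!-- split the list into one message per item -->
<collection-splitter doc:name="Collection Splitter"/>
<flow-ref name="processOneItem" doc:name="processOneItem"/>
<!-- collect the processed items back into a single collection -->
<collection-aggregator doc:name="Collection Aggregator"/>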
Mule's scatter-gather component makes parallel processing easy. A simple example is the following:
<scatter-gather>
    <flow-ref name="flow1"/>
    <flow-ref name="flow2"/>
    <flow-ref name="flow3"/>
</scatter-gather>
So, the flows you want to execute in parallel can be kept inside the scatter-gather scope.

until-successful component to poll an HTTP endpoint till a condition is met

I am polling an HTTP endpoint, receiving a JSON response, and I wish to keep polling till a condition is met.
I have tried <until-successful failureExpression="#[json:status != 'COMPLETED']" maxRetries="5" secondsBetweenRetries="10" synchronous="true"> but this gives an exception.
Please also let me know if there is another method for my scenario.
Assign the computed value to a flow variable and make the failure expression evaluate that flow variable:
<until-successful failureExpression="#[flowVars['testRetryCondition'] != 'COMPLETED']" maxRetries="5" secondsBetweenRetries="10" synchronous="true">
    <processor-chain doc:name="Processor Chain">
        <http:........./>
        <!-- capture the JSON status so the failure expression above can test it -->
        <set-variable variableName="testRetryCondition" value="#[json:status]" doc:name="set-invocation-status"/>
    </processor-chain>
</until-successful>

How to Extract the Flow Name and MessageProcessor Name using MEL - MULE ESB

I'm not sure how we can extract the flow name and message processor name through MEL. For example, I have multiple message processors. For logging, I need to extract the flow name and message processor name, so that I can find out that a transaction has crossed a particular flow and its message processor. Is there any simple way to find out? Please guide me. Please find the screenshot below. Here I need to extract the set-payload processor and its flow name (flow1).
Thanks in advance.
From Mule 3.8 onwards, #[flow.name] doesn't work.
Use the #[mule:context.serviceName] expression in a logger or component to extract the name of the flow.
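For example:
<logger message="Current flow: #[mule:context.serviceName]" level="INFO" doc:name="Logger"/>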
I know this post is old, but I have been trying to find a way to do this in MEL for error-handling emails.
For the flow name you can use #[exception.event.flowConstruct.name],
and for the failing message processor you can use #[exception.failingMessageProcessor].
Both of these work in MEL without the need to use a flowVar.
Please note, however, that the failing processor does not always come back with a value; sometimes it comes back with null, and I'm not sure why.
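For example, inside a catch exception strategy (a minimal sketch using the two expressions above):
<catch-exception-strategy doc:name="Catch Exception Strategy">
    <!-- log the flow and the processor where the failure occurred -->
    <logger message="Error in flow '#[exception.event.flowConstruct.name]' at processor '#[exception.failingMessageProcessor]'" level="ERROR" doc:name="Logger"/>
</catch-exception-strategy>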
You can extract the flow name with MEL: #[flow.name]
<flow name="name" doc:name="name">
<http:inbound-endpoint address="http://localhost:8090/resources" doc:name="HTTP" />
<logger message="name of flow: #[flow.name]" level="INFO" doc:name="Logger"/>
<set-payload value="name" doc:name="Set Payload"/>
</flow>
or
flowConstruct.getName() in a Message Processor
Two ways to achieve this (getting the current flow name):
First one is -
<logger message="Current flowName: #[flow.name]" level="INFO" doc:name="Logger"/>
and the second one is -
<logger message="Current flowName: #[context:serviceName]" level="INFO" doc:name="Logger"/>

Why does the Batch scope behave strangely when loading huge records - Mule ESB

I'm facing issues in the Process Records phase of a batch job; kindly suggest. I'm trying to load a file of a few KB (which has about 5000 records). For the success scenario it works.
But if an error happens in the input phase on the first run and the flow stops, then on the second run, when it tries to process the same records, Mule stops executing at the Process Records step. It does not run after the loading phase. Please find the runtime logs below:
11:55:33 INFO info.org.mule.module.logging.DispatchingLogger - Starting loading phase for instance 'ae67601a-5fbe-11e4-bc4d-f0def1ed6871' of job 'test'
11:55:33 INFO info.org.mule.module.logging.DispatchingLogger - Finished loading phase for instance ae67601a-5fbe-11e4-bc4d-f0def1ed6871 of job order. 5000 records were loaded
11:55:33 INFO info.org.mule.module.logging.DispatchingLogger - Started execution of instance 'ae67601a-5fbe-11e4-bc4d-f0def1ed6871' of job 'test'
It stops processing after the instance starts; I'm not sure what is happening here.
When I stop the flow and delete the .mule folder from the workspace, it then works.
I suspect the temporary queue Mule uses in the loading phase is not being deleted automatically when an exception happens in the input phase, but I'm not sure this is the real cause.
I can't go and delete the .mule folder each time in a production environment.
Could anyone please suggest what causes this strange behavior, and how I can get rid of this issue? Please find the config XML:
<batch:job name="test">
<batch:threading-profile poolExhaustedAction="WAIT"/>
<batch:input>
<component class="com.ReadFile" doc:name="File Reader"/>
<mulexml:jaxb-xml-to-object-transformer returnClass="com.dto" jaxbContext-ref="JAXB_Context" doc:name="XML to JAXB Object"/>
<component class="com.Transformer" doc:name="Java"/>
</batch:input>
<batch:process-records>
<batch:step name="Batch_Step" accept-policy="ALL">
<batch:commit doc:name="Batch Commit" streaming="true">
<logger message="************after Data mapper" level="INFO" doc:name="Logger"/>
<data-mapper:transform config-ref="Orders_Pojo_To_XML" stream="true" doc:name="Transform_CanonicalToHybris"/>
<file:outbound-endpoint responseTimeout="10000" doc:name="File" path="#[sessionVars.uploadFilepath]"">
</file:outbound-endpoint>
</batch:commit>
</batch:step>
</batch:process-records>
<batch:on-complete>
<set-payload value="BatchJobInstanceId:#[payload.batchJobInstanceId+'\n'], Number of TotalRecords: #[payload.totalRecords+'\n'], Number of loadedRecord: #[payload.loadedRecords+'\n'], ProcessedRecords: #[payload.processedRecords+'\n'], Number of sucessfull Records: #[payload.successfulRecords+'\n'], Number of failed Records: #[payload.failedRecords+'\n'], ElapsedTime: #[payload.elapsedTimeInMillis+'\n'], InpuPhaseException #[payload.inputPhaseException+'\n'], LoadingPhaseException: #[payload.loadingPhaseException+'\n'], CompletePhaseException: #[payload.onCompletePhaseException+'\n'] " doc:name="Set Batch Result"/>
<logger message="afterSetPayload: #[payload]" level="INFO" doc:name="Logger"/>
<flow-ref name="log" doc:name="Logger" />
</batch:on-complete>
I've been stuck with this behavior for quite a few days. Your help will be much appreciated.
Version: 3.5.1
Thanks in advance.
Set max-failed-records to -1 so that the batch job will continue even when there is an exception:
<batch:job name="test" max-failed-records="-1">
In a real runtime environment you won't have to clean the .mule folder;
this happens only when you are working in Anypoint Studio.