Timeout Configuration - mule

I am configuring transaction and response timeouts for an API in Mule 4. Is there any way to set different timeouts for different methods (GET, POST, DELETE) of a single API in MuleSoft, given that the API has different SLAs for different operations?

Since the HTTP timeout is set at the connector level, it does not allow you to set timeouts per method.
One way you could try to achieve this is by separating your interface flow from your logic flow. Have your interface flow call your logic flow via something like VM, where you can set a timeout individually. You can then catch the timeout error and handle it however you want.
Here is an example with a flow for a POST method. All this flow does is offload the logic to another flow and invoke it using vm:publish-consume, awaiting the response. It sets a timeout of 2 seconds (configurable with properties etc.), catches VM:QUEUE_TIMEOUT errors, and sets an 'SLA exceeded' error message:
<flow name="myPOSTInterface">
<vm:publish-consume queueName="postQueue" config-ref="vm" timeout="2" timeoutUnit="SECONDS" />
<logger level="INFO" message="Result from logic flow: #[payload]" />
<error-handler>
<on-error-continue type="VM:QUEUE_TIMEOUT">
<set-payload value="#[{error: 'SLA exceeded'}]" />
</on-error-continue>
</error-handler>
</flow>
<flow name="myPOSTLogic">
<vm:listener config-ref="vm" queueName="postQueue" />
<set-payload value="#[{result: 'Result from my logic'}]" />
</flow>
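For reference, the VM configuration referenced by config-ref="vm" isn't shown above. A minimal sketch would be something like this (the transient queue type is just an assumption; pick whatever suits your reliability needs):
<vm:config name="vm">
    <vm:queues>
        <vm:queue queueName="postQueue" queueType="TRANSIENT"/>
    </vm:queues>
</vm:config>
You would then repeat the interface/logic pair for each method (e.g. a myGETInterface publishing to its own queue) and give each one the timeout that matches its SLA.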

Related

Session Variable not available in FlowRef Lookup Table

I am using a Message Enricher to call a web service and return the part number for the external data source. I am saving that payload into a Session Variable. I am then using a Lookup Table from within a DataMapper to send the current payload's part number to be referenced against the external data source (using XPath). I am able to invoke the Lookup and pass the local variable, but the payload that was saved into the Session Variable is not being passed through to the Lookup Flow, so my XPath query will not work.
Here are the Session Variable and the DataMapper:
<flow>
    <enricher target="#[sessionVars['SesVar']]" doc:name="Message Enricher">
        <flow-ref name="query-line-details-erpFlow" doc:name="query-line-details-erpFlow"/>
    </enricher>
    <logger message="Session Var: #[sessionVars['SesVar']]" level="INFO" doc:name="Logger"/>
    <data-mapper:transform config-ref="XML_To_XML" doc:name="XML To XML"/>
</flow>
Here is the Lookup Table logic
output.ExternalPart = (isnull(lookup(LookUpPart).get([input.LocalPart])) ? null : lookup(LookUpPart).get([input.LocalPart]).ExternalPart);
Finally, here is the second flow from which the Session Var should be accessible:
<flow>
    <logger message="Spit out the var #[sessionVars.SesVar]" level="INFO" doc:name="Logger"/>
</flow>
From the research I have done, the Session Variable is not crossing a Transport Barrier here, so it should be referenceable from this scope. I have also tried with flowVars.
Any help would be greatly appreciated.

How to process a list in parallel in mule?

I have a list of objects, which right now I am processing in a foreach. The list is nothing but a set of string IDs that kick off other things internally.
<flow name="flow1" processingStrategy="synchronous">
<quartz:inbound-endpoint jobName="integration" repeatInterval="86400000" responseTimeout="10000" doc:name="Quartz" >
<quartz:event-generator-job/>
</quartz:inbound-endpoint>
<component class="RequestFeeder" doc:name="RequestFeeder"/>
<foreach collection="#[payload]" doc:name="For Each">
<flow-ref name="createFlow" doc:name="createFlow"/>
<flow-ref name="queueFlow" doc:name="queueFlow"/>
<flow-ref name="statusCheckFlow" doc:name="statusCheckFlow"/>
<flow-ref name="resultsFlow" doc:name="resultsFlow"/>
<flow-ref name="sftpFlow" doc:name="sftpFlow"/>
<logger message="RequestType #[flowVars['rqstType']] complete" level="INFO" doc:name="Done"/>
</foreach>
<logger message="ALL 15 REQUESTS HAVE BEEN PROCESSED" level="INFO" doc:name="Logger"/>
</flow>
I want to process them in parallel, i.e. execute the same 4 flow-refs in parallel for all 15 requests coming in the list. This seems simple, but I haven't been able to figure it out yet. Any help appreciated.
An alternative to the scatter-gather approach is to simply split the collection and use a VM queue for the items in the list. This method can be simpler if you don't need to wait for and collect all 15 results, and it will still work if you do.
Try something like this. Mule automatically uses a thread pool (see the Mule threading documentation) to run your flow, so the requestProcessor flow below will process your requests in parallel.
<flow name="scheduleRequests">
<quartz:inbound-endpoint jobName="integration" repeatInterval="86400000" responseTimeout="10000" doc:name="Quartz" >
<quartz:event-generator-job/>
</quartz:inbound-endpoint>
<component class="RequestFeeder" doc:name="RequestFeeder"/>
<collection-splitter />
<vm:outbound-endpoint path="requests" />
</flow>
<flow name="requestProcessor">
<vm:inbound-endpoint path="requests" />
<flow-ref name="createFlow" doc:name="createFlow"/>
<flow-ref name="queueFlow" doc:name="queueFlow"/>
<flow-ref name="statusCheckFlow" doc:name="statusCheckFlow"/>
<flow-ref name="resultsFlow" doc:name="resultsFlow"/>
<flow-ref name="sftpFlow" doc:name="sftpFlow"/>
</flow>
I reckon you still want those four flows to run sequentially, right?
If that were not the case you could always change the threading profile.
Another thing you could do is wrap the four flows in an async scope, although you may need a processor change.
In any event, I think you'll be better off using the scatter-gather component:
https://developer.mulesoft.com/docs/display/current/Scatter-Gather
https://www.mulesoft.com/exchange#!/scatter-gather-flow-control
Which, without needing the foreach scope, will split the list and execute each item in a different thread. You can define how many threads you want to run in parallel (so you don't just spin off new threads, you use a pool).
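If you want to bound that pool rather than rely on the defaults, the Mule 3 scatter-gather accepts an optional threading-profile child element. A rough sketch (route1/route2 are placeholders and the attribute values are only illustrative):
<scatter-gather>
    <threading-profile maxThreadsActive="4" poolExhaustedAction="WAIT"/>
    <flow-ref name="route1"/>
    <flow-ref name="route2"/>
</scatter-gather>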
One final note, though: scatter-gather is meant to aggregate the results of all the processed items. I reckon you could change that with a custom aggregation strategy, but I'm not sure really; please take a look at the docs for that.
HTH
You say 4 flows, but the list contains 5 flow-refs. If you want all the flows executed in sequence, but each item in the collection processed in parallel, you will want a splitter followed by a separate VM flow containing all (4/5) flows, as explained here: https://support.mulesoft.com/s/article/Concurrently-processing-Collection-and-getting-the-results.
If you want the flows inside the loop to execute in parallel, then you would choose a Scatter-Gather component.
It is important to be clear which of the two you want to achieve, as the solutions are very different. The basic difference is: in Scatter-Gather a single message is sent to multiple recipients for processing in parallel, while in Splitter-Aggregator a single message is split into multiple sub-messages which are processed individually and then aggregated. See: http://muthurajud.blogspot.com/2016/07/eai-patterns-scattergather-versus.html
Mule's Scatter-Gather component makes parallel processing easy. A simple example would be the following:
<scatter-gather>
    <flow-ref name="flow1" />
    <flow-ref name="flow2" />
    <flow-ref name="flow3" />
</scatter-gather>
So, the flows you want to execute in parallel can be kept inside the scatter-gather scope.

Limiting HTTP Listener Active Threads to control number of concurrent Mule flow instances

I am trying to test how to limit the number of concurrent incoming HTTP requests.
So I tried to simulate the following scenario.
I have created a simple flow with
1. An HTTP Listener as the message source
2. A Groovy script that sleeps for 15 seconds to introduce a delay
3. A Set Payload with "Hello World"
So any single request will have a minimum response time of 15 seconds. Then, in order to limit the number of active threads (i.e. to control concurrent requests / processing threads), I have set maxThreadsActive to 1. So ideally only 1 concurrent thread would be allowed to process the flow.
Now when I fire a simple GET with 1 request using Apache Benchmark, I get a response time of 15 seconds, which is fine. When I increase the number of concurrent requests to 2, I still get a response time of 15 seconds; I am expecting it to be 30 seconds.
I see this behaviour up to 9 concurrent requests. Beyond 9, from the 10th request onwards, requests are placed in the waiting queue.
So can an expert please explain how I can limit the number of active threads to 1, and why the number of concurrent requests is capped at 9? (In JConsole I see 9 SelectorRunner threads, which I assume are linked to this.)
Below is the simple flow.
<http:listener-config name="HTTP_Listener_Configuration" host="localhost" port="8082" doc:name="HTTP Listener Configuration">
    <http:worker-threading-profile maxThreadsActive="1" />
</http:listener-config>
<flow name="getting-sartedFlow">
    <http:listener config-ref="HTTP_Listener_Configuration" path="/" doc:name="HTTP" />
    <scripting:component doc:name="Groovy">
        <scripting:script engine="Groovy">
            <![CDATA[Thread.currentThread().sleep((long)(15000));]]>
        </scripting:script>
    </scripting:component>
    <set-payload value="Hello World" doc:name="Set Payload" />
</flow>
As answered in the forum, you need to define a poolExhaustedAction for the worker-threading-profile. If you don't, the default one, which is RUN, will be used, and that explains the behaviour you're seeing. From what I understand, you should use WAIT.
<http:worker-threading-profile maxThreadsActive="1" poolExhaustedAction="WAIT"/>

How to call private flow execution from main flow when Quartz scheduler is configured?

<flow name="Flow1">
<quartz:inbound-endpoint jobName="ReadQ1" cronExpression="* 30 15 * * ?">
<quartz:endpoint-polling-job>
<quartz:job-endpoint address="jms://Q1"/>
</quartz:endpoint-polling-job>
</quartz:inbound-endpoint>
<component>
<singleton-object class="MyComponenet"/>
</component>
<choice>
<when expression="payload==200" evaluator="groovy">
<flow-ref name="Flow2"/>
</when>
</choice>
</flow>
<flow name="Flow2">
<jms:inbound-endpoint queue="Q2"/>
<component class="AnotherComponent"/>
<jms:outbound-endpoint queue="Q3"/>
</flow>
I expect Flow1 to execute at the defined Quartz schedule (15:30).
Based on the payload returned from MyComponent, I use a flow-ref to execute Flow2.
But Flow2 executes even before Flow1 is triggered.
How do I implement the flows so that Flow2 is always called from Flow1?
If you use flow-ref in Flow1, then make Flow2 a sub-flow and remove the JMS inbound endpoint from it:
<sub-flow name="Flow2">
<component class="AnotherComponent"/>
<jms:outbound-endpoint queue="Q3"/>
</sub-flow>
Likewise, if you want to call any other flow (e.g. a Flow3) via flow-ref, convert it to a sub-flow too.
Another option is to replace your flow-ref call with <jms:outbound-endpoint queue="Q2"/> and keep your Flow2 as it is in your example.
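In Flow1 that second option would look roughly like this (a sketch reusing the choice block from the question):
<choice>
    <when expression="payload==200" evaluator="groovy">
        <!-- Publish to Q2 instead of calling Flow2 directly;
             Flow2's jms:inbound-endpoint on Q2 will pick the message up -->
        <jms:outbound-endpoint queue="Q2"/>
    </when>
</choice>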
Actually, Flow2 is not private. To make it private you have to remove the inbound endpoint.
Without taking that approach, the flow won't be callable from a flow-ref element.
With that said, you can use 3 approaches:
1. Sub-flow (introduced in Mule 3.2 and currently recommended): it behaves like a subroutine, sharing the context of the invoking flow (e.g. exception strategy and threading pool).
2. Private flow (for Mule 3.1.x): it creates a new flow with its own context that handles the message generated by the invoking flow.
3. Publish the message to a queue (e.g. the JMS Q2 or a VM one) and consume it from your current flow.
You can declare Flow2's initial state as "stopped":
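A sketch of what that looks like, reusing Flow2 from the question (initialState is the relevant attribute; the rest is unchanged):
<flow name="Flow2" initialState="stopped">
    <jms:inbound-endpoint queue="Q2"/>
    <component class="AnotherComponent"/>
    <jms:outbound-endpoint queue="Q3"/>
</flow>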
This will prevent Flow2 from running before Flow1 (or from running at all until it is started).
Then you can programmatically start it with a groovy script:
if (muleContext.registry.lookupFlowConstruct('Flow2').isStopped())
{
    muleContext.registry.lookupFlowConstruct('Flow2').start()
}
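Inside Flow1 you would typically run that script from a scripting component, something like this (the doc:name is just a label I picked):
<scripting:component doc:name="Start Flow2">
    <scripting:script engine="Groovy"><![CDATA[
        if (muleContext.registry.lookupFlowConstruct('Flow2').isStopped())
        {
            muleContext.registry.lookupFlowConstruct('Flow2').start()
        }
    ]]></scripting:script>
</scripting:component>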
Once you start it, the JMS queue will start polling.
I hope this helps.

How to log or handle original payload message in Mule

<flow>
    <jms:inbound-endpoint queue="InputQueue"/>
    <component class="MyComponent"/>
    <choice>
        <when expression="/Response/Status/Success" evaluator="xpath">
            <jms:outbound-endpoint queue="LogInputQueue"/>
            <jms:outbound-endpoint queue="SuccessQueue"/>
        </when>
        <when expression="/Response/Status/Error" evaluator="xpath">
            <jms:outbound-endpoint queue="LogInputQueue"/>
            <jms:outbound-endpoint queue="ErrorQueue"/>
        </when>
        <otherwise>
            <jms:outbound-endpoint queue="LogInputQueue"/>
            <jms:outbound-endpoint queue="ExceptionQueue"/>
        </otherwise>
    </choice>
</flow>
In this flow, MyComponent returns either a success response, an error response, or an exception.
I need to log the original message from InputQueue to LogInputQueue in all cases. How do I achieve this in my flow?
Did you mean to create a log file? In that case, you have to configure log4j (used via slf4j) and use the line
<logger level="log_level" category="your_category" message="#[message:payload]"/>
where log_level is your desired logging level ("error", "debug", "info", etc.),
your_category is your category of log defined in the log4j.properties file (it is actually optional),
and message="#[expression:value]" is your message to be logged, given as an expression:scope:key combination. Scope is optional here.
Using log4j or slf4j you can log the payload.
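As a concrete example (the category and level here are just placeholders I picked):
<logger level="INFO" category="com.example.myapp" message="Original payload: #[message.payload]" doc:name="Log Payload"/>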
Alternatively, the Logger component with #[payload] will log the payload to the console.
Since you need to send the original message from InputQueue to LogInputQueue in all the cases, as you mentioned, what you need to do is:
1. Remove <jms:outbound-endpoint queue="LogInputQueue"/> from all the cases in the choice block.
2. Store the original payload from InputQueue in a variable, by placing a Variable component just after the JMS inbound endpoint.
3. At the end of the flow, after the choice router, set the payload from the variable in which you stored the original payload, using a Set Payload component.
4. Now put <jms:outbound-endpoint queue="LogInputQueue"/> after your Set Payload component.
This way you will be able to send the original payload to the LogInputQueue as per your requirement, as sketched below.
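A rough sketch of the reworked flow following those steps (the flow name and the originalPayload variable name are my own choices; the rest reuses the elements from the question):
<flow name="logOriginalPayloadFlow">
    <jms:inbound-endpoint queue="InputQueue"/>
    <!-- Step 2: keep a copy of the original message before MyComponent changes it -->
    <set-variable variableName="originalPayload" value="#[message.payload]" doc:name="Save Original"/>
    <component class="MyComponent"/>
    <choice>
        <when expression="/Response/Status/Success" evaluator="xpath">
            <jms:outbound-endpoint queue="SuccessQueue"/>
        </when>
        <when expression="/Response/Status/Error" evaluator="xpath">
            <jms:outbound-endpoint queue="ErrorQueue"/>
        </when>
        <otherwise>
            <jms:outbound-endpoint queue="ExceptionQueue"/>
        </otherwise>
    </choice>
    <!-- Steps 3 and 4: restore the original payload and send it to the log queue -->
    <set-payload value="#[flowVars.originalPayload]" doc:name="Restore Original"/>
    <jms:outbound-endpoint queue="LogInputQueue"/>
</flow>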