I have a loop that runs three flow references in order; at least, that is the plan. Running it in the debugger, processing takes place in the following unexpected order:
the first flow-ref (A)
the second flow-ref (B)
the first component of flow A
the third flow-ref (C)
the first component of flow B
the second component of flow A
the first component of flow C
the second component of flow B
the third component of flow A
...now things blow up (in the first component of flow C), since the payload is not what that component expects
I changed the processing strategy from implicit to 'synchronous' with no noticeable change.
What is going on?
<flow name="Loop_until_successfull" doc:name="Loop_until_successfull" processingStrategy="synchronous">
<flow-ref name="A" doc:name="Go to A"></flow-ref>
<flow-ref name="B" doc:name="Go to B"></flow-ref>
<flow-ref name="C" doc:name="Go to C"></flow-ref>
</flow>
Changing the "Loop_until_successful" flow to synchronous only ensures that calls to "Loop_until_successful" itself are processed synchronously, not any other flow called by it. You need to change each of the flows called by "Loop_until_successful" to be processed synchronously as well, to ensure you get the response back from each call before you make the call to the next flow.

If you do this, then "Loop_until_successful" (I'll call it L.U.S. from now on) calls A, waits for a response, then calls B, waits for a response, then calls C. The way it is configured now, L.U.S. calls A and then moves immediately on to B using the payload it already has, rather than waiting for the response from A.
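For example, a minimal sketch, assuming flows A, B, and C are otherwise unchanged:

<!-- each referenced flow is also declared synchronous, so the flow-ref
     in L.U.S. blocks until the called flow returns -->
<flow name="A" processingStrategy="synchronous">
    <!-- components of flow A -->
</flow>
<flow name="B" processingStrategy="synchronous">
    <!-- components of flow B -->
</flow>
<flow name="C" processingStrategy="synchronous">
    <!-- components of flow C -->
</flow>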
I created a small application (Spring Boot and Camunda) to implement an order process. The Order-Service receives the new order via REST and calls the start event of the BPMN order workflow. The order process contains two asynchronous JMS calls (customer check and warehouse stock check). If both checks return, the order process should continue.
The start event is called within a Spring REST controller:
ProcessInstance processInstance =
runtimeService.startProcessInstanceByKey("orderService", String.valueOf(order.getId()));
The Send Task (e.g. the customer check) sends the JMS message into an asynchronous queue.
The answer from this service is caught by another Spring component, which then tries to send an intermediate message:
runtimeService.createMessageCorrelation("msgReceiveCheckCustomerCredibility")
    .processInstanceBusinessKey(response.getOrder().getBpmnBusinessKey())
    .setVariable("resultOrderCheckCustomterCredibility", response)
    .correlate();
I deactivated the warehouse service to see if the order process waits for the arrival of the second call, but instead I get this exception:
1115 06:33:08.564 WARN [o.c.b.e.jobexecutor] ENGINE-14006 Exception while executing job 67d2cc24-0769-11ea-933a-d89ef3425300:
org.springframework.messaging.MessageHandlingException: nested exception is org.camunda.bpm.engine.MismatchingMessageCorrelationException: ENGINE-13031 Cannot correlate a message with name 'msgReceiveCheckCustomerCredibility' to a single execution. 4 executions match the correlation keys: CorrelationSet [businessKey=1, processInstanceId=null, processDefinitionId=null, correlationKeys=null, localCorrelationKeys=null, tenantId=null, isTenantIdSet=false]
This is my process. I cannot see a way to post my bpmn file :-(
Why can't it correlate with the message name and the business key? The JMS queues are empty, and there are no other messages with the same businessKey waiting.
Thanks!
Just to narrow down the problem: do a runtimeService eventSubscription query before you try to correlate, and check which subscriptions are actually waiting. Maybe you have a duplicate message name? Maybe you (accidentally) have another instance of the same process running? Once you have identified the subscriptions, you could just notify the execution directly without using the correlation builder.
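For example, a sketch of such a query, assuming the runtimeService bean and the response object from the question (message and variable names are taken from the code above):

import java.util.Collections;
import java.util.List;
import org.camunda.bpm.engine.runtime.EventSubscription;

// list every waiting subscription for this message name
List<EventSubscription> subscriptions = runtimeService.createEventSubscriptionQuery()
        .eventType("message")
        .eventName("msgReceiveCheckCustomerCredibility")
        .list();

for (EventSubscription subscription : subscriptions) {
    // shows which process instances / executions are actually waiting
    System.out.println(subscription.getProcessInstanceId()
            + " -> " + subscription.getExecutionId());
}

// once you have identified the right execution, notify it directly
// instead of going through the correlation builder:
runtimeService.messageEventReceived(
        "msgReceiveCheckCustomerCredibility",
        subscriptions.get(0).getExecutionId(),
        Collections.singletonMap("resultOrderCheckCustomterCredibility", response));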
I'm getting an interesting exception. I'm using a Collection Splitter to split a collection. It splits the collection fine, but when processing returns to the main flow and the flow ends, it throws this exception. Wondering if you've seen it before:
ERROR 2018-12-07 16:06:26,052 [[ahld_kpi_enabler].HTTP_Listener_Configuration.worker.01] org.mule.exception.DefaultMessagingExceptionStrategy: Caught exception in Exception Strategy: java.lang.UnsupportedOperationException: getPayloadAsBytes(), There has been an attempt to directly access the payload of a message collection, which is unsupported.
Please retrieve the value from messageList or use getPayload(DataType.BYTE_ARRAY_DATA_TYPE)
java.lang.RuntimeException: java.lang.UnsupportedOperationException: getPayloadAsBytes(), There has been an attempt to directly access the payload of a message collection, which is unsupported.
Please retrieve the value from messageList or use getPayload(DataType.BYTE_ARRAY_DATA_TYPE)
The flow is triggered via HTTP and it makes outbound HTTP calls.
There's no aggregation happening after the collection split; it's merely used to split the collection, and for each object in the collection subsequent calls/actions are taken.
At the end of your flow, when using a collection-splitter, your payload is going to be a Mule message collection, and as you're using HTTP, Mule is going to try to serialise that as the HTTP response, which it can't.
So you can either aggregate your payload and then set the payload to something to return, or even to #[null].
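For instance, a minimal sketch at the end of the main flow:

<!-- replace the message collection before the flow ends,
     so the HTTP listener has a payload it can serialise -->
<set-payload value="#[null]" doc:name="Clear collection payload"/>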
Or you can put your collection-splitter and the logic after that in a separate flow - wrapped in an enricher:
<enricher target="#[flowVars.someVar]">
    <flow-ref name="myCollectionSplitterLogicFlow"/>
</enricher>
Or you can just use foreach, which I would personally advise, as splitters have been removed in Mule 4.
If you have nested collections, you can nest any number of foreach scopes:
<foreach collection="#[payload]">
    <foreach collection="#[payload.nestedCollection]">
    </foreach>
</foreach>
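Applied to the scenario in the question (one outbound HTTP call per item), a sketch could look like this; the request config name and path are hypothetical:

<foreach collection="#[payload]">
    <!-- one outbound call per item in the collection -->
    <http:request config-ref="HTTP_Request_Configuration" path="/process" method="POST" doc:name="Per-item call"/>
</foreach>
<!-- foreach restores the collection afterwards, so still replace
     the payload before the flow ends -->
<set-payload value="#['done']" doc:name="HTTP response body"/>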
I would like to route errors from my http:request to my main (or secondary) error handler in Anypoint Studio 7.
There does not seem to be a clear way of doing it.
And the documentation has no guidance for this specific case.
In my case it is necessary: I need to know about the failure and send a signal to another service communicating the error response, like: connection_timeout.
You can catch the errors you want using an error-handler in the flow where you are executing the http:request. If you do not catch the error, it will bubble up to the calling flow, and so on. If no error-handler is configured, the default Mule one will be used, which basically just logs the message.
In Mule 4 you can catch all errors in your flow like so:
<flow name="retrieveMatchingOrders">
<http:request config-ref="customersConfig" path="/customer">
</http:request>
<error-handler>
<on-error-continue>
<!-- error handling logic -->
</on-error-continue>
</error-handler>
</flow>
An on-error-continue will execute and use the result of that execution as the result of its owner (as if the owner had actually completed execution successfully). Any transactions at this point would be committed as well.
So in there, you can set the payload to whatever message you want returned.
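For instance, a minimal sketch (the message text is arbitrary):

<on-error-continue>
    <!-- the flow returns this payload instead of failing -->
    <set-payload value="#['Upstream call failed: ' ++ error.description]" doc:name="Set error response"/>
</on-error-continue>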
There is also an on-error-propagate handler and a try scope; more information on those is available here: https://docs.mulesoft.com/mule-runtime/4.1/intro-error-handlers
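For completeness, a minimal sketch of a try scope with on-error-propagate, reusing the customersConfig from above (the error type is one of the HTTP types listed below):

<try>
    <http:request config-ref="customersConfig" path="/customer"/>
    <error-handler>
        <on-error-propagate type="HTTP:TIMEOUT">
            <!-- signal the other service here; the error is then
                 rethrown to the owning flow -->
        </on-error-propagate>
    </error-handler>
</try>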
All errors thrown in Mule contain metadata, including a TYPE. If you need to catch specific HTTP errors, you can configure your error-handler like so:
<error-handler>
    <on-error-continue type="HTTP:TIMEOUT">
        <!-- error handling logic -->
    </on-error-continue>
</error-handler>
Here is a list of all specific HTTP: errors thrown by the HTTP module:
HTTP:UNSUPPORTED_MEDIA_TYPE
HTTP:CONNECTIVITY
HTTP:INTERNAL_SERVER_ERROR
HTTP:METHOD_NOT_ALLOWED
HTTP:NOT_ACCEPTABLE
HTTP:TOO_MANY_REQUESTS
HTTP:SERVICE_UNAVAILABLE
HTTP:CLIENT_SECURITY
HTTP:FORBIDDEN
HTTP:UNAUTHORIZED
HTTP:RETRY_EXHAUSTED
HTTP:NOT_FOUND
HTTP:BAD_REQUEST
HTTP:PARSING
HTTP:TIMEOUT
HTTP:SECURITY
Each module's documentation should contain all specific error types thrown by that module. Here is the HTTP one as an example:
https://docs.mulesoft.com/connectors/http/http-documentation#throws
And here is a full list of core error types you can catch, like EXPRESSION for example:
https://docs.mulesoft.com/mule-runtime/4.1/mule-error-concept
Can someone please explain how a Mule processing strategy works when one flow is calling another one with flow-ref?
Case 1.
Let's say we have 2 flows, flowA and flowB, with processing strategies procA and procB. Both are asynchronous, but procA allows 10 threads while procB allows only 1.
<queued-asynchronous-processing-strategy name="procA" maxThreads="10" doc:name="procA"/>
<queued-asynchronous-processing-strategy name="procB" maxThreads="1" doc:name="procB"/>
flowA is reading from a queue and calling flowB with
<flow-ref name="flowB" doc:name="flowB"/>
Will another queue be created in this case between flowA and flowB, so that all the flowB calls are executed in a single thread, one by one?
Or will flowB follow the flowA strategy, with possibly 10 messages processed at the same time?
Case 2.
flowA is a synchronous flow reading from a queue.
It calls an asynchronous flowB that allows at most 1 thread:
<queued-asynchronous-processing-strategy name="procB" maxThreads="1" doc:name="procB"/>
The async block has its own strategy, procC, with 10 threads allowed:
<queued-asynchronous-processing-strategy name="procC" maxThreads="10" doc:name="procC"/>
flowA is calling flowB like this:
<async doc:name="Async" processingStrategy="procC">
    <flow-ref name="flowB" doc:name="flowB"/>
</async>
The question is similar:
Will another queue be created in this case between the async block and flowB, so that all the flowB calls are executed in a single thread, one by one?
Or will flowB follow the procC strategy, with 10 messages processed at the same time?
Case 1.
Another queue with 1 thread will be created for flowB:
VM receiver pool thread -> SEDA thread from procA -> SEDA thread from procB
Case 2.
As above, another queue with 1 thread will be created for flowB:
VM receiver pool thread -> SEDA thread from procC -> SEDA thread from procB
Flow processing strategies are covered in the Mule documentation, but I didn't find that overly useful. It is straightforward to set these flows up in Anypoint Studio and use Loggers to determine which thread is running at a particular time.
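For example, a logger like this (a minimal MEL sketch) placed inside each flow prints the executing thread and makes the hand-offs between the receiver pool and the SEDA pools visible:

<logger message="flowB on thread: #[Thread.currentThread().getName()]" level="INFO" doc:name="Thread Logger"/>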
My current scenario:
I have 10000 records as input to the batch.
As per my understanding, batch is only for record-by-record processing. Hence, I am transforming each record using a DataWeave component inside a batch step (note: I have not used any batch commit) and writing each record to a file. The reason for doing record-by-record processing is that if any particular record has invalid data, only that record fails and the rest are processed fine.
But in many of the blogs I see, they use a batch commit (with streaming) with a DataWeave component. As per my understanding, all the records will then be given to DataWeave in one shot, and if one record has invalid data, all 10000 records will fail (at DataWeave). Then the point of record-by-record processing is lost.
Is the above assumption correct, or am I thinking the wrong way?
That is the reason I am not using batch commit.
Now, as I said, I am sending each record to a file. Actually, I have the requirement of sending each record to 5 different CSV files, so currently I am using a Scatter-Gather component inside my batch step to send it to five different routes.
As you can see in the image, the input phase gives a collection of 10000 records, and each record is sent to 5 routes using Scatter-Gather.
Is the approach I am using fine, or is there a better design that can be followed?
Also, I have created a 2nd batch step to capture ONLY FAILED RECORDS, but with the current design I am not able to capture the failed records.
SHORT ANSWERS
Is the above assumption correct, or am I thinking the wrong way?
In short, yes, you are thinking the wrong way. Read my loooong explanation with an example to understand why; I hope you will appreciate it.
Also, I have created a 2nd batch step to capture ONLY FAILED RECORDS, but with the current design I am not able to capture the failed records.
You probably forgot to set max-failed-records="-1" (unlimited) on the batch job. The default is 0, so on the first failed record the batch will stop and not execute subsequent steps.
Is the approach I am using fine, or is there a better design that can be followed?
I think it makes sense if performance is essential for you and you can't cope with the overhead created by doing this operation in sequence.
If instead you can slow down a bit, it could make sense to do this operation in 5 different steps; you will lose parallelism, but you gain better control over failing records, especially if using batch commit.
MULE BATCH JOB IN PRACTICE
I think the best way to explain how it works is through an example.
Take into consideration the following case:
You have a batch job configured with max-failed-records="-1" (no limit):
<batch:job name="batch_testBatch" max-failed-records="-1">
In this process we input a collection composed of 6 strings:
<batch:input>
    <set-payload value="#[['record1','record2','record3','record4','record5','record6']]" doc:name="Set Payload"/>
</batch:input>
The processing is composed of 3 steps:
The first step just logs the record being processed, while the second step also logs it but throws an exception on record3 to simulate a failure:
<batch:step name="Batch_Step">
<logger message="-- processing #[payload] in step 1 --" level="INFO" doc:name="Logger"/>
</batch:step>
<batch:step name="Batch_Step2">
<logger message="-- processing #[payload] in step 2 --" level="INFO" doc:name="Logger"/>
<scripting:transformer doc:name="Groovy">
<scripting:script engine="Groovy"><![CDATA[
if(payload=="record3"){
throw new java.lang.Exception();
}
payload;
]]>
</scripting:script>
</scripting:transformer>
</batch:step>
The third step instead contains just the commit, with a commit size of 2:
<batch:step name="Batch_Step3">
<batch:commit size="2" doc:name="Batch Commit">
<logger message="-- committing #[payload] --" level="INFO" doc:name="Logger"/>
</batch:commit>
</batch:step>
Now you can follow me through the execution of this batch processing.
On start, all 6 records are processed by the first step, and the console log looks like this:
-- processing record1 in step 1 --
-- processing record2 in step 1 --
-- processing record3 in step 1 --
-- processing record4 in step 1 --
-- processing record5 in step 1 --
-- processing record6 in step 1 --
Step Batch_Step finished processing all records for instance d8660590-ca74-11e5-ab57-6cd020524153 of job batch_testBatch
Now things get more interesting: in step 2, record3 will fail because we explicitly throw an exception, but despite this the step will continue processing the other records. Here is how the log looks:
-- processing record1 in step 2 --
-- processing record2 in step 2 --
-- processing record3 in step 2 --
com.mulesoft.module.batch.DefaultBatchStep: Found exception processing record on step ...
Stacktrace
....
-- processing record4 in step 2 --
-- processing record5 in step 2 --
-- processing record6 in step 2 --
Step Batch_Step2 finished processing all records for instance d8660590-ca74-11e5-ab57-6cd020524153 of job batch_testBatch
At this point, despite the failed record in this step, batch processing will continue because the parameter max-failed-records is set to -1 (unlimited) and not to the default value of 0.
All the successful records will now be passed to step3; this is because, by default, the accept-policy parameter of a step is set to NO_FAILURES (other possible values are ALL and ONLY_FAILURES).
Now step3, which contains the commit phase with a size equal to 2, will commit the records two by two:
-- committing [record1, record2] --
-- committing [record4, record5] --
Step: Step Batch_Step3 finished processing all records for instance d8660590-ca74-11e5-ab57-6cd020524153 of job batch_testBatch
-- committing [record6] --
As you can see, this confirms that record3, which failed, was not passed to the next step and therefore was not committed.
Starting from this example, I think you can imagine and test more complex scenarios; for example, after the commit you could have another step that processes only the failed records, to make an administrator aware of the failure by mail.
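A minimal sketch of such a step (the step name and notification logic are hypothetical), relying on the ONLY_FAILURES accept policy mentioned above:

<batch:step name="Batch_Step4" accept-policy="ONLY_FAILURES">
    <!-- only records that failed in earlier steps reach this step -->
    <logger message="-- record #[payload] failed, notifying administrator --" level="WARN" doc:name="Logger"/>
</batch:step>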
Beyond that, you can always use external storage to store more advanced info about your records, as you can read in my answer to this other question.
Hope this helps