Anypoint MQ messages remain in-flight and are not being processed - Mule

I have a flow which submits around 10-20 Salesforce bulk query job details to Anypoint MQ to be processed asynchronously.
I am using a normal queue, not a FIFO queue, and I want to process one message at a time.
My subscriber configuration is given below. I am setting this whopping acknowledgement timeout of 15 minutes because, at most, it has taken 15 minutes for a job to change status from jobUpload to JobComplete.
Mule runtime: 4.4
MQ connector version: 3.2.0
<anypoint-mq:subscriber doc:name="Subscribering Bulk Query Job Details"
config-ref="Anypoint_MQ_Config"
destination="${anyPointMq.name}"
acknowledgementTimeout="15"
acknowledgementTimeoutUnit="MINUTES">
<anypoint-mq:subscriber-type >
<anypoint-mq:prefetch maxLocalMessages="1" />
</anypoint-mq:subscriber-type>
</anypoint-mq:subscriber>
Anypoint MQ Connector Configuration
<anypoint-mq:config name="Anypoint_MQ_Config" doc:name="Anypoint MQ Config" doc:id="ce3aaed9-dcba-41bc-8c68-037c5b1420e2">
<anypoint-mq:connection clientId="${secure::anyPointMq.clientId}" clientSecret="${secure::anyPointMq.clientSecret}" url="${anyPointMq.url}">
<reconnection>
<reconnect frequency="3000" count="3" />
</reconnection>
<anypoint-mq:tcp-client-socket-properties connectionTimeout="30000" />
</anypoint-mq:connection>
</anypoint-mq:config>
Subscriber flow
<flow name="sfdc-bulk-query-job-subscription" doc:id="7e1e23d0-d7f1-45ed-a609-0fb35dd23e6a" maxConcurrency="1">
<anypoint-mq:subscriber doc:name="Subscribering Bulk Query Job Details" doc:id="98b8b25e-3141-4bd7-a9ab-86548902196a" config-ref="Anypoint_MQ_Config" destination="${anyPointMq.sfPartnerEds.name}" acknowledgementTimeout="${anyPointMq.ackTimeout}" acknowledgementTimeoutUnit="MINUTES">
<anypoint-mq:subscriber-type >
<anypoint-mq:prefetch maxLocalMessages="${anyPointMq.prefecth.maxLocalMsg}" />
</anypoint-mq:subscriber-type>
</anypoint-mq:subscriber>
<json-logger:logger doc:name="INFO - Bulk Job Details have been fetched" doc:id="b25c3850-8185-42be-a293-659ebff546d7" config-ref="JSON_Logger_Config" message='#["Bulk Job Details have been fetched for " ++ payload.object default ""]'>
<json-logger:content ><![CDATA[#[output application/json ---
payload]]]></json-logger:content>
</json-logger:logger>
<set-variable value="#[p('serviceName.sfdcToEds')]" doc:name="ServiceName" doc:id="f1ece944-0ed8-4c0e-94f2-3152956a2736" variableName="ServiceName"/>
<set-variable value="#[payload.object]" doc:name="sfObject" doc:id="2857c8d9-fe8d-46fa-8774-0eed91e3a3a6" variableName="sfObject" />
<set-variable value="#[message.attributes.properties.key]" doc:name="key" doc:id="57028932-04ab-44c0-bd15-befc850946ec" variableName="key" />
<flow-ref doc:name="bulk-job-status-check" doc:id="c6b9cd40-4674-47b8-afaa-0f789ccff657" name="bulk-job-status-check" />
<json-logger:logger doc:name="INFO - subscribed bulk job id has been processed successfully" doc:id="7e469f92-2aff-4bf4-84d0-76577d44479a" config-ref="JSON_Logger_Config" message='#["subscribed bulk job id has been processed successfully for salesforce " ++ vars.sfObject default "" ++ " object"]' tracePoint="END"/>
</flow>
After the bulk query job subscriber, I check the status of the job up to 5 times, at an interval of 1 minute, inside an until-successful scope. It generally exhausts all 5 attempts; the message is then subscribed again and the same process repeats until the job completes. I have seen the until-successful scope get exhausted more than once for a single job.
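For reference, the retry described above corresponds to an until-successful scope along these lines (a minimal sketch only: the 5 retries and 1-minute interval come from the description, the flow-ref target reuses the bulk-job-status-check flow from the snippet above, and it assumes the status check raises an error while the job is still incomplete):
<until-successful maxRetries="5" millisBetweenRetries="60000" doc:name="Until Successful - job status check">
    <!-- assumed: this flow raises an error while the job status is not yet JobComplete, triggering the next retry -->
    <flow-ref name="bulk-job-status-check" />
</until-successful>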
Once the job's status changes to jobComplete, I fetch the result and send it to an AWS S3 bucket via a MuleSoft system API. Here I also use retry logic, because with the large volume of data the first call always fails with this message:
HTTP POST on resource 'https://****//dlb.lb.anypointdns.net:443/api/sys/aws/s3/databricks/object' failed: Remotely closed.
On the second retry it gets a successful response from the S3 bucket system API.
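That retry is essentially another until-successful around the HTTP call; a sketch under assumptions (the requester config name S3_System_API_Config and the retry values are illustrative, the path comes from the error message above):
<until-successful maxRetries="2" millisBetweenRetries="5000" doc:name="Send job result to S3 with retry">
    <!-- assumed requester config; the first attempt may fail with "Remotely closed" and be retried -->
    <http:request method="POST" config-ref="S3_System_API_Config" path="/api/sys/aws/s3/databricks/object" />
</until-successful>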
Now the main problem:
Though I am using a normal queue, I have noticed that messages remain in flight for an unbounded amount of time and are never picked up by the Mule flow/subscriber. The screenshot below shows an example: there were 7 messages in flight that were not picked up even after many days.
I have set maxConcurrency and the prefetch maxLocalMessages to 1, yet more than one message is being taken off the queue. Please help me understand this.

Related

Mulesoft batch crashes

I am new to MuleSoft and have a question regarding batch processing. If a batch process crashes partway through and some records were already processed, what happens when the batch process starts again? Duplicate data?
It will try to continue processing the pending data in the batch queues, unless that data was corrupted by the crash.
The answer depends upon a few things. First, unless you configure the batch job scope to allow record-level failures, the entire job will stop when a record fails. Second, if you do configure it to allow failures, then the batch will continue to process all records. In that case, each batch step can be configured to accept only successful records (the default), only failed records, or all records.
So the answer to your question depends upon configuration.
As far as duplicate data goes, that part is entirely up to you.
If you have the job stop on failure, then when you restart it, the set of records you provide at that time will be the ones processed. If you submit records that have been processed once before, they will be processed again. You can filter either upon reentry to the batch job, or as the records are successfully processed.
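For example, filtering on reentry can be done with an accept expression on the first batch step; a hedged sketch (Mule 4), where the payload.processed flag is an assumption about how your records mark prior processing:
<batch:job jobName="reprocessingJob">
    <batch:process-records>
        <!-- assumed flag: only records not yet marked as processed enter this step -->
        <batch:step name="skipAlreadyProcessed" acceptExpression="#[payload.processed != true]">
            <logger level="INFO" message="Processing a record that has not been processed before" />
        </batch:step>
    </batch:process-records>
</batch:job>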
Before answering your question I would like to know a couple of things.
a. What do you mean by "batch crashes"?
Are you saying that during batch processing the JVM died and the batch started again? Or that there was a failure while processing some of the records in the batch?
a.1 --> If the JVM died, then the entire program halts. You need to restart the program, which results in processing the same set of records again.
a.2 --> To handle failed records inside a batch, you can do the three steps below.
Set the batch job to continue irrespective of any error:
<batch:job jobName="Batch1" maxFailedRecords="-1">
In the batch job create 3 batch steps:
a. processRecord - processes each record coming off the batch queue.
b. ProcessSuccessful - if there is no exception in a., the record goes to batch step b.
c. processFailure - if there is an exception in a., the record goes to batch step c.
The entire sample code is shown below.
<batch:job jobName="batchJob" maxFailedRecords="-1" >
<batch:process-records>
<batch:step name="processRecord" acceptPolicy="ALL" >
log.info("process any record COMES IN THE STEP");
</batch:step>
<batch:step name="ProcessSuccessful"
acceptPolicy="NO_FAILURES">
log.info("process only SUCCESSFUL record")
</batch:step>
<batch:step name="processFailure"
acceptPolicy="ONLY_FAILURES">
log.info("process only FAILURE record");
</batch:step>
</batch:process-records>
<batch:on-complete>
log.info("on complete phase , log the successful and
failure count");
</batch:on-complete>
</batch:job>
N.B --> the code as per Mule 4.0.

Camunda - Intermediate message event cannot correlate to a single execution

I created a small application (Spring Boot and Camunda) to process an order. The order service receives the new order via REST and starts the BPMN order workflow. The order process contains two asynchronous JMS calls (customer check and warehouse stock check). If both checks return, the order process should continue.
The start event is called within a Spring REST controller:
ProcessInstance processInstance =
    runtimeService.startProcessInstanceByKey("orderService", String.valueOf(order.getId()));
The send task (e.g. the customer check) sends the JMS message into an asynchronous queue.
The answer from this service is caught by another Spring component, which then tries to correlate an intermediate message:
runtimeService.createMessageCorrelation("msgReceiveCheckCustomerCredibility")
    .processInstanceBusinessKey(response.getOrder().getBpmnBusinessKey())
    .setVariable("resultOrderCheckCustomterCredibility", response)
    .correlate();
I deactivated the warehouse service to see if the order process waits for the arrival of the second call, but instead I get this exception:
1115 06:33:08.564 WARN [o.c.b.e.jobexecutor] ENGINE-14006 Exception while executing job 67d2cc24-0769-11ea-933a-d89ef3425300:
org.springframework.messaging.MessageHandlingException: nested exception is org.camunda.bpm.engine.MismatchingMessageCorrelationException: ENGINE-13031 Cannot correlate a message with name 'msgReceiveCheckCustomerCredibility' to a single execution. 4 executions match the correlation keys: CorrelationSet [businessKey=1, processInstanceId=null, processDefinitionId=null, correlationKeys=null, localCorrelationKeys=null, tenantId=null, isTenantIdSet=false]
This is my process. I cannot see a way to post my BPMN file :-(
Why can it not correlate using the message name and the business key? The JMS queues are empty, and there are no other messages with the same businessKey waiting.
Thanks!
Just to narrow down the problem: run a runtimeService event-subscription query before you try to correlate, and check which subscriptions are actually waiting. Maybe you have a duplicate message name? Maybe you (accidentally) have another instance of the same process running? Once you have identified the subscriptions, you could just notify the execution directly without using the correlation builder.
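A minimal sketch of that diagnosis, and of notifying the execution directly, using the Camunda Java API (the message name and variable name come from the question; everything else is illustrative):
import java.util.Collections;
import java.util.List;
import org.camunda.bpm.engine.runtime.EventSubscription;

// list the message subscriptions currently waiting for this message name
List<EventSubscription> subscriptions = runtimeService.createEventSubscriptionQuery()
    .eventType("message")
    .eventName("msgReceiveCheckCustomerCredibility")
    .list();
for (EventSubscription s : subscriptions) {
    System.out.println(s.getProcessInstanceId() + " / " + s.getExecutionId());
}

// notify one matching execution directly, bypassing the correlation builder
runtimeService.messageEventReceived(
    "msgReceiveCheckCustomerCredibility",
    subscriptions.get(0).getExecutionId(),
    Collections.singletonMap("resultOrderCheckCustomterCredibility", response));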

Mule Batch Max Failures not working

I'm using a Mule batch flow to process files. As per the requirement, I should stop the batch step from processing further after 10 failures.
So I've configured max-failed-records="10", but I still see around 99 failures in the logger that is kept in the on-complete phase. The file the app receives has around 8657 rows, so 8657 records are loaded.
Logger in the on-complete phase:
<logger message="#['Failed Records'+payload.failedRecords]" level="INFO" doc:name="Logger"/>
The image below shows my flow:
It's the default behavior of Mule. As per the batch documentation, Mule loads 1600 records at once (16 threads x 100 records per block). Though max failures is set to 10, Mule will still process all of the already-loaded records; it just won't load the next blocks of records once the max-failure limit is reached.
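If the job needs to stop closer to the 10th failure, you can reduce how many records are in flight by lowering the block size, which is configurable since Mule 3.8. A sketch with an assumed job name; with block-size="10", 16 threads x 10 records per block means at most 160 records are loaded when the limit trips:
<batch:job name="fileProcessingBatch" max-failed-records="10" block-size="10">
    <batch:process-records>
        <batch:step name="processRecord">
            <!-- far fewer records are already loaded when the 10th failure occurs -->
            <logger message="#['Processing record: ' + payload]" level="INFO" doc:name="Logger" />
        </batch:step>
    </batch:process-records>
</batch:job>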
Hope this helps.

Mule ActiveMQ at a particular time interval

I have a scenario wherein I need to start receiving messages from a queue only after a particular time, irrespective of when the messages were placed in the queue.
For example, flow A processes some service calls and then places the message below in the queue:
{
    "filename": "blahblah.pdf"
}
Now flow B needs to start receiving the messages from the queue after 9 PM (or some other time) daily and then process them.
I wonder whether it is possible to achieve this scenario in Mule.
You can achieve this in MuleSoft using a Poll scope or the Quartz scheduler.
The code will be something like:
<!-- cron "0 0 21 * * ?" fires at 21:00 (9 PM) daily; the original example's "* * * * * ?" would fire every second -->
<quartz:inbound-endpoint jobName="ReadQIN"
    cronExpression="0 0 21 * * ?" doc:name="Quartz">
    <quartz:endpoint-polling-job>
        <quartz:job-endpoint address="jms://QIN" />
    </quartz:endpoint-polling-job>
</quartz:inbound-endpoint>

How to check the number of pending messages, or whether all messages have been processed, in ActiveMQ using a Mule flow

I have a requirement to stop the ActiveMQ connector once all messages in the queue have been processed. This needs to be done in a Mule flow.
As shown below, I have two connectors, one for reading and the other for writing on vci.staging.queue. I want to check whether all messages in the queue have been processed and, if so, disable the reader connector.
The piece of script below, which uses client.request from muleContext with the queue name and the reader or writer connector, always returns null for me.
Is there any way to get the number of pending messages in a queue, or to check whether all messages have been processed, so that the connector can be disabled?
<jms:activemq-connector name="jmsConnectorStagingQReaderNormal"
brokerURL="${mule.activemq.broker.read.normal.url}"
specification="1.1"
maxRedelivery="-1"
persistentDelivery="true"
numberOfConcurrentTransactedReceivers="${mule.activemq.concurrent.receivers}"
connectionFactory-ref="connectionFactory"
disableTemporaryReplyToDestinations="false">
</jms:activemq-connector>
<jms:activemq-connector name="jmsConnectorStagingQWriter"
brokerURL="${mule.activemq.broker.write.url}"
specification="1.1"
maxRedelivery="-1"
persistentDelivery="true"
numberOfConcurrentTransactedReceivers="${mule.activemq.concurrent.receivers}"
connectionFactory-ref="connectionFactory"
disableTemporaryReplyToDestinations="false">
</jms:activemq-connector>
<script:component>
<script:script engine="groovy">
if(muleContext.getRegistry().lookupConnector('jmsConnectorStagingQReaderNormal').isStarted()) {
if(muleContext.client.request("jms://vci.staging.queue?connector= jmsConnectorStagingQReaderNormal ", 5000) == null) {
muleContext.getRegistry().lookupConnector('jmsConnectorStagingQReaderNormal').stop()
}
}
return payload
</script:script>
</script:component>
Use a QueueBrowser to peek into a JMS queue without consuming its messages.
For this:
Create a custom component,
have Spring inject your jms:activemq-connector into the component,
call getSession(false, false) on it to get an active JMS Session,
call createBrowser(Queue queue) on the Session (you can get hold of the Queue with Session.createQueue(..)); see the sketch below.
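A minimal sketch of the browsing part, assuming you already have an active JMS Session as described above (the method name and queue name are illustrative):
import java.util.Enumeration;
import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;

public int countPendingMessages(Session session, String queueName) throws JMSException {
    Queue queue = session.createQueue(queueName);        // e.g. "vci.staging.queue"
    QueueBrowser browser = session.createBrowser(queue); // browsing does not consume messages
    int count = 0;
    Enumeration<?> messages = browser.getEnumeration();
    while (messages.hasMoreElements()) {
        messages.nextElement();
        count++;
    }
    browser.close();
    return count;
}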