Mule ActiveMQ at a particular time interval

I have a scenario where I need to start receiving messages from a queue after a particular time of day, irrespective of when the messages were placed in the queue.
For example, Flow A processes some service calls and then places the message below in the queue:
{
filename: "blahblah.pdf"
}
Now Flow B needs to start receiving messages from the queue after 9 PM (or some other time) daily and then process them.
I wonder whether it is possible to achieve this scenario in Mule.

You can achieve this in Mule using a Poll scope or the Quartz scheduler.
The code will be something like:
<quartz:inbound-endpoint jobName="ReadQIN"
    cronExpression="0 0 21 * * ?" doc:name="Quartz">
    <quartz:endpoint-polling-job>
        <quartz:job-endpoint address="jms://QIN" />
    </quartz:endpoint-polling-job>
</quartz:inbound-endpoint>
The cron expression 0 0 21 * * ? triggers the polling job daily at 9 PM; adjust it to whatever time you need.

Related

Anypoint MQ messages remain in-flight and are not processed

I have a flow which submits around 10-20 Salesforce bulk query job details to Anypoint MQ to be processed asynchronously.
I am using a normal queue, not a FIFO queue, and I want to process one message at a time.
My subscriber configuration is given below. I am setting this whopping ack timeout of 15 minutes because it has taken up to 15 minutes for a job to change its status from jobUpload to JobCompleted.
MuleRuntime: 4.4
MQ Connector Version: 3.2.0
<anypoint-mq:subscriber doc:name="Subscribering Bulk Query Job Details"
    config-ref="Anypoint_MQ_Config"
    destination="${anyPointMq.name}"
    acknowledgementTimeout="15"
    acknowledgementTimeoutUnit="MINUTES">
    <anypoint-mq:subscriber-type>
        <anypoint-mq:prefetch maxLocalMessages="1" />
    </anypoint-mq:subscriber-type>
</anypoint-mq:subscriber>
Anypoint MQ Connector Configuration
<anypoint-mq:config name="Anypoint_MQ_Config" doc:name="Anypoint MQ Config" doc:id="ce3aaed9-dcba-41bc-8c68-037c5b1420e2">
    <anypoint-mq:connection clientId="${secure::anyPointMq.clientId}" clientSecret="${secure::anyPointMq.clientSecret}" url="${anyPointMq.url}">
        <reconnection>
            <reconnect frequency="3000" count="3" />
        </reconnection>
        <anypoint-mq:tcp-client-socket-properties connectionTimeout="30000" />
    </anypoint-mq:connection>
</anypoint-mq:config>
Subscriber flow
<flow name="sfdc-bulk-query-job-subscription" doc:id="7e1e23d0-d7f1-45ed-a609-0fb35dd23e6a" maxConcurrency="1">
<anypoint-mq:subscriber doc:name="Subscribering Bulk Query Job Details" doc:id="98b8b25e-3141-4bd7-a9ab-86548902196a" config-ref="Anypoint_MQ_Config" destination="${anyPointMq.sfPartnerEds.name}" acknowledgementTimeout="${anyPointMq.ackTimeout}" acknowledgementTimeoutUnit="MINUTES">
<anypoint-mq:subscriber-type >
<anypoint-mq:prefetch maxLocalMessages="${anyPointMq.prefecth.maxLocalMsg}" />
</anypoint-mq:subscriber-type>
</anypoint-mq:subscriber>
<json-logger:logger doc:name="INFO - Bulk Job Details have been fetched" doc:id="b25c3850-8185-42be-a293-659ebff546d7" config-ref="JSON_Logger_Config" message='#["Bulk Job Details have been fetched for " ++ payload.object default ""]'>
<json-logger:content ><![CDATA[#[output application/json ---
payload]]]></json-logger:content>
</json-logger:logger>
<set-variable value="#[p('serviceName.sfdcToEds')]" doc:name="ServiceName" doc:id="f1ece944-0ed8-4c0e-94f2-3152956a2736" variableName="ServiceName"/>
<set-variable value="#[payload.object]" doc:name="sfObject" doc:id="2857c8d9-fe8d-46fa-8774-0eed91e3a3a6" variableName="sfObject" />
<set-variable value="#[message.attributes.properties.key]" doc:name="key" doc:id="57028932-04ab-44c0-bd15-befc850946ec" variableName="key" />
<flow-ref doc:name="bulk-job-status-check" doc:id="c6b9cd40-4674-47b8-afaa-0f789ccff657" name="bulk-job-status-check" />
<json-logger:logger doc:name="INFO - subscribed bulk job id has been processed successfully" doc:id="7e469f92-2aff-4bf4-84d0-76577d44479a" config-ref="JSON_Logger_Config" message='#["subscribed bulk job id has been processed successfully for salesforce " ++ vars.sfObject default "" ++ " object"]' tracePoint="END"/>
</flow>
After the bulk query job subscriber, I check the status of the job 5 times, with an interval of 1 minute, inside an Until Successful scope. It generally exhausts all 5 attempts, the message is subscribed again, and the same process repeats until the job completes. I have seen the Until Successful scope get exhausted more than once for a single job.
Once the job's status changes to jobComplete, I fetch the result and send it to an AWS S3 bucket via a MuleSoft system API. Here I also use retry logic because, due to the large volume of data, I always get this message on the first call:
HTTP POST on resource 'https://****//dlb.lb.anypointdns.net:443/api/sys/aws/s3/databricks/object' failed: Remotely closed.
But on the second retry it gets a successful response from the S3 bucket system API.
Now the main problem:
Although I am using a normal queue, I have noticed that messages remain in in-flight status for an indefinite amount of time and are never picked up by the Mule flow/subscriber. The screenshot below shows an example: there were 7 messages in flight that were not picked up even after many days.
I have set maxConcurrency and the prefetch maxLocalMessages to 1, yet more than one message is taken out of the queue at a time. Please help me understand this.

Camunda - Intermediate message event cannot correlate to a single execution

I created a small application (Spring Boot and Camunda) to process an order. The Order-Service receives the new order via REST and calls the start event of the BPMN order workflow. The order process contains two asynchronous JMS calls (customer check and warehouse stock check). If both checks return, the order process should continue.
The Start event is called within a Spring Rest Controller:
ProcessInstance processInstance =
    runtimeService.startProcessInstanceByKey("orderService", String.valueOf(order.getId()));
The Send Task (e.g. the customer check) sends the JMS message to an asynchronous queue.
The answer from this service is caught by another Spring component, which then tries to send an intermediate message:
runtimeService.createMessageCorrelation("msgReceiveCheckCustomerCredibility")
    .processInstanceBusinessKey(response.getOrder().getBpmnBusinessKey())
    .setVariable("resultOrderCheckCustomterCredibility", response)
    .correlate();
I deactivated the warehouse service to see if the order process waits for the arrival of the second call, but instead I get this exception:
1115 06:33:08.564 WARN [o.c.b.e.jobexecutor] ENGINE-14006 Exception while executing job 67d2cc24-0769-11ea-933a-d89ef3425300:
org.springframework.messaging.MessageHandlingException: nested exception is org.camunda.bpm.engine.MismatchingMessageCorrelationException: ENGINE-13031 Cannot correlate a message with name 'msgReceiveCheckCustomerCredibility' to a single execution. 4 executions match the correlation keys: CorrelationSet [businessKey=1, processInstanceId=null, processDefinitionId=null, correlationKeys=null, localCorrelationKeys=null, tenantId=null, isTenantIdSet=false]
This is my process. I cannot see a way to post my BPMN file :-(
Why can't it correlate with the message name and the business key? The JMS queues are empty; no other messages with the same businessKey are waiting.
Thanks!
Just to narrow down the problem: do a runtimeService event subscription query before you try to correlate, and check which subscriptions are actually waiting. Maybe you have a duplicate message name? Maybe you (accidentally) have another instance of the same process running? Once you have identified the subscriptions, you could just notify the execution directly without using the correlation builder, as sketched below.
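A minimal sketch of that suggestion using Camunda's RuntimeService API; the message name, the variable name, and the response object come from the question, everything else is illustrative:
// Assumed imports: org.camunda.bpm.engine.RuntimeService,
// org.camunda.bpm.engine.runtime.EventSubscription,
// java.util.Collections, java.util.List
void inspectAndNotify(RuntimeService runtimeService, Object response) {
    // List the message subscriptions that are actually waiting for this message name.
    List<EventSubscription> subscriptions = runtimeService
            .createEventSubscriptionQuery()
            .eventType("message")
            .eventName("msgReceiveCheckCustomerCredibility")
            .list();

    // Several hits usually mean that multiple executions (or process instances)
    // share the same business key, which is what the exception reports.
    for (EventSubscription subscription : subscriptions) {
        System.out.println("waiting execution " + subscription.getExecutionId()
                + " in process instance " + subscription.getProcessInstanceId());
    }

    // If exactly one execution should receive the message, notify it directly
    // instead of going through the correlation builder.
    if (subscriptions.size() == 1) {
        runtimeService.messageEventReceived(
                "msgReceiveCheckCustomerCredibility",
                subscriptions.get(0).getExecutionId(),
                Collections.singletonMap("resultOrderCheckCustomterCredibility", (Object) response));
    }
}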

RabbitMQ and queue data

I have an application with RabbitMQ where I get the number of messages in a Rabbit queue using the HTTP API (/api/queues/vhost/name).
However, it appears that this information is only refreshed from time to time (by default every 5 seconds). I thought the information was always up to date and that it was only the administration page that was refreshed at a given interval.
Is there any way to get the number of messages in a queue in real time?
Thank you
The management database is updated every 5 seconds by default.
Use the command line rabbitmqctl list_queues for real-time values.
Try to use:
channel.messageCount(your_queue)
and see if it works for you.
/**
 * Returns the number of messages in a queue ready to be delivered
 * to consumers. This method assumes the queue exists. If it doesn't,
 * the channel will be closed with an exception.
 * @param queue the name of the queue
 * @return the number of messages in ready state
 * @throws IOException Problem transmitting method.
 */
long messageCount(String queue) throws IOException;
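A minimal usage sketch with the RabbitMQ Java client; the broker host and the queue name "my-queue" are assumptions, adjust them to your setup:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class QueueDepthCheck {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumption: local broker
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Asks the broker directly, so the value is current rather than the
        // management database's 5-second snapshot.
        long ready = channel.messageCount("my-queue");
        System.out.println("Messages ready for delivery: " + ready);

        channel.close();
        connection.close();
    }
}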

How to check the number of pending messages, or whether all messages have been processed, in ActiveMQ using a Mule flow

I have a requirement to stop the ActiveMQ connector if all messages in the queue have been processed. This needs to be done in a Mule flow.
As shown below, I have two connectors, one for reading and the other for writing on vci.staging.queue. I want to check whether all messages in the queue have been processed and, if so, disable the reader connector.
The piece of script below, which uses client.request from muleContext with the queue name and the reader or writer connector, always returns 'null' for me.
Is there any way to get the number of pending messages in a queue, or to check whether all messages have been processed, so that the connector can be disabled?
<jms:activemq-connector name="jmsConnectorStagingQReaderNormal"
    brokerURL="${mule.activemq.broker.read.normal.url}"
    specification="1.1"
    maxRedelivery="-1"
    persistentDelivery="true"
    numberOfConcurrentTransactedReceivers="${mule.activemq.concurrent.receivers}"
    connectionFactory-ref="connectionFactory"
    disableTemporaryReplyToDestinations="false">
</jms:activemq-connector>
<jms:activemq-connector name="jmsConnectorStagingQWriter"
    brokerURL="${mule.activemq.broker.write.url}"
    specification="1.1"
    maxRedelivery="-1"
    persistentDelivery="true"
    numberOfConcurrentTransactedReceivers="${mule.activemq.concurrent.receivers}"
    connectionFactory-ref="connectionFactory"
    disableTemporaryReplyToDestinations="false">
</jms:activemq-connector>
<script:component>
    <script:script engine="groovy">
        // Stop the reader connector when a 5-second request on the queue returns no message.
        if (muleContext.getRegistry().lookupConnector('jmsConnectorStagingQReaderNormal').isStarted()) {
            if (muleContext.client.request("jms://vci.staging.queue?connector=jmsConnectorStagingQReaderNormal", 5000) == null) {
                muleContext.getRegistry().lookupConnector('jmsConnectorStagingQReaderNormal').stop()
            }
        }
        return payload
    </script:script>
</script:component>
Use a QueueBrowser to peek into a JMS queue without consuming its messages.
For this:
Create a custom component,
Have Spring inject your jms:activemq-connector into the component,
Call getSession(false, false) on it to get an active JMS Session,
Call createBrowser(Queue queue) on the Session (you can get hold of the Queue with Session.createQueue(..)); see the sketch after this list.
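A minimal sketch of the browsing step with plain JMS, assuming you already have the Session obtained from the injected connector as described above; the queue name is the one from the question:
// Assumed imports: javax.jms.JMSException, javax.jms.Queue,
// javax.jms.QueueBrowser, javax.jms.Session, java.util.Enumeration
public static int countPendingMessages(Session session, String queueName) throws JMSException {
    Queue queue = session.createQueue(queueName);   // e.g. "vci.staging.queue"
    QueueBrowser browser = session.createBrowser(queue);
    int pending = 0;
    Enumeration<?> messages = browser.getEnumeration();
    while (messages.hasMoreElements()) {
        messages.nextElement();                      // inspect only; nothing is consumed
        pending++;
    }
    browser.close();
    return pending;
}
If this returns 0, the reader connector can be stopped, as in the Groovy script above.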

Query for Number of Messages in Mule ESB VM Queue

In a Mule flow, I would like to add an Exception Handler that forwards messages to a "retry queue" when there is an exception. However, I don't want this retry logic to run automatically. Instead, I'd rather receive a notification so I can review the errors and then decide whether to retry all messages in the queue or not.
I don't want to receive a notification for every exception. I'd rather have a scheduled job that runs every 15 minutes, checks whether there are messages in this retry queue, and sends the notification only if there are.
Is there any way to determine how many messages are currently in a persistent VM queue?
Assuming you use the default VM queue persistence mechanism and that the VM connector is named vmConnector, you can do this:
final String queueName = "retryQueue";
int messageCount = 0;

// Look up the VM connector by name and iterate over the keys of its
// persistent queue object store, counting those that belong to the queue.
final VMConnector vmConnector = (VMConnector) muleContext.getRegistry()
    .lookupConnector("vmConnector");

for (final Serializable key : vmConnector.getQueueProfile().getObjectStore().allKeys())
{
    final QueueKey queueKey = (QueueKey) key;
    if (queueName.equals(queueKey.queueName))
    {
        messageCount++;
    }
}

System.out.printf("Queue %s has %d pending messages%n", queueName, messageCount);