Mule: Processing messages in intervals. Delayed message processing

How do I create a delayed JMS message processor in Mule 3.3.1? My goal is to process messages from a queue at a certain interval... some listener that wakes up every minute to process messages.
I have the following configuration, but the delay is not honored. When a message is rolled back, it is immediately picked up for processing.
<spring:bean id="MQConnectionFactory" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<spring:property name="transportType" value="1"/>
<spring:property name="hostName" value="myHost"/>
<spring:property name="port" value="1414"/>
<spring:property name="queueManager" value="myQmgr"/>
</spring:bean>
<jms:connector name="queueConnector" connectionFactory-ref="MQConnectionFactory"
specification="1.1" username="xxx" password="yyy"
disableTemporaryReplyToDestinations="true"
numberOfConcurrentTransactedReceivers="3" maxRedelivery="5">
<service-overrides transactedMessageReceiver="com.mulesoft.mule.transport.jms.TransactedPollingJmsMessageReceiver"/>
</jms:connector>
<jms:endpoint name="someQueue" queue="osmQueue" connector-ref="queueConnector">
<jms:transaction action="ALWAYS_BEGIN"/>
<property key="pollingFrequency" value="60000"/>
</jms:endpoint>
I did a lot of searching but am unable to identify a proper solution. If there is a better option, I'm open to it. I appreciate any help. Two days and no response? Did I phrase the question wrong?

Have you tried using Quartz?
This config fires your JMS inbound endpoint every minute:
<flow name="ftpFlow2" doc:name="ftpFlow2">
<quartz:inbound-endpoint jobName="job1" repeatInterval="60000" responseTimeout="10000" doc:name="Quartz">
<quartz:endpoint-polling-job>
<quartz:job-endpoint ref="someQueue"/>
</quartz:endpoint-polling-job>
</quartz:inbound-endpoint>
</flow>
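If it helps, here is a minimal sketch of the same idea with a placeholder processor after the Quartz trigger (a flow generally needs at least one message processor after the inbound endpoint; the flow name and the logger are placeholders, not part of the original answer):
<flow name="pollSomeQueueFlow">
<quartz:inbound-endpoint jobName="pollSomeQueue" repeatInterval="60000" responseTimeout="10000" doc:name="Quartz">
<quartz:endpoint-polling-job>
<!-- Polls the someQueue JMS endpoint defined in the question every 60 seconds -->
<quartz:job-endpoint ref="someQueue"/>
</quartz:endpoint-polling-job>
</quartz:inbound-endpoint>
<!-- Replace the logger with the actual message processing -->
<logger level="INFO" message="Polled message: #[message.payload]" doc:name="Logger"/>
</flow>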

Related

Mule Quartz Connector terminates abruptly after running for hours

My Mule Quartz connector runs for several hours and then terminates abruptly. I want the job to run continuously, at 30 minutes past every hour, but after running for several hours the application simply shuts down and the process terminates. I suspect the cronExpression is the cause, but I am not sure which part of it would make that happen.
Please help!!
Here is the quartz configuration:
<quartz:connector name="updateQuartzConnector" validateConnections="true" doc:name="Quartz">
<receiver-threading-profile maxThreadsActive="1"/>
<quartz:factory-property key="org.quartz.scheduler.instanceName" value="updateQuartzScheduler"/>
<quartz:factory-property key="org.quartz.threadPool.class" value="org.quartz.simpl.SimpleThreadPool"/>
<quartz:factory-property key="org.quartz.threadPool.threadCount" value="1"/>
<quartz:factory-property key="org.quartz.scheduler.rmi.proxy" value="false"/>
<quartz:factory-property key="org.quartz.scheduler.rmi.export" value="false"/>
<quartz:factory-property key="org.quartz.jobStore.class" value="org.quartz.simpl.RAMJobStore"/>
</quartz:connector>
And here is the flow using the quartz configuration above:
<flow name="processClientData" tracking:enable-default-events="true" processingStrategy="synchronous">
<quartz:inbound-endpoint responseTimeout="10000" doc:name="30 minutes past hour" cronExpression="0 30/30 0/1 * * ?"
jobName="ProcessClientUpdates" repeatInterval="0" connector-ref="updateQuartzConnector">
<quartz:event-generator-job/>
</quartz:inbound-endpoint>
<flow-ref name="process.client.data" />
</flow>
I show part of the flow, although this part works fine:
<flow name="process.client.data" processingStrategy="synchronous">
<db:select config-ref="ORACLE_CONFIG" doc:name="Check Customer existence in Database">
<db:parameterized-query><![CDATA[SELECT first_name,last_name,email FROM contact
WHERE email=#[payload.email]]]></db:parameterized-query>
</db:select>
<enrich source="#[payload.size() > 0]" target="#[recordVars['exists']]"/>
<enrich source="#[payload]" target="#[recordVars['dbRecord']]"/>
.......
</flow>
Please help, this issue is just perplexing.
The cron expression should be 0 30/30 * * * ? (a Quartz cron expression takes six fields: seconds, minutes, hours, day-of-month, month, and day-of-week).
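Applied to the flow above, only the cronExpression changes; a sketch with everything else kept as in the original config:
<quartz:inbound-endpoint responseTimeout="10000" doc:name="30 minutes past hour" cronExpression="0 30/30 * * * ?"
jobName="ProcessClientUpdates" repeatInterval="0" connector-ref="updateQuartzConnector">
<quartz:event-generator-job/>
</quartz:inbound-endpoint>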

How to reconnect a JMS connector after a regular interval

Requirement: a JMS connector with Oracle AQ as the inbound endpoint.
Problem statement: how do I reconnect a JMS connector at a regular interval, so that when it reconnects it consumes the new messages in the queue?
I have tried the reconnect strategy below.
<!-- JMS connector -->
<jms:connector name="AQJMS" validateConnections="true"
maxRedelivery="-1" numberOfConsumers="1" durable="true" doc:name="JMS"
username="X" password="X" connectionFactory-ref="OAQTopicConnectionFactoryBean">
<!-- reconnect every 5 seconds -->
<reconnect-forever frequency="5000"/>
</jms:connector>
<!-- flow -->
<flow name="sendmessagetoqFlow">
<!-- JMS inbound endpoint for Oracle AQ -->
<jms:inbound-endpoint queue="QUEUE"
connector-ref="AQJMS" doc:name="AQJMS">
<jms:client-ack-transaction action="BEGIN_OR_JOIN"/>
</jms:inbound-endpoint>
<logger message="Log 1 - #[message.inboundProperties]" level="INFO" doc:name="Logger 1"/>
</flow>
But it's not reconnecting after 5 seconds.
Could you please help me solve this problem?
Thanks in advance.
Configuring Transactional Polling (Enterprise)
This works for me:
<jms:connector ...cut...>
<service-overrides transactedMessageReceiver="com.mulesoft.mule.transport.jms.TransactedPollingJmsMessageReceiver" />
</jms:connector>
<jms:inbound-endpoint queue="my.queue">
<ee:multi-transaction action="ALWAYS_BEGIN" timeout="30000"/>
<properties>
<spring:entry key="pollingFrequency" value="5000" />
</properties>
</jms:inbound-endpoint>
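Adapted to the AQJMS connector from the question, that would look roughly like the sketch below. This is only an illustration, not the answerer's exact config: the ee:multi-transaction element requires the EE namespace, and the 5000 ms polling frequency is an example value.
<jms:connector name="AQJMS" validateConnections="true"
maxRedelivery="-1" numberOfConsumers="1" durable="true" doc:name="JMS"
username="X" password="X" connectionFactory-ref="OAQTopicConnectionFactoryBean">
<service-overrides transactedMessageReceiver="com.mulesoft.mule.transport.jms.TransactedPollingJmsMessageReceiver"/>
<reconnect-forever frequency="5000"/>
</jms:connector>
<jms:inbound-endpoint queue="QUEUE" connector-ref="AQJMS" doc:name="AQJMS">
<ee:multi-transaction action="ALWAYS_BEGIN" timeout="30000"/>
<properties>
<spring:entry key="pollingFrequency" value="5000"/>
</properties>
</jms:inbound-endpoint>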

How to set a Max number of parallel VM flows

I have a request-response flow that starts with a VM endpoint. Is there a way to restrict the number of requests that can be processed in parallel by the flow? I'm on 3.7. Thanks.
P.S. I've tried using maxThreadsActive on the VM connector, but the flow still runs in the "source" thread. This is how the VM connector is defined:
<vm:connector name="myvm" validateConnections="true" doc:name="VM">
<receiver-threading-profile maxThreadsActive="1"/>
<vm:queue-profile>
<default-in-memory-queue-store/>
</vm:queue-profile>
</vm:connector>
and then in the flow:
<vm:inbound-endpoint exchange-pattern="request-response" path="myqueue" connector-ref="myvm" doc:name="getevent">
<vm:transaction action="NONE"/>
</vm:inbound-endpoint>
This is how it's called from the "source" flow:
<vm:outbound-endpoint exchange-pattern="request-response" path="myqueue" connector-ref="myvm" doc:name="VM">
<vm:transaction action="NONE"/>
</vm:outbound-endpoint>
You can set the number of receiver threads on the connector used by your inbound endpoint to one:
<vm:connector name="VM" validateConnections="true">
<receiver-threading-profile maxThreadsActive="1"/>
</vm:connector>
<flow name="testFlow1">
<vm:inbound-endpoint path="in" connector-ref="VM"/>
<echo-component/>
</flow>
You can control this with the threading profiles. For example:
<configuration >
<default-threading-profile maxBufferSize="100" maxThreadsActive="20" maxThreadsIdle="10" threadTTL="60000" poolExhaustedAction="RUN" />
</configuration>
You can read more here: https://docs.mulesoft.com/mule-user-guide/v/3.7/tuning-performance
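If you need a per-flow cap rather than a global one, Mule 3 also lets you declare a named processing strategy and reference it from the flow. A sketch, assuming the flow's entry point is asynchronous (with a request-response VM endpoint, as in the question, the flow runs synchronously in the caller's thread and the strategy does not apply); the strategy name and thread count here are arbitrary:
<queued-asynchronous-processing-strategy name="maxTwoThreads" maxThreads="2"/>
<flow name="limitedFlow" processingStrategy="maxTwoThreads">
<vm:inbound-endpoint path="in" connector-ref="VM"/>
<echo-component/>
</flow>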

Using Quartz with Mule in Clustered Environment

I have a scenario where I am trying to read data from the Yelp API and put it into an ActiveMQ queue at certain intervals, so I am using the Quartz scheduler for this. My Quartz scheduler runs every 10 minutes and pushes the data to the queue.
All is fine up to here.
Now I want this to work in a clustered environment, where I will have 2 instances deployed and listening to the same Yelp endpoint. What happens is that the Quartz schedulers on the 2 instances execute at the same moment and extract the same information from Yelp, causing the same messages to land in the ActiveMQ queue, that is, DUPLICATES. (I want to use a clustered environment for high-availability purposes, i.e. if one node fails the other node can take over.)
So is there any configuration in Mule that can promote one node as master and the other as a failover node?
Thanks for all the help!
This will be triggered by the cron expression 0/10 * * * * ? (every 10th second) on only one of the nodes running the same application and connecting to the same database (MySQL in this case). The Quartz setup is a bit messy: you need to configure the database etc., but I'll leave studying the Quartz docs for that to you. You should look at version 1.8.x, not 2.x.
It's pretty much a budget alternative to clustering endpoints in Mule EE. It's useful when you do not want to cluster your EE nodes or when you need to run CE nodes.
<?xml version="1.0" encoding="UTF-8"?>
<mule xmlns:quartz="http://www.mulesoft.org/schema/mule/quartz" xmlns:file="http://www.mulesoft.org/schema/mule/file" xmlns:tracking="http://www.mulesoft.org/schema/mule/ee/tracking" xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
xmlns:spring="http://www.springframework.org/schema/beans" version="EE-3.6.2"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/file http://www.mulesoft.org/schema/mule/file/current/mule-file.xsd
http://www.mulesoft.org/schema/mule/ee/tracking http://www.mulesoft.org/schema/mule/ee/tracking/current/mule-tracking-ee.xsd
http://www.mulesoft.org/schema/mule/quartz http://www.mulesoft.org/schema/mule/quartz/current/mule-quartz.xsd">
<quartz:connector name="quartzConnector" validateConnections="true" doc:name="Quartz">
<quartz:factory-property key="org.quartz.scheduler.instanceName" value="QuartzScheduler" />
<quartz:factory-property key="org.quartz.scheduler.instanceId" value="AUTO" />
<quartz:factory-property key="org.quartz.jobStore.isClustered" value="true" />
<quartz:factory-property key="org.quartz.scheduler.jobFactory.class" value="org.quartz.simpl.SimpleJobFactory" />
<quartz:factory-property key="org.quartz.threadPool.class" value="org.quartz.simpl.SimpleThreadPool" />
<quartz:factory-property key="org.quartz.threadPool.threadCount" value="3" />
<quartz:factory-property key="org.quartz.scheduler.rmi.proxy" value="false" />
<quartz:factory-property key="org.quartz.scheduler.rmi.export" value="false" />
<quartz:factory-property key="org.quartz.jobStore.class" value="org.quartz.impl.jdbcjobstore.JobStoreTX" />
<quartz:factory-property key="org.quartz.jobStore.driverDelegateClass" value="org.quartz.impl.jdbcjobstore.StdJDBCDelegate" />
<quartz:factory-property key="org.quartz.jobStore.dataSource" value="quartzDataSource" />
<quartz:factory-property key="org.quartz.jobStore.tablePrefix" value="QRTZ_" />
<quartz:factory-property key="org.quartz.dataSource.quartzDataSource.driver" value="com.mysql.jdbc.Driver" />
<quartz:factory-property key="org.quartz.dataSource.quartzDataSource.URL" value="jdbc:mysql://localhost:3306/qrtz" />
<quartz:factory-property key="org.quartz.dataSource.quartzDataSource.user" value="root" />
<quartz:factory-property key="org.quartz.dataSource.quartzDataSource.password" value="" />
<quartz:factory-property key="org.quartz.dataSource.quartzDataSource.maxConnections" value="8" />
</quartz:connector>
<flow name="cFlow1">
<quartz:inbound-endpoint jobName="job1" cronExpression="0/10 * * * * ?" repeatInterval="0" connector-ref="quartzConnector" responseTimeout="10000" doc:name="Quartz">
<quartz:event-generator-job>
<quartz:payload>Job Trigger</quartz:payload>
</quartz:event-generator-job>
</quartz:inbound-endpoint>
<logger level="INFO" message="Got message" doc:name="Logger"/>
</flow>
</mule>
We use the 3.5.2 Enterprise edition, but I'm not sure whether the Community edition has any restriction here. Can you try the following and see if it works:
<!-- Quartz connector with one thread to ensure that we don't have duplicate processing at any point in time -->
<quartz:connector name="QuartzConnector" validateConnections="true">
<receiver-threading-profile maxThreadsActive="1" />
</quartz:connector>
Then reference it in your flow wherever you plan to trigger this action:
<flow name="test">
<quartz:inbound-endpoint jobName="myQuartzJob" cronExpression="${my.job.cron.expression}" repeatInterval="${my.job.repeat.interval}" responseTimeout="${my.job.response.timeout}" connector-ref="QuartzConnector">
<quartz:event-generator-job>
<quartz:payload>blah</quartz:payload>
</quartz:event-generator-job>
</quartz:inbound-endpoint>
</flow>
Hope that works.

Mule flow with Jms connector, Threads blocking in dynamic outbound endpoint

I have a JMS connector. I receive messages from a queue, process each message in a flow, call the DB to get data based on some IDs in the message, and write the response output to files. I use dynamic outbound endpoints to decide the output location.
<jms:connector name="tibco" numberOfConsumers="20" ..... >
.....
</jms:connector>
<flow name="realtime" doc:name="ServiceId-8">
<jms:inbound-endpoint queue="${some.queue}" connector-ref="tibco" doc:name="JMS">
<jms:transaction action="ALWAYS_BEGIN"/>
</jms:inbound-endpoint>
<processor ref="proc1"></processor>
<processor ref="proc2"></processor>
<component doc:name="Java">
<spring-object bean="comp1"/>
</component>
<processor ref="proc3"></processor>
<collection-splitter doc:name="Collection Splitter"/>
<processor ref="endpointprocessor"></processor>
<foreach collection="#[message.payload.consumerEndpoints]" counterVariableName="endpoints" doc:name="Foreach">
<choice doc:name="Choice">
<when expression="#[consumerEndpoint.getOutputType().equals('txt') and consumerEndpoint.getChannel().equals('file')]">
<processor-chain>
<file:outbound-endpoint path="#[consumerEndpoint.getPath()]" outputPattern="#[consumerEndpoint.getClientId()]-#[attributes['eventId']]%#[consumerEndpoint.getTicSeedCount()]-#[attributes['dateTime']].tic" responseTimeout="10000" doc:name="File"/>
</processor-chain>
</when>
<when expression="#[consumerEndpoint.getOutputType().equals('txt') and consumerEndpoint.getChannel().equals('ftp')]">
<processor-chain>
<ftp:outbound-endpoint path="#[consumerEndpoint.getPath()]" outputPattern="#[consumerEndpoint.getClientId()]-#[attributes['eventId']]%#[consumerEndpoint.getTicSeedCount()]-#[attributes['dateTime']].tic" host="#[consumerEndpoint.getHost()]" port="#[consumerEndpoint.getPort()]" user="#[consumerEndpoint.getChannelUser()]" password="#[consumerEndpoint.getChannelPass()]" responseTimeout="10000" doc:name="FTP"/>
</processor-chain>
</when>
</choice>
</foreach>
<rollback-exception-strategy doc:name="Rollback Exception Strategy">
<processor ref="catchExceptionCustomHandling"></processor>
</rollback-exception-strategy>
</flow>
The above is not the complete flow; I pasted the important parts for understanding.
Question 1: As I have not defined any threading strategy at any level, and the connector has numberOfConsumers="20", if I drop 20 messages in the queue, how many threads will start?
The prefetch size on the JMS queue is set to 20.
Question 2: Do I need to configure a threading strategy at the receiver end and/or at the flow level?
Sometimes when the load is very high (say 15k messages in the queue in a minute) I see message processing get slow, and a thread dump shows something like this:
"TIBCO EMS Session Dispatcher (7905958)" prio=10 tid=0x00002aaadd4cf000 nid=0x3714 waiting for monitor entry [0x000000004af1e000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.mule.endpoint.DynamicOutboundEndpoint.createStaticEndpoint(DynamicOutboundEndpoint.java:153)
- waiting to lock <0x00002aaab711c0e0> (a org.mule.endpoint.DynamicOutboundEndpoint)
Any help and pointers will be appreciated.
Thanks-
Message processing was getting slow because of the dynamic endpoint; I saw thread congestion when the dynamic outbound endpoint was created and used. I was using Mule 3.3.x, and after looking at the Mule 3.4.x code I realized that dynamic outbound endpoint creation is handled more appropriately there. I upgraded to 3.4 and the issue is almost gone.
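Regarding Question 2: for what it's worth, the same receiver-threading-profile element shown in the Quartz and VM examples above can also be nested inside a JMS connector. This is only a sketch (the thread count is illustrative, and whether it helps depends on how the consumers dispatch work), not part of the fix described above:
<jms:connector name="tibco" numberOfConsumers="20" ..... >
<receiver-threading-profile maxThreadsActive="20"/>
.....
</jms:connector>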