I have two Mule instances subscribed to the same topic, but I only want each message to be consumed once.
I can achieve this by funneling messages to a unique queue and processing from there, but to reduce operational complexity I want the message-consumer flow running on each Mule instance to defer to one of the instances.
This would be akin to an ActiveMQ failover setup (where only one instance is running at a time and idle instances only awaken when the running instance fails to respond) or a master/slave arrangement where I would grant one of the instances command over the others. Or like a VM transport that is inter-instance instead of intra-instance.
This would need to be done without any Mule Enterprise Edition components (relying only on Mule Community Edition components), using Mule version 3.4 or 3.5.
I was unable to find a convenient built-in way to do this. Instead, I assume that each Mule instance runs on a separate box and use the server.host value to determine which instance does the processing:
<mule xmlns:redis="http://www.mulesoft.org/schema/mule/redis"
xmlns="http://www.mulesoft.org/schema/mule/core"
version="CE-3.5.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.mulesoft.org/schema/mule/redis http://www.mulesoft.org/schema/mule/redis/3.4/mule-redis.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd">
<!-- define redis instance -->
<redis:config name="redis-instance" />
<flow name="topicConsumer">
<!-- listen to redis channel (topic) -->
<redis:subscribe config-ref="redis-instance">
<redis:channels>
<redis:channel>topic.channel</redis:channel>
</redis:channels>
</redis:subscribe>
<!-- save original payload (message from Redis) -->
<set-session-variable variableName="redisPayload" value="#[payload]" />
<!-- select processor -->
<flow-ref name="topicProcessorSelector"/>
<choice>
<when expression="#[sessionVars['subscriberProcessor'] == server.host]">
<logger level="INFO" message="processing on #[server.host]"/>
</when>
<otherwise>
<logger level="INFO" message="take no action"/>
</otherwise>
</choice>
</flow>
<flow name="topicProcessorSelector" processingStrategy="synchronous">
<!-- get key -->
<redis:get config-ref="redis-instance"
key="topic_processor"/>
<!-- if no key, then add this instance as the processor -->
<choice>
<when expression="#[payload instanceof org.mule.transport.NullPayload]">
<!-- set key -->
<redis:set config-ref="redis-instance"
key="topic_processor"
expire="10"
value="#[server.host]">
</redis:set>
<set-session-variable variableName="subscriberProcessor" value="#[server.host]" />
</when>
<otherwise>
<!-- use existing key -->
<byte-array-to-string-transformer/>
<set-session-variable variableName="subscriberProcessor" value="#[payload]" />
</otherwise>
</choice>
</flow>
</mule>
I am using Mule ESB to design a process whereby one can post a message to a topic. Subscribers will listen to the topic and receive messages. Each subscriber will act on the messages differently. The goal here is to have the ability to post a test message to the topic from HTTP for testing subscribers.
Here is how I have the JMS connection configured:
<!-- JMS Topic connector -->
<jms:activemq-connector name="jmsTopicConnection" specification="1.1" brokerURL="tcp://localhost:61616" validateConnections="true" doc:name="Active MQ2" durable="true" numberOfConcurrentTransactedReceivers="2"/>
This is the flow:
<flow name="auditJMSServiceFlow">
<http:listener config-ref="HTTP" path="/Audit/Activity" responseStreamingMode="ALWAYS" doc:name="HTTP"/>
<set-variable variableName="#['id']" value="#[message.inboundProperties['id']]" doc:name="set dynamic id"/>
<set-payload value="===TOPIC===" doc:name="Set Payload" />
<request-reply storePrefix="mainFlow">
<jms:inbound-endpoint topic="Audit.Activity" connector-ref="jmsTopicConnection" doc:name="JMS Topic Audit.Activity" exchange-pattern="request-response" durableName="audit_activity">
<jms:transaction action="ALWAYS_BEGIN" />
<!-- Not required to explicitly have this element. Mule will put this in implicitly. -->
<!-- <jms:jmsmessage-to-object-transformer displayName="JmsMsg to Object"/> -->
</jms:inbound-endpoint>
</request-reply>
<json:object-to-json-transformer doc:name="transform JMS message to JSON"/>
<json:validate-schema schemaLocation="resource://AuditMsgSchema.json" doc:name="Validate Json Schema"/>
<component class="com.baml.panther.audit.service.impl.AuditServiceImpl" doc:name="Java"/>
<default-exception-strategy>
<commit-transaction exception-pattern="com.foo.ExpectedExceptionType"/>
<jms:outbound-endpoint queue="dead.letter" connector-ref="jmsConnection">
<jms:transaction action="JOIN_IF_POSSIBLE" />
</jms:outbound-endpoint>
</default-exception-strategy>
<logger message="=== #[message.payload] received #[org.mule.util.DateUtiles.getTimeStamp('dd-MM-yyyy_HH-mm-ss.SSS')]" level="INFO" doc:name="Logger"/>
When I am running through the test I get the following error:
Any suggestions would be greatly appreciated.
Russ
For the error: Your request-reply scope is missing an outbound endpoint. You only have the inbound-endpoint (jms:inbound-endpoint). You need to provide the outbound-endpoint as well.
<request-reply storePrefix="mainFlow">
<jms:inbound-endpoint topic="Audit.Activity" connector-ref="jmsTopicConnection" doc:name="JMS Topic Audit.Activity" exchange-pattern="request-response" durableName="audit_activity">
<jms:transaction action="ALWAYS_BEGIN" />
<!-- Not required to explicitly have this element. Mule will put this in implicitly. -->
<!-- <jms:jmsmessage-to-object-transformer displayName="JmsMsg to Object"/> -->
</jms:inbound-endpoint>
</request-reply>
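For reference, a complete request-reply scope in Mule 3 pairs a one-way outbound endpoint (the request) with a one-way inbound endpoint (the reply). A minimal sketch, with placeholder queue names (request.queue and reply.queue are not from the original post):
<request-reply storePrefix="mainFlow">
    <!-- queue names below are placeholders, not part of the original configuration -->
    <jms:outbound-endpoint queue="request.queue" connector-ref="jmsTopicConnection"/>
    <jms:inbound-endpoint queue="reply.queue" connector-ref="jmsTopicConnection"/>
</request-reply>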
I'm not sure what your aim is there, but if you put just a jms:outbound-endpoint (instead of the whole request-reply block), you can send a message to the JMS topic.
The problem is that you cannot put a message source as the first element inside a request-reply scope. Request-reply gives you a kind of synchronous call over asynchronous protocols like JMS.
If you want to send a message to the message broker at the point where you put the request-reply, just use a JMS outbound endpoint.
If what you want to do is consume messages from the JMS topic, you have to put a JMS inbound endpoint as the message source at the start of a flow.
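Minimal sketches of both options, reusing the connector and topic from the question (the consumer flow name is illustrative):
<!-- Option 1: publish - replaces the request-reply block in the HTTP-triggered flow -->
<jms:outbound-endpoint topic="Audit.Activity" connector-ref="jmsTopicConnection" doc:name="Publish to Audit.Activity"/>
<!-- Option 2: consume - a separate flow with the topic as its message source -->
<flow name="auditTopicConsumerFlow">
    <jms:inbound-endpoint topic="Audit.Activity" connector-ref="jmsTopicConnection" durableName="audit_activity" doc:name="Consume Audit.Activity"/>
    <logger level="INFO" message="Received from topic: #[message.payload]" doc:name="Logger"/>
</flow>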
I have two separate Amazon SQS queues: Queue and ResponseQueue.
SQS configurations:
<sqs:config name="Amazon_SQS_Consumer" accessKey="XXX" secretKey="XXX" queueName="Queue" doc:name="Amazon SQS">
<sqs:connection-pooling-profile maxActive="10" maxIdle="10" exhaustedAction="WHEN_EXHAUSTED_GROW" maxWait="12000" minEvictionMillis="60000" evictionCheckIntervalMillis="30000" initialisationPolicy="INITIALISE_ONE"/>
<reconnect count="5" frequency="1000"/>
</sqs:config>
<sqs:config name="Amazon_SQS_Response" accessKey="XXX" secretKey="XXX" queueName="ResponseQueue" doc:name="Amazon SQS">
<sqs:connection-pooling-profile maxActive="100" maxIdle="10" exhaustedAction="WHEN_EXHAUSTED_GROW" maxWait="12000" minEvictionMillis="60000" evictionCheckIntervalMillis="30000" initialisationPolicy="INITIALISE_ONE"/>
<reconnect count="5" frequency="1000"/>
</sqs:config>
I have no problem receiving messages from the first queue (Queue) via:
<flow name="consumer" doc:name="consumer">
<sqs:receive-messages config-ref="Amazon_SQS_Consumer" preserveMessages="true" doc:name="Amazon SQS (Streaming)" visibilityTimeout="300" />
<logger level="INFO" message="#[payload]" />
</flow>
I need to also receive messages from the second queue (ResponseQueue):
<flow name="response" doc:name="response">
<sqs:receive-messages config-ref="Amazon_SQS_Response" preserveMessages="true" doc:name="Amazon SQS (Streaming)" visibilityTimeout="300" />
<logger level="INFO" message="#[payload]" />
</flow>
However, whenever the second sqs:receive-messages is added, I get the following error:
Exception in thread "Receiving Thread" java.lang.LinkageError: loader (instance of org/mule/module/launcher/plugin/MulePluginsClassLoader): attempted duplicate class definition for name: "com/amazonaws/services/sqs/QueueUrlHandler"
Is it possible to read messages from 2 different queues in the same project?
I'm using 3.4.0 CE Mule Server Runtime and 2.4.4 Amazon SQS Connector. I need to stay at these versions. If I switch to 3.5.0 EE Mule Server Runtime, there is no problem in having multiple sqs:receive-messages; it works just as expected. However, it leads to another issue.
Are you using the same credentials in both sqs:config elements? If yes, then you only need one config element and then specify the queue name on the sqs:receive-messages elements.
<sqs:receive-messages queueName="Queue"
preserveMessages="true"
visibilityTimeout="300" />
Refer to the user guide: http://mulesoft.github.io/sqs-connector/2.5.0/mule/sqs-config.html#receive-messages
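Put together, a minimal sketch of that layout, assuming a connector version where the queue name is accepted on the operation (as in the 2.5.0 guide linked above); the config name Amazon_SQS is illustrative and the credentials are placeholders:
<sqs:config name="Amazon_SQS" accessKey="XXX" secretKey="XXX" doc:name="Amazon SQS">
    <reconnect count="5" frequency="1000"/>
</sqs:config>
<flow name="consumer" doc:name="consumer">
    <!-- queue name supplied per operation instead of on the config -->
    <sqs:receive-messages config-ref="Amazon_SQS" queueName="Queue" preserveMessages="true" visibilityTimeout="300"/>
    <logger level="INFO" message="#[payload]"/>
</flow>
<flow name="response" doc:name="response">
    <sqs:receive-messages config-ref="Amazon_SQS" queueName="ResponseQueue" preserveMessages="true" visibilityTimeout="300"/>
    <logger level="INFO" message="#[payload]"/>
</flow>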
I have a JMS connector. I receive messages from a queue and process each message in a flow, calling the database to get data based on some IDs in the message and writing the response output to files; I am using dynamic outbound endpoints to decide the output location.
<jms:connector name="tibco" numberOfConsumers="20" ..... >
.....
</jms:connector>
<flow name="realtime" doc:name="ServiceId-8">
<jms:inbound-endpoint queue="${some.queue}" connector-ref="tibco" doc:name="JMS">
<jms:transaction action="ALWAYS_BEGIN"/>
</jms:inbound-endpoint>
<processor ref="proc1"></processor>
<processor ref="proc2"></processor>
<component doc:name="Java">
<spring-object bean="comp1"/>
</component>
<processor ref="proc3"></processor>
<collection-splitter doc:name="Collection Splitter"/>
<processor ref="endpointprocessor"></processor>
<foreach collection="#[message.payload.consumerEndpoints]" counterVariableName="endpoints" doc:name="Foreach">
<choice doc:name="Choice">
<when expression="#[consumerEndpoint.getOutputType().equals('txt') and consumerEndpoint.getChannel().equals('file')]">
<processor-chain>
<file:outbound-endpoint path="#[consumerEndpoint.getPath()]" outputPattern="#[consumerEndpoint.getClientId()]-#[attributes['eventId']]%#[consumerEndpoint.getTicSeedCount()]-#[attributes['dateTime']].tic" responseTimeout="10000" doc:name="File"/>
</processor-chain>
</when>
<when expression="#[consumerEndpoint.getOutputType().equals('txt') and consumerEndpoint.getChannel().equals('ftp')]">
<processor-chain>
<ftp:outbound-endpoint path="#[consumerEndpoint.getPath()]" outputPattern="#[consumerEndpoint.getClientId()]-#[attributes['eventId']]%#[consumerEndpoint.getTicSeedCount()]-#[attributes['dateTime']].tic" host="#[consumerEndpoint.getHost()]" port="#[consumerEndpoint.getPort()]" user="#[consumerEndpoint.getChannelUser()]" password="#[consumerEndpoint.getChannelPass()]" responseTimeout="10000" doc:name="FTP"/>
</processor-chain>
</when>
</choice>
</foreach>
<rollback-exception-strategy doc:name="Rollback Exception Strategy">
<processor ref="catchExceptionCustomHandling"></processor>
</rollback-exception-strategy>
</flow>
The above is not the complete flow; I pasted the important parts for understanding.
Question 1: As I have not defined any threading strategy at any level, and the connector has numberOfConsumers="20", if I drop 20 messages in the queue, how many threads will start?
The prefetch size on the JMS queue is set to 20.
Question 2: Do I need to configure a threading strategy at the receiver end and/or at the flow level?
Sometimes, when the load is very high (say 15k messages arriving in the queue in a minute), I see message processing slow down, and the thread dump shows something like this:
"TIBCO EMS Session Dispatcher (7905958)" prio=10 tid=0x00002aaadd4cf000 nid=0x3714 waiting for monitor entry [0x000000004af1e000]
java.lang.Thread.State: BLOCKED (on object monitor)
at org.mule.endpoint.DynamicOutboundEndpoint.createStaticEndpoint(DynamicOutboundEndpoint.java:153)
- waiting to lock <0x00002aaab711c0e0> (a org.mule.endpoint.DynamicOutboundEndpoint)
Any help and pointers will be appreciated.
Thanks-
Message processing is getting slow because of the dynamic endpoint: I see thread congestion when the dynamic outbound endpoint is created and used. I was using Mule 3.3.x, and after looking at the Mule 3.4.x code I realized that dynamic outbound endpoint creation is handled more appropriately there. I upgraded to 3.4 and the issue is almost gone.
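For reference, if receiver-side threading does need explicit tuning in Mule 3.x, it can be set on the connector with a receiver threading profile. A minimal sketch; the connector name and values are illustrative and not taken from the original configuration:
<jms:connector name="tibcoTuned" numberOfConsumers="20">
    <!-- illustrative values; controls the receiver thread pool for this connector -->
    <receiver-threading-profile maxThreadsActive="20" maxBufferSize="100"/>
</jms:connector>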
I have a Mule flow which processes files in an inbound folder that are named AAA_[id_number].dat. However, I need to configure Mule to only process this file when a corresponding file named [id_number].dat is also available. The second file indicates that the first is ready for processing.
Is there a way I can configure an inbound endpoint in Mule to only start processing the AAA_ file when its counterpart is present? The [id_number].dat file is purely for notification purposes; it should not be processed by Mule. The inbound endpoint has a regex filter to look for a file in the format AAA...
<!-- Mule Requester Config -->
<mulerequester:config name="muleRequesterConfig" doc:name="Mule Requester"/>
<!-- File Connectors -->
<file:connector name="inputTriggerConnector" pollingFrequency="100" doc:name="File"/>
<file:connector name="inputFileConnector" doc:name="File"/>
<file:connector name="outputFileConnector" doc:name="File"/>
<!-- File Endpoints -->
<file:endpoint name="inputFileEndpoint" path="src/test/input" responseTimeout="10000" doc:name="File">
<file:filename-regex-filter pattern="\d{6}\.dat" caseSensitive="true"/>
</file:endpoint>
<!-- Trigger Flow -->
<flow name="triggerFlow" doc:name="triggerFlow">
<file:inbound-endpoint ref="inputFileEndpoint" connector-ref="inputTriggerConnector" pollingFrequency="1000" doc:name="Input Trigger"/>
<flow-ref name="mainFlow_StockB2C" doc:name="Flow Reference"/>
</flow>
<!-- Main Flow -->
<flow name="mainFlow" doc:name="mainFlow">
<mulerequester:request config-ref="muleRequesterConfig" resource="file://.../AAA_#[message.inboundProperties.originalFilename]?connector=inputFileConnector" timeout="6000" doc:name="Mule Requester"/>
<DO SOMETHING WITH AAA_ FILE>
<file:outbound-endpoint connector-ref="outputFileConnector" path="src/test/output" outputPattern="#[function:dateStamp].csv" responseTimeout="6000" doc:name="Output File"/>
</flow>
Why not set a file inbound filter for the [id_number].dat files (or one that excludes the AAA_ files), if those are only used for notification? That would make more sense in my opinion. You can then grab the file to be processed with the requester module inside the flow, based on the originalFilename property.
Just in case this might help someone who needs it: you can create a custom filter and include your own filtering logic in there. More details are in the blog post linked here.
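A minimal sketch of wiring such a filter into the endpoint; com.example.PairedFileFilter is a hypothetical class (not from the original post) that would implement the pairing logic:
<file:endpoint name="inputFileEndpoint" path="src/test/input" doc:name="File">
    <!-- com.example.PairedFileFilter is hypothetical; it would accept an AAA_ file only when the matching [id_number].dat file is present -->
    <custom-filter class="com.example.PairedFileFilter"/>
</file:endpoint>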
I have a Mule flow as below:
<flow name="flow1" doc:name="f1">
<file:inbound-endpoint path="C:\input" responseTimeout="10000"
doc:name="File" />
</flow>
<flow name="flow2" doc:name="f2">
<http:inbound-endpoint address="http://localhost:8080"
doc:name="HTTP" exchange-pattern="request-response" />
<flow-ref name="flow1" doc:name="Flow Reference" />
<file:outbound-endpoint path="C:\outputfile"
responseTimeout="10000" doc:name="File" />
</flow>
I am trying to move/upload multiple files from a source to a destination (which can be anything, e.g. an FTP or file outbound endpoint) using this flow.
The reason for doing it this way is that I want to invoke the job from the CLI (command-line interface) using curl.
But it is not working.
Edited
I need to pick up some files (multiple files) from a particular folder on my hard drive and then move them to some outbound process, which can be an FTP site or another location on the hard drive.
But this flow needs to be invoked from the CLI.
Edited (Based on David's answer)
I now have the flow as below:
<flow name="filePickupFlow" doc:name="flow1" initialState="stopped">
<file:inbound-endpoint path="C:\Input" responseTimeout="10000" doc:name="File"/>
<logger message="#[message.payloadAs(java.lang.String)]" level="ERROR" />
</flow>
<flow name="flow2" doc:name="flow2">
<http:inbound-endpoint address="http://localhost:8080/file-pickup/start" doc:name="HTTP" exchange-pattern="request-response"/>
<expression-component>
app.registry.filePickupFlow.start();
</expression-component>
<file:outbound-endpoint path="C:\outputfile" responseTimeout="10000" doc:name="File"/>
</flow>
I am getting a couple of problems:
a) I am getting an error that the attribute initialState is not defined as a valid property of flow.
However, if I remove that attribute, the flow continues without waiting for "http://localhost:8080/file-pickup/start" to fire up.
b) The files are not moved to the destination folder
So how can I do this?
You can't reference a flow that has an inbound endpoint in it, because such a flow is already active and consuming events from its inbound endpoint, so you can't invoke it on demand.
The following, tested on Mule 3.3.1, shows how to start a "file pickup flow" on demand from an HTTP request.
<flow name="filePickupFlow" initialState="stopped">
<file:inbound-endpoint path="///tmp/mule/input" />
<!-- Do something with the file: here we just log its content -->
<logger message="#[message.payloadAs(java.lang.String)]" level="ERROR" />
</flow>
<flow name="filePickupStarterFlow">
<http:inbound-endpoint address="http://localhost:8080/file-pickup/start"
exchange-pattern="request-response" />
<expression-component>
app.registry.filePickupFlow.start();
</expression-component>
<set-payload value="File Pickup successfully started" />
</flow>
HTTP GETting http://localhost:8080/file-pickup/start would then start the filePickupFlow, which in turn will process the files in /tmp/mule/input.
Note that it is up to you to configure the file:connector for the behavior it must have for the files it processes; deleting them or moving them to another directory are the two main options.
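For example, a minimal sketch of a connector that moves processed files aside instead of deleting them (the connector name and target directory are illustrative; reference it from the inbound endpoint with connector-ref):
<!-- illustrative: processed files are moved to /tmp/mule/processed -->
<file:connector name="pickupFileConnector" moveToDirectory="/tmp/mule/processed"/>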
I guess in this case a file inbound endpoint to read a file on demand will not be helpful.
Please try the following way:
<flow name="flow1" doc:name="f2">
<http:inbound-endpoint address="http://localhost:8080"
doc:name="HTTP" exchange-pattern="request-response" />
<component>
<spring-object bean="fileLoader"></spring-object>
</component>
<file:outbound-endpoint path="C:\outputfile"
responseTimeout="10000" doc:name="File" />
</flow>
So the custom component will be a class that reads the file from your specified location.
Hope this helps.
You can use Mule Requester for a clean solution. See the details in the blog entry Introducing the Mule Requester.
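A minimal sketch of that approach, adapted to the flows above; the flow name and the file URI are illustrative and may need adjusting for your environment:
<mulerequester:config name="muleRequesterConfig"/>
<flow name="filePickupOnDemandFlow">
    <http:inbound-endpoint address="http://localhost:8080/file-pickup/start" exchange-pattern="request-response"/>
    <!-- pull a file on demand; the file URI below is a placeholder for the C:\Input folder from the question -->
    <mulerequester:request config-ref="muleRequesterConfig" resource="file:///C:/Input" timeout="6000"/>
    <file:outbound-endpoint path="C:\outputfile" responseTimeout="10000"/>
</flow>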