Purge messages from RabbitMQ queue in Mule 3 using HTTP requester - rabbitmq

My requirement is to clear all the messages from a queue (not delete the queue, only purge its messages) before processing the flow or publishing anything to the queue. We are using RabbitMQ, and for some reason messages get stuck in the queue, which causes issues when we count the queue based on its messages. So the next time, before processing, we have to clear the queue. We have multiple queues, like slave1, slave2 and slave3, and when the API is triggered, in the process section we have to clear the queues.
Mule 3 has a generic AMQP connector, so I think we have to use the HTTP requester to do the same.
Kindly help me find the HTTP API to connect to RabbitMQ and do the purging. I have gone through some forums (e.g. "How do I delete all messages from a single queue using the CLI?") but am not getting a clear idea.

If I understand correctly, you need help implementing the same HTTP request that is performed using curl in the previous answers, but using Mule 3.
That answer shows how to use the RabbitMQ management plugin's REST API to delete the contents of a queue:
curl -i -u guest:guest -XDELETE http://localhost:15672/api/queues/vhost_name/queue_name/contents
In Mule 3 something equivalent would be:
<http:request-config name="HTTP_Request_Configuration" host="${rabbitmqHost}" port="15672" doc:name="HTTP Request Configuration">
    <http:basic-authentication username="${user}" password="${password}"/>
</http:request-config>
<flow name="deleteQueueFlow">
    <http:request method="DELETE" doc:name="Request" config-ref="HTTP_Request_Configuration" path="/api/queues/${vhostname}/${queuename}/contents"/>
</flow>
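Since there are several queues (slave1, slave2, slave3), the same request can be looped before publishing. A minimal sketch, reusing the request configuration above; the flow name and the hard-coded queue list are illustrative:
<flow name="purgeSlaveQueuesFlow">
    <!-- iterate over the queue names to purge; adjust the list to match your broker -->
    <foreach collection="#[['slave1', 'slave2', 'slave3']]" doc:name="For Each Queue">
        <http:request method="DELETE" config-ref="HTTP_Request_Configuration" path="/api/queues/${vhostname}/#[payload]/contents" doc:name="Purge Queue"/>
    </foreach>
</flow>
Note that the vhost segment of the path must be URL-encoded; the default vhost "/" becomes %2F.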

Related

Clear messages from RabbitMQ queue in Mule 3

My requirement is to clear all the messages from a queue before processing the flow or publishing anything to the queue.
We are using RabbitMQ, and for some reason messages get stuck in the queue, which causes issues when we count the queue based on its messages. So the next time, before processing, we have to clear the queue.
We have multiple queues, like slave1, slave2 and slave3, and when the API is triggered, in the process section we have to clear the queues.
Kindly suggest how we can do this in Mule 3.
Mule 3 has a generic AMQP connector. It does not support administrative commands from a specific implementation like RabbitMQ, so you can't use the connector for this.
You could use the RabbitMQ REST API and call it using the HTTP Request connector. See this previous answer for how to delete queue contents with curl, then implement the same request with HTTP Request (as shown in the answer above): https://stackoverflow.com/a/29148299/721855

Use an inbound VM or JMS endpoint with transactional delivery to perform retries on the outbound FTP endpoint

Alternatively, you could remove until-successful and use an inbound VM or JMS endpoint with transactional delivery to perform retries on the outbound FTP endpoint.
Could you please provide an example of this with VM?
This relates to the question below:
How to make until-successful synchronous as a retry mechanism for FTP outbound in Mule 3.4.2
How about trying something like the below:
<flow name="transactionalVM">
<vm:inbound-endpoint path="orders" exchange-pattern="one-way">
<vm:transaction action="ALWAYS_BEGIN"/>
</vm:inbound-endpoint>
<file:outbound-endpoint ref="receivedOrders"/> <!-- replace this with your FTP endpoint -->
</flow>
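For this to work, something upstream has to publish to the VM queue; a minimal sketch of the hand-off (the flow name is illustrative):
<flow name="mainFlow">
    ... <!-- produce the payload to deliver -->
    <vm:outbound-endpoint path="orders" exchange-pattern="one-way"/>
</flow>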
Look at transaction management in Mule to figure out how the retries can be controlled; right now it's going to retry endlessly.
If you are wondering how the file outbound can be part of the transaction, have a look at the Mule documentation (Configuration Tips and Tricks).
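As one way to bound the retries, Mule 3.3+ lets a rollback exception strategy cap the number of redeliveries; this is a sketch, where the retry count and the dead-letter queue path are illustrative:
<flow name="transactionalVMWithRetryCap">
    <vm:inbound-endpoint path="orders" exchange-pattern="one-way">
        <vm:transaction action="ALWAYS_BEGIN"/>
    </vm:inbound-endpoint>
    <file:outbound-endpoint ref="receivedOrders"/> <!-- replace this with your FTP endpoint -->
    <rollback-exception-strategy maxRedeliveryAttempts="3">
        <!-- once the redelivery limit is exceeded, park the message on a dead-letter queue -->
        <on-redelivery-attempts-exceeded>
            <vm:outbound-endpoint path="orders.dlq" exchange-pattern="one-way"/>
        </on-redelivery-attempts-exceeded>
    </rollback-exception-strategy>
</flow>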

How to make until-successful synchronous as a retry mechanism for FTP outbound in Mule 3.4.2

I used a retry mechanism for outbound FTP by using until-successful. It is working fine, but it works asynchronously in Mule 3.4.2. I have seen that a synchronous option is available in 3.5. Is it possible to make the until-successful scope work synchronously in version 3.4.2? If so, could you please provide me the solution? Or is there any other solution to use as a retry mechanism for outbound FTP?
<until-successful objectStore-ref="objectStore" maxRetries="3" secondsBetweenRetries="1" doc:name="Until Successful">
    <ftp:outbound-endpoint host="10.10.10.10" port="7055" path="#[flowVars.FTPConfig.getPath()]" user="user" password="password" outputPattern="${filename}" responseTimeout="20000" doc:name="FTP" connector-ref="FTP"/>
</until-successful>
No, it is not possible to make until-successful synchronous in Mule 3.4. You need to upgrade to version 3.5, or even 3.6, which is available as of this writing.
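If you do upgrade, synchronous mode is a single attribute on the scope; a sketch based on your snippet (note that in synchronous mode the object store is no longer needed, and millisBetweenRetries replaces the deprecated secondsBetweenRetries):
<until-successful maxRetries="3" millisBetweenRetries="1000" synchronous="true" doc:name="Until Successful">
    <ftp:outbound-endpoint host="10.10.10.10" port="7055" path="#[flowVars.FTPConfig.getPath()]" user="user" password="password" outputPattern="${filename}" responseTimeout="20000" doc:name="FTP" connector-ref="FTP"/>
</until-successful>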
Alternatively, you could remove until-successful and use an inbound VM or JMS endpoint with transactional delivery to perform retries on the outbound FTP endpoint.

Mule until-successful router: is objectStore mandatory?

I am consuming a message from a JMS queue and submitting it to a SOAP-based web service. I want to make sure that I provide guaranteed delivery of the message to the web service.
I'm looking at two options:
1. Use the until-successful router (preferred) and, if unable to transmit the message, put it in a dead letter queue.
2. Use JMS transactions and, if the transmission of the message to the web service fails, roll back the transaction so the JMS message stays in the queue.
<jms:inbound-endpoint queue="ws.message"/>
<until-successful objectStore-ref="objectStore"
                  dlqEndpoint-ref="dlqChannel"
                  maxRetries="3"
                  secondsBetweenRetries="10">
    ...
</until-successful>
I am more inclined towards using the until-successful router, but my concern is that it requires a mandatory object store. I do not want to store the message in a database/object store; instead I want to push it to some JMS queue/dead letter queue and consume it from there.
Any helpful tips or suggestions to handle the situation are appreciated.
If you use until-successful, a reference to an object store is mandatory; Mule uses it to store messages between retries. You should also use a persistent object store to avoid message loss in case the Mule server or your application crashes.
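For reference, the store can be wired as a simple Spring bean; a sketch of the in-memory variant (contents are lost on restart), with the bean id matching the objectStore-ref above:
<spring:beans>
    <!-- in-memory store: fine for testing, but messages are lost if Mule restarts -->
    <spring:bean id="objectStore" class="org.mule.util.store.SimpleMemoryObjectStore"/>
</spring:beans>
For persistence without managing your own store, you can instead reference Mule's built-in persistent user store with objectStore-ref="_defaultUserObjectStore".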
The until-successful router should be the preferred method for this use case, IMO. It will make the config easier to read and maintain vs. using just JMS queues.
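Regarding the dead-letter requirement: dlqEndpoint-ref points at a global endpoint, so exhausted messages can land on a JMS queue you consume from later rather than staying in the store; a sketch, assuming a queue named ws.message.dlq:
<jms:endpoint name="dlqChannel" queue="ws.message.dlq"/>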

ActiveMQ loadbalancing in Mule

I have an issue with ActiveMQ load balancing with Mule. I am using Mule 3.2.0 and ActiveMQ 5.5.1.
I have a Mule flow application which uses a JMS inbound endpoint to listen to queues in ActiveMQ.
I have 2 instances of AMQ running, each having a queue called "MyQueue".
Let's name them AMQ1 and AMQ2.
I also have 2 Mule instances running, each having the same app. Let's name them Mule1 and Mule2.
Now I want each Mule instance to pick up messages from either of the AMQ queues. Say a message sender sends the message to the queue MyQueue in either AMQ1 or AMQ2 (the sender load balances using the failover transport supported by ActiveMQ, and that bit works fine), and say it reached AMQ1. Ideally I would like to have 10 consumers from each of Mule1 and Mule2 registered in each AMQ instance, so both are listening for incoming messages in both queues and one of them picks up the message and processes it.
This is the config I am using in Mule to connect to both the AMQ brokers.
<jms:activemq-connector name="Active_MQ" brokerURL="failover:tcp://10.0.64.158:61616,tcp://10.0.64.160:61616)?randomize=true" eagerConsumer="true" numberOfConsumers="10" dynamicNotification="true" validateConnections="true" clientId="MuleInstance1" doc:name="Active MQ">
<reconnect count="5" frequency="3000" blocking="false"/>
</jms:activemq-connector>
Kindly note that clientId is different for the different Mule instances. Also, currently AMQ1 and Mule1 share the same machine, and AMQ2 and Mule2 share another machine.
However, I am noticing some random behaviour. At times all consumers (from both Mule1 and Mule2) register to only one AMQ instance; at times Mule1 registers only to AMQ1 and Mule2 only to AMQ2.
What I ideally want is for the consumers of both Mule1 and Mule2 to register to both AMQ1 and AMQ2.
I followed the instructions here
http://www.mulesoft.org/documentation-3.2/display/MULE3USER/ActiveMQ+Integration
Basically, I wanted to use the network-of-brokers architecture so that there is no loss of service when a Mule instance or an AMQ instance goes down or has to be restarted.
I am not sure the randomize=true query param is helping in this case.
Can someone kindly advise how to achieve the above using Mule 3.2.0 and ActiveMQ 5.5.1?
If there isn't a solution, sadly I may have to make Mule1 listen only to AMQ1 and Mule2 only to AMQ2, and it won't really be clustered :(
Thanks in advance.
Got it working.
I got a suggestion on the Mule forum itself:
http://forum.mulesoft.org/mulesoft/topics/activemq_loadbalancing_with_mule
So basically, instead of relying on AMQ load balancing for the consumer, I used 2 AMQ connectors and a composite source in each Mule app listening on 2 inbound endpoints. And it works a treat: bringing up and shutting down Mule and AMQ instances all behaved as expected. Here's the config:
<jms:activemq-connector name="Active_MQ_1" brokerURL="failover: (tcp://10.0.64.158:61616)" eagerConsumer="true" numberOfConsumers="10" dynamicNotification="true" validateConnections="true" clientId="MuleInstance1" doc:name="Active MQ">
<reconnect count="5" frequency="3000" blocking="false"/>
</jms:activemq-connector>
<jms:activemq-connector name="Active_MQ_2" brokerURL="failover:(tcp://10.0.64.160:61616)" eagerConsumer="true" numberOfConsumers="10" dynamicNotification="true" validateConnections="true" clientId="MuleInstance1" doc:name="Active MQ">
<reconnect count="5" frequency="3000" blocking="false"/>
</jms:activemq-connector>
Now refer to those connectors from within your flow with a composite-source:
<flow name="MyAutomationFlow" doc:name="MyAutomationFlow">
<composite-source>
<jms:inbound-endpoint queue="MyOrderQ" connector-ref="Active_MQ1" doc:name="JMS Inbound Endpoint"/>
<jms:inbound-endpoint queue="MyOrderQ" connector-ref="Active_MQ2" doc:name="JMS Inbound Endpoint"/>
</composite-source>
........
Worked a treat!