I am building a cluster of RabbitMQ servers. I use the shovel plugin to deliver messages from one RabbitMQ to another (on different machines).
It works fine, but I want to test how it behaves when there is no network connection between the servers.
On each server I have a local queue. I push messages to it and the shovel delivers them to the remote RabbitMQ queue.
To emulate network problems I ran
iptables -A OUTPUT -d xx.xx.xx.xx -j DROP
to block the connection to the remote server xx.xx.xx.xx.
Then I push a message to the local queue: it disappears from the queue, but it never arrives on the remote server!
How can that be? Doesn't the shovel check that the remote queue is available before removing a message from the local queue?
How do I make this work correctly? I want the shovel not to remove a message from the queue until it has confirmed that the message was delivered to the remote queue.
I have found the solution to my problem.
I changed the settings of the shovel.
There was the option
ack_mode, on_publish
I changed it to
ack_mode, on_confirm
and it started to work correctly.
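In dynamic-shovel terms (the rabbitmqctl form used elsewhere on this page), the same fix is setting "ack-mode" to "on-confirm". A sketch, where the destination host, credentials, and queue names are placeholders:

```shell
# Sketch of a dynamic shovel with on-confirm acknowledgements.
# The source is the local broker ("amqp://"); remote-host, user:pass,
# local_q and remote_q are illustrative placeholders.
rabbitmqctl set_parameter shovel my_shovel \
  '{"src-uri": "amqp://", "src-queue": "local_q",
    "dest-uri": "amqp://user:pass@remote-host:5672", "dest-queue": "remote_q",
    "ack-mode": "on-confirm"}'
```

With on-confirm, the shovel acknowledges (and thus removes) a message on the source only after the destination broker confirms the publish, so messages survive a broken link instead of being dropped.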
I'm trying to copy all the messages in queue (Q1) to another queue (Q2) running on a different machine.
I'm using the shovel plugin, and both nodes are running AMQP 0-9-1. I've tested the connection: if I set the destination queue to a non-existing one, it does indeed create a new queue on the separate machine, so I know the connection works.
rabbitmqctl set_parameter shovel test '{"src-uri": "amqp://guest:guest@localhost:5672", "src-queue": "q1", "ack-mode": "on-confirm", "dest-uri": "amqp://guest:guest@host:5672", "dest-queue": "q2"}'
I expected the plugin to transfer all existing messages to Q2; however, they're not being transferred. Does the shovel plugin not do this?
It's because the messages were not in the Ready state. I had to kill my Celery worker, which was holding them unacknowledged, and then the messages transferred successfully.
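A quick way to diagnose this situation (assuming rabbitmqctl access on the source node) is to compare ready versus unacknowledged counts, since only Ready messages are moved by the shovel:

```shell
# Messages held unacked by a consumer (e.g. a Celery worker) appear in the
# messages_unacknowledged column and will not be shovelled until released.
rabbitmqctl list_queues name messages_ready messages_unacknowledged
```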
I have 2 brokers, and in a specific scenario I want to force the client to connect to a specific broker.
How can I achieve this without dropping the other broker, while still using the failover mechanism?
You can use the priority backup feature of the failover URI to indicate a preference for a specific broker, which the client will try to stay connected to. If that broker goes down, the client will fail over to any other broker you have configured, while it keeps trying to reconnect to the priority backup in the background.
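Assuming ActiveMQ 5.6 or later, a client-side failover URI with a preferred broker might look like this (broker host names are placeholders):

```
failover:(tcp://brokerA:61616,tcp://brokerB:61616)?randomize=false&priorityBackup=true
```

With randomize=false the first broker in the list is preferred, and priorityBackup=true makes the transport reconnect to it in the background whenever it becomes available again.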
I have the same requirement, frequently. For clarity: I use failover with two brokers, A and B. A is currently the primary, and I have an issue requiring a restart. I want all sending clients to connect to B while I leave the consumers to empty the queues on A; when the queues are empty, I restart A.
The only way I've found to do this is to close the ActiveMQ port on A: then my sending clients connect to B, and my consumer on A (running on the same machine, fortunately) can empty the queues. As well as closing the port, it seemed I also had to execute
iptables -I INPUT -p tcp --dport 61616 -j REJECT
YMMV
I wish to run an experiment in which the publisher loses its connection to the broker, enqueues messages in its own local queue, and then, when it regains connectivity, sends all its queued messages to the broker. How can I do this, given that once I call close on the connection I can no longer send (it raises an exception)? A trick I can think of is to use a network of two brokers and simulate the above by breaking the connection between the two brokers. Is there an API call I can use to do this?
This is very much like Facebook Messenger or WhatsApp acting as a publisher: our to-send messages are queued while we are offline and sent once we are connected again.
There are plenty of solutions you could use to break the connection for testing; here is a non-comprehensive list:
Make a script that can set/unset a firewall rule in your environment blocking the connection port
If you are working with VMs, you can suspend/resume the one running ActiveMQ; you can even automate it with tools like Vagrant (vagrant suspend, then vagrant up)
Tweak the connection manually via the ActiveMQ JMX interface
Develop an ActiveMQ plugin able to drop connections on demand (or maybe one already exists?)
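The first item in the list can be sketched as a pair of shell functions. The port (61616, the ActiveMQ default) is an assumption here, and the iptables commands need root:

```shell
# Assumed ActiveMQ port; adjust to your broker's transport connector.
AMQ_PORT=61616

# Insert a REJECT rule to sever client connections to the broker.
block_amq()   { iptables -I INPUT -p tcp --dport "$AMQ_PORT" -j REJECT; }

# Remove the rule again to restore connectivity.
unblock_amq() { iptables -D INPUT -p tcp --dport "$AMQ_PORT" -j REJECT; }
```

Call block_amq before the test run and unblock_amq afterwards.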
Now, in order to get the behavior you want, there are two options:
1) Make sure your connection URI uses failover so the connection can be re-established, and store your messages on disk before sending them with your producer.
2) Produce to a local broker embedded in your app, and connect that broker to the remote broker.
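Option 2 can be sketched as a network connector in the local (embedded) broker's activemq.xml; 'remote-host' is a placeholder. Messages sent to the local broker are persisted in its store and forwarded automatically once the link to the remote broker is re-established:

```xml
<!-- Local broker: store-and-forward everything to the remote broker.
     'remote-host' is a placeholder for the real remote broker address. -->
<networkConnectors>
  <networkConnector name="to-remote"
      uri="static:(failover:(tcp://remote-host:61616))"/>
</networkConnectors>
```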
I am trying to build a system where multiple servers exchange messages.
I have a server called Master and another server called Slave.
Master sends messages to Slave, and Slave sends messages to Master, asynchronously.
I have a RabbitMQ server on both machines and use the federation plugin on both of them to exchange messages.
So publishers and consumers on both servers communicate only with their local RabbitMQ server, and all message exchange between the servers is done only by RabbitMQ.
It works fine while both servers are online.
My requirement is that when there is no network connection between the servers, messages should accumulate until the connection is back.
And that doesn't work with the federation plugin: if the federation link is not active, messages are not stored on the local RabbitMQ.
What should I do to get a model where messages can wait for the connection before being delivered to the other RabbitMQ server?
Do I need to provide more info on my current model?
Here is a simpler description:
RabbitMQ1 has an exchange MASTER. RabbitMQ2 has a federation link to RabbitMQ1, with permissions assigned on the exchange MASTER.
A publisher writes to RabbitMQ1's exchange MASTER with routing key 'myqueue'.
A consumer listens on RabbitMQ2's exchange MASTER and queue 'myqueue'.
If there is a connection, everything works fine.
If there is no connection, messages posted to RabbitMQ1 are not delivered to RabbitMQ2 once the connection is back.
How do I solve this?
I found the solution for this. Federation is not the right plugin for this scenario.
I used the shovel plugin instead. It does exactly what I need.
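A sketch of the replacement, assuming a dynamic shovel (credentials, the RabbitMQ2 host name, and the buffer queue name are placeholders): bind a durable queue to exchange MASTER on RabbitMQ1 so messages accumulate locally while the link is down, and let the shovel drain it to RabbitMQ2:

```shell
# On RabbitMQ1: shovel a durable buffer queue (bound to exchange MASTER)
# to exchange MASTER on RabbitMQ2. With ack-mode on-confirm, messages stay
# in the local queue until RabbitMQ2 confirms them, so a dead link simply
# lets them accumulate.
rabbitmqctl set_parameter shovel master_to_rabbitmq2 \
  '{"src-uri": "amqp://", "src-queue": "master_buffer",
    "dest-uri": "amqp://user:pass@rabbitmq2:5672",
    "dest-exchange": "MASTER", "dest-exchange-key": "myqueue",
    "ack-mode": "on-confirm"}'
```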
We are using Apache ActiveMQ 5.5.
We have a broker (let us call it Main Broker) running at tcp://0.0.0.0:61616. This broker does store-and-forward of messages to a remote broker. To do that, we have a network connector from this broker to two remote brokers. We want one of the remote brokers to serve as primary and the other as failover. This is the network connector URI that we are using:
static:(failover://(tcp://server1:61617,tcp://server2:61617)?randomize=false)
We are using Spring's DefaultMessageListenerContainer to listen for the messages, with the broker URL:
failover://(tcp://server1:61617,tcp://server2:61617)?randomize=false
In the normal scenario, when all the brokers are up and running and a message is sent to the Main Broker, it is forwarded to server1 and consumed by the listener.
If we stop the broker on server1, failover happens successfully: messages are forwarded to server2 and consumed by the listener. The problem is that when we bring server1 back up, messages continue to be forwarded by the Main Broker to server2. Our requirement is that once server1 is up and running again, the Main Broker should start forwarding messages to server1, and the listener should reconnect to server1 and consume from there. We cannot change randomize to true because we want only one of server1 or server2 to be active at a time.
Please let me know whether this is possible and how.
You need to set the option "priorityBackup" to true. Your URI will become:
static:(failover://(tcp://server1:61617,tcp://server2:61617)?randomize=false&priorityBackup=true)
This will make server1 (the first in the list of servers) the priority backup. When server1 goes down, the connector will fail over to server2 but constantly try to reconnect to server1; hence, when server1 comes back up, it will switch back to it. This option is only available from version 5.6.
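On the Main Broker side, the corresponding activemq.xml fragment might look like this (a sketch using the host names from the question; note that & must be escaped as &amp;amp; inside XML attributes):

```xml
<!-- Main Broker: forward to server1 by preference, server2 as backup. -->
<networkConnectors>
  <networkConnector name="to-servers"
      uri="static:(failover://(tcp://server1:61617,tcp://server2:61617)?randomize=false&amp;priorityBackup=true)"/>
</networkConnectors>
```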
The complete details are here:
http://activemq.apache.org/failover-transport-reference.html
There is also an interesting blog here:
http://bsnyderblog.blogspot.com/2010/10/new-features-in-activemq-54-automatic.html