How to set up a mirrored queue that will work when the master node goes down? - rabbitmq

I have a cluster of two RabbitMQ servers in my dev environment, and I want the queue and all of its messages to remain available when the original master goes down.
I made a durable queue on a durable exchange with the following attributes:
ha-mode : all
ha-sync-mode : automatic
x-queue-master-locator : min-masters
I also published a persistent message to the queue.
When I bring down the host that is the master for the queue, the queue's state changes to down. I expected ha-mode: all to copy the queue and its messages to every node, ha-sync-mode: automatic to keep the mirrors synced, and x-queue-master-locator to move the queue master to the other node (or, in production, to the node hosting the fewest masters). How do I set up a queue so that I can achieve this?
Edit(More info):
Server info:
rmq: 3.7.17
Erlang: 22.0.7
My config for both nodes:
vm_memory_high_watermark.relative = 0.65
vm_memory_high_watermark_paging_ratio = 0.8
disk_free_limit.relative = 2.0
channel_max = 32
num_acceptors.tcp = 20
num_acceptors.ssl = 0
handshake_timeout = 10000
frame_max = 160000
mirroring_sync_batch_size = 1024
background_gc_enabled = true
background_gc_target_interval = 300000

These ha-* attributes have no effect when you pass them as arguments while declaring the queue. You need to create a policy that applies these attributes to the queue.
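For illustration, here is a minimal sketch of creating such a policy through the management HTTP API (the host, port, credentials, default vhost and the policy name ha-all are assumptions; rabbitmqctl set_policy achieves the same thing):

import requests

# Assumed management endpoint and credentials; adjust for your environment.
MGMT = 'http://localhost:15672'
AUTH = ('guest', 'guest')

policy = {
    'pattern': '.*',          # match every queue; narrow the pattern as needed
    'apply-to': 'queues',
    'definition': {
        'ha-mode': 'all',
        'ha-sync-mode': 'automatic',
        'queue-master-locator': 'min-masters',
    },
}

# PUT /api/policies/<vhost>/<name> creates or updates the policy ('%2F' is the default vhost '/').
resp = requests.put(MGMT + '/api/policies/%2F/ha-all', json=policy, auth=AUTH)
resp.raise_for_status()

Note that in a policy the locator key is queue-master-locator, without the x- prefix used for queue arguments.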

Related

Celery fanout signals with `send_task`

I'm trying to establish communication between different processes running celery. I successfully sent tasks from one process to others using app.send_task on a celery instance. I am struggling now to broadcast tasks through a fanout rabbitmq exchange to all other instances (basically a publish-subscribe pattern for celery).
It must be related to the routing across exchanges and queues but I simply can't make it work.
This is the master emitter which broadcasts a task named signal through the default exchange of type fanout:
from celery import Celery
from kombu import Exchange, Queue

app = Celery('emitter',
             broker='pyamqp://test@localhost//',
             backend='db+sqlite:///results.db')

default_queue_name = 'default'
default_exchange_name = 'default'
default_routing_key = 'default'

default_exchange = Exchange(default_exchange_name, type='fanout')
default_queue = Queue(
    default_queue_name,
    default_exchange,
    routing_key=default_routing_key)

app.conf.task_queues = (
    default_queue,
)
app.conf.task_default_queue = default_queue_name
app.conf.task_default_exchange = default_exchange_name
app.conf.task_default_routing_key = default_routing_key

if __name__ == '__main__':
    app.send_task(name='signal', exchange='default')
To my understanding the routing, queue and exchange setup on the other app needs to be identical. Thus, this is a very similarly looking piece of code but defining a task that gets called:
from celery import Celery
from kombu import Exchange, Queue

app = Celery('CLIENT_A',
             broker='pyamqp://test@localhost//',
             backend='db+sqlite:///results.db')

default_queue_name = 'default'
default_exchange_name = 'default'
default_routing_key = 'default'

default_exchange = Exchange(default_exchange_name, type='fanout')
default_queue = Queue(
    default_queue_name,
    default_exchange,
    routing_key=default_routing_key)

app.conf.task_queues = (
    default_queue,
)
app.conf.task_default_queue = default_queue_name
app.conf.task_default_exchange = default_exchange_name
app.conf.task_default_routing_key = default_routing_key

@app.task(name='signal')
def signal():
    print('client_a signal')
    return 'signal'
The second client will look exactly the same as the first except for the name and the print message:
# [...]
app = Celery('CLIENT_B', ...
# [...] identical to the part above

@app.task(name='signal')
def signal():
    print('client_b signal')
I'm starting both client workers with different node names (otherwise celery will complain):
celery -A client_a worker -n node_a
celery -A client_b worker -n node_b
If I then call the emitter (the first piece of code), I see the signal task being triggered alternately by client_a and client_b, but never by both, which is what I would like.
The rabbitmq management platform looks as expected with the default exchange defined as fanout and the routing looks alright.
I'm not sure if I'm on the completely wrong track here but that's what I imagined should be possible with correct routing.
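For comparison, here is a minimal sketch of the per-worker-queue variant of this setup (the queue name client_a_queue is hypothetical): with a fanout exchange, each worker typically binds its own, uniquely named queue, so that every bound queue, and therefore every worker, receives a copy of each published message, whereas workers sharing a single queue have messages round-robined between them.

from celery import Celery
from kombu import Exchange, Queue

app = Celery('CLIENT_A', broker='pyamqp://test@localhost//')

fanout_exchange = Exchange('default', type='fanout')

# Each client declares its OWN queue bound to the shared fanout exchange,
# so a message published to the exchange is copied into every bound queue.
app.conf.task_queues = (
    Queue('client_a_queue', fanout_exchange, routing_key='default'),
)

@app.task(name='signal')
def signal():
    print('client_a signal')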

How to keep receiving messages when the master node goes down? [RabbitMQ]

I have two machines running RabbitMQ, a master (ip1) and a slave (ip2). I mirrored them, added a policy, and synced a queue and an exchange, but when I'm sending messages and the master goes down, the slave stops responding and my application is no longer able to send messages.
I have a virtual host, a durable queue and exchange, and a policy to promote the slave.
The policy:
Pattern: ^test
Apply to: all
Definition:
  ha-mode: exactly
  ha-params: 5
  ha-promote-on-failure: always
  ha-promote-on-shutdown: always
  ha-sync-mode: automatic
  queue-master-locator: random
Priority: 0
This is the application I'm using to publish messages (EDIT: the queue is already declared in the RabbitMQ UI):
import pika
import time

credentials = pika.PlainCredentials('usr', 'pass')
parameters = pika.ConnectionParameters('ip1',
                                       5672,
                                       'virtual_host',
                                       credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.exchange_declare(
    exchange="test_exchange",
    exchange_type="direct",
    passive=False,
    durable=True,
    auto_delete=False)
##channel.queue_declare(queue='test', durable=True, exclusive=False, auto_delete=False)

input('Press ENTER to begin')
for i in range(10000):
    channel.basic_publish(exchange='',
                          routing_key='test',
                          body=('Hello World! ' + str(i)))
    time.sleep(0.001)
    print(" [x] Sent 'Hello World!'")
I wish that when the master goes down, the slave becomes promoted and receives the messages.
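As a side note, a publisher that only ever connects to ip1 cannot keep publishing when that node goes away, even if the queue itself fails over. Below is a minimal sketch of listing both nodes and reconnecting on failure (the host names, credentials, and single-retry policy are assumptions, not the only way to do this):

import time
import pika

credentials = pika.PlainCredentials('usr', 'pass')
# Give pika both nodes; it attempts them until one accepts the connection.
endpoints = [
    pika.ConnectionParameters('ip1', 5672, 'virtual_host', credentials),
    pika.ConnectionParameters('ip2', 5672, 'virtual_host', credentials),
]

def connect():
    connection = pika.BlockingConnection(endpoints)
    return connection, connection.channel()

connection, channel = connect()
for i in range(10000):
    body = 'Hello World! ' + str(i)
    try:
        channel.basic_publish(exchange='', routing_key='test', body=body)
    except pika.exceptions.AMQPError:
        # Connection or channel lost (e.g. the master went down): reconnect and retry once.
        connection, channel = connect()
        channel.basic_publish(exchange='', routing_key='test', body=body)
    time.sleep(0.001)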

Apache Ignite - Distributed Queue and Executors

I am planning to use Apache Ignite's distributed queue.
I am using Ignite with a Spring Boot application, so on bootup I add 20 names to a queue. But since there are 3 servers in the cluster, the same 20 names get added 3 times. I want them added to the queue only once.
Ignite ignite = Ignition.ignite();
IgniteQueue<String> queue = ignite.queue(
    "queueName", // Queue name.
    0,           // Queue capacity. 0 for unbounded queue.
    null         // Collection configuration.
);
Distributed executors will be able to poll from the queue and run the task. Here, each executor is expected to poll, run the task, and then add the same name back to the queue; I'm trying to achieve round robin here.
Only one executor should be running a given task at any point in time, even though there are multiple servers in the cluster.
Any suggestions for this?
You can deploy an Ignite cluster singleton service (https://apacheignite.readme.io/docs/cluster-singletons) that fills the queue with data. Alternatively, you can add the data only from the coordinator node (the oldest node in the cluster): ignite.cluster().forOldest().node().isLocal()
I fixed the bootup-time duplicate cache-loading issue this way:
final IgniteAtomicLong cacheLoadCnt = ignite.atomicLong(cacheName + "Cnt", 0, true);
if (cacheLoadCnt.get() == 0) {
    loadCache();
    cacheLoadCnt.addAndGet(1);
}

RabbitMQ - Scheduled Queue - Dead Letter Queue - Good practice

We have set up a workflow environment with RabbitMQ.
It solves our needs, but I would like to know whether the way we handle scheduled tasks is also good practice.
Scheduling here does not mean mission-critical, exact timing: if a job should be retried after 60 seconds, that means 60+ seconds, depending on when the queue is processed.
I have created one queue, Q_WAIT, and use some headers to transport settings.
It works like this:
A worker runs subscribed to Q_ACTION.
If the action fails (e.g. the SMTP server is not reachable)
-> the message is (re-)published to Q_WAIT with properties.headers["scheduled"] = time + 60 seconds
Another process loops every 15 seconds through all messages in Q_WAIT using pop(), NOT a subscription:
q_WAIT.pop(:ack => true) do |delivery_info, properties, body| ...
if (properties.headers["scheduled"] has reached its time)
-> (re-)publish the message back to Q_ACTION
ack(message)
After each loop the connection is closed, so the messages that were NOT (re-)published stay in Q_WAIT because they were never acknowledged.
Can someone confirm this as a working (good) practice?
Sure, you can use a looping process like the one described in the original question.
Alternatively, you can utilize the Time-To-Live extension together with the Dead Letter Exchanges extension.
First, set the x-dead-letter-exchange argument on Q_WAIT to the current exchange, and x-dead-letter-routing-key to the routing key that Q_ACTION is bound with.
Then set the x-message-ttl queue argument, or set the expiration message property during publishing if you need a custom per-message TTL (which is not best practice and has some well-known caveats, but it works too).
In this case your messages will be dead-lettered from Q_WAIT to Q_ACTION right after their TTL expires, without any additional consumers, which is more reliable and stable.
Note: if you need advanced re-publish logic (changing the message body or properties), you need an additional queue (say Q_PRE_ACTION) to consume messages from, change them, and then publish them to the target queue (say Q_ACTION).
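As a rough illustration of that wiring in Python/pika (the 60-second TTL, the localhost connection, and using the default exchange as the dead-letter target are assumptions):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Target queue that the worker subscribes to.
channel.queue_declare(queue='Q_ACTION', durable=True)

# Wait queue: expired messages are dead-lettered to the default exchange
# with routing key Q_ACTION, i.e. they land back in Q_ACTION.
channel.queue_declare(queue='Q_WAIT', durable=True, arguments={
    'x-dead-letter-exchange': '',
    'x-dead-letter-routing-key': 'Q_ACTION',
    'x-message-ttl': 60000,  # 60 seconds, in milliseconds
})

# On failure, the worker re-publishes the message to Q_WAIT;
# RabbitMQ moves it back to Q_ACTION once the TTL expires.
channel.basic_publish(exchange='', routing_key='Q_WAIT', body='retry me')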
As mentioned in the comments, I tried the x-dead-letter-exchange feature and it worked for most requirements. One question / misunderstanding concerns the per-message TTL option.
Please look at the example below. From my understanding:
the delayed queue has a TTL of 10 seconds
so the first message should be available to the subscriber 10 seconds after publishing
the second message is published 1 second after the first, with a per-message TTL (expiration) of 3 seconds
I would expect the second message to be delivered about 3 seconds after publishing, before the first message.
But it did not work like that: both become available only after 10 seconds.
Q: Shouldn't the message expiration overrule the queue's TTL?
#!/usr/bin/env ruby
# encoding: utf-8
require 'bunny'

B = Bunny.new ENV['CLOUDAMQP_URL']
B.start

DELAYED_QUEUE = 'work.later'
DESTINATION_QUEUE = 'work.now'

def publish
  ch = B.create_channel
  # declare a queue with the DELAYED_QUEUE name
  q = ch.queue(DELAYED_QUEUE, :durable => true, arguments: {
    # set the dead-letter exchange to the default exchange
    'x-dead-letter-exchange' => '',
    # when the message expires, change the routing key to the destination queue name
    'x-dead-letter-routing-key' => DESTINATION_QUEUE,
    # the time in milliseconds to keep the message in the queue
    'x-message-ttl' => 10000,
  })
  # publish to the default exchange with the delayed queue name as routing key,
  # so that the message ends up in the newly declared delayed queue
  ch.basic_publish('message content 1 ' + Time.now.strftime("%H-%M-%S"), "", DELAYED_QUEUE, :persistent => true)
  puts "#{Time.now}: Published the message 1"
  # wait a moment before the next publish
  sleep 1.0
  # publish this one with a shorter ttl
  ch.basic_publish('message content 2 ' + Time.now.strftime("%H-%M-%S"), "", DELAYED_QUEUE, :persistent => true, :expiration => "3000")
  puts "#{Time.now}: Published the message 2"
  ch.close
end

def subscribe
  ch = B.create_channel
  # declare the destination queue
  q = ch.queue DESTINATION_QUEUE, durable: true
  q.subscribe do |delivery, headers, body|
    puts "#{Time.now}: Got the message: #{body}"
  end
end

subscribe()
publish()
sleep

RabbitMQ Prefetch

Up until now, my RabbitMQ consumer clients have used a prefetch value of 1. I'm looking to increase the value in order to gain performance. If I set the value to 2, will the RabbitMQ server send each consumer 2 messages at once such that I will need to parse the two messages and store the second one in a List until the first is processed and acknowledged? Or will the API handle this behind the scenes?
I'm using the Java AMQP client library:
ConnectionFactory factory = new ConnectionFactory();
...
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.basicQos(2);

QueueingConsumer consumer = new QueueingConsumer(channel);
channel.basicConsume(CONSUME_QUEUE_NAME, false, consumer);

while (!Thread.currentThread().isInterrupted()) {
    try {
        QueueingConsumer.Delivery delivery = consumer.nextDelivery();
        String m = new String(delivery.getBody(), "UTF-8");
        // Will m contain two messages? Will I have to parse each message and keep track of them within a List?
        ...
}
The API handles this behind the scenes, so there is nothing for you to worry about there.
Regarding which message goes where, RabbitMQ delivers using round robin. That is, if the queue holds messages 1 2 3 4 5 6 and you have consumer1 and consumer2:
consumer1 will have 1 3 5
consumer2 will have 2 4 6
Should the connection to any of your consumers die, its unacknowledged prefetched messages will be redelivered to the active consumers using the same delivery method.
This should be interesting reading and a good starting point to figure out more exactly what happens:
Tutorial no. 2, which I'm sure you've read
Reliability
The API internally queues messages in a blocking queue.
Setting the prefetch count to more than 1 is actually a good idea, since your worker then does not need to wait for each individual message to arrive: it can read ahead up to N messages (where N is the prefetch count) and start working on the next one as soon as it has finished the previous one.
Also, you have the option to acknowledge multiple messages at once instead of acknowledging them individually:
channel.basicAck(lastDeliveryTag, true);
Passing true acknowledges all messages up to and including the supplied lastDeliveryTag.
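The same ideas in Python/pika, for illustration (the queue name my_queue and the prefetch value of 2 are assumptions):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Let the broker send up to 2 unacknowledged messages to this consumer;
# the client library buffers them, so the callback still sees one at a time.
channel.basic_qos(prefetch_count=2)

def on_message(ch, method, properties, body):
    print('processing', body)
    # multiple=True acknowledges every delivery up to and including this tag
    ch.basic_ack(delivery_tag=method.delivery_tag, multiple=True)

channel.basic_consume(queue='my_queue', on_message_callback=on_message)
channel.start_consuming()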