Why don't sentinels subscribe to the channel "__sentinel__:hello" in sentinelReconnectInstance? - redis

While looking into Redis's source code, I found that when a sentinelRedisInstance is of type SRI_SENTINEL, sentinelReconnectInstance will not initialize its link->pc and will not subscribe to the channel "__sentinel__:hello", as the following code shows:
void sentinelReconnectInstance(sentinelRedisInstance *ri) {
    ...
    if ((ri->flags & (SRI_MASTER|SRI_SLAVE)) && link->pc == NULL) {
        ...
        retval = redisAsyncCommand(link->pc,
            sentinelReceiveHelloMessages, ri, "%s %s",
            sentinelInstanceMapCommand(ri,"SUBSCRIBE"),
            SENTINEL_HELLO_CHANNEL);
        ...
As a result, I think sentinels can't get any messages from the channel "__sentinel__:hello".
However, Redis's documentation says:
Every Sentinel is subscribed to the Pub/Sub channel __sentinel__:hello of every master and replica, looking for unknown sentinels. When new sentinels are detected, they are added as sentinels of this master.
I take this to mean that all sentinels subscribe to the channel "__sentinel__:hello", but I can't see any corresponding implementation in Redis's source code.

Correct me if I'm wrong.
A sentinelRedisInstance of type SRI_MASTER connects to the master node that this sentinel is monitoring, one of type SRI_SLAVE connects to a slave node, and one of type SRI_SENTINEL connects to other sentinels.
There's no need to subscribe to another sentinel's channel; sentinels only need to subscribe to the channel on the master and slave nodes. When a new sentinel appears, it publishes hello messages to the channel on the masters and slaves it monitors, and the other sentinels, which are subscribed to those same channels, discover it that way.
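You can observe this from the outside: subscribing to the channel on the monitored master (not on a sentinel) shows the periodic announcements. A minimal sketch using redis-py, with a placeholder address for the master:

import redis

# Connect to the monitored master itself (placeholder address), not to a sentinel.
r = redis.Redis(host='127.0.0.1', port=6379)
p = r.pubsub()
p.subscribe('__sentinel__:hello')

# Every monitoring sentinel announces itself on this channel every ~2 seconds:
# ip,port,runid,current_epoch,master_name,master_ip,master_port,master_config_epoch
for msg in p.listen():
    if msg['type'] == 'message':
        print(msg['data'])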

Related

Is it possible for only the Redis master instances to handle all reads and writes, and for the slaves to be used only for failover?

My work system consists of Spring web applications that use Redis as a transaction counter and conditionally block transaction requests.
The transaction is as follows:
1. Check whether or not the data exists. (HGET)
2. If it doesn't, save a new entry with count 0 and set an expiration time. (HSET, EXPIRE)
3. Increase the count value. (INCRBY)
4. If the increased count value reaches a specific configured limit, set the transaction to 'blocked'. (HSET)
The limit value is my company's business policy.
These read and write operations are issued one after another, immediately.
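For illustration, a minimal sketch of that flow with redis-py (the key name, limit, and TTL are hypothetical; since the counter lives in a hash field, the sketch uses HINCRBY, the hash-field form of INCRBY):

import redis

r = redis.Redis()  # the single master instance

def count_transaction(tx_key, limit=100, ttl=60):  # limit/ttl are placeholders
    # 1. Check whether or not the data exists (HGET).
    if r.hget(tx_key, 'count') is None:
        # 2. Save a new entry with count 0 and set an expiration time (HSET, EXPIRE).
        r.hset(tx_key, 'count', 0)
        r.expire(tx_key, ttl)
    # 3. Increase the count value (HINCRBY).
    count = r.hincrby(tx_key, 'count', 1)
    # 4. Block the transaction once the configured limit is reached (HSET).
    if count >= limit:
        r.hset(tx_key, 'blocked', 1)
    return count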
Currently, I use one Redis instance on one machine (only a master, no replicas).
I want Redis HA, so I need slave instances, but at the same time I want all reads and writes to go only to the master instance because of slave data replication latency.
After some research, I found that putting a proxy server in front of Redis is a common way to get HA. However, with a proxy, it seems impossible to have only the master instance receive requests and keep the slaves purely for failover.
Is it possible?
Thanks in advance.
What you need is Redis Sentinel.
With Redis Sentinel, you get the current master's address from a sentinel and do all reads and writes against that master. If the master goes down, Redis Sentinel performs the failover and elects a new master; you then get the address of the new master from a sentinel again. That way the slaves are used only as failover candidates, never as read targets.
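For example, with a client that has built-in Sentinel support, that lookup is a single call. A minimal sketch with redis-py, where the sentinel address and the service name 'mymaster' are placeholders for your deployment:

from redis.sentinel import Sentinel

# Ask any sentinel for the current master of the monitored service.
sentinel = Sentinel([('127.0.0.1', 26379)], socket_timeout=0.5)
master = sentinel.master_for('mymaster', socket_timeout=0.5)

# All reads and writes go to the master; after a failover, the same
# handle resolves to the newly elected master automatically.
master.set('tx:count', 1)
print(master.get('tx:count'))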
Since you're going to use Lettuce as the Redis cluster driver, you should set the read preference to the master and things should work fine. A sample configuration might look like this:
// Read preference: route all reads to the master node.
LettuceClientConfiguration lettuceClientConfiguration =
        LettuceClientConfiguration.builder().readFrom(ReadFrom.MASTER).build();

// Seed nodes; the client discovers the full cluster topology from these.
RedisClusterConfiguration redisClusterConfiguration = new RedisClusterConfiguration();
List<RedisNode> redisNodes = new ArrayList<>();
redisNodes.add(new RedisNode("127.0.0.1", 9000));
redisNodes.add(new RedisNode("127.0.0.1", 9001));
redisNodes.add(new RedisNode("127.0.0.1", 9002));
redisNodes.add(new RedisNode("127.0.0.1", 9003));
redisNodes.add(new RedisNode("127.0.0.1", 9004));
redisNodes.add(new RedisNode("127.0.0.1", 9005));
redisClusterConfiguration.setClusterNodes(redisNodes);

LettuceConnectionFactory lettuceConnectionFactory =
        new LettuceConnectionFactory(redisClusterConfiguration, lettuceClientConfiguration);
lettuceConnectionFactory.afterPropertiesSet();
See it in action at Redis Cluster Configuration.

How to keep receiving messages when the master node goes down? [RabbitMQ]

I have two machines running RabbitMQ, a master (ip1) and a slave (ip2). I mirrored them, added a policy, and synced a queue and an exchange, but when I'm sending messages and the master goes down, the slave stops responding and my application is not able to send messages anymore.
I have a virtual host, a durable queue and exchange, and a policy to promote the slave.
The policy:
Pattern: ^test
Apply to: all
Definition:
    ha-mode: exactly
    ha-params: 5
    ha-promote-on-failure: always
    ha-promote-on-shutdown: always
    ha-sync-mode: automatic
    queue-master-locator: random
Priority: 0
This is the application that I'm using to publish messages (EDIT: the queue is already set up in the RabbitMQ UI):
import pika
import time

credentials = pika.PlainCredentials('usr', 'pass')
parameters = pika.ConnectionParameters('ip1',
                                       5672,
                                       'virtual_host',
                                       credentials)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.exchange_declare(
    exchange="test_exchange",
    exchange_type="direct",
    passive=False,
    durable=True,
    auto_delete=False)
##channel.queue_declare(queue='test', durable=True, exclusive=False, auto_delete=False)
input('Press ENTER to begin')
for i in range(10000):
    channel.basic_publish(exchange='',
                          routing_key='test',
                          body=('Hello World! ' + str(i)))
    time.sleep(0.001)
    print(" [x] Sent 'Hello World!'")
I want the slave to be promoted and to keep receiving messages when the master goes down.
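One client-side detail worth checking, independent of the mirroring policy: the publisher above only ever connects to ip1. pika's BlockingConnection accepts a list of connection parameters and tries them in order, so a sketch like the following (same credentials and hosts as above) lets the client reach whichever node is up:

import pika

credentials = pika.PlainCredentials('usr', 'pass')
parameters = [
    pika.ConnectionParameters('ip1', 5672, 'virtual_host', credentials),
    pika.ConnectionParameters('ip2', 5672, 'virtual_host', credentials),
]
# Connection attempts are made against each node in order, so the client
# can still connect when ip1 is down and ip2 has been promoted.
connection = pika.BlockingConnection(parameters)

An already established connection that dies mid-publish still raises pika.exceptions.ConnectionClosed (or a subclass), so the publish loop also needs a catch-and-reconnect wrapper to survive the failover itself.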

kombu not reconnecting to RabbitMQ

I have two servers, call them A and B. B runs RabbitMQ, while A connects to RabbitMQ via Kombu. If I restart RabbitMQ on B, the kombu connection breaks, and the messages are no longer delivered. I then have to reset the process on A to re-establish the connection. Is there a better approach, i.e. is there a way for Kombu to re-connect automatically, even if the RabbitMQ process is restarted?
My basic code implementation is below, thanks in advance! :)
def start_consumer(routing_key, incoming_exchange_name, outgoing_exchange_name):
    global rabbitmq_producer
    incoming_exchange = kombu.Exchange(name=incoming_exchange_name, type='direct')
    incoming_queue = kombu.Queue(name=routing_key+'_'+incoming_exchange_name, exchange=incoming_exchange, routing_key=routing_key)#, auto_delete=True)
    outgoing_exchange = kombu.Exchange(name=outgoing_exchange_name, type='direct')
    rabbitmq_producer = kombu.Producer(settings.rabbitmq_connection0, exchange=outgoing_exchange, serializer='json', compression=None, auto_declare=True)
    settings.rabbitmq_connection0.connect()
    if settings.rabbitmq_connection0.connected:
        callbacks = []
        queues = []
        callbacks.append(callback)
        # if push_queue:
        #     callbacks.append(push_message_callback)
        queues.append(incoming_queue)
        print 'opening a new *incoming* rabbitmq connection to the %s exchange for the %s queue' % (incoming_exchange.name, incoming_queue.name)
        incoming_exchange(settings.rabbitmq_connection0).declare()
        incoming_queue(settings.rabbitmq_connection0).declare()
        print 'opening a new *outgoing* rabbitmq connection to the %s exchange' % outgoing_exchange.name
        outgoing_exchange(settings.rabbitmq_connection0).declare()
        with settings.rabbitmq_connection0.Consumer(queues=queues, callbacks=callbacks) as consumer:
            while True:
                settings.rabbitmq_connection0.drain_events()
On the consumer side, kombu.mixins.ConsumerMixin handles reconnecting when the connection goes away (it also does heartbeats, etc., and lets you write less code). There doesn't seem to be a ProducerMixin, unfortunately, but you could potentially dig into the code and adapt it.
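A minimal ConsumerMixin sketch (the queue/exchange names and broker URL are placeholders); the mixin's run() loop re-establishes the connection when the broker restarts:

import kombu
from kombu.mixins import ConsumerMixin

class Worker(ConsumerMixin):
    def __init__(self, connection, queues):
        self.connection = connection  # the mixin re-establishes this on failure
        self.queues = queues

    def get_consumers(self, Consumer, channel):
        return [Consumer(queues=self.queues, callbacks=[self.on_message])]

    def on_message(self, body, message):
        print('received: %r' % (body,))
        message.ack()

exchange = kombu.Exchange('incoming', type='direct')
queue = kombu.Queue('incoming_queue', exchange=exchange, routing_key='my_key')
with kombu.Connection('amqp://guest:guest@localhost:5672//') as connection:
    Worker(connection, [queue]).run()  # keeps consuming across broker restarts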

In RabbitMQ, how to consume multiple messages, read all messages in a queue, or read all messages in an exchange using a specific key?

I want to consume multiple messages from a specific queue, or from a specific exchange with a given key.
The scenario is as follows:
Publisher publish message 1 over queue 1
Publisher publish message 2 over queue 1
Publisher publish message 3 over queue 1
Publisher publish message 4 over queue 2
Publisher publish message 5 over queue 2
..
Consumer consumes messages from queue 1 and gets [message 1, message 2, message 3] all at once, handling them in one callback:
listen_to(queue_name, num_of_msg_to_fetch or all, function(messages){
    //do some stuff with the returned list
});
The messages do not arrive at the same time; they are like events, and I want to collect them in a queue, package them, and send them to a third party.
I also read this post:
http://rabbitmq.1065348.n5.nabble.com/Consuming-multiple-messages-at-a-time-td27195.html
Thanks
Don't consume directly from the queue, as queues distribute messages across consumers round robin (an AMQP mandate).
Use a shovel to transfer the queue contents to a fanout exchange and consume messages right from this exchange. That way you get all messages across all connected consumers. :)
If you want to consume multiple messages from a specific queue, you can try the following:
channel.queueDeclare(QUEUE_NAME, false, false, false, null);
Consumer consumer = new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag,
                               Envelope envelope,
                               AMQP.BasicProperties properties,
                               byte[] body)
            throws IOException {
        String message = new String(body, "UTF-8");
        logger.info("Received Message --> " + message);
    }
};
channel.basicConsume(QUEUE_NAME, true, consumer);
You might need to conceptually separate domain messages from RMQ messages. As a producer, you'd then bundle multiple domain messages into a single RMQ message and .produce() it to RMQ. Keep in mind that this kind of design introduces timeouts and latencies due to the existence of a batching window (you might take some inspiration from Kafka, which does bundling to optimize I/O at the cost of latency).
As a consumer, you'd then have a typical .handleDelivery implementation that transforms the received body for processing: byte[] -> Set[DomainMessage] -> your listener.
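As an illustration of that bundling idea, a sketch in Python with pika for consistency with the earlier examples (the queue name and the handle_all() listener are hypothetical):

import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()
channel.queue_declare(queue='bundles')

# Producer side: pack several domain messages into one RMQ message.
domain_messages = [{'event': 1}, {'event': 2}, {'event': 3}]
channel.basic_publish(exchange='', routing_key='bundles',
                      body=json.dumps(domain_messages))

# Consumer side: body -> list of domain messages -> one callback for the batch.
def on_message(ch, method, properties, body):
    messages = json.loads(body)   # the whole bundle arrives as one delivery
    handle_all(messages)          # hypothetical listener for the batch
    ch.basic_ack(method.delivery_tag)

channel.basic_consume(queue='bundles', on_message_callback=on_message)
channel.start_consuming()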

Get queue name on ActiveMQ server to push a message

I have 10 queues on an ActiveMQ server.
I have a producer which wants to push messages onto one of the queues (the producer will select the queue randomly at run time), so how can I pass the destination name in the createProducer method?
I understand that I need to pass an object of type Destination, and the producer knows the queue names on the server. Is it possible to pass (or convert) a string to a Destination object and pass that to the createProducer method?
Thanks
If I understand your problem correctly:
If you're running Java and have a valid session, you can use Session.createQueue():
// Create a Destination using the queue name
Destination destination = session.createQueue("queue name");
// Create a MessageProducer from the Session to the Queue
MessageProducer producer = session.createProducer(destination);
Here is a complete example of doing this at the Apache site.