Redis PubSub message order in cluster is not guaranteed?

Is the message order of pubsub messages in a redis cluster in any way guaranteed?
We are using a Redis cluster (v3.2.8) with 5 master nodes, each with one slave attached, and we noticed that we sometimes receive pubsub messages out of order when publishing to one specific master on one specific channel while being subscribed to that channel on the slave nodes.
I could not find any statements about pubsub message ordering in a cluster on redis.io or in the Redis GitHub repo.

First of all, if you are using PUBLISH, it is blocking and returns only after the messages have been delivered, so yes, the order is guaranteed.
There are two problematic cases that I see: pipelining and client disconnection.
Pipelining
From the documentation:
While the client sends commands using pipelining, the server will be forced to queue the replies, using memory.
So, if a queue is used, the order should be guaranteed.
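As a quick illustration, here is a minimal redis-py sketch (the client library and the "events" channel name are my choices, not from the question): replies from a pipeline come back in submission order.

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Commands in a pipeline are sent as one batch, but the server executes
# them, and queues their replies, strictly in the order they were added.
pipe = r.pipeline()
for i in range(3):
    pipe.publish("events", f"message-{i}")  # "events" is a hypothetical channel
replies = pipe.execute()
print(replies)  # one reply per PUBLISH, in the order they were issued
```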
Client disconnection
I can't find it in the documentation, but if the client is not connected or subscribed when the message is published, it won't receive anything. So in this case, there is no guarantee.
If you need to persist messages, you should use a list instead.
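A minimal sketch of that list-based alternative with redis-py (the "jobs" key name is illustrative): the producer pushes onto a list, and a consumer blocks on BRPOP, so messages survive until something pops them.

```python
import redis

r = redis.Redis(decode_responses=True)

# Producer: unlike PUBLISH, these entries persist even with no consumer connected.
r.lpush("jobs", "task-1", "task-2")

# Consumer: BRPOP blocks until an entry is available, then removes and returns it.
key, task = r.brpop("jobs")
print(f"processing {task} from {key}")
```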

Related

View values of Redis PubSub channel

I have a few questions that I could not find the answers to in the Redis tutorial:
1) How can I view/check the values of a Redis PubSub channel? The MONITOR command is there to debug Redis, but I want to check what has previously been published to a channel.
2) What is the exact difference between a channel and a queue?
3) How can I monitor a Redis cluster with a free web-based application?
1) You cannot view/check values that were published on a channel in the past. You can think of pubsub as fire and forget: Redis publishes a message on a channel to the clients that have subscribed to it, but does not persist the message for future reference. You can only monitor the messages published in real time.
2) A channel is a reference used by Redis to know which clients have subscribed to receive messages published on that channel.
A queue is a data structure which stores values; these values can be accessed later in FIFO order. So if you are using a queue for messaging, the messages will remain in the queue until you explicitly delete them (a short sketch contrasting the two follows below).
3) IMO there isn't any great free monitoring tool for Redis out there. See some available options here
As an aside, regarding questions 1) and 2): in case you are looking for reliable messaging, check out Redis Streams.
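To make the difference in 2) concrete, here is a small redis-py sketch (the channel and key names are mine):

```python
import redis

r = redis.Redis(decode_responses=True)

# Channel: fire and forget. PUBLISH returns the number of subscribers that
# received the message; with nobody subscribed, the message is simply gone.
print(r.publish("news", "hello"))  # 0 -- no subscriber, message lost

# Queue (a Redis list): the value is stored until something removes it.
r.rpush("news-queue", "hello")
print(r.lpop("news-queue"))  # "hello" -- still there, retrieved in FIFO order
```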

Redis keyspace notifications subscriptions in distributed environment using ServiceStack

We have some Redis keys with a given TTL that we would like to subscribe to and take action upon once the TTL expires (a la job scheduler).
This works well in a single-host environment: when you subscribe in ServiceStack, using its Redis client, to '__keyspace@0__:expired', that service will pick it up and take action. That's fantastic...
... until you have a high-availability topology set up, with more than one API instance in that cluster. Then every single host appears to be picking up on that message and potentially doing things with it.
I know keyspace notifications don't work exactly the same as traditional pub/sub or messaging-layer events, but is there a way to perform some kind of acknowledgement on these kinds of events, so that, at the end of the day, only one host will carry on with the task?
Otherwise, is there a way to delay a message publishing?
Thanks!
As described in https://redis.io/topics/notifications:
Every node of a Redis cluster generates events about its own subset of the keyspace as described above. However, unlike regular Pub/Sub communication in a cluster, events' notifications are not broadcasted to all nodes. Put differently, keyspace events are node-specific. This means that to receive all keyspace events of a cluster, clients need to subscribe to each of the nodes.
So the client should create a separate connection to each node to receive Redis keyspace notifications.
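For illustration, a rough redis-py sketch of subscribing to the expired-event channel on every node (the node addresses are hypothetical, and notify-keyspace-events must be enabled on each node, e.g. CONFIG SET notify-keyspace-events Ex):

```python
import redis

# Hypothetical addresses -- replace with your cluster's nodes.
nodes = [("10.0.0.1", 6379), ("10.0.0.2", 6379), ("10.0.0.3", 6379)]

subscriptions = []
for host, port in nodes:
    r = redis.Redis(host=host, port=port, decode_responses=True)
    p = r.pubsub()
    p.psubscribe("__keyevent@0__:expired")  # each node reports only its own keys
    subscriptions.append(p)

while True:
    for p in subscriptions:
        message = p.get_message(timeout=0.1)
        if message and message["type"] == "pmessage":
            print("expired key:", message["data"])
```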
My understanding of your question: you need an event-based unicast notification whenever a key expires.
If that assumption is correct, this solution will be helpful to you. It's a somewhat crude solution, but it works!
Solution:
You need to push the expired keys into a Redis list/queue (perhaps using a service/thread). A blocking B*POP operation from the client instances on this list/queue will then give you what you want!
How does it work?
Let's assume a single background thread continuously pushes the expired keys into a Redis list/queue. The cluster of API instances will be calling a blocking pop on this list/queue.
Since each item popped from a Redis list with a blocking pop is consumed by only one client, only one API instance will get the notification for any given expired key!
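A rough Python sketch of that setup (the "expired-keys" list name is my own):

```python
import redis

r = redis.Redis(decode_responses=True)

def notifier():
    # Single background service: turns expiration events into list entries.
    p = r.pubsub()
    p.psubscribe("__keyevent@0__:expired")
    for message in p.listen():
        if message["type"] == "pmessage":
            r.lpush("expired-keys", message["data"])

def worker():
    # Run one per API instance: BLPOP hands each entry to exactly one
    # blocked client, so a single instance handles any given expired key.
    while True:
        _, expired_key = r.blpop("expired-keys")
        print("this instance alone handles:", expired_key)
```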
Ref:
List pop operation: https://redis.io/commands/lpop
Similar problem with pub/sub: Competing Consumer on Redis Pub/Sub supported?

Redis publish-subscribe: Is Redis guaranteed to deliver the message even under massive stress?

Provided that both the subscribed client and the publishing server retain their connections, is Redis guaranteed to always deliver the published message to the subscribed client eventually, even in situations where the client and/or server are massively stressed? Or should I plan for the possibility that Redis might occasionally drop messages as things get "hot"?
Redis absolutely does not provide any guaranteed delivery for publish-and-subscribe traffic. This mechanism is based solely on sockets and event loops; there is no queue involved (even in memory). If a subscriber is not listening while a publication occurs, the event is lost for that subscriber.
It is possible to implement some guaranteed delivery mechanisms on top of Redis, but not with the publish-and-subscribe API. The list data type in Redis can be used as a queue, and as the foundation of more advanced queuing systems, but it does not provide multicast capabilities (so no publish-and-subscribe).
AFAIK, there is no obvious way to easily implement publish-and-subscribe and guaranteed delivery at the same time with Redis.
Redis does not provide guaranteed delivery using its Pub/Sub mechanism. Moreover, if a subscriber is not actively listening on a channel, it will not receive messages that would have been published.
I previously wrote a detailed article that describes how one can use Redis lists in combination with BLPOP to implement reliable multicast pub/sub delivery:
http://blog.radiant3.ca/2013/01/03/reliable-delivery-message-queues-with-redis/
For the record, here's the high-level strategy (a rough Python sketch follows the list):
When each consumer starts up and gets ready to consume messages, it registers by adding itself to a Set representing all consumers registered on a queue.
When a producer publishes a message on a queue, it:
Saves the content of the message in a Redis key
Iterates over the set of consumers registered on the queue, and pushes the message ID onto a List for each of the registered consumers
Each consumer continuously looks out for a new entry in its consumer-specific list and when one comes in, removes the entry (using a BLPOP operation), handles the message and moves on to the next message.
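A minimal Python sketch of those three steps (the key layout is my own, not necessarily what redisq uses):

```python
import json
import uuid
import redis

r = redis.Redis(decode_responses=True)

def register(queue, consumer_id):
    # Step 1: each consumer adds itself to the set of registered consumers.
    r.sadd(f"{queue}:consumers", consumer_id)

def publish(queue, payload):
    # Step 2a: save the message body once, under its own key.
    message_id = str(uuid.uuid4())
    r.set(f"{queue}:message:{message_id}", json.dumps(payload))
    # Step 2b: push the message ID onto every registered consumer's list.
    for consumer_id in r.smembers(f"{queue}:consumers"):
        r.rpush(f"{queue}:{consumer_id}", message_id)

def consume(queue, consumer_id):
    # Step 3: BLPOP removes each entry atomically from this consumer's own
    # list, so every registered consumer sees every message exactly once.
    while True:
        _, message_id = r.blpop(f"{queue}:{consumer_id}")
        payload = json.loads(r.get(f"{queue}:message:{message_id}"))
        print(consumer_id, "handled", payload)
```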
I have also made a Java implementation of these principles available open-source:
https://github.com/davidmarquis/redisq
These principles have been used to process about 1,000 messages per second from a single Redis instance and two instances of the consumer application, each instance consuming messages with 5 threads.

RabbitMQ Message Sequence Guarantee

I have a project that involves RabbitMQ. The problem that I have is as follows:
Let me describe the scenario: I have n queues, each subscribed to topic1.
Now my question is: if I publish 3 messages in sequence, shown as 1, 2 and 3, to the broker's exchange, will RabbitMQ guarantee the order of those messages in all queues?
The only thing that I found was the section on message ordering guarantees in the RabbitMQ documentation, which says:
Section 4.7 of the AMQP 0-9-1 core specification explains the conditions under which ordering is guaranteed: messages published in one channel, passing through one exchange and one queue and one outgoing channel will be received in the same order that they were sent. RabbitMQ offers stronger guarantees since release 2.7.0.
So can anyone help me out and point me to the right doc or example that shows whether it is guaranteed or not?
Thanks
As the other poster mentioned, your scenario should work fine assuming a simple/basic consumer setup. But here's some additional info that might explain why.
I wasn't sure quite what nuances might have been wrapped up in that section of the documentation either, until I looked up exactly what a channel was. A connection to RabbitMQ can have multiple "mini-connections" within it, called channels. Each of these channels is independent, so you could send multiple messages to the broker via multiple channels.
So as long as the messages in your scenario are sent on a single channel (you'd have to explicitly try to use multiple channels), they'll arrive in the queue in the same order you sent them. As long as the messages are consumed via a single channel, they'd arrive on the consumer in the same order they arrived in the queue (also being the same order they were sent).
From: https://www.rabbitmq.com/tutorials/amqp-concepts.html
Some applications need multiple connections to an AMQP broker. However, it is undesirable to keep many TCP connections open at the same time because doing so consumes system resources and makes it more difficult to configure firewalls. AMQP 0-9-1 connections are multiplexed with channels that can be thought of as "lightweight connections that share a single TCP connection".
What you have quoted answers your question perfectly. The only question is what your consumer setup looks like. If you have each queue connected to its own channel, and each consumer running in its own thread, each thread will see the messages in the order they were published.
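For illustration, a minimal pika sketch of that single-channel case (the exchange and queue names are mine): three messages published on one channel, through one exchange into one queue, come back in the order they were sent.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()  # a single channel throughout

channel.exchange_declare(exchange="demo", exchange_type="fanout")
channel.queue_declare(queue="q1")
channel.queue_bind(exchange="demo", queue="q1")

# One channel, one exchange, one queue: arrival order matches publish order.
for n in (1, 2, 3):
    channel.basic_publish(exchange="demo", routing_key="", body=str(n))

# Consuming on a single channel yields 1, 2, 3 in the same order.
for method, properties, body in channel.consume(queue="q1", auto_ack=True):
    print(body)
```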

Redis Pub/Sub with Reliability

I've been looking at using Redis Pub/Sub as a replacement to RabbitMQ.
From my understanding Redis's pub/sub holds a persistent connection to each of the subscribers, and if the connection is terminated, all future messages will be lost and dropped on the floor.
One possible solution is to use a list (and blocking wait) to store all the message and pub/sub as just a notification mechanism. I think this gets me most of the way there, but I still have some concerns about the failure cases.
What happens when a subscriber dies and comes back online? How should it process all its pending messages?
When a malformed message comes through the system, how do you handle those exceptions? A dead-letter queue?
Is there a standard practice for implementing a retry policy?
When a subscriber (consumer) dies, your list will continue to grow until the client returns. Your producer could trim the list (from either side) once it reaches a specific limit, but that is something you would need to handle at the application level. If you include a timestamp within each message, your consumer can then act on the age of a message, assuming you have application logic you want to enforce on message age.
I'm not sure how a malformed message would enter the system, as the connection to Redis is usually TCP, with its integrity assurances. But if this happens, perhaps due to a bug in message encoding at the producer layer, you could provide a general mechanism for handling errors by keeping a queue per producer that receives the consumers' exception messages.
Retry policies will depend greatly on your application needs. If you need 100% assurance that a message has been received and processed, then you should consider using Redis transactions (MULTI/EXEC) to wrap the work done by a consumer, so you can ensure that a client doesn't remove a message unless it has completed its work. If you need explicit acknowledgement, then you could use an explicit ACK message on a queue dedicated to the producer process(es).
Without knowing more about your application needs, it's hard to know how to choose wisely. Generally, if your messages require full ACID protection, then you probably also need to use Redis transactions. If your messages are only meaningful when they are timely, then transactions may not be needed. It sounds as though you can't tolerate dropped messages, so your approach of using a list is good. If you need to implement a priority queue for your messages, you can use a sorted set (the Z-commands) to store your messages, using their priority as the score value, along with a polling consumer.
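For the priority-queue variant mentioned at the end, a hedged redis-py sketch (the names are mine; ZPOPMIN requires Redis 5+, and on older versions you would pair ZRANGE with ZREM inside MULTI/EXEC):

```python
import time
import redis

r = redis.Redis(decode_responses=True)

def publish(message, priority):
    # Lower score = higher priority; the sorted set keeps entries ordered.
    r.zadd("priority-queue", {message: priority})

def consume():
    # Polling consumer: atomically pop the highest-priority entry.
    while True:
        popped = r.zpopmin("priority-queue", count=1)
        if not popped:
            time.sleep(0.1)  # nothing pending; back off briefly
            continue
        message, priority = popped[0]
        print("handling", message, "at priority", priority)
```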
If you want a pub/sub system where subscribers won't lose messages when they die, consider using Redis Streams instead of Redis Pub/Sub.
Redis Streams have their own architecture and their own pros/cons compared to Redis Pub/Sub. With Redis Streams, a subscriber can issue the command:
the last message I received was X, now give me the next message;
if there is no new message, then wait for one to arrive.
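That request maps directly onto XREAD with the BLOCK option; a minimal redis-py sketch (the "mystream" name is mine):

```python
import redis

r = redis.Redis(decode_responses=True)

last_id = "0-0"  # "the last message I received was X"; 0-0 means from the start
while True:
    # Return entries newer than last_id, or block (here up to 5s) for one.
    results = r.xread({"mystream": last_id}, count=10, block=5000) or []
    for stream, entries in results:
        for entry_id, fields in entries:
            print(entry_id, fields)
            last_id = entry_id  # resume from here on the next call
```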
Antirez's article introducing Redis Streams is a good intro with more detail.
What I did was use a sorted set, with the timestamp as the score and the key to the data as the member value. I use the score of the last item to retrieve the next few, and then fetch the data at those keys. Once the work is done, I wrap both the ZREM and the DEL in a MULTI/EXEC transaction.
Essentially what Edward said, but with the twist of storing the keys in the sorted set rather than the messages themselves, as my messages can be pretty big.
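A rough sketch of that scheme (the key names and batch size are my own):

```python
import time
import redis

r = redis.Redis(decode_responses=True)

def publish(payload):
    # Store the (potentially big) body under its own key; index it by timestamp.
    key = f"msg:{time.time_ns()}"  # hypothetical key scheme
    r.set(key, payload)
    r.zadd("inbox", {key: time.time()})

def consume(last_score=0):
    # Use the score of the last processed item to fetch the next few keys.
    entries = r.zrangebyscore("inbox", f"({last_score}", "+inf",
                              start=0, num=10, withscores=True)
    for key, score in entries:
        print("processing", r.get(key))
        # Once the work is done, remove index entry and data key atomically.
        pipe = r.pipeline(transaction=True)
        pipe.zrem("inbox", key)
        pipe.delete(key)
        pipe.execute()
        last_score = score
    return last_score
```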
Hope this helps!