I am new to Redis. I have a use case where a Redis stream should be consumed by every member of a group of clients. However, the current concept of a consumer group in Redis Streams means that each message is consumed only once, by a single client in the group.
Does Redis Streams have built-in support for this particular use case?
Thank you
---Patrick
I'm working on a messaging project where I need to have a large number of Redis Pub/Sub channel subscriptions. I'm on NestJS and use the 'ioredis' library. I have a few questions regarding this design:
Assuming there is only one subscriber client, do all channel subscriptions get multiplexed through a single Redis connection?
Is there a limit to how many channel subscriptions a client can have, assuming the Redis cluster is able to scale?
Thanks.
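For what it's worth, here is a minimal sketch of the setup being described, using ioredis. A connection placed in subscriber mode cannot issue regular commands, and all of its channel subscriptions are multiplexed over that one socket; the channel names below are hypothetical.

```typescript
import Redis from "ioredis";

// Dedicated connection for subscribing: once a connection enters
// subscriber mode it cannot run regular commands, and ioredis
// multiplexes every subscription on it over the same socket.
const sub = new Redis({ host: "127.0.0.1", port: 6379 });

async function subscribeMany(channels: string[]): Promise<void> {
  // subscribe() is variadic and resolves with the total number of
  // channels this connection is now subscribed to.
  const count = await sub.subscribe(...channels);
  console.log(`subscribed to ${count} channels`);
}

// A single "message" listener receives traffic from every channel.
sub.on("message", (channel: string, message: string) => {
  console.log(`received on ${channel}: ${message}`);
});

// Hypothetical channel names, purely for illustration.
subscribeMany(["orders:1", "orders:2", "chat:lobby"]).catch(console.error);
```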
Currently, I have a Redis stream with many entries being produced. I want to process the stream in parallel, but make sure each entry is processed only once. I searched the official documentation on Redis Streams, and it seems that a consumer group is one solution.
So I want to create one consumer group and let multiple consumers in that group consume the same stream in parallel (maybe multiple consumer instances on different servers, or multiple threads on the same server).
Can a Redis consumer group guarantee that multiple consumers running in parallel consume disjoint, exclusive subsets of the same stream, so that each entry is processed only once?
If that can be guaranteed, is XREADGROUP GROUP mygroup consumer1 [COUNT 1000] STREAMS mystream > enough for each consumer when reading from the stream?
Yes. Each consumer in the group reads a mutually-exclusive subset of the stream's entries. Each message is handled by a single consumer - the one that read it - unless it is XCLAIMed.
Only once is an entirely different matter. Delivery is at least once, so it is up to the consumer to make its processing idempotent.
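To make that concrete, here is a minimal sketch of one consumer in the group, using ioredis. The stream, group, and consumer names are taken from the question, and the handler is a placeholder; since delivery is at least once, real processing should be idempotent.

```typescript
import Redis from "ioredis";

const redis = new Redis();
const STREAM = "mystream";
const GROUP = "mygroup";
const CONSUMER = "consumer1";

// Assumes the group already exists, e.g. created with:
//   XGROUP CREATE mystream mygroup $ MKSTREAM
async function consumeLoop(): Promise<void> {
  for (;;) {
    // ">" requests entries never delivered to any consumer in the
    // group, so parallel consumers receive disjoint subsets.
    const reply = (await redis.xreadgroup(
      "GROUP", GROUP, CONSUMER,
      "COUNT", 1000,
      "BLOCK", 5000,
      "STREAMS", STREAM, ">"
    )) as [string, [string, string[]][]][] | null;
    if (!reply) continue; // blocking read timed out; poll again

    for (const [, entries] of reply) {
      for (const [id, fields] of entries) {
        await handle(id, fields); // make this idempotent for "only once"
        await redis.xack(STREAM, GROUP, id); // remove from the pending list
      }
    }
  }
}

async function handle(id: string, fields: string[]): Promise<void> {
  console.log(id, fields); // placeholder for real processing
}

consumeLoop().catch(console.error);
```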
We are shifting from a monolithic to a microservice architecture for our e-commerce marketplace application. We chose Redis Pub/Sub for microservice-to-microservice communication, and also for some push notifications. The push notification strategy is as follows:
Whenever an order is created (i.e. a customer places an order), the backend publishes an event to the respective channel (queue), and the specific push-notification microservice consumes this event (a JSON message) and sends a push notification to the seller's mobile device.
For the time being we are using redis-server installed on our Ubuntu machine without any hassle. But the headache comes in the future, when millions of orders are generated at a single point in time. How can we handle that situation? We need to scale the Redis queue, right?
My exact question (regardless of the above scenario) is:
How can I horizontally scale a Redis queue, instead of increasing the RAM on the same machine?
Whenever an order is created (i.e. a customer places an order), the backend publishes an event to the respective channel (queue), and the specific push-notification microservice consumes this event (a JSON message) and sends a push notification to the seller's mobile device.
IIUC you're sending messages over Redis Pub/Sub, which is not durable: if the producer is up while some services/consumers are down, those consumers will miss the messages. Any service that is down will lose all messages sent while it was down.
Now let's assume you're using a Redis LIST, in combination with other data structures, to solve the missing-events issue.
Scaling a Redis queue is a little tricky, since the entire queue is stored in a single list that resides on a single Redis machine/host. What you can do is create your own partitioning scheme and design your Redis keys according to that scheme, much as Redis does internally when a new master is added to the cluster; building consistent hashing yourself would require some effort.
Very simply, you can distribute the load based on the userId: for example, if the userId is between 0 and 1000 use queue_0, between 1000 and 2000 use queue_1, and so on. This is a manual process that can be automated with a script. Whenever a new queue is added to the set, all consumers have to be notified, and the publisher updated as well.
Dividing based on a numeric range is a range-partitioning scheme; you could use a hash-partitioning scheme instead. Whichever you choose, whenever a new queue is added to the queue set, the consumers must be notified of the change. Consumers can spawn a new worker for the new queue; removing a queue is trickier, since all consumers must first drain their respective queues. A minimal sketch of the range scheme follows below.
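Here is a minimal sketch of that range-partitioning idea, assuming ioredis; the key names, partition size, and payloads are illustrative only.

```typescript
import Redis from "ioredis";

const redis = new Redis();
const PARTITION_SIZE = 1000; // userIds 0-999 -> queue_0, 1000-1999 -> queue_1, ...

// Route a userId to its partition's list key.
function queueFor(userId: number): string {
  return `queue_${Math.floor(userId / PARTITION_SIZE)}`;
}

// Producer side: push each event onto the list for its partition.
async function publishOrder(userId: number, event: object): Promise<void> {
  await redis.lpush(queueFor(userId), JSON.stringify(event));
}

// Consumer side: one worker per queue, using a blocking pop.
async function consume(queue: string): Promise<void> {
  const conn = new Redis(); // blocking commands need a dedicated connection
  for (;;) {
    const res = await conn.brpop(queue, 0); // 0 = block until an item arrives
    if (res) console.log(`processing from ${queue}:`, JSON.parse(res[1]));
  }
}

publishOrder(42, { orderId: "A-1" }).catch(console.error);
consume("queue_0").catch(console.error);
```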
You might consider using Rqueue.
I'm learning how to fetch data from Redis using Seneca.js, but Seneca provides multiple plugins to connect to Redis, and the available ones are those mentioned in the title. Which should I use just to fetch a couple of keys from Redis, and what is the difference between the two?
seneca-redis-pubsub-transport and seneca-redis-queue-transport are both used for transporting messages between services using Redis.
seneca-redis-pubsub-transport is a broadcast transport: all subscribed services receive all messages. seneca-redis-queue-transport, on the other hand, is a queue transport: each message is sent to only one of possibly multiple subscribed services.
If you only want to get/set some values, take a look at seneca-redis-store instead. That plugin allows you to get and set values using Redis.
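As a rough illustration, here is a sketch based on seneca-redis-store's documented entity API; the 'fruit' entity and its fields are hypothetical, and recent Seneca versions need the seneca-entity plugin to provide make$/save$.

```typescript
const Seneca = require("seneca");

const seneca = Seneca();
seneca.use("entity"); // seneca-entity: provides make$/save$/load$ on recent versions
seneca.use("redis-store", { host: "127.0.0.1", port: 6379 });

seneca.ready(() => {
  // A hypothetical entity, stored as a value in Redis.
  const apple = seneca.make$("fruit");
  apple.name = "pink lady";

  // save$ persists the entity; load$ fetches it back by id.
  apple.save$((err: Error | null, saved: any) => {
    if (err) throw err;
    seneca.make$("fruit").load$(saved.id, (err2: Error | null, loaded: any) => {
      if (err2) throw err2;
      console.log(loaded.name); // -> "pink lady"
    });
  });
});
```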
Provided that both the subscribed client and the server publishing the message retain the connection, is Redis guaranteed to always deliver the published message to the subscribed client eventually, even in situations where the client and/or server are massively stressed? Or should I plan for the possibility that Redis might occasionally drop messages as things get "hot"?
Redis absolutely does not provide guaranteed delivery for publish-and-subscribe traffic. The mechanism is based only on sockets and event loops; there is no queue involved (even in memory). If a subscriber is not listening while a publication occurs, the event is lost for that subscriber.
It is possible to implement some guaranteed-delivery mechanisms on top of Redis, but not with the publish-and-subscribe API. The list data type in Redis can be used as a queue, and as the foundation of more advanced queuing systems, but it does not provide multicast capabilities (so no publish-and-subscribe).
AFAIK, there is no obvious way to easily implement publish-and-subscribe and guaranteed delivery at the same time with Redis.
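To illustrate the list-as-queue pattern mentioned above, here is a minimal sketch using ioredis; the queue name is hypothetical. Note that this gives you a work queue, where each item reaches exactly one consumer, not publish-and-subscribe.

```typescript
import Redis from "ioredis";

const producer = new Redis();
const consumer = new Redis(); // blocking pops tie up their connection

async function push(payload: string): Promise<void> {
  // Unlike PUBLISH, items pushed while no consumer is connected
  // simply wait in the list until someone pops them.
  await producer.lpush("jobs", payload);
}

async function popLoop(): Promise<void> {
  for (;;) {
    const res = await consumer.brpop("jobs", 0); // block until an item arrives
    if (res) console.log("got:", res[1]);
  }
}

push("hello").catch(console.error);
popLoop().catch(console.error);
```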
Redis does not provide guaranteed delivery using its Pub/Sub mechanism. Moreover, if a subscriber is not actively listening on a channel, it will not receive messages that would have been published.
I previously wrote a detailed article that describes how one can use Redis lists in combination with BLPOP to implement reliable multicast pub/sub delivery:
http://blog.radiant3.ca/2013/01/03/reliable-delivery-message-queues-with-redis/
For the record, here's the high-level strategy:
When each consumer starts up and gets ready to consume messages, it registers by adding itself to a Set representing all consumers registered on a queue.
When a producer publishes a message on a queue, it:
Saves the content of the message in a Redis key
Iterates over the set of consumers registered on the queue, and pushes the message ID onto a List for each of the registered consumers
Each consumer continuously watches its consumer-specific list for a new entry; when one comes in, it removes the entry (using a BLPOP operation), handles the message, and moves on to the next message. A minimal sketch of these steps follows below.
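Here is a minimal sketch of those steps, assuming ioredis; all key names (the consumer Set, the message keys, the per-consumer lists) are hypothetical.

```typescript
import Redis from "ioredis";

const redis = new Redis();

// Step 1: a consumer registers itself in the queue's consumer Set.
async function register(queue: string, consumerId: string): Promise<void> {
  await redis.sadd(`${queue}:consumers`, consumerId);
}

// Step 2: the producer stores the payload once under its own key,
// then fans the message ID out to every registered consumer's List.
async function publish(queue: string, msgId: string, payload: string): Promise<void> {
  await redis.set(`${queue}:msg:${msgId}`, payload);
  const consumers = await redis.smembers(`${queue}:consumers`);
  for (const c of consumers) {
    await redis.rpush(`${queue}:list:${c}`, msgId);
  }
}

// Step 3: each consumer blocks on its own list with BLPOP, then
// fetches the message body by ID and handles it.
async function consume(queue: string, consumerId: string): Promise<void> {
  const conn = new Redis(); // dedicated connection for blocking pops
  for (;;) {
    const res = await conn.blpop(`${queue}:list:${consumerId}`, 0);
    if (!res) continue;
    const payload = await redis.get(`${queue}:msg:${res[1]}`);
    console.log(`${consumerId} handled:`, payload);
  }
}
```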
I have also made a Java implementation of these principles available as open source:
https://github.com/davidmarquis/redisq
These principles have been used to process about 1,000 messages per second from a single Redis instance and two instances of the consumer application, each instance consuming messages with 5 threads.