"Archiving" publish/subscribe message in Redis - redis

I am using Redis' publish/subscribe feature. So the server publishes 10 items, and the client gets those 10 items.
Now however, a new client subscribes to the feed. I would like them to get the previous 10 items as well as any new items.
Does Redis have a way of doing this using the publish and subscribe functionality? Is a feed history stored anywhere in the database? Is there an easy way of doing this? Is the best way also to store the messages in a list and have the client do an LRANGE my_list 0 9 on the list?

I'd keep a separate archive of the data and add events to both. New clients can subscribe and queue the real-time events, read the archive until it has caught up with the first published event, then process the queued events. That way you shouldn't miss any events while switching between the archive and the real-time feed.
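A sketch of that pattern with the redis-py client (key and channel names are illustrative): the publisher writes every event to a list and publishes it; a new client subscribes first, replays the archive, then processes live events, skipping the overlap.

    import redis

    r = redis.Redis()

    def publish_event(event: bytes) -> None:
        r.rpush("feed:archive", event)  # durable history
        r.publish("feed", event)        # real-time fan-out

    def follow_feed(handle) -> None:
        p = r.pubsub()
        p.subscribe("feed")             # subscribe BEFORE reading the archive

        archive = r.lrange("feed:archive", 0, -1)
        for event in archive:
            handle(event)
        replayed = set(archive)         # real code would use sequence numbers

        # Live events; skip the ones already seen during the replay.
        for msg in p.listen():
            if msg["type"] == "message" and msg["data"] not in replayed:
                handle(msg["data"])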

Stumbled on this during some research. I know it is old, but I wanted to add that with the Redis Streams data structure it is not overly complex to implement persistent messaging.
The publisher would publish messages to a stream, and a subscriber would just get the latest message if that is all it cared about. You can also create consumer groups, in which each message is delivered to only one consumer in the group and is then marked as acknowledged to avoid duplicate processing. This is good when you want a message to be handled only once and need a way to confirm that.
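A rough sketch with redis-py (stream, group, and consumer names are illustrative):

    import redis

    r = redis.Redis()

    # Publisher: append to a stream; entries are persisted in Redis.
    r.xadd("quotes", {"symbol": "AAPL", "price": "187.20"})

    # A consumer group delivers each entry to one consumer, which must ack it.
    try:
        r.xgroup_create("quotes", "workers", id="0")
    except redis.ResponseError:
        pass  # group already exists

    for stream, messages in r.xreadgroup("workers", "consumer-1",
                                         {"quotes": ">"}, count=10):
        for msg_id, fields in messages:
            print(msg_id, fields)
            r.xack("quotes", "workers", msg_id)  # confirm processing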

I ended up creating a Node.js app for this sort of purpose. In my case, user data was published to the Redis server and I wanted to store it, so I subscribed to the Redis channel with a Node.js app and then saved the details to a database. I've played around with MySQL and Mongo so far. Let me know if this is of any interest and I'll paste some code; there are some similarities in trying to store a publish history.
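A rough sketch of the idea with redis-py and PyMongo (channel, database, and collection names are illustrative):

    import redis
    from pymongo import MongoClient

    r = redis.Redis()
    db = MongoClient()["archive_db"]

    p = r.pubsub()
    p.subscribe("user_data")

    # Persist every published message so there is a queryable history.
    for msg in p.listen():
        if msg["type"] == "message":
            db.messages.insert_one({"channel": "user_data",
                                    "payload": msg["data"]})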
Cheers

Related

Can you update a message on google nearby message API?

I have 2 simple questions that I did not find by reading the official documentation about google nearby message API
https://developers.google.com/nearby/messages/android/pub-sub
If you publish multiple messages with the publish method (on the same instance of an app), are the messages saved as several different messages, or are they updated and overwritten (in the Cloud Console)?
Is it possible with the publish method to update a message?
I'm building an application where each user sees what others are posting, but I just need to know the most up-to-date data for each user; I don't need all the messages.
Thank you.
With Pub/Sub, you publish messages to a queue. Once they are published, you can't update or delete them.
On the consumer side, the messages are usually delivered in order, but without any guarantee. Each message carries a publish timestamp.
In your use case, it could be interesting to keep the userID and the latest processed timestamp in memory. If your application is distributed, the best option is to store these data in Memorystore.
That way, when a message comes in:
Either it is newer than the value in Memorystore, and you process it;
Or it's older, and you discard it.
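A minimal sketch of that check in Python, using a plain dict in place of Memorystore (all names here are illustrative):

    # Keep only the newest message per user; discard out-of-order older ones.
    latest_ts: dict[str, float] = {}

    def on_message(user_id: str, published_ts: float, payload: str) -> None:
        if published_ts <= latest_ts.get(user_id, 0.0):
            return  # older than what we already processed: discard
        latest_ts[user_id] = published_ts
        print(f"processing {payload!r} for {user_id}")  # business logic here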

What is the diff between data-sync and pub-sub in Deepstream

All:
I am pretty new to deepstream. On its website, the core concepts section describes:
data-sync Interactive JSON documents that can be edited and observed. Changes are persisted and synced across clients.
and
publish-subscribe Many clients can subscribe to topics and receive data whenever other clients publish it to the same topic
I wonder what the difference is between its data-sync and pub-sub in terms of purpose; in other words, what can one do that the other cannot?
Thanks
PubSub is a way for clients and servers to send messages to each other. These messages can contain all sorts of data, but once a message is delivered it's gone: there's no storage or statefulness. If you're familiar with EventEmitters in e.g. JavaScript, you're already familiar with the pattern.
Data-Sync, on the other hand, is stateful, persistent data. Clients can request JSON documents called records, update them, and subscribe to changes made by other clients. Records can be arranged in lists, and lists can be referenced by records, allowing data-sync to become the realtime backbone for all the data that drives your app.

Implementing a "Snapshot and Subscribe" in Redis

I wish to use Redis to create a system which publishes stock quote data to subscribers in an internal network. The problem is that publishing is not enough, as I need to find a way to implement an atomic "get snapshot and then subscribe" mechanism. I'm pretty new to Redis so I'm not sure my solution is the "proper way".
In a given moment each stock has a book of orders which contains at most 10 bids and 10 asks. The publisher receives data from the exchange and should publish it to subscribers.
While the publishing of changes in the order book can be easily done using publish and subscribe, each subscriber that connects also needs to get the snapshot of the current order book of the stock and only then subscribe to changes in the order book.
As I understand, a Redis channel never stores messages, so the publisher also needs to maintain the complete order book in a hash key (or a sorted set; I'm not sure which is more appropriate) in addition to publishing changes.
I also understand that a Redis client cannot issue any commands except subscribing and unsubscribing once it subscribes to the first channel.
So, once the subscriber application is up, it needs first to get the key which contains the complete order book and then subscribe to changes in that book. However, this may result in a race condition: a change in the order book can be made after the client got the key containing the current snapshot but before it actually subscribed to changes, resulting in a change it will never see.
As it is not possible to subscribe and then use GET on a single connection, the client application needs two connections to the Redis server. At this point I started thinking that I'm probably not doing things the proper way if I need more than one connection in the same application. Anyway, my idea is that the client will have a subscribing connection and a query connection. First, it will use the subscribing connection to subscribe to changes in the order book, but won't yet enter the loop that processes events. Then it will use the query connection to get the complete snapshot of the book. Finally, it will enter the loop that processes events; since it subscribed before taking the snapshot, it is guaranteed not to miss any change that occurred after the snapshot was taken.
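In code, the plan would look something like this (a sketch using the redis-py client; the key and channel names are illustrative):

    import json
    import redis

    def apply_snapshot(book: dict) -> None:
        print("snapshot:", book)     # placeholder for real book handling

    def apply_update(update: dict) -> None:
        print("update:", update)     # placeholder for real book handling

    r_query = redis.Redis()          # connection for ordinary commands
    r_sub = redis.Redis()            # dedicated connection for pub/sub

    p = r_sub.pubsub()
    p.subscribe("book:AAPL")         # 1. subscribe first; events queue up

    apply_snapshot(r_query.hgetall("book:AAPL:snapshot"))  # 2. snapshot

    for msg in p.listen():           # 3. drain events; nothing is missed
        if msg["type"] != "message":
            continue
        # Updates already reflected in the snapshot can be skipped, e.g.
        # by comparing sequence numbers carried in each message.
        apply_update(json.loads(msg["data"]))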
Is there any better way to accomplish my goal?
I hope you have found your way already; if not, here goes a personal suggestion:
If you are in JavaScript land, I would recommend having a look at Meteor.js. It achieves something like what you want: with the default setup you end up writing to MongoDB in order to "update" the GUI for the end user.
In any case, you might be interested in reading about how meteor's ddp protocol works: https://meteorhacks.com/introduction-to-ddp/ and https://www.meteor.com/ddp

RabbitMQ Message Lifetime Replay Message

We are currently evaluating RabbitMQ. Trying to determine how best to implement some of our processes as Messaging apps instead of traditional DB store and grab. Here is the scenario. We have a department of users who perform similar tasks. As they submit work to the server applications we would like the server app to send messages back into a notification window saying what was done - to all the users, not just the one submitting the work. This is all easy to do.
The question is, we would like these messages to live for, say, 4 hours in the queue. If a new user logs in, or say a supervisor, they would get all the messages from the last 4 hours delivered to their notification window. This gives them a quick way to review what has recently happened and what is going on without having to ask others, "Have you talked to John?", "Did you email him his itinerary?", etc.
So, how do we publish messages that have a lifetime of x hours from the time they were published, such that any new consumers that connect will get all of these messages delivered in chronological order? And preferably the messages just disappear after they have expired from the queue.
Thanks
There is Per-Queue Message TTL and Per-Message TTL in RabbitMQ. If I am right, you can utilize them for your task.
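For example, a minimal sketch with the pika client, assuming each user gets their own TTL'd queue bound to a fanout exchange so that one consumer's reads don't remove messages for anyone else (all names are illustrative):

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()

    # Fan notifications out to one queue per user.
    ch.exchange_declare(exchange="notifications", exchange_type="fanout")

    # Per-queue TTL: messages expire 4 hours after being published.
    ch.queue_declare(
        queue="notifications.alice",
        arguments={"x-message-ttl": 4 * 60 * 60 * 1000},  # milliseconds
    )
    ch.queue_bind(queue="notifications.alice", exchange="notifications")

    ch.basic_publish(
        exchange="notifications",
        routing_key="",
        body=b"John's itinerary was emailed",
    )
    conn.close()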
In addition to the above answer, it would be better to have the application/client publish messages to two queues. The consumer would consume from one of the queues, while the other queue can be configured with a per-queue message TTL or per-message TTL to retain the messages.
Queuing is about getting a message from one point to another reliably, so the sender can work independently of the receiver. What you propose is really a temporary persistent store.
A SQL database would fit perfectly, but MongoDB would also work nicely: you drop a document into Mongo, give it a TTL, and let the database handle the expiration.
http://docs.mongodb.org/master/tutorial/expire-data/
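A minimal sketch of that approach with PyMongo (database, collection, and field names are illustrative):

    from datetime import datetime, timezone
    from pymongo import MongoClient

    db = MongoClient()["notifications_db"]

    # TTL index: documents are removed about 4 hours after their createdAt.
    db.events.create_index("createdAt", expireAfterSeconds=4 * 60 * 60)

    db.events.insert_one({
        "createdAt": datetime.now(timezone.utc),
        "text": "John's itinerary was emailed",
    })

    # A new consumer reads the last 4 hours in chronological order.
    for doc in db.events.find().sort("createdAt", 1):
        print(doc["text"])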

Redis publish/subscribe: see what channels are currently subscribed to

I am currently interested in seeing what channels are subscribed to in a Redis pub/sub application I have. When a client connects to our server, we register them to a channel that looks like:
user:user_id
The reason for this is I want to be able to see who's "online". I currently blindly fire off messages to a channel without knowing if a client is online since it's not critical that they receive these types of messages.
In an effort to make my application smarter, I'd like to be able to discover if a client is online or not using the pub/sub API, and if they are offline, cache their messages to a separate redis queue which I can push to them when they get back online.
This does not have to be 100% accurate, but the more accurate it is, the better. I'm assuming a generic key does not get created when a channel gets subscribed to, so I cannot do something as trivial as:
redis-cli keys user* to find all online users.
The other strategy I've thought of is just maintaining my own Redis set, adding or removing users whenever they subscribe to or unsubscribe from a channel (which the client handles automatically when they hop online and close the app). That would be an additional layer of complexity that I need to manage, and I'm hoping there is a more trivial approach with the data that's already available.
As of Redis 2.8 you can do:
PUBSUB CHANNELS [pattern]
The PUBSUB CHANNELS command has O(N) complexity, where N is the number of active channels.
So in your case:
redis-cli PUBSUB CHANNELS user*
would give you what you want.
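The equivalent call from redis-py, for reference (a sketch):

    import redis
    r = redis.Redis()
    online = [ch.decode() for ch in r.pubsub_channels("user*")]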
There is currently no command for showing what channels "exist" by way of being subscribed to, but there is an "approved" issue and a pull request that implements this.
https://github.com/antirez/redis/issues/221
https://github.com/antirez/redis/pull/412
Due to the nature of this call, it is not something that can scale, and is thus a "DEBUG" command.
There are a few other ways to solve your problem, however.
If you have reason to believe that a channel may be subscribed to, you can send it a message and look at the result. The result is the number of subscribers that got the message. If you got 0, you know that they're not there.
Assuming that your user_ids are incremental, you might be interested in using SETBIT to set a 1 or 0 to a user's offset bit to track presence. You can then do cool things like the new BITCOUNT to see how many users are online, and GETBIT to determine if a specific user is online.
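A sketch of the bitmap idea with redis-py, assuming numeric user IDs (the key names are illustrative); it also shows the publish-and-check-receivers trick from above:

    import redis

    r = redis.Redis()

    def set_online(user_id: int, online: bool) -> None:
        r.setbit("online_users", user_id, 1 if online else 0)

    def is_online(user_id: int) -> bool:
        return r.getbit("online_users", user_id) == 1

    def online_count() -> int:
        return r.bitcount("online_users")

    def send_or_queue(user_id: int, message: str) -> None:
        # PUBLISH returns the number of receivers; 0 means nobody listened.
        if r.publish(f"user:{user_id}", message) == 0:
            r.rpush(f"offline:{user_id}", message)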
The way I have solved your problem more specifically in the past is by signaling a subscription manager that I have subscribed to a channel. The manager then "pings" the channel by sending a blank message to confirm that there is a subscriber, and occasionally pings the channel thereafter to determine if the user is still online. Not ideal, but better than using DEBUG CHANNELS in production.
As of version 2.8.0, Redis has a PUBSUB command that helps in this case:
http://redis.io/commands/pubsub
Remark: 2.8.0 is currently not yet stable (RC2).
I am unaware of any specific way to query what channels are being subscribed to, and you are correct that there isn't any key created when this happens. Also, I wouldn't use the KEYS command in production anyway, as it's really a debugging command.
You have the right idea about using a set to add the user when they're online, and then query this with SISMEMBER <set> <user_id> to determine if the messages should be sent to them or added to a Redis list for processing once they do come online.
You will need to figure out when a user logs off so you can remove them from the list of online users, but I don't know enough about your system to know exactly how you would go about that.
If the connected clients have the ability to send a message back to inform the server that the message(s) were consumed, you could use this to keep track of which messages should be stored for later retrieval.
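A sketch of that set-plus-list approach with redis-py (key names are illustrative):

    import redis

    r = redis.Redis()

    def on_connect(user_id: str) -> None:
        r.sadd("online", user_id)
        # Deliver anything queued while the user was offline.
        while (msg := r.lpop(f"pending:{user_id}")) is not None:
            r.publish(f"user:{user_id}", msg)

    def on_disconnect(user_id: str) -> None:
        r.srem("online", user_id)

    def notify(user_id: str, message: str) -> None:
        if r.sismember("online", user_id):
            r.publish(f"user:{user_id}", message)
        else:
            r.rpush(f"pending:{user_id}", message)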
Cheers,
Mike
PUBSUB NUMSUB [channel-1 ... channel-N]
Returns the number of subscribers (not counting clients subscribed to patterns) for the specified channels.
https://redis.io/commands/pubsub
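From redis-py, for example (a sketch):

    import redis
    r = redis.Redis()
    print(r.pubsub_numsub("user:42"))  # e.g. [(b'user:42', 1)]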