I'm trying to solve the following problem in Redis.
I have a list that contains various available keys:
List MASTER:
111A
222B
333C
444D
555E
I'd like to be able to pop an element off the list and use it as a key with an expiration set.
Once the key expires, I'd like to push that element back onto MASTER for future use. I don't see any obvious way to do this, so I'm soliciting a creative one.
The best method would be to get called back by Redis when the key expires and then take action.
However, callback support has yet to be added (http://code.google.com/p/redis/issues/detail?id=360).
You can either use a Redis version that contains a custom/community modification supporting this feature (like the last one in the link I've posted), or, worse :), start tracking keys and timeouts in your client app.
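For illustration, here is a rough sketch of that last suggestion (tracking the timeout in the client app), written in Go with the go-redis client; the "in-use" placeholder value and the timer-based push-back are assumptions about how you might structure it, not anything Redis provides:

    package main

    import (
        "context"
        "time"

        "github.com/redis/go-redis/v9"
    )

    // checkOutKey pops an ID off MASTER, creates a key with a TTL, and
    // schedules pushing the ID back onto MASTER once the TTL has elapsed.
    // The push-back is tracked by the client, not by Redis itself.
    func checkOutKey(ctx context.Context, rdb *redis.Client, ttl time.Duration) (string, error) {
        id, err := rdb.LPop(ctx, "MASTER").Result()
        if err != nil {
            return "", err
        }
        // Use the popped ID as a key with an expiry.
        if err := rdb.Set(ctx, id, "in-use", ttl).Err(); err != nil {
            return "", err
        }
        // Client-side "callback": after the TTL, return the ID to MASTER.
        time.AfterFunc(ttl, func() {
            rdb.RPush(context.Background(), "MASTER", id)
        })
        return id, nil
    }

    func main() {
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
        id, err := checkOutKey(context.Background(), rdb, 30*time.Second)
        if err == nil {
            _ = id // use the checked-out key here
        }
        time.Sleep(35 * time.Second) // keep the process alive so the timer can fire
    }

Note that if the client crashes before the timer fires, the ID never returns to MASTER, which is exactly the fragility of tracking this in the client.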
I'm working on building a DB with Redis.
One of my requirements is that all the clients in the system will be able to listen to SET events and get information about both the key and the value that changed.
I know a published value may be big (up to 512 MB), but in my system the values will never exceed 100 characters.
I have 3 possible solutions and wonder which one would be better, or whether I should consider other solutions:
1) After each SET operation, the client also publishes the change (Pub/Sub).
2) Edit the setGenericCommand function to publish the value as well, and use keyspace notifications.
3) After a client receives a keyspace notification, it fetches the value with a GET operation.
Which approach would be better?
Thank you!
So, first and foremost, remember that Pub/Sub is at-most-once delivery. If you really need to process every change in the client, you should consider a more resilient way to do so.
That said, assuming you're OK with Pub/Sub's promises, 1 is the simplest and I'd go with that. At most, I'd provide the clients with a Lua wrapper that combines the SET and PUBLISH commands. This, of course, removes the need to actually listen to keyspace notifications, since you're basically implementing them yourself.
2 means hacking Redis, which is great, but it also means you'll have to maintain your own fork, which is meh.
3 is also simple enough, but with 1 you get away with a single round trip instead of 2.
Another approach (4) is to write a custom module, but IMO that's too complex for this need. Go with 1 and Lua, and may the force be with you.
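As an illustration of option 1 with the Lua wrapper, here is a sketch in Go with the go-redis client; the "changes" channel name and the key=value payload format are assumptions made up for the example:

    package main

    import (
        "context"

        "github.com/redis/go-redis/v9"
    )

    // A small Lua script that sets the key and publishes the key/value pair
    // in a single round trip, so subscribers get the value without a GET.
    var setAndPublish = redis.NewScript(`
    redis.call('SET', KEYS[1], ARGV[1])
    redis.call('PUBLISH', ARGV[2], KEYS[1] .. '=' .. ARGV[1])
    return 1
    `)

    func main() {
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
        ctx := context.Background()

        // One atomic SET + PUBLISH; keyspace notifications are not needed.
        err := setAndPublish.Run(ctx, rdb, []string{"user:42"}, "some value", "changes").Err()
        if err != nil {
            panic(err)
        }
    }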
I'm curious about best practices as far as expiring secondary indexes in Redis.
For instance, say I have object IDs with positions that are only considered valid for 5 seconds before they expire. I have:
objectID -> hash of object info
I set these entries to automatically expire after 5 seconds.
I also want to look up objects by their zip code, so I have another mapping:
zipCode -> set or list of objectID
I know there's no way to automatically expire elements of a set, but I'd like to automatically remove objectIDs from that zip code mapping when they expire.
What is the standard practice for how to do this? If there were an event fired upon expiration, I could find the associated zip code mapping and remove the objectID, but I'm not sure there is one (I will be using Go).
If there were an event fired upon expiration, I could find the associated zip code mapping and remove the objectID, but I'm not sure there is one
YES, there is an expiration notification, check this for an example.
You can have a client subscribing to the expiration event, and delete items from the zipCode set or list.
However, this solution has two problems:
The notification is NOT reliable. If the client disconnects from Redis, or just crashes, it will miss some notifications. To make it more reliable, you can have multiple clients listening for the notification; if one client crashes, the others can still remove items from zipCode.
It's NOT atomic. There's a time window between the key expiration and removing items from zipCode.
If you're fine with these 2 problems, you can try this solution.
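Since the question mentions Go, here is a minimal sketch of that subscriber with the go-redis client. The zip:<code> set naming and the auxiliary objectZip hash (needed to recover the expired object's zip code, because the object's own hash is already gone when the event fires) are assumptions for illustration:

    package main

    import (
        "context"
        "fmt"

        "github.com/redis/go-redis/v9"
    )

    func main() {
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
        ctx := context.Background()

        // Requires notify-keyspace-events to include expired events,
        // e.g. CONFIG SET notify-keyspace-events Ex
        sub := rdb.PSubscribe(ctx, "__keyevent@0__:expired")
        defer sub.Close()

        for msg := range sub.Channel() {
            objectID := msg.Payload // the name of the key that expired

            // Assumption: a non-expiring hash "objectZip" remembers which
            // zip code each objectID was indexed under.
            zip, err := rdb.HGet(ctx, "objectZip", objectID).Result()
            if err != nil {
                continue
            }
            rdb.SRem(ctx, "zip:"+zip, objectID)
            rdb.HDel(ctx, "objectZip", objectID)
            fmt.Println("removed", objectID, "from zip:"+zip)
        }
    }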
I'm trying to implement tagging using Redis. This is what it looks like:
mykey (my item)
mykey:tags (a set with the tags associated to that item)
tags:tag1 (a set with references to all items tagged with "tag1")
...
I'm planning on using Redis Keyspace Notifications to prevent expired keys from staying in my tag sets forever (even though every item in the cache has a default TTL set, I don't like keeping stale data around).
These are the options I'm considering:
1) Subscribe to all "expired" events.
psubscribe '__keyevent@*__:expired'
Pros:
Only 1 subscriber.
Cons:
Since not all items have tags, I will have to check for mykey:tags and, if it exists, get the tags and remove the item from each tag set. The contention of this method will increase with the number of keys in the store.
2) Subscribe to all events for those keys containing tags only.
psubscribe '__keyspace@*__:mykey'
Pros:
Subscriptions will be created for those items with tags only.
Cons:
There is overhead associated with each subscriber. The number of subscribers can grow pretty fast depending on the number of tagged items in the store.
Questions:
Which option should I implement? Should I be concerned about the number of subscribers in 2), or is the contention in 1) a bigger deal? I couldn't find any recommendations on this subject.
The end game is to implement this on Redis Cluster. Does that add any extra concerns to the implementation?
Update 1:
This is a generic implementation for tagging on top of our cache. I'm not sure at this point how we'll end up using it; this is more of a PoC I'm working on. Some numbers to answer questions from the comments:
Volume: We have tens of millions of unique visitors per day, though not every item stored in the cache for each visitor has tags. This changes constantly.
Tags: Tags are managed. There are currently a couple dozen tags. We are considering supporting free-text tags in the future.
I haven't tested either of the two approaches I'm suggesting here. I was hoping one of the options would turn out to be so bad that it wasn't even an option :)
Update 2:
After some trial and error and some more research, I discarded 2). There is a limit on the number of Redis clients as well as on the output buffers, which makes this option a no-go. You can find more information here and here.
I tried 1) and it works just fine. I even set the expirations of the keys 5 ms apart from each other and the code handled it properly, so this is a viable alternative.
Another option is the one suggested by @thepirat000. I'm marking that answer as the accepted one, but I'm also adding a little tweak to his suggestion: I don't want to do maintenance on the tags on every tag operation; instead, I can randomly determine when to do it. This is a good-enough approach that uses neither pub/sub nor keyspace notifications.
There will probably be too much overhead in using Keyspace Notifications for this.
Why don't you do the clean-up as a scheduled or recurring task, or even when the keys are retrieved by tag?
I've worked on something similar in CachingFramework.Redis, where the cleanup is optionally run when retrieving the keys related to a tag. Also, the tag set's TTL is the MAX(TTL) of the keys it contains.
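Here is a minimal sketch of that "clean up when retrieving keys by tag" idea in Go with the go-redis client; the function name is made up, and the tags:<tag> layout is the one from the question:

    package main

    import (
        "context"

        "github.com/redis/go-redis/v9"
    )

    // getKeysByTag returns the live members of a tag set and lazily removes
    // members whose underlying keys have already expired.
    func getKeysByTag(ctx context.Context, rdb *redis.Client, tag string) ([]string, error) {
        setKey := "tags:" + tag
        members, err := rdb.SMembers(ctx, setKey).Result()
        if err != nil {
            return nil, err
        }
        live := make([]string, 0, len(members))
        for _, m := range members {
            n, err := rdb.Exists(ctx, m).Result()
            if err != nil {
                return nil, err
            }
            if n == 0 {
                // The item has expired: drop the stale reference.
                rdb.SRem(ctx, setKey, m)
                continue
            }
            live = append(live, m)
        }
        return live, nil
    }

The same function could also run as a scheduled job, or be triggered probabilistically on tag operations as described in Update 2.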
I have set an expiration value on a key in Redis, and I want the opportunity to run a piece of code before the key is deleted by Redis. Is this possible, and if so, how?
Thanks
My solution was to create a new key with the same name as the one I wanted to hook, only with a prefix ("TO_") indicating it's a key used for timeouts, something like:
set key1 data1
set TO_key1 ""
expire TO_key1 20
In the example above, as soon as "TO_key1" expires, my program will be notified and I'll get the opportunity to run my code before manually deleting "key1".
I found this link very useful for creating the listener for redis: Redis Key expire notification with Jedis
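A rough equivalent of this shadow-key listener in Go with the go-redis client (the Jedis link above shows the same idea in Java); the database number and the work done before the manual delete are placeholders:

    package main

    import (
        "context"
        "strings"
        "time"

        "github.com/redis/go-redis/v9"
    )

    func main() {
        rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})
        ctx := context.Background()

        // Create the real key plus a "TO_"-prefixed shadow key that carries
        // the timeout, as described above.
        rdb.Set(ctx, "key1", "data1", 0)
        rdb.Set(ctx, "TO_key1", "", 20*time.Second)

        // Requires: CONFIG SET notify-keyspace-events Ex
        sub := rdb.PSubscribe(ctx, "__keyevent@0__:expired")
        defer sub.Close()

        for msg := range sub.Channel() {
            if !strings.HasPrefix(msg.Payload, "TO_") {
                continue
            }
            realKey := strings.TrimPrefix(msg.Payload, "TO_")
            // Run whatever code needs to happen before the key disappears...
            // ...then delete the real key manually.
            rdb.Del(ctx, realKey)
        }
    }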
This isn't possible with standard open-source Redis... yet. There is, however, a way to do something similar without too much hassle. If you stop using Redis' expiry (at least for the keys that you're interested in "hooking") and manage expiry "manually" in your code, you can do anything you want before/during/after the expiry event.
Since Redis offers key-level expiry out of the box, people are usually content with it. In some cases, e.g. expiring members in a Set, the manual approach is the only way to go, but it is also valid for regular keys when you need finer control.
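One common way to manage expiry "manually" is to record deadlines in a sorted set and sweep it periodically. A hedged sketch in Go with the go-redis client follows; the expiry:index key name and the hook callback are made up for illustration:

    package main

    import (
        "context"
        "strconv"
        "time"

        "github.com/redis/go-redis/v9"
    )

    // expireAt records a key's deadline in a sorted set instead of using
    // Redis' own TTL, so the application decides what happens at expiry.
    func expireAt(ctx context.Context, rdb *redis.Client, key string, deadline time.Time) error {
        return rdb.ZAdd(ctx, "expiry:index", redis.Z{
            Score:  float64(deadline.Unix()),
            Member: key,
        }).Err()
    }

    // sweep finds keys whose deadline has passed, runs the hook, and only
    // then deletes them. Call this periodically, e.g. once per second.
    func sweep(ctx context.Context, rdb *redis.Client, hook func(key string)) error {
        now := strconv.FormatInt(time.Now().Unix(), 10)
        due, err := rdb.ZRangeByScore(ctx, "expiry:index", &redis.ZRangeBy{
            Min: "-inf",
            Max: now,
        }).Result()
        if err != nil {
            return err
        }
        for _, key := range due {
            hook(key)                          // the pre-expiry code
            rdb.Del(ctx, key)                  // expire "manually"
            rdb.ZRem(ctx, "expiry:index", key) // forget the deadline
        }
        return nil
    }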
I understand that, roughly speaking, Trello uses Redis for a transient data store.
Is anyone able to elaborate further on the part it plays in the application?
We use Redis at Trello for ephemeral data that we would be OK with losing. We do not persist the data in Redis to disk, and we run it with allkeys-lru, so we only store things there that can be kicked out at any time with only very minor inconvenience to users (e.g. momentarily seeing an incorrect user status). That being said, we give it more than 5x the space it needs to store its actual working set and have it choose from 10 keys for eviction, so we really never see anything we're using get kicked out.
It's our pubsub server. When a user does something to a board or a card, we want to send a message with that delta to all websocket-connected clients that are subscribed to the object that changed, so all of our Node processes are subscribed to a pubsub channel that propagates those messages, and they propagate that out to the appropriately permissioned and subscribed websockets.
We SORT OF use it to back socket.io, but since we only use the websockets, and since socket.io is too chatty to scale like we need it to at the moment, we have a patch that disables all but the one channel that is necessary to us.
For our users who don't have websockets, we have to keep a list of the actions that have happened on each object channel since the user's last poll request. For that we use a list, which we cap at the most recent 100 elements, plus an auxiliary counter of how many elements have been added to the list since it was created. When we're answering a poll request from such a browser, we can check the last element it reports having seen and send down only the messages added to the queue since then. That gets a poll request down to just a permissions check and a single Redis key check in most cases, which is very fast.
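A rough sketch of that capped-list-plus-counter pattern in Go with the go-redis client; the actions:<channel> key names and the use of a transactional pipeline are assumptions, while the cap of 100 comes from the description above:

    package main

    import (
        "context"

        "github.com/redis/go-redis/v9"
    )

    // recordAction appends a delta to an object channel's list, keeps only
    // the most recent 100 entries, and bumps the running count of messages
    // ever added (the auxiliary counter).
    func recordAction(ctx context.Context, rdb *redis.Client, channel, delta string) error {
        pipe := rdb.TxPipeline()
        pipe.RPush(ctx, "actions:"+channel, delta)
        pipe.LTrim(ctx, "actions:"+channel, -100, -1) // cap at the newest 100
        pipe.Incr(ctx, "actions:"+channel+":count")
        _, err := pipe.Exec(ctx)
        return err
    }

    // actionsSince answers a poll request: given the last index the browser
    // reports having seen, return only the messages added after it.
    func actionsSince(ctx context.Context, rdb *redis.Client, channel string, lastSeen int64) ([]string, error) {
        total, err := rdb.Get(ctx, "actions:"+channel+":count").Int64()
        if err != nil {
            return nil, err
        }
        missed := total - lastSeen
        if missed <= 0 {
            return nil, nil
        }
        if missed > 100 {
            missed = 100 // older messages have fallen off the capped list
        }
        return rdb.LRange(ctx, "actions:"+channel, -missed, -1).Result()
    }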
We store some ephemeral data about the active status of connected users in Redis, because that data changes frequently and it is not necessary to persist it to disk.
We store short-lived keys to support OAuth logins in Redis.
We love Redis; once you have an instance of it up and running, you want to use it for all kinds of things. The only real trouble we have had with it is with slow-consuming clients eating up the available space.
We use MongoDB for our more traditional database needs.
Trello uses Redis with Socket.IO (RedisStore) for scaling, with the following two features:
key-value store, to set and get values for a connected client
as a pub-sub service
Resources:
Look at the code for RedisStore in Socket.IO here: https://github.com/LearnBoost/socket.io/blob/master/lib/stores/redis.js
Example of Socket.IO with RedisStore: http://www.ranu.com.ar/2011/11/redisstore-and-rooms-with-socketio.html