Redis: Expire Values in a List or Set

Allow me to preface this by saying I'm fairly new to Redis. I have used Redis in the context of Resque.
I have a service that dispatches jobs to multiple other services. Those jobs either succeed or fail. Either way, I'd like to send the result of each job to a client that can then store the jobs in some logical way, for example: JobType1Success, JobType1Failure, etc.
I know I can create lists or sets with Redis and easily add string representations of the data as values. I also know that with traditional key/string values you can set an expiration in seconds. In my ideal world I would create several lists like the ones mentioned above; each job would be prepended to its appropriate list, and after 7 days in the list a value would expire. Adding string values to a given list seems fairly trivial, but I can't find anything on whether it's possible to expire values of a certain age from a list, and if so, how to do that. I am working with a Node stack and using the Node Redis library. Any help here would be enormously appreciated.
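For context, Redis can only expire whole keys, not individual list elements, so the workaround people usually reach for is a sorted set scored by timestamp that gets pruned on read. The sketch below assumes node-redis v4 (the `redis` npm package) and simply reuses the example key names from the question; treat it as an illustration, not a definitive answer.

```js
import { createClient } from 'redis';

const client = createClient();
await client.connect();

const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

// A sorted set scored by timestamp stands in for the per-job-type list,
// because individual list elements can't be given their own TTL.
async function recordResult(jobType, payload) {
  await client.zAdd(jobType, { score: Date.now(), value: JSON.stringify(payload) });
}

// Drop anything older than 7 days, then return what's left (oldest first).
async function recentResults(jobType) {
  await client.zRemRangeByScore(jobType, 0, Date.now() - SEVEN_DAYS_MS);
  return client.zRange(jobType, 0, -1);
}

await recordResult('JobType1Success', { jobId: 42 });
console.log(await recentResults('JobType1Success'));
```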

Related

Max value size for Redis

I've been trying to build a replay system. Basically, when a player moves, the system saves their data (movements, location, animation, etc.) into a JSON file. By the end of the recording, the JSON file may be over 50 MB. I'd like to save this data into Redis with an expiration (24-48 hours).
My questions are:
Is it bad to save over 50 MB into Redis with an expiration?
How many 50+ MB entries can Redis handle without performance loss?
If players make 500 recordings in 48 hours, could that be bad for Redis?
How many milliseconds does it take to fetch 50 MB of data from Redis on an average VDS/VPS?
Storing a large object (in terms of size) is not good practice. You may read more about it here. One of the problems is the network: you need to send a 50 MB payload to the Redis server in a single call. Also, if you save it as one big object, then to retrieve or update it (even a single field or element) you need to pull the whole 50 MB back from the server, parse it to get the single field, update it, and send it all back. That's a serious problem in terms of network.
Instead of Redis strings, you may prefer sorted sets or lists depending on your use case. If you are going to store events with timestamps and fetch the range of events between two timestamps, sorted sets may be an ideal solution for you. They are good for pagination, etc. One crucial drawback is that adding a new element is O(log(N)).
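For the timestamped case, a minimal sketch might look like the following. It assumes node-redis v4 and a hypothetical replay:<playerId> key naming scheme, neither of which comes from the question itself.

```js
import { createClient } from 'redis';

const client = createClient();
await client.connect();

// Store each movement/animation event as its own small member, scored by timestamp,
// instead of one 50 MB blob per recording.
async function addEvent(playerId, event) {
  const key = `replay:${playerId}`;
  await client.zAdd(key, { score: event.timestamp, value: JSON.stringify(event) });
  await client.expire(key, 48 * 60 * 60); // the whole recording still expires as one unit
}

// Fetch only the events between two timestamps (handy for pagination / partial replay).
async function eventsBetween(playerId, fromTs, toTs) {
  const raw = await client.zRangeByScore(`replay:${playerId}`, fromTs, toTs);
  return raw.map((s) => JSON.parse(s));
}
```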
Lists may also be a good fit for your case. You can use LPUSH/RPUSH to add new events to your list, and since Redis lists are implemented as linked lists, adding an element to either the beginning or the end of the list is the same cost, O(1), which is great.
Whenever an event happens, you call either ZADD or RPUSH/LPUSH to send it to Redis. When you need to query the events, you use functions such as ZRANGEBYSCORE or LRANGE, depending on your choice.
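The list-based variant of the same sketch (again assuming node-redis v4, with a made-up key name) would push events in arrival order and page through them with LRANGE:

```js
import { createClient } from 'redis';

const client = createClient();
await client.connect();

// One list per recording: RPUSH keeps events in arrival order.
async function addEventToList(playerId, event) {
  const key = `replay:list:${playerId}`;
  await client.rPush(key, JSON.stringify(event)); // O(1) append
  await client.expire(key, 48 * 60 * 60);         // whole recording expires together
}

// Page through the recording without pulling the entire thing at once.
async function eventPage(playerId, offset, pageSize) {
  const raw = await client.lRange(`replay:list:${playerId}`, offset, offset + pageSize - 1);
  return raw.map((s) => JSON.parse(s));
}
```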
While designing your keys you may use an identifier such as a user ID, just as you mentioned in the comments. You will not have the problems with lists/sorted sets that you would have with strings. But choosing which one is most suitable for you depends on your read/write patterns and business rules.
Here are some useful links to read:
Redis data types intro
Redis data types
Redis labs documentation about data types

Am I guaranteed that the requests to Redis will be executed in order?

Hi all. I write a data item to Redis. Then later, I read the data item out of Redis.
Since there may be multiple servers taking these Redis requests and satisfying them, if I make the write request 1 ms before I make the read request (suppose they're both being done by the same process) am I assured that the read request won't be processed first, and I get a response back like "that data item doesn't exist"?
If the commands are issued in sequence, you can rely on them being executed as atomic, single-threaded operations. Read more about this in this stackoverflow answer.
The above is true for a single Redis server, but it is not guaranteed for Cluster behavior (thanks #mwp). In that situation, I'd recommend adding a check at the client level: if the key doesn't exist when Redis executes the GET call, a nil value is returned by default.
Last note: depending on your implementation, you may want to look into storing your Redis items in a list, LPUSH-ing the writes and BRPOP-ing the values out, so the reader blocks until a value actually exists...
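A rough sketch of that list-based handoff, assuming node-redis v4 (blocking commands tie up a connection, so a duplicated client is used for BRPOP); the queue name is made up:

```js
import { createClient } from 'redis';

const client = createClient();
await client.connect();

// Dedicated connection for the blocking read.
const blockingClient = client.duplicate();
await blockingClient.connect();

// Writer: push the item; a reader can only ever see it after the push completes.
await client.lPush('results:queue', JSON.stringify({ id: 1, status: 'ok' }));

// Reader: block (up to 5 seconds here) until a value exists, instead of racing a GET.
const popped = await blockingClient.brPop('results:queue', 5);
if (popped) {
  console.log(popped.key, JSON.parse(popped.element));
} else {
  console.log('timed out waiting for a value');
}
```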

Akka.NET ConsistentHashingPool: create routee per hash

Is it possible to force ConsistentHashingPool to create routee per hash? I want one routee actor to process only messages of the same hash. And if new hash comes in, then new routee is created.
I tried looking into Resizer class, but I was not able to figure out the way to achieve the thing I wanted.
I think you're misunderstanding the ConsistentHashRouter (CHR) a bit. It already does what you've stated—consistently routes messages whose keys fall in a given hash range to the same routee.
Routees are added to / removed from the CHR routee table as new nodes/virtual nodes join the cluster. Then, the hash range will be rebalanced to account for the new nodes in the cluster and the CHR will route messages to the node that is now responsible for the part of the hash range the key falls into. This may be the same node that was responsible for it before, or it may shift from one node to another. Essentially you're sharding the hash range across the cluster.
UPDATE: as of writing this (October 2015) this management process must be done manually. There is a module coming called Akka.Cluster.Sharding that will do the rebalancing of shards for you across nodes. It is currently available on the JVM.
(From a newbie perspective...)
I agree with Oliver: this is too simple a use case to require things called clustering and sharding.
Consider an actor holding some state for a user or a session or something - obviously each actor must receive only the messages for that entity-instance-id.
From having read a few docs I'm pretty sure it's trivial to code yourself: You just write a parent actor which checks for the existence of a child for a given id, creates it if it doesn't exist, then routes the message to it.
I also expected there to be something like a create-unique-actors setting on the ConsistentHashingRouter to do this automatically for you. (Maybe it's not generally useful since you need to consider when and how to terminate the actors to prevent them from living for ever?)

Redis keeping a hash of individual items that expire

I'm looking for a way to maintain a hash map of values, but I want the values (not the whole map) to expire individually after a period of time.
The reason I'd like to accomplish this is because it drastically simplifies and minimizes my dependency on a database and time checking. Basically I'd like to define a list of resources to poll for 10 minutes after they are first requested.
So say my list is: ItemA, ItemB, ItemC.
ItemA is now older than 10 minutes, so it should be knocked off the list; when I request the list again, it should contain only ItemB and ItemC.
I'm using a Node package with the standard redis library, so I'm looking for a way to do this easily with those packages. If not I'll fall back to the db method, but getting this working with Redis would be really great.
I'm already successfully using this to expire session tokens and from what I've read you can't do this directly with hashmap values. Just curious what some workarounds could look like.
Thanks.
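One workaround commonly suggested for this (sketched below, assuming node-redis v4 and a made-up poll: key prefix, so treat it as an assumption rather than the library's answer) is to give each resource its own key with a 10-minute TTL and rebuild the "list" by scanning that prefix; expired resources then simply disappear.

```js
import { createClient } from 'redis';

const client = createClient();
await client.connect();

const TEN_MINUTES = 10 * 60; // seconds

// Each resource gets its own key, so Redis can expire them individually.
async function startPolling(resource) {
  await client.set(`poll:${resource}`, '1', { EX: TEN_MINUTES });
}

// Rebuild the current "list" by scanning the prefix; expired keys are already gone.
async function resourcesToPoll() {
  const resources = [];
  for await (const key of client.scanIterator({ MATCH: 'poll:*' })) {
    resources.push(key.slice('poll:'.length));
  }
  return resources;
}

await startPolling('ItemA');
await startPolling('ItemB');
console.log(await resourcesToPoll()); // once a key's 10 minutes are up, it drops out on its own
```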

Redis set vs hash

In many Redis tutorials (such as this one), data is stored in a set, but with multiple values combined together in a string (e.g. a user account might be stored as two entries, "user:1000:username" and "user:1000:password").
However, Redis also has hashes. It seems that it would make more sense to have a "user:1000" hash, which contains a "username" entry and a "password" entry. Rather than concatenating strings to access a particular value, you just access them directly in the hash.
So why isn't it used as much? Are these just old tutorials? Or do Redis hashes have performance issues?
Redis hashes are good for storing more complex data, like you suggest in your question. I use them for exactly that - to store objects with multiple attributes that need to be cached (specifically, inventory data for a particular product on an e-commerce site). Sure, I could use a concatenated string - but that adds unneeded complexity to my client code, and updating an individual field is not possible.
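For example, a minimal sketch (assuming node-redis v4; the key and field names are just illustrative): updating a single field is one HSET instead of a read-modify-write of a concatenated string.

```js
import { createClient } from 'redis';

const client = createClient();
await client.connect();

// Store the whole object as one hash...
await client.hSet('user:1000', { username: 'alice', password: 'hashed-password', visits: '0' });

// ...then read or update individual fields without touching the rest.
const username = await client.hGet('user:1000', 'username');
await client.hIncrBy('user:1000', 'visits', 1);
await client.hSet('user:1000', 'password', 'new-hashed-password');

console.log(username, await client.hGetAll('user:1000'));
```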
You may be right - the tutorials may simply be from before Hashes were introduced. They were clearly designed for storing Object representations: http://oldblog.antirez.com/post/redis-weekly-update-1.html
I suppose one concern would be the number of commands Redis must service when a new item is inserted (n number of commands, where n is the number of fields in the Hash) when compared to a simple String SET command. I haven't found this to be a problem yet on a service which hits Redis about 1 million times per day. Using the right data structure to me is more important than a negligible performance impact.
(Also, please see my comment regarding Redis Sets vs. Redis Strings - I think your question is referring to Strings but correct me if I'm wrong!)
Hashes are one of the most efficient ways to store data in Redis; the documentation even goes so far as to recommend using them whenever possible.
http://redis.io/topics/memory-optimization
Use hashes when possible
Small hashes are encoded in a very small space, so you should try representing your data using hashes every time it is possible. For instance if you have objects representing users in a web application, instead of using different keys for name, surname, email, password, use a single hash with all the required fields.
Use case comparison:
Sets provide a semantic interface to store data as a set in the Redis server. The use cases for this kind of data would be more for analytics purposes, for example how many people browse the product page and how many end up purchasing the product.
Hashes provide a semantic interface to store simple and complex data objects in the Redis server. For example, user profile, product catalog, and so on.
Ref: Learning Redis
Use cases for SETS
Uniqueness:
We have to make sure in our application that every username can be used by only one person. If someone signs up with a username, we first look it up in the set of usernames:
SISMEMBER setOfUsernames newUsername
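A quick sketch of that check with node-redis v4 (the set name and flow are just illustrative):

```js
import { createClient } from 'redis';

const client = createClient();
await client.connect();

// SADD returns 1 only if the member was newly added, so it doubles as the uniqueness check.
async function claimUsername(username) {
  const added = await client.sAdd('setOfUsernames', username);
  return added === 1; // false means someone already owns it
}

// Or check first without claiming:
const taken = await client.sIsMember('setOfUsernames', 'newUsername');
console.log(taken ? 'already taken' : 'available');
```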
Creating relationships between different records:
Imagine you have Like functionality in your app. You might have a separate set for every single user and store the IDs of the images that user has liked so far.
Find common attributes that people like
In dating apps, users usually pick different attributes, and those attributes are stored in sets. To help people match easily, our app can check the intersection of those attributes:
SINTER user#45:likesSet user#34:likesSet
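In node-redis v4 that intersection might look like the following (the user IDs and attribute values are made up):

```js
import { createClient } from 'redis';

const client = createClient();
await client.connect();

// Each user has a set of things they like.
await client.sAdd('user#45:likesSet', ['hiking', 'jazz', 'coffee']);
await client.sAdd('user#34:likesSet', ['coffee', 'jazz', 'cats']);

// SINTER returns only the attributes both users picked.
const common = await client.sInter(['user#45:likesSet', 'user#34:likesSet']);
console.log(common); // e.g. [ 'jazz', 'coffee' ]
```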
When we have lists of items and order does not matter
For example, if you want to restrict the IP addresses that are allowed to reach your app, or block certain addresses from emailing you, you can store them in a set.
Use cases for Hash
Redis Hashes are usually used to store complex data objects: sessions, users etc. Hashes are more memory-optimized.