Redis - query by more than key

I use Redis to store user sessions, keyed by a GUID I generate when the user logs in. That GUID is the key for their session object.
If I want to lock a user's account, I currently have to go through all sessions, check whether each one belongs to that user, and then delete it.
Is there a way to also query by the user id? Should I be using a sorted set instead of just standard key-value pairs?

Going through all keys is probably not the best idea. What you could do is store each user's session GUIDs under another key - the set data type seems to be the best choice for that - and add/remove from it as the user opens/closes a session. So, when a user opens a new session you will:
SET session:<guid> <session_object>
SADD user_sessions:<user_id> <session_guid>
and when the session is closed, you'll do:
DEL session:<guid>
SREM user_sessions:<user_id> <session_guid>
To find which session GUIDs belong to a user, e.g. for an account lockdown, do:
SMEMBERS user_sessions:<user_id>
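For illustration, here's a rough sketch of that pattern with the StackExchange.Redis C# client (the client choice, key names and JSON payload are assumptions - the question doesn't name a client):
using StackExchange.Redis;

class SessionStore
{
    readonly IDatabase _db;
    public SessionStore(IDatabase db) { _db = db; }

    // Login: store the session object and index its GUID under the user.
    public void OpenSession(string userId, string guid, string sessionJson)
    {
        _db.StringSet($"session:{guid}", sessionJson);
        _db.SetAdd($"user_sessions:{userId}", guid);
    }

    // Logout: remove the session and its index entry.
    public void CloseSession(string userId, string guid)
    {
        _db.KeyDelete($"session:{guid}");
        _db.SetRemove($"user_sessions:{userId}", guid);
    }

    // Account lock: delete every session listed in the user's index set.
    public void LockAccount(string userId)
    {
        foreach (var guid in _db.SetMembers($"user_sessions:{userId}"))
            _db.KeyDelete($"session:{guid}");
        _db.KeyDelete($"user_sessions:{userId}");
    }
}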

Related

How to really deal with indexing in Redis and correctly implement indexes

I am moving some "live" data structures from MySQL to Redis. Using the StackExchange C# Redis client, I'm writing (due to some very project-specific restrictions) my own micro-ORM code to store and retrieve object class entities from a Redis database.
I am pushing C# objects as hash keys in Redis.
My general question is about indexing on fields other than the "primary key".
Ok, I've read all the theory of sets and sorted sets, and how to add and remove members from sets, and so on.
I've added some code to correctly create set keys which contain the entities' hash keys, so that I can look up those objects via simple or sorted indexes.
However I cannot find or figure out a good strategy for solving the following problems:
1. Index maintenance on expiration
I'd like to add expiration to some object (hash) keys, so that old entities get purged automatically by Redis. However, I cannot find a reliable way to update/purge the relevant indexes besides periodically running a background task that scans index set keys for expired members and removes them (notifications are not an option for me).
2. Index updating when some object fields change
In some cases I need to update only a small fraction of hash key values, not the whole entity. If the fields being updated are part of one or more index set keys, I cannot figure out the best way to properly update the set keys.
For example, let's say I need to store a "Session" entity whose primary key is its ID (simple numerical integer), and I need to add an index on the "Node" string field (Node being the reference to the server currently serving the session):
class Session {
    [RedisKey]
    public int ID { get; set; }
    public string RemoteIP { get; set; }
    [RedisSimpleIndex]
    public string Node { get; set; }
}
RedisKey and RedisSimpleIndex are attributes I use to extract via reflection which fields are used as primary key and which are used for indexing.
Let's suppose I have an instance of Session like this:
{ ID = 2, RemoteIP = "1.2.3.4", Node = "Server10" }
My routines are creating the following keys in Redis:
Hash key: "obj:Session:2"
Hash values: "ID" = "1", "RemoteIP" = "1.2.3.4", "Node" = "Server10"
Set key "idx:Session:Node:Server10"
Set members: "obj:Session:2"
which is fine for looking up all sessions on Server10.
However, if the very same session needs to be moved to a different server (e.g. Server8) and I want to update only the Node field in the hash, how can I update the indexes too?
The only way I found so far is to SCAN all index keys with pattern idx:Session:Node:* and remove from them any member obj:Session:2, then create/update the index key for the new node (idx:Session:Node:Server8).
Moreover, the SCAN command is not available in the IDatabase or ITransaction interfaces, and in an HA clustered environment things get worse, since I need to determine which Redis server holds the relevant keys for this procedure to work.
Is there a better way to build/represent simple indexes in Redis? Is my approach wrong?
I'd like to add expiration to some object (hash) keys, so that old entities get purged automatically by Redis. However, I cannot find a reliable way to update/purge the relevant indexes besides periodically running a background task that scans index set keys for expired members and removes them (notifications are not an option for me).
You cannot expire individual KV pairs within a hash. This was discussed in #167, and there don't appear to be any plans to change it.
I think you should be able to use keyspace notifications to subscribe to expired events. You would need a worker that subscribes to them and updates all relevant indices accordingly. However, you might get some inconsistent data: your worker might crash and leave stale indices behind, and the indices wouldn't be updated instantaneously, so you'd end up with a bit of stale data regardless.
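As a rough sketch (not a drop-in solution), such a worker could look like this with StackExchange.Redis. It assumes the server is configured with notify-keyspace-events Ex, that everything lives in DB 0, and that each object has a companion set (here called idx-for:<objectkey>) listing the index keys that reference it - all of which are assumptions, not part of the question's schema:
using System;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();
var sub = redis.GetSubscriber();

// Fired by Redis whenever a key with a TTL expires (requires notify-keyspace-events Ex).
sub.Subscribe("__keyevent@0__:expired", (channel, expiredKey) =>
{
    // Remove the expired object from every index set that referenced it.
    var bookkeepingKey = $"idx-for:{expiredKey}";
    foreach (var indexKey in db.SetMembers(bookkeepingKey))
        db.SetRemove((string)indexKey, (string)expiredKey);
    db.KeyDelete(bookkeepingKey);
});

Console.ReadLine(); // keep the worker alive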
Probably not the best idea, but you could also hack some custom indexing logic into expire.c. The code seems fairly straightforward. The C module API, by contrast, doesn't appear to provide any way to hook into the eviction logic.
Another option is to not rely on Redis for the expiration logic. You would still have a background job, but it would actually issue the corresponding DEL commands for expired KV pairs. This would also allow you to keep the index 100% up to date via transactions.
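A minimal sketch of that approach, assuming an extra sorted set (here named "expires", scored by expiry timestamp) that entities are registered in when they're written - the key name is illustrative, not from the question:
using System;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();

// Periodic job: delete everything whose expiry timestamp has passed and fix the index atomically.
var now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
foreach (var entityKey in db.SortedSetRangeByScore("expires", double.NegativeInfinity, now))
{
    var node = db.HashGet((string)entityKey, "Node");                    // the indexed field
    var tran = db.CreateTransaction();
    tran.KeyDeleteAsync((string)entityKey);                              // DEL obj:Session:<id>
    tran.SetRemoveAsync($"idx:Session:Node:{node}", (string)entityKey);  // keep the index consistent
    tran.SortedSetRemoveAsync("expires", entityKey);                     // unregister the expiry
    tran.Execute();
}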
In some cases I need to update only a small fraction of hash key values, not the whole entity. If the fields being updated are part of one or more index set keys, I cannot figure out the best way to properly update the set keys.
I'm not sure which Redis client you're using, but I found the following pattern to be quite useful in the past:
You have some form of "Updater" class for each hash. It has setters for all relevant fields that could be updated (setFirstName, setLastName etc.).
When you set a field, you mark that particular field as "dirty" (e.g. via a separate boolean).
When you call "save", you update indices for fields that were marked as dirty.
The only way I found so far is to SCAN all index keys with pattern idx:Session:Node:* and remove from them any member obj:Session:2, then create/update the index key for the new node (idx:Session:Node:Server8).
This is cumbersome, but it seems like the way to go; sadly, I don't think there is a better solution. You might want to consider maintaining a separate set holding the keys of the index KV pairs that would have to be updated, though, so you avoid going over a bunch of keys that aren't relevant.
You might also want to check out an article on how to maintain those indices. As you already alluded to, there are basically two options: in real time using MULTI transactions, or using batch jobs. Once you get into the territory of key expiration, you are more or less forced to use the batch approach.
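To make the "separate set of index keys" idea concrete, here's a hedged sketch (key names such as idx-for:... are made up): each object carries a small set naming the index keys that currently contain it, so moving obj:Session:2 to Server8 never needs SCAN:
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();

const string objKey = "obj:Session:2";
const string bookkeeping = "idx-for:obj:Session:2";   // set of index keys referencing obj:Session:2

// Look up which index keys reference the object, then swap old for new atomically.
var oldIndexKeys = db.SetMembers(bookkeeping);        // e.g. ["idx:Session:Node:Server10"]
var tran = db.CreateTransaction();
foreach (var indexKey in oldIndexKeys)
{
    if (!((string)indexKey).StartsWith("idx:Session:Node:")) continue;  // only the Node index is changing
    tran.SetRemoveAsync((string)indexKey, objKey);
    tran.SetRemoveAsync(bookkeeping, indexKey);
}
tran.SetAddAsync("idx:Session:Node:Server8", objKey);
tran.SetAddAsync(bookkeeping, "idx:Session:Node:Server8");
tran.HashSetAsync(objKey, "Node", "Server8");
tran.Execute();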

Redis: how to use it similar to multi-tables

It seems that Redis has no entity corresponding to a "table" in a relational database.
For instance, I have to store:
(token, user_id)
(cart_id, token, [{product_id, count}])
If I don't store those two separately, the get method would search both, which would cause chaos.
By the way, (cart_id, token, [{product_id, count}]) is a shopping cart; how should I design such a data structure in Redis?
It seems that Redis has no entity corresponding to a "table" in a relational database.
Right, because it is not a relational database. It is a data structure server which is very different and requires a different approach to be used well.
Ultimately to use Redis in the way it is intended you need to not think in relational terms, but think of the data structures you use in the code. More specifically, how do you need the data when you want to consume it? That will be the most likely way to store it in Redis.
In this case there are a few options, but the hash method works incredibly well for this one so I'll detail it here.
First, create a hash; call it users:to:tokens. Use the user id as the hash field and the token as its value. Next, create the inverse, a hash called 'tokens:to:users'. You will probably want both of these - the ability to look one up from the other - and this foundation provides that.
Next, for your carts. This, too, will be a hash: carts:cart_id. In this hash you have the product_id and the count.
Finally, your third hash, token:to:cart, builds an index from tokens to cart ids. I'd go a step further and add user:to:cart as well, to be able to pull carts by user.
Now, as to whether to store the full key name in the map or not, I tend to go with "no". By storing just the ID you can easily build the Redis cart key, avoid storing the key's full name in the data store, and save memory.
Indeed, if you can do so use integers for all of your IDs. By using integers you can take advantage of Redis' integer storage optimizations to keep memory usage down. Hashes storing integers are quite efficient and very fast.
If needed you can use Redis to build your IDs. Use the INCR command to keep a counter for each data type, such as userid:counter, cartid:counter, and tokenid:counter. Since INCR returns the new value, a single call both increments the counter and gives you the new ID, and GET cartid:counter will always return the largest ID if you want to quickly see how many carts have been created. Kinda neat, IMO.
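For example, a small sketch with StackExchange.Redis (the counter key follows the text above; the product id and count are made up):
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();

long cartId = db.StringIncrement("cartid:counter");            // INCR: atomically returns the new id
db.HashSet($"carts:{cartId}", new[] { new HashEntry(17, 2) }); // field = product id 17, value = count 2

long highestId = (long)db.StringGet("cartid:counter");         // largest cart id handed out so far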
Now, where it gets tricky is if you want to use expiration to automatically expire carts, as opposed to leaving them lying around until you clean things up. By setting an expiration on the cart hash (the one with the product/count mapping), your carts will expire automatically. However, their references will still be hanging around in the token:to:cart hash. Removing them is a simple periodic task that iterates over the members of token:to:cart and does an EXISTS check on each cart's key; if the key doesn't exist, delete the entry from the hash.
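That periodic cleanup could look roughly like this with StackExchange.Redis (the key names follow the answer above; the client choice is an assumption):
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();

// Walk the token -> cart_id index and drop entries whose cart hash has expired.
foreach (var entry in db.HashScan("token:to:cart"))    // HSCAN under the hood
{
    var cartKey = $"carts:{entry.Value}";              // rebuild the cart key from the stored id
    if (!db.KeyExists(cartKey))
        db.HashDelete("token:to:cart", entry.Name);    // stale reference - remove it
}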
Redis is a key-value store. From redis.io:
Redis is an open source (BSD licensed), in-memory data structure
store, used as database, cache and message broker. It supports data
structures such as strings, hashes, lists, sets, sorted sets with
range queries, bitmaps, hyperloglogs and geospatial indexes with
radius queries.
So if you want to store two different types (tokens and carts), you will need two keys for the different data types. For example:
127.0.0.1:6379> hset tokens.token_id#123 user user123
(integer) 1
127.0.0.1:6379> hget tokens.token_id#123 user
"user123"
Here tokens is a namespace for tokens only. It is stored as a Redis hash:
Redis Hashes are maps between string fields and string values, so they
are the perfect data type to represent objects
To store lists I would do the following:
127.0.0.1:6379> hmset carts.cart_1 token token_id#123 cart_contents cart_contents_key1
OK
127.0.0.1:6379> hmget carts.cart_1 token cart_contents
1) "token_id#123"
2) "cart_contents_key1" # cart_contents is a list of receipts.
cart_contents are represented as a Redis-List:
127.0.0.1:6379> rpush cart_contents.cart_contents_key1 receipt_key1
(integer) 1
127.0.0.1:6379> lrange cart_contents.cart_contents_key1 0 -1
1) "receipt_key1"
A receipt is a Redis hash for a tuple (product_id, count):
127.0.0.1:6379> hmset receipts.receipt_key1 product_id 43 count 2
OK
127.0.0.1:6379> hmget receipts.receipt_key1 product_id count
1) "43" # Your final product id.
2) "2"
But do you really need Redis in this case?

How to set expire when using Redis geoadd

I am using the new geospatial features on Redis.
I know that behind the scenes it's using a ZSET.
I am adding new entries this way:
GEOADD" "report-geo-set" "4.78335244" "32.07223969" "jossef"
How could I add an expire to a specific records(in my case: "jossef")
on my set?
If the API doesnt provide it is there any workaround for this?
Thanks,
ray.
Regrettably no - Redis expires entire keys, not the values inside their respective data structures. Geo Hashes are implemented on top of Sorted Sets, and expiration of individual members isn't supported.
What you could do is maintain an additional Sorted Set, storing for each member its expiration timestamp as the score. Then, periodically, fetch the members that need to be expired with ZRANGEBYSCORE and "manually" ZREM them from your Geo Hash.
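A rough sketch of that workaround with StackExchange.Redis (the companion key name report-geo-set:expiry and the 15-minute TTL are assumptions):
using System;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();

// Add the member and record when it should expire.
db.GeoAdd("report-geo-set", 4.78335244, 32.07223969, "jossef");
db.SortedSetAdd("report-geo-set:expiry", "jossef",
                DateTimeOffset.UtcNow.AddMinutes(15).ToUnixTimeSeconds());

// Periodic cleanup: drop every member whose expiration timestamp has passed.
var now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
foreach (var member in db.SortedSetRangeByScore("report-geo-set:expiry", double.NegativeInfinity, now))
    db.SortedSetRemove("report-geo-set", member);                            // ZREM from the geo set
db.SortedSetRemoveRangeByScore("report-geo-set:expiry", double.NegativeInfinity, now);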

Redis, session expiration, and reverse lookup

I'm currently building a web app and would like to use Redis to store sessions. At login, the session is inserted into Redis with a corresponding user id and an expiration of 15 minutes. I would now like to implement reverse lookup for sessions (get the sessions with a certain user id). The problem is how to implement this, since I can't search the Redis keyspace. One way would be to have a Redis set for each user id, containing all of its session ids. But since Redis doesn't allow expiration of individual items in a set, and sessions are set to expire, there would be a ton of nonexistent session ids in the sets.
What would be the best way to remove ids from sets on key expiration? Or, is there a better way of accomplishing what I want (reverse look-up)?
On the current release branch of Redis (2.6), you cannot get notifications when items expire. This will probably change in future versions.
In the meantime, to support your requirement, you need to implement expiration notification support manually. So you have:
session:<sessionid> -> a hash storing your session data - one of the fields is <userid>
user:<userid> -> a set of <sessionid>
You need to remove the sessionid from the user set when the session expires. To do that, you can maintain an additional sorted set whose scores are timestamps.
When you create session 10 for user 100:
MULTI
HMSET session:10 userid 100 ... other session data ...
SADD user:100 10
ZADD to_be_expired <current timestamp + session timeout> 10
EXEC
Then, you need to build a daemon which polls the zset to identify the sessions to expire (ZRANGEBYSCORE). For each expired session, it has to maintain the data structures:
pop the session out of the zset (ZREMRANGEBYRANK)
retrieve session userid (HMGET)
delete session (DEL)
remove session from userid set (SREM)
The main difficulty is to ensure there are no race conditions when the daemon polls and processes the items. See my answer to this question for how it can be implemented:
how to handle session expire basing redis?
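For illustration only, such a daemon loop might look like this with StackExchange.Redis (the client, hash field name and polling interval are assumptions, and this naive version ignores the race condition discussed above):
using System;
using System.Threading;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();

while (true)
{
    var now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
    foreach (var sessionId in db.SortedSetRangeByScore("to_be_expired", double.NegativeInfinity, now))
    {
        var userId = db.HashGet($"session:{sessionId}", "userid");     // HGET the owning user
        var tran = db.CreateTransaction();
        tran.SortedSetRemoveAsync("to_be_expired", sessionId);         // pop it out of the zset
        tran.KeyDeleteAsync($"session:{sessionId}");                   // DEL the session hash
        tran.SetRemoveAsync($"user:{userId}", sessionId);              // SREM from the user's set
        tran.Execute();
    }
    Thread.Sleep(TimeSpan.FromSeconds(10));
}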
In more recent versions of Redis (2.8.0 and up), keyspace notifications for expired events are supported, i.e. when a key with a TTL expires, this event is triggered.
This is what to subscribe to:
'__keyevent@0__:expired'
So subscribing to this event allows you to have a single index for all sessions and you can remove the key from the index when the key expires.
Example:
Use a sorted set as a secondary index, with the uid as the score:
ZADD "idx-session-uid" <uid> <sessionkey>
Search for sessionkeys for a specific user with:
ZRANGEBYSCORE "idx-session-uid" <uid> <uid>
When a session is deleted or expired we do:
ZREM "idx-session-uid" <sessionkey>

Redis Db - Watch if key exists or created

I'm trying a unique index implementation with a Redis db (ServiceStack client).
Normally:
1. Check for unique index duplication
2. If the unique index exists, RETURN WITH WARNING
3. WATCH the unique index (for the race condition)
4. Open a transaction
5. Insert the new record, insert the new record's unique index
6. Close the transaction
How can I get rid of the 1st step?
I want to WATCH for existence: I'm not concerned with changes to the key, only with its creation or existence (outside of my transaction, of course).
If you are trying to use Redis just for checking duplicates, then use a hash:
http://redis.io/commands#hash
How do you use the ServiceStack client - the native client or the typed client? (Then I can show you how to do that.)
And use this command: http://redis.io/commands/hsetnx
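Since HSETNX is atomic, the "check then insert" race goes away and no WATCH is needed. A hedged sketch follows, shown with StackExchange.Redis for brevity (the question uses ServiceStack, whose clients expose the same HSETNX command; the key and field names are made up):
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();

// HSETNX uniq:user:email <email> <record id> -- returns false if the field already exists.
bool claimed = db.HashSet("uniq:user:email", "foo@example.com", "user:42", When.NotExists);
if (!claimed)
{
    // Unique index already taken: return with a warning.
}
else
{
    // Safe to insert the new record now; the index entry above reserves the value.
    db.HashSet("user:42", new[] { new HashEntry("email", "foo@example.com") });
}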