Redis: how to persist multiple keys at once

In Redis, is it possible to call PERSIST on multiple keys at once? For example, say I have the following commands:
MULTI
SETEX mykey 10 "foo"
SETEX myotherkey 10 "bar"
EXEC
// wait x number of seconds
PERSIST mykey
PERSIST myotherkey
I want to guarantee that mykey and myotherkey are either both persisted or both expired.
In theory, if the two PERSIST commands were run after waiting almost exactly 10 seconds, it's possible that only one of them succeeds in persisting its key, right? And I doubt that wrapping the two PERSIST commands in MULTI/EXEC helps, since both commands will still execute even if only one of the keys actually gets persisted.

PERSIST returns:
1: if the timeout was removed
0: if the key doesn't exist or doesn't have a TTL.
In a transaction, even if one of the keys doesn't have an expire time, the other key will still be persisted.
I set firstkey with a TTL of 300 seconds and secondkey without a TTL.
127.0.0.1:6379> setex firstkey 300 val
OK
127.0.0.1:6379> set secondkey val
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> persist firstkey
QUEUED
127.0.0.1:6379> persist secondkey
QUEUED
127.0.0.1:6379> exec
1) (integer) 1
2) (integer) 0
127.0.0.1:6379> ttl firstkey
(integer) -1
127.0.0.1:6379> ttl secondkey
(integer) -1
127.0.0.1:6379>
When EXEC is issued, the responses come back in the same order as the queued commands: firstkey is persisted (1), and secondkey returns 0 since it never had a timeout. Afterwards neither key has a timeout, as TTL returns -1 for both.
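If you need the stronger guarantee the question is really after (both keys persisted or neither), one option is a small server-side Lua script that checks that both keys still exist and only then persists them; a script runs atomically, so an expiry cannot slip in between the two PERSIST calls. Here is a minimal sketch assuming the redis-py client; the script and helper name are illustrative, not an established recipe:

import redis

r = redis.Redis()

# Hypothetical all-or-nothing persist: the script executes atomically on the
# server, so either both keys are still alive and both get persisted, or
# neither key is touched (and the survivor simply expires).
PERSIST_BOTH = """
if redis.call('EXISTS', KEYS[1]) == 1 and redis.call('EXISTS', KEYS[2]) == 1 then
    redis.call('PERSIST', KEYS[1])
    redis.call('PERSIST', KEYS[2])
    return 1
end
return 0
"""

persist_both = r.register_script(PERSIST_BOTH)

r.setex("mykey", 10, "foo")
r.setex("myotherkey", 10, "bar")
# ... wait x number of seconds ...
both_persisted = persist_both(keys=["mykey", "myotherkey"])
print(both_persisted)  # 1 if both were persisted, 0 if either had already expired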

Related

Redis LRANGE Pop Atomicity

I have a Redis data store holding unique keys. My app server will send multiple requests to Redis to fetch about 100 keys from the start of a list, and I am planning to use the LRANGE command for that.
My requirement is that each request should receive a unique set of keys, meaning that once one request takes 100 keys, those keys must never be returned to any future request.
Since Redis operations are atomic and Redis is single-threaded, can I assume that if multiple requests arrive at the same time, Redis will execute LRANGE mylist 0 100 and, only once it has completed (i.e. once 100 keys are taken and removed from the list), process the next request, so atomicity is built in? Is that correct?
Is it ever possible, under any circumstance, for two requests to get the same 100 keys?
It sounds like the command you actually want is LPOP, since LRANGE doesn't remove anything from the list:
LPOP mylist 101
(LRANGE mylist 0 100 returns 101 elements, hence the count of 101; the count argument to LPOP requires Redis 6.2 or later.)
And yes, this command is atomic, so no two clients will receive the same elements.
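For what it's worth, a quick sketch with the redis-py client (the list name is the one from the question; LPOP with a count needs Redis 6.2+ and redis-py 4.x or newer):

import redis

r = redis.Redis(decode_responses=True)

# Populate a sample list.
r.rpush("mylist", *range(1000))

# LPOP with a count removes and returns the elements in a single atomic
# command, so two concurrent clients can never receive the same batch.
batch = r.lpop("mylist", 101)
print(len(batch))  # 101
print(batch[:3])   # ['0', '1', '2']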

session data in Redis - key/value or hash type?

We are moving our session storage from the app server to Redis.
The session value is about 2 KB (I know that's huge; it will be shrunk drastically in the future).
We will have about 10 million sessions stored there.
The question is: should we store each one as its own key/value pair, or save them all in one hash object? Is there a benefit of one over the other?
I don't really get the "one hash object" part; the thing with hashes is that you can't get your session object back out as-is.
Just JSON-encode your session object and store it as a key/value pair, the key being the session_id stored in a cookie on the client.
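As a concrete sketch of that approach with the redis-py client (the key prefix, TTL and field names are made up for illustration):

import json
import redis

r = redis.Redis(decode_responses=True)

# The whole session is serialized to JSON and stored under one key, so it
# can be read, replaced and expired as a single unit.
session_id = "123"
session = {"firstname": "Ali", "lastname": "Malek"}
r.set(f"session:{session_id}", json.dumps(session), ex=1800)  # 30-minute TTL

stored = json.loads(r.get(f"session:{session_id}"))
print(stored["firstname"])  # Ali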
From my point of view, the main difference between key/value and hash is the TTL: you can set a TTL on a plain key/value pair, but you can't set a TTL on an individual hash field.
The other thing to consider is that the manual says the hash type is good for representing objects, and that fits here: you can use one hash per user session and, if you need a TTL, set it on the hash key itself.
I don't think you need to choose between these two types; you can combine them instead of saving a JSON string into a key/value pair.
For example, this is how I set the session data of user 123 so that it expires after 5 minutes (300 seconds):
redis> HSET SESSION:123 firstname "Ali" lastname "Malek" credentials "P#sSw0rd"
(integer) 3
redis> EXPIRE SESSION:123 300
(integer) 1
redis> HGET SESSION:123 firstname
"Ali"
You can replace SESSION:123 with SESSION:{session_id}.
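If it helps, here is the same idea with the redis-py client (the helper name and field names are just for illustration):

import redis

r = redis.Redis(decode_responses=True)

def save_session(session_id, data, ttl=300):
    # Store the session fields in one hash per session and put the TTL on
    # the hash key itself, so the whole session expires as a unit.
    key = f"SESSION:{session_id}"
    pipe = r.pipeline()
    pipe.hset(key, mapping=data)
    pipe.expire(key, ttl)
    pipe.execute()

save_session("123", {"firstname": "Ali", "lastname": "Malek"})
print(r.hget("SESSION:123", "firstname"))  # Ali
print(r.ttl("SESSION:123"))                # ~300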

How can I find out when my key was stored in Redis?

Is there any Redis command that tells me when a key was stored in Redis?
I know there is the TTL command.
Depending on how long the key has been there, I want to take a different action, e.g.:
alive for the last 1 min: do x,
alive for the last 2 min: do y,
etc.
There's no such command. However, you can achieve your goal with the EXPIRE and TTL commands.
For each key, set its TTL to 1000000000, i.e. EXPIRE key 1000000000, so that the key expires after about 32 years. That should be long enough.
When you want to find out how long the key has been stored, just get the key's TTL with TTL key: the key has been stored for 1000000000 - TTL seconds.
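A small sketch of that trick with the redis-py client (the constant and helper are made up; adjust the thresholds to your 1-minute / 2-minute rules):

import redis

r = redis.Redis()

VERY_LONG_TTL = 1_000_000_000  # ~32 years, effectively "never expires"

# Give the key the huge TTL when it is written...
r.set("mykey", "value", ex=VERY_LONG_TTL)

def key_age_seconds(key):
    # ...and later derive its age from how much of that TTL is left.
    ttl = r.ttl(key)
    if ttl < 0:  # -2: key missing, -1: key has no TTL
        return None
    return VERY_LONG_TTL - ttl

age = key_age_seconds("mykey")
if age is not None and age < 60:
    pass  # do x
elif age is not None and age < 120:
    pass  # do y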

Redis, partial match keys with end of line

This is a two-part question.
I have a redis db storing items with the following keys:
record type 1: "site_id:1_item_id:3"
record type 2: "site_id:1_item_id:3_user_id:6"
I've been using KEYS site_id:1_item_id:* to grab record type 1 items (in this case for site 1)
Unfortunately, it returns all type 1 and type 2 items.
What's the best way to grab all record type 1 keys (like "site_id:1_item_id:3") while avoiding the ones that include user_id? Is there an end-of-line match I can use?
Secondly, I've read that using KEYS is a bad choice; can anyone recommend a different approach here? I'm open to editing the key names if I must.
First things first: unless you are the only Redis user on your local development machine, you are right that using KEYS is wrong. It blocks the Redis instance until it completes, so anyone querying while your KEYS command runs has to wait for it to finish. Use SCAN instead.
SCAN iterates over the entries in a non-blocking way, and you are guaranteed to get all of the keys that were present for the whole iteration.
I don't know which language you use to query Redis, but in Python, for example, it is quite easy to fetch the keys with SCAN and filter them on the fly, as in the sketch below.
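Something like this, assuming the redis-py client and the key layout from the question:

import redis

r = redis.Redis(decode_responses=True)

# SCAN (via scan_iter) walks the keyspace incrementally without blocking the
# server; the type 2 keys that contain "_user_id:" are dropped client-side.
type1_keys = [
    key
    for key in r.scan_iter(match="site_id:1_item_id:*")
    if "_user_id:" not in key
]
print(type1_keys)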
But let's say you would like to use KEYS anyway. It looks to me like either KEYS site_id:1_item_id:? or KEYS site_id:1_item_id:3 does the trick,
depending on whether you want all keys of that shape or only the one ending in 3 (I am not sure I completely understood your question here). Note that ? matches exactly one character, so the first pattern only works for single-digit item IDs; for longer IDs you would have to filter client-side.
Here is an example that I tried on my local machine:
redis 127.0.0.1:6379> flushall
OK
redis 127.0.0.1:6379> set site_id:1_item_id:3 a
OK
redis 127.0.0.1:6379> set site_id:1_item_id:3_user_id:6 b
OK
redis 127.0.0.1:6379> set site_id:1_item_id:4 c
OK
// ok so we have got the database cleaned and set up
redis 127.0.0.1:6379> keys *
1) "site_id:1_item_id:3"
2) "site_id:1_item_id:4"
3) "site_id:1_item_id:3_user_id:6"
// gets all the keys like site_id:1_item_id:X
redis 127.0.0.1:6379> keys site_id:1_item_id:?
1) "site_id:1_item_id:3"
2) "site_id:1_item_id:4"
// gets all the keys like site_id:1_item_id:3
redis 127.0.0.1:6379> keys site_id:1_item_id:3
1) "site_id:1_item_id:3"
Don't forget that Redis KEYS uses glob-style patterns, which are not exactly like regular expressions.
Check the examples in the KEYS documentation to make sure you understand the syntax.
The correct approach here is to use an index of keys, maintained by you (for example a set of keys per site); the keyspace itself should not be queried by pattern in any conventional sense.
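A rough sketch of such an index with the redis-py client (the index key name and helper are hypothetical; the point is that writes maintain the set, so reads never scan the keyspace):

import redis

r = redis.Redis(decode_responses=True)

def save_type1_item(site_id, item_id, value):
    key = f"site_id:{site_id}_item_id:{item_id}"
    pipe = r.pipeline()
    pipe.set(key, value)
    # Record the key in a per-site set so it can be listed later.
    pipe.sadd(f"index:site:{site_id}:items", key)
    pipe.execute()

save_type1_item(1, 3, "a")
save_type1_item(1, 4, "c")
print(r.smembers("index:site:1:items"))
# {'site_id:1_item_id:3', 'site_id:1_item_id:4'}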

Which maxmemory policies allow expiration in Redis?

Which maxmemory policies are compatible with Redis's expiration mechanisms?
Is it only volatile-ttl? Does noeviction stop old records from dying?
See here from redis.conf:
MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
is reached. You can select among six behaviors:
volatile-lru -> remove the key with an expire set using an LRU algorithm
allkeys-lru -> remove any key according to the LRU algorithm
volatile-random -> remove a random key with an expire set
allkeys-random -> remove a random key, any key
volatile-ttl -> remove the key with the nearest expire time (minor TTL)
noeviction -> don't expire at all, just return an error on write operations
Note: with any of the above policies, Redis will return an error on write
operations, when there are no suitable keys for eviction.
At the date of writing these commands are: set setnx setex append
incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
getset mset msetnx exec sort
The default is:
maxmemory-policy noeviction
If you keep the policy at the default noeviction, or if you choose one of the volatile-* policies without actually having expirable keys in the database, the data will remain in Redis indefinitely as far as eviction is concerned. Note, though, that TTL-based expiration is independent of the eviction policy: a key with an expire set will still expire under any policy, including noeviction; maxmemory-policy only decides what gets evicted when the maxmemory limit is hit. Do remember, however, that if you never delete data from Redis and keep adding more, you'll eventually run out of memory.
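If you want to check or change the policy, and convince yourself that TTL expiration still happens regardless of it, here is a quick sketch with the redis-py client (assumes a throwaway test instance where CONFIG commands are allowed):

import time
import redis

r = redis.Redis(decode_responses=True)

# Inspect and change the eviction policy at runtime.
print(r.config_get("maxmemory-policy"))  # e.g. {'maxmemory-policy': 'noeviction'}
r.config_set("maxmemory-policy", "noeviction")

# TTL-based expiration is independent of the eviction policy: this key
# disappears after 2 seconds even under noeviction.
r.set("shortlived", "x", ex=2)
time.sleep(3)
print(r.exists("shortlived"))  # 0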