Is MULTI supposed to work on clustered Redis?

I'm using Redis on a locally clustered db. I'm trying the MULTI command, but it doesn't seem to work. Individual commands work, and I can see how the shard moves.
Is there anything else I should be doing to make MULTI work? The documentation is unclear about whether it should work: https://redis.io/topics/cluster-spec
In the example below I set individual keys (note how the node's port changes on redirection), then try a MULTI. The commands execute before EXEC is called:
127.0.0.1:30001> set a 1
-> Redirected to slot [15495] located at 127.0.0.1:30003
OK
127.0.0.1:30003> set b 2
-> Redirected to slot [3300] located at 127.0.0.1:30001
OK
127.0.0.1:30001> MULTI
OK
127.0.0.1:30001> HSET c f val
-> Redirected to slot [7365] located at 127.0.0.1:30002
(integer) 1
127.0.0.1:30002> HSET c f2 val2
(integer) 1
127.0.0.1:30002> EXEC
(error) ERR EXEC without MULTI
127.0.0.1:30002> HGET c f
"val"
127.0.0.1:30002>

MULTI transactions, as well as any multi-key operations, are supported only within a single hash slot in a clustered Redis deployment. In the transcript above, the redirect for HSET c moved redis-cli to a different node, which silently dropped the MULTI state; that is why the commands ran immediately and EXEC failed.
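The usual workaround is to force all keys of the transaction into the same hash slot with hash tags: when a key contains a {...} section, only that section is hashed. A minimal sketch of the cluster key-slot computation (CRC16-XMODEM mod 16384, as described in the cluster spec) shows why {user}:a and {user}:b can share a MULTI while a and b cannot:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant Redis Cluster uses for key slots."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster slots, honoring {...} hash tags."""
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end != -1 and end != start + 1:   # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# 'a' and 'b' land in different slots (15495 and 3300, matching the
# redirects in the transcript above), so they cannot share a transaction.
print(key_slot('a'), key_slot('b'))      # → 15495 3300
# Both tagged keys hash only on 'user', so a MULTI over both is legal.
print(key_slot('{user}:a') == key_slot('{user}:b'))   # → True
```

With hash tags in place, every key of the transaction resolves to one node and no mid-MULTI redirect occurs.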

Related

Understanding transactions and WATCH

I am reading https://redis.io/docs/manual/transactions/, and I am not sure I fully understand the importance of WATCH in a Redis transaction. From https://redis.io/docs/manual/transactions/#usage, I am led to believe EXEC is atomic:
> MULTI
OK
> INCR foo
QUEUED
> INCR bar
QUEUED
> EXEC
1) (integer) 1
2) (integer) 1
But between MULTI and EXEC, the values at keys foo and bar could have been changed (e.g. in response to a request from another client). Is that why WATCH is useful? To error out the transaction if e.g. foo or bar changed after MULTI and before EXEC?
Do I have it correct? Anything I am missing?
I am led to believe EXEC is atomic
YES
Is that why WATCH is useful? To error out the transaction if e.g. foo or bar changed after MULTI and before EXEC?
NOT exactly. Redis fails the transaction if any key the client WATCHed before the transaction has been changed (after WATCH and before EXEC). The watched key might not even be used in the transaction.
MULTI-EXEC ensures the commands between them run atomically, while WATCH provides check-and-set (CAS) behavior.
With MULTI-EXEC, Redis runs INCR foo and INCR bar atomically; without it, Redis might run other clients' commands between INCR foo and INCR bar (commands that may or may not change foo or bar). If you WATCH some key (foo, bar, or even an unrelated key) before the transaction, and the watched key has been modified since the WATCH command, the transaction fails.
UPDATE
The doc you referred to has a good example of using WATCH: implementing INCR with GET and SET. To implement INCR, you need to 1) GET the old counter from Redis, 2) increment it on the client side, and 3) SET the new value back to Redis. For this to be correct, you must ensure no other client changes the counter after you run step 1; otherwise you might write an incorrect value and should fail the transaction. For example, if two clients both GET the old counter, say 1, and increment it locally, client 1 updates it to 2 and then client 2 updates it to 2 again, although the counter should be 3. In this case, WATCH is helpful.
WATCH mykey
val = GET mykey
val = val + 1
MULTI
SET mykey $val
EXEC
If some other client updates the counter after we WATCH it, the transaction fails.
When do keys stop being watched?
EXEC is called.
UNWATCH is called.
DISCARD is called.
connection is closed.
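To make the CAS behavior concrete, here is a toy in-memory model of WATCH/MULTI/EXEC (not a real client; per-key version counters stand in for Redis's internal modification tracking). EXEC applies the queued writes only if no watched key changed since WATCH:

```python
class MiniRedis:
    """Toy in-memory key store that tracks a version per key."""
    def __init__(self):
        self.data = {}
        self.versions = {}   # bumped on every write

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value
        self.versions[key] = self.versions.get(key, 0) + 1

class Transaction:
    """Models WATCH / queued commands / EXEC against a MiniRedis."""
    def __init__(self, server):
        self.server = server
        self.watched = {}    # key -> version seen at WATCH time
        self.queued = []

    def watch(self, key):
        self.watched[key] = self.server.versions.get(key, 0)

    def queue_set(self, key, value):
        self.queued.append((key, value))

    def exec(self):
        # EXEC aborts (returns None) if any watched key changed since WATCH.
        for key, version in self.watched.items():
            if self.server.versions.get(key, 0) != version:
                return None
        for key, value in self.queued:
            self.server.set(key, value)
        return len(self.queued)

server = MiniRedis()
server.set('foo', 1)

tx = Transaction(server)
tx.watch('foo')
tx.queue_set('foo', server.get('foo') + 1)   # GET, incr locally, queue SET

server.set('foo', 100)   # another client writes foo before EXEC
print(tx.exec())         # → None: the transaction is aborted
```

A client seeing the aborted EXEC would simply retry the whole WATCH/GET/MULTI/EXEC loop.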

Redis Gears events in cluster

I have a redis cluster with the following configuration :
91d426e9a569b1c1ad84d75580607e3f99658d30 127.0.0.1:7002#17002 myself,master - 0 1596197488000 1 connected 0-5460
9ff311ae9f413b48578ff0519e97fef2ced57b1e 127.0.0.1:7003#17003 master - 0 1596197490000 2 connected 5461-10922
4de4d36b968bd0b5b5dc8023cb00a5a2ab62effc 127.0.0.1:7004#17004 master - 0 1596197492253 3 connected 10923-16383
a32088043c31c5d3f20828bfe06306b9f0717635 127.0.0.1:7005#17005 slave 91d426e9a569b1c1ad84d75580607e3f99658d30 0 1596197490251 1 connected
b5e9ec7851dfd8dc5ab0cf35c230a0e716dd934c 127.0.0.1:7006#17006 slave 9ff311ae9f413b48578ff0519e97fef2ced57b1e 0 1596197489000 2 connected
a34cc74321e1c75e4cf203248bc0883833c928c7 127.0.0.1:7007#17007 slave 4de4d36b968bd0b5b5dc8023cb00a5a2ab62effc 0 1596197492000 3 connected
I want to create a set containing all keys in the cluster, by listening to key operations with RedisGears and storing the key names in a Redis set called keys.
To do that, I run this RedisGears command:
RG.PYEXECUTE "GearsBuilder('KeysReader').foreach(lambda x: execute('sadd', 'keys', x['key'])).register(readValue=False)"
It works, but only if the updated key is stored on the same node as the key keys.
Example :
With my cluster configuration, the key keys is stored on node 91d426e9a569b1c1ad84d75580607e3f99658d30 (the first node).
If I run:
SET foo bar
SET bar foo
SMEMBERS keys
I have the following result :
127.0.0.1:7002> SET foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7004
OK
127.0.0.1:7004> SET bar foo
-> Redirected to slot [5061] located at 127.0.0.1:7002
OK
127.0.0.1:7002> SMEMBERS keys
1) "bar"
2) "keys"
127.0.0.1:7002>
The first key name foo is not saved in the set keys.
Is it possible to have key names from other nodes saved in the keys set with RedisGears?
Redis version : 6.0.6
Redis gears version : 1.0.1
Thanks.
If the key was written to a shard that does not contain the 'keys' key, you need to move the record to that shard with the repartition operation (https://oss.redislabs.com/redisgears/operations.html#repartition). So this should work:
RG.PYEXECUTE "GearsBuilder('KeysReader').repartition(lambda x: 'keys').foreach(lambda x: execute('sadd', 'keys', x['key'])).register(readValue=False)"
The repartition operation will move the record to the correct shard and the 'sadd' will succeed.
Another option is to maintain a set per shard and collect them using another Gear function. To do that you need to use the hashtag function (https://oss.redislabs.com/redisgears/runtime.html#hashtag) to make sure the set created belongs to the current shard. So the following registration will maintain a set per shard:
RG.PYEXECUTE "GearsBuilder('KeysReader').foreach(lambda x: execute('sadd', 'keys{%s}' % hashtag(), x['key'])).register(mode='sync', readValue=False)"
Notice that sync mode tells RedisGears not to start a distributed execution; tracking the keys this way should be much faster.
Then to collect all the values:
RG.PYEXECUTE "GB('ShardsIDReader').flatmap(lambda x: execute('smembers', 'keys{%s}' % hashtag())).run()"
The first approach is good for read-intensive use cases and the second for write-intensive ones. Choose the one that fits your use case.

How to set TTL for race conditions in Redis cache

I am using Redis for cache in my application which is configured in spring beans, spring-data-redis 1.7.1, jedis 2.9.0.
I would like to know how to set the race condition ttl in the configuration.
Please comment if you have any suggestions.
If I understand you right, you want the same as that Ruby repo, but in Java.
For that case you may want to put a technical lock key alongside the one you need:
get yourkey
(nil)
get <yourkey>::lock
// if (nil) then calculate, if t then wait. assuming (nil) here
setex <yourkey>::lock 30 t
OK
// calculations
set <yourkey> <result>
OK
del <yourkey>::lock
(integer) 1
Here with setex you set a lock key with TTL of 30 sec. You can put another TTL if you want.
There is one problem with the code above: time passes between checking the lock and acquiring it, so two clients can both see no lock and both take it. To check and acquire atomically, EVAL can be used (note the TTL is passed as an argument rather than a key):
eval "local lk=KEYS[1]..'::lock' local lock=redis.call('get',lk) if (lock==false) then redis.call('setex',lk,ARGV[1],'t') return 1 else return 0 end" 1 <yourkey> 30
This either returns 0 if the lock is already held, or takes the lock and returns 1.
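The check-and-acquire logic can be sketched in plain Python (a toy single-threaded model, not redis-py; the dict maps lock keys to absolute expiry times, standing in for SETEX, and the whole function corresponds to one atomic script because Redis runs Lua without interleaving other commands):

```python
import time

def try_acquire(store, key, ttl_seconds, now=None):
    """Model of the EVAL lock script: True if acquired, False if held."""
    if now is None:
        now = time.monotonic()
    lock_key = key + '::lock'
    expiry = store.get(lock_key)
    if expiry is not None and expiry > now:
        return False                        # a live lock is held by someone else
    store[lock_key] = now + ttl_seconds     # acquire with expiry (SETEX equivalent)
    return True

store = {}
print(try_acquire(store, 'yourkey', 30, now=0.0))    # → True: lock taken
print(try_acquire(store, 'yourkey', 30, now=10.0))   # → False: still held
print(try_acquire(store, 'yourkey', 30, now=31.0))   # → True: previous lock expired
```

In Redis itself the same effect is also available without Lua via SET <yourkey>::lock t NX EX 30, which sets the key only if it does not already exist.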

redis cluster: delete keys from smembers in lua script

The function below deletes the keys listed in a set; the keys themselves are not passed as EVAL arguments. Is this proper in a Redis cluster?
def ClearLock():
    key = 'Server:' + str(localIP) + ':UserLock'
    script = '''
    local keys = redis.call('smembers', KEYS[1])
    local count = 0
    for k, v in pairs(keys) do
        redis.call('del', v)
        count = count + 1
    end
    redis.call('del', KEYS[1])
    return count
    '''
    ret = redisObj.eval(script, 1, key)
    return ret
You're right to be worried about using keys that aren't passed as EVAL arguments.
Redis Cluster won't guarantee that those keys are present on the node that's running the Lua script, and some of those DEL commands will fail as a result.
One thing you can do is mark all those keys with a common hash tag. This guarantees that, whenever node rebalancing isn't in progress, keys with the same hash tag will be present on the same node. See the section on hash tags in the Redis cluster spec: http://redis.io/topics/cluster-spec
(When cluster node rebalancing is in progress this script can still fail, so you'll need to figure out how you want to handle that.)
Perhaps add the local ip for all entries in that set as the hash tag. The main key could become:
key = 'Server:{' + str(localIP) + '}:UserLock'
Adding the {} around the IP in the string makes Redis treat that substring as the hash tag.
You would also need to add that same hash tag, the local IP wrapped in {}, as part of the key for every entry you are going to later delete as part of this operation.

Nested multi-bulk replies in Redis

In the redis protocol specification, under the "Multi-bulk replies section":
A Multi bulk reply is used to return an array of other replies. Every element of a Multi Bulk Reply can be of any kind, including a nested Multi Bulk Reply.
However, I can't figure out a way to get Redis to return such output. Can anyone provide an example?
Only certain commands (especially those returning a list of values) return multi-bulk replies. You can try LRANGE, for example; check the command reference for details on each command's reply type.
Usually multi-bulk replies are only 1-level deep but some Redis commands can return nested multi-bulk replies (max 2 levels), notably EXEC (depending on the commands executed while inside the transaction context) and both EVAL / EVALSHA (depending on the value returned by the Lua script).
Here is an example using EXEC:
redis 127.0.0.1:6379> MULTI
OK
redis 127.0.0.1:6379> LPUSH metavars foo foobar hoge
QUEUED
redis 127.0.0.1:6379> LRANGE metavars 0 -1
QUEUED
redis 127.0.0.1:6379> EXEC
1) (integer) 4
2) 1) "hoge"
2) "foobar"
3) "foo"
4) "metavars"
The second element of the multi-bulk reply to EXEC is a multi-bulk itself.
PS: I added a clarification in the comments regarding the actual maximum level of nesting of multi-bulk replies when using Lua scripts. tl;dr: there's basically no limit.
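A minimal example using EVAL is also possible, since a nested Lua table is converted into a nested multi-bulk reply (the transcript below is a sketch of what redis-cli shows for such a script):

```
redis 127.0.0.1:6379> EVAL "return {1, {2, 3}}" 0
1) (integer) 1
2) 1) (integer) 2
   2) (integer) 3
```

Here the inner Lua table {2, 3} becomes the nested multi-bulk in the second position of the outer reply.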