Redis cluster: delete keys from SMEMBERS in a Lua script

The function below deletes the keys returned by SMEMBERS, but those keys are not passed to EVAL as arguments. Is this safe in a Redis cluster?
def ClearLock():
    # localIP and redisObj are defined elsewhere in the application
    key = 'Server:' + str(localIP) + ':UserLock'
    script = '''
    local keys = redis.call('smembers', KEYS[1])
    local count = 0
    for k, v in pairs(keys) do
        redis.call('del', v)
        count = count + 1
    end
    redis.call('del', KEYS[1])
    return count
    '''
    ret = redisObj.eval(script, 1, key)

You're right to be worried about keys that aren't passed as EVAL arguments.
Redis Cluster doesn't guarantee that those keys live on the node running the Lua script, so some of those DEL calls will fail as a result.
One thing you can do is mark all of those keys with a common hash tag. This guarantees that, whenever node rebalancing isn't in progress, keys with the same hash tag are stored on the same node. See the section on hash tags in the Redis cluster spec: http://redis.io/topics/cluster-spec
(While cluster rebalancing is in progress this script can still fail, so you'll need to decide how you want to handle that.)
Perhaps use the local IP as the hash tag for every entry in that set. The main key could become:
key = 'Server:{' + str(localIP) + '}:UserLock'
Wrapping the IP in {} makes Redis treat it as the hash tag.
You would also need to include that same hash tag, {localIP}, in the key of every entry you are going to later delete as part of this operation.
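For example, a minimal sketch of how the member keys might share the main key's hash tag so the script above can delete them on one node (the key names, IP, and connection details here are assumptions, not from the original question):

import redis

# Assumed single-node connection for illustration; in a real cluster use a
# cluster-aware client. Only the {hash tag} pattern matters here.
r = redis.Redis(host='127.0.0.1', port=6379)
local_ip = '10.0.0.5'

main_key = 'Server:{' + local_ip + '}:UserLock'
user_lock = 'UserLock:{' + local_ip + '}:42'  # member key carries the same tag

# Both keys hash to the same slot, so the Lua script can DEL them safely.
r.sadd(main_key, user_lock)
r.set(user_lock, 'locked')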

Related

Redis LUA script eval flags: user_script:1: unexpected symbol near '#'

I have a Lua script that fetches a SET and iterates through its members. This set contains the names of other SETs (which I'll call the secondary sets for clarity).
For context, the secondary sets are lists of cache keys. The majority of those cache keys reference cache items that no longer exist, and the sets keep growing, so my goal is to check each cache key within each secondary set, see whether it has expired, and if so remove it from the secondary set, reducing the size and conserving memory.
local pending = redis.call('SMEMBERS', KEYS[1])
for i, key in ipairs(pending) do
    redis.call('SREM', KEYS[1], key)
    local keys = redis.call('SMEMBERS', key)
    local expired = {}
    for i, taggedKey in ipairs(keys) do
        local ttl = redis.call('ttl', taggedKey)
        if ttl == -2 then
            table.insert(expired, taggedKey)
        end
    end
    if #expired > 0 then
        redis.call('SREM', key, unpack(expired))
    end
end
The script above works perfectly until one of the secondary set keys falls into a different hash slot, at which point I receive the error:
Lua script attempted to access keys of different hash slots
Looking through the docs, I noticed that Redis allows this to be bypassed with the EVAL flag allow-cross-slot-keys, so following that example I updated my script to the following:
#!lua flags=allow-cross-slot-keys
local pending = redis.call('SMEMBERS', KEYS[1])
for i, key in ipairs(pending) do
    redis.call('SREM', KEYS[1], key)
    local keys = redis.call('SMEMBERS', key)
    local expired = {}
    for i, taggedKey in ipairs(keys) do
        local ttl = redis.call('ttl', taggedKey)
        if ttl == -2 then
            table.insert(expired, taggedKey)
        end
    end
    if #expired > 0 then
        redis.call('SREM', key, unpack(expired))
    end
end
I'm now strangely left with the error:
"ERR Error compiling script (new function): user_script:1: unexpected symbol near '#'"
Any help appreciated; I have reached out to the pocket burner that is Redis Enterprise but am still awaiting a response.
For awareness, this is to clean up the shoddy Laravel Redis implementation, where Laravel creates these sets to manage tagged cache but never cleans them up. Over time they amount to gigabytes of wasted space, and if your eviction policy is allkeys-lfu, all of your real cache will be pushed out in favour of this messy garbage Laravel leaves behind, leaving you with a worthless caching system or thousands of dollars out of pocket to increase RAM.
Edit: It would seem we're on Redis 6.2, and those flags are Redis 7+. It's unlikely there is a solution suitable for 6.2, but if there is, please let me know.

Redis Gears events in cluster

I have a Redis cluster with the following configuration:
91d426e9a569b1c1ad84d75580607e3f99658d30 127.0.0.1:7002@17002 myself,master - 0 1596197488000 1 connected 0-5460
9ff311ae9f413b48578ff0519e97fef2ced57b1e 127.0.0.1:7003@17003 master - 0 1596197490000 2 connected 5461-10922
4de4d36b968bd0b5b5dc8023cb00a5a2ab62effc 127.0.0.1:7004@17004 master - 0 1596197492253 3 connected 10923-16383
a32088043c31c5d3f20828bfe06306b9f0717635 127.0.0.1:7005@17005 slave 91d426e9a569b1c1ad84d75580607e3f99658d30 0 1596197490251 1 connected
b5e9ec7851dfd8dc5ab0cf35c230a0e716dd934c 127.0.0.1:7006@17006 slave 9ff311ae9f413b48578ff0519e97fef2ced57b1e 0 1596197489000 2 connected
a34cc74321e1c75e4cf203248bc0883833c928c7 127.0.0.1:7007@17007 slave 4de4d36b968bd0b5b5dc8023cb00a5a2ab62effc 0 1596197492000 3 connected
I want to create a set containing all keys in the cluster by listening to key operations with RedisGears and storing the key names in a Redis set called keys.
To do that, I run this RedisGears command:
RG.PYEXECUTE "GearsBuilder('KeysReader').foreach(lambda x: execute('sadd', 'keys', x['key'])).register(readValue=False)"
It works, but only if the updated key is stored on the same node as the key keys.
Example:
With my cluster configuration, the key keys is stored on node 91d426e9a569b1c1ad84d75580607e3f99658d30 (the first node).
If I run:
SET foo bar
SET bar foo
SMEMBERS keys
I get the following result:
127.0.0.1:7002> SET foo bar
-> Redirected to slot [12182] located at 127.0.0.1:7004
OK
127.0.0.1:7004> SET bar foo
-> Redirected to slot [5061] located at 127.0.0.1:7002
OK
127.0.0.1:7002> SMEMBERS keys
1) "bar"
2) "keys"
127.0.0.1:7002>
The first key name foo is not saved in the set keys.
Is it possible to have key names from other nodes saved in the keys set with RedisGears?
Redis version : 6.0.6
Redis gears version : 1.0.1
Thanks.
If the key was written to a shard that does not contain the 'keys' key, you need to move the record to that shard with the repartition operation (https://oss.redislabs.com/redisgears/operations.html#repartition), so this should work:
RG.PYEXECUTE "GearsBuilder('KeysReader').repartition(lambda x: 'keys').foreach(lambda x: execute('sadd', 'keys', x['key'])).register(readValue=False)"
The repartition operation will move the record to the correct shard and the 'sadd' will succeed.
Another option is to maintain a set per shard and collect them using another Gear function. To do that you need to use the hashtag function (https://oss.redislabs.com/redisgears/runtime.html#hashtag) to make sure the set created belongs to the current shard. So the following registration will maintain a set per shard:
RG.PYEXECUTE "GearsBuilder('KeysReader').foreach(lambda x: execute('sadd', 'keys{%s}' % hashtag(), x['key'])).register(mode='sync', readValue=False)"
Notice that sync mode tells RedisGears not to start a distributed execution; it should be much faster to track the keys this way.
Then to collect all the values:
RG.PYEXECUTE "GB('ShardsIDReader').flatmap(lambda x: execute('smembers', 'keys{%s}' % hashtag())).run()"
The first approach is good for read-intensive use cases and the second approach is good for write-intensive use cases. Depending on your use case, choose the right approach.
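If you register the Gear from an application rather than redis-cli, a minimal redis-py sketch might look like this (the host/port are assumptions; the Gear function is the one from the first approach above, sent as a raw command because redis-py has no dedicated RedisGears helper):

import redis

# Assumed cluster node with the RedisGears module loaded.
r = redis.Redis(host='127.0.0.1', port=7002)

gear = (
    "GearsBuilder('KeysReader')"
    ".repartition(lambda x: 'keys')"
    ".foreach(lambda x: execute('sadd', 'keys', x['key']))"
    ".register(readValue=False)"
)
print(r.execute_command('RG.PYEXECUTE', gear))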

Is MULTI supposed to work on Redis clustered?

I'm using Redis on a clustered db (locally). I'm trying the MULTI command, but it seems that it is not working. Individual commands work and I can see how the shard moves.
Is there anything else I should be doing to make MULTI work? The documentation is unclear about whether or not it should work. https://redis.io/topics/cluster-spec
In the example below I just set individual keys (note how the port/node changes), then try a MULTI command. The commands execute before EXEC is called:
127.0.0.1:30001> set a 1
-> Redirected to slot [15495] located at 127.0.0.1:30003
OK
127.0.0.1:30003> set b 2
-> Redirected to slot [3300] located at 127.0.0.1:30001
OK
127.0.0.1:30001> MULTI
OK
127.0.0.1:30001> HSET c f val
-> Redirected to slot [7365] located at 127.0.0.1:30002
(integer) 1
127.0.0.1:30002> HSET c f2 val2
(integer) 1
127.0.0.1:30002> EXEC
(error) ERR EXEC without MULTI
127.0.0.1:30002> HGET c f
"val"
127.0.0.1:30002>
MULTI transactions, as well as any multi-key operations, are supported only within a single hash slot in a clustered Redis deployment.
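As an illustration, a minimal redis-py sketch (connection details and key names are assumptions): giving the keys a common hash tag maps them to the same slot, so a MULTI/EXEC pipeline against the node that owns that slot succeeds.

import redis

# Assumed: 127.0.0.1:30002 is the node that owns the slot for hash tag {c}.
r = redis.Redis(host='127.0.0.1', port=30002)

# transaction=True wraps the queued commands in MULTI ... EXEC.
with r.pipeline(transaction=True) as pipe:
    pipe.hset('{c}:hash', 'f', 'val')
    pipe.hset('{c}:hash', 'f2', 'val2')
    print(pipe.execute())  # e.g. [1, 1] on the first run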

Is there any ways to evict keys from Redis just after accessing it?

As per this answer: https://stackoverflow.com/a/17099452/8804776
"You might not know it, but Redis is actually single-threaded, which
is how every command is guaranteed to be atomic. While one command is
executing, no other command will run."
Redis is single-threaded. My requirement is to store a key in Redis and have it evicted as soon as a thread accesses it.
eg:
HSET bucket-1 name justin
Threads A and B access the same key:
HGET bucket-1 name
Only one thread should get the data at any given point.
Are there any particular settings I can use to achieve this?
The term "eviction" refers to keys that have an expiry set (TTL). While there is no dedicated command to achieve what you want, you can use a transaction such as:
WATCH bucket-1
HGET bucket-1 name
(pseudo: if not nil)
MULTI
HDEL bucket-1 name
EXEC
If the EXEC fails it means you're in thread B (assuming that A got there first).
Alternatively, the above can be compacted into an idiomatic Lua script - as suggested by @The_Dude - such as (newlines added for readability):
EVAL "local v=redis.call('HGET', KEYS[1], ARGV[1])
redis.call('HDEL', KEYS[1], ARGS[1])
return v"
1 bucket-1 name
A nil reply means you're B.
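For completeness, a minimal redis-py sketch of the same atomic read-and-delete (connection details are assumptions):

import redis

r = redis.Redis(host='localhost', port=6379)

# Atomically read and delete one hash field: only the first caller sees the
# value; every later caller gets None.
pop_field = r.register_script("""
local v = redis.call('HGET', KEYS[1], ARGV[1])
redis.call('HDEL', KEYS[1], ARGV[1])
return v
""")

value = pop_field(keys=['bucket-1'], args=['name'])  # b'justin' or None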
There isn't a command to do that with hashes. You could use a Lua script to handle this.
You could also use GETSET instead, where you can reset a key to a value that denotes it has been used by another consumer.

How do I delete everything in Redis?

I want to delete all keys. I want everything wiped out and give me a blank database.
Is there a way to do this in Redis client?
With redis-cli:
FLUSHDB – Deletes all keys from the connection's current database.
FLUSHALL – Deletes all keys from all databases.
For example, in your shell:
redis-cli flushall
Heads up that FLUSHALL may be overkill. FLUSHDB is the one to flush a database only. FLUSHALL will wipe out the entire server. As in every database on the server. Since the question was about flushing a database I think this is an important enough distinction to merit a separate answer.
Answers so far are absolutely correct; they delete all keys.
However, if you also want to delete all Lua scripts from the Redis instance, you should follow it by:
SCRIPT FLUSH
The OP asks two questions; this completes the second question (everything wiped).
FLUSHALL
Remove all keys from all databases
FLUSHDB
Remove all keys from the current database
SCRIPT FLUSH
Remove all the scripts from the script cache.
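If you are wiping from an application instead of the CLI, a minimal redis-py sketch covering both steps (connection details are assumptions):

import redis

r = redis.Redis(host='localhost', port=6379)
r.flushall()      # remove all keys from every database
r.script_flush()  # remove all cached Lua scripts as well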
If you're using the redis-rb gem then you can simply call:
your_redis_client.flushdb
you can use flushall in your terminal
redis-cli> flushall
This method worked for me: it flushes every node of your JedisCluster.
public static void resetRedis() {
    jedisCluster = RedisManager.getJedis(); // your JedisCluster instance
    for (JedisPool pool : jedisCluster.getClusterNodes().values()) {
        try (Jedis jedis = pool.getResource()) {
            jedis.flushAll();
        } catch (Exception ex) {
            System.out.println(ex.getMessage());
        }
    }
}
One more option from my side:
In our production and pre-production databases there are thousands of keys. From time to time we need to delete some keys (by some mask), modify them by some criteria, etc. Of course, there is no way to do it manually from the CLI, especially with sharding (512 logical databases in each physical one).
For this purpose I wrote a Java client tool that does all this work. For key deletion the utility can be very simple, just one class:
public class DataCleaner {

    public static void main(String args[]) {
        String keyPattern = args[0];
        String host = args[1];
        int port = Integer.valueOf(args[2]);
        int dbIndex = Integer.valueOf(args[3]);
        Jedis jedis = new Jedis(host, port);
        int deletedKeysNumber = 0;
        if (dbIndex >= 0) {
            deletedKeysNumber += deleteDataFromDB(jedis, keyPattern, dbIndex);
        } else {
            int dbSize = Integer.valueOf(jedis.configGet("databases").get(1));
            for (int i = 0; i < dbSize; i++) {
                deletedKeysNumber += deleteDataFromDB(jedis, keyPattern, i);
            }
        }
        if (deletedKeysNumber == 0) {
            System.out.println("No keys matching pattern: " + keyPattern + " were found in database with host: " + host);
        }
    }

    private static int deleteDataFromDB(Jedis jedis, String keyPattern, int dbIndex) {
        jedis.select(dbIndex);
        Set<String> keys = jedis.keys(keyPattern);
        for (String key : keys) {
            jedis.del(key);
            System.out.println("The key: " + key + " has been deleted from database index: " + dbIndex);
        }
        return keys.size();
    }
}
I find writing this kind of tool very easy, and it takes no more than 5-10 minutes.
Use FLUSHALL ASYNC if you are on Redis 4.0.0 or greater, otherwise FLUSHALL.
https://redis.io/commands/flushall
Note: only the keys that exist when FLUSHALL ASYNC is executed are deleted; keys created while the asynchronous flush is running are unaffected.
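With redis-py, the asynchronous variant can be requested via a flag; a minimal sketch (connection details are assumptions):

import redis

r = redis.Redis(host='localhost', port=6379)
# Requires Redis >= 4.0; the server performs the flush in the background.
r.flushall(asynchronous=True)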
FLUSHALL deletes all the keys of all existing databases.
For Redis versions >= 4.0, FLUSHALL ASYNC is supported, which runs in a background thread without blocking the server.
https://redis.io/commands/flushall
FLUSHDB - deletes all the keys in the selected database.
https://redis.io/commands/flushdb
The time complexity of these operations is O(N), with N being the number of keys in the database.
The response from Redis will be the simple string "OK".
Open redis-cli and type:
FLUSHALL
You can use FLUSHALL, which will delete all keys from every database, whereas FLUSHDB will delete all keys from the current database.
Stop Redis instance.
Delete RDB file.
Start Redis instance.
redis-cli -h <host> -p <port> flushall
It will remove all data from the instance the client is connected to (with the given host and port).
After you start the Redis server using service redis-server start --port 8000 or redis-server, use redis-cli -p 8000 to connect to the server as a client in a different terminal.
You can use either
FLUSHDB - Delete all the keys of the currently selected DB. This command never fails. The time-complexity for this operation is O(N), N being the number of keys in the database.
FLUSHALL - Delete all the keys of all the existing databases, not just the currently selected one. This command never fails. The time-complexity for this operation is O(N), N being the number of keys in all existing databases.
Check the documentation for ASYNC option for both.
If you are using Redis through its python interface, use these two functions for the same functionality:
def flushall(self):
    "Delete all keys in all databases on the current host"
    return self.execute_command('FLUSHALL')
and
def flushdb(self):
    "Delete all keys in the current database"
    return self.execute_command('FLUSHDB')
Sometimes you can also stop the redis-server and delete the RDB and AOF files, making sure there is no data that can be reloaded.
Then start the redis-server again; it is now new and empty.
You can use FLUSHDB, e.g.:
List databases:
127.0.0.1:6379> info keyspace
# Keyspace
List keys
127.0.0.1:6379> keys *
(empty list or set)
Add one value to a key
127.0.0.1:6379> lpush key1 1
(integer) 1
127.0.0.1:6379> keys *
1) "key1"
127.0.0.1:6379> info keyspace
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
Create other key with two values
127.0.0.1:6379> lpush key2 1
(integer) 1
127.0.0.1:6379> lpush key2 2
(integer) 2
127.0.0.1:6379> keys *
1) "key1"
2) "key2"
127.0.0.1:6379> info keyspace
# Keyspace
db0:keys=2,expires=0,avg_ttl=0
List all values in key2
127.0.0.1:6379> lrange key2 0 -1
1) "2"
2) "1"
Do FLUSHDB
127.0.0.1:6379> flushdb
OK
List keys and databases
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> info keyspace
# Keyspace
Your question seems to be about deleting all the keys in a database. In this case you should try:
Connect to redis. You can use the command redis-cli (if running on port 6379), else you will have to specify the port number also.
Select your database (command select {Index})
Execute the command flushdb
If you want to flush keys in all databases, then you should try flushall.
You can use the following approach in Python:
def redis_clear_cache(self):
    try:
        redis_keys = self.redis_client.keys('*')
    except Exception as e:
        # print('redis_client.keys() raised exception => ' + str(e))
        return 1
    try:
        if len(redis_keys) != 0:
            self.redis_client.delete(*redis_keys)
    except Exception as e:
        # print('redis_client.delete() raised exception => ' + str(e))
        return 1
    # print("cleared cache")
    return 0
This works for me: redis-cli KEYS \* | xargs --max-procs=16 -L 100 redis-cli DEL
It lists all keys in Redis, then passes them via xargs to redis-cli DEL, using at most 100 keys per command but running 16 commands at a time. This is very fast and useful when FLUSHDB or FLUSHALL are unavailable for security reasons, for example when using Redis from Bitnami in Docker or Kubernetes. Also, it doesn't require any additional programming language, and it's just one line.
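A related sketch, not from the original answer: if KEYS is also disabled or the keyspace is very large, a non-blocking variant can iterate with SCAN and delete in batches (connection details are assumptions):

import redis

r = redis.Redis(host='localhost', port=6379)

# SCAN-based deletion: walks the keyspace incrementally instead of blocking
# the server with a single KEYS * call.
batch = []
for key in r.scan_iter(match='*', count=100):
    batch.append(key)
    if len(batch) >= 100:
        r.delete(*batch)
        batch = []
if batch:
    r.delete(*batch)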
If you want to clear Redis on Windows:
find redis-cli in
C:\Program Files\Redis
and run the FLUSHALL command.
It's better if you have RDM (Redis Desktop Manager).
You can connect to your Redis server by creating a new connection in RDM.
Once it's connected you can inspect the live data, and you can also play around with any Redis command.
Opening a CLI in RDM:
1) Right-click on the connection and you will see a Console option; click it and a new console window will open at the bottom of RDM.
Coming back to your question, FLUSHALL is the command; you can simply type FLUSHALL in the Redis CLI.
Moreover, if you want to know about any Redis command and its proper usage, go to the link below.
https://redis.io/commands.
There are different approaches. If you want to do this remotely, issue FLUSHALL to that instance through the command-line tool redis-cli or whatever other tool you like, e.g. telnet or a programming-language SDK. Or just log in to that server, kill the process, and delete its dump.rdb and appendonly.aof files (back them up before deletion).
If you are using Java then, per the documentation, you can use any one of these based on your use case.
/**
 * Remove all keys from all databases.
 *
 * @return String simple-string-reply
 */
String flushall();

/**
 * Remove all keys asynchronously from all databases.
 *
 * @return String simple-string-reply
 */
String flushallAsync();

/**
 * Remove all keys from the current database.
 *
 * @return String simple-string-reply
 */
String flushdb();

/**
 * Remove all keys asynchronously from the current database.
 *
 * @return String simple-string-reply
 */
String flushdbAsync();
Code:
RedisAdvancedClusterCommands syncCommands = // get sync() or async() commands
syncCommands.flushdb();
Read more: https://github.com/lettuce-io/lettuce-core/wiki/Redis-Cluster
For anyone wondering how to do this in C#, it's the same as the answer provided for Python for this same question.
I am using StackExchange.Redis v2.2.88 for a dot net (core) 5 project. I only need to clear my keys for integration testing and I have no purpose to do this in production.
I checked what is available in intellisense and I don't see a stock way to do this with the existing API. I imagine this is intentional and by design. Luckily the API does expose an Execute method.
I tested this by doing the following:
Opened up a command window. I am using docker, so I did it through docker.
Type in redis-cli which starts the CLI
Type in KEYS * and it shows me all of my keys so I can verify they exist before and after executing the following code below:
// Don't abuse this, use with caution
var cache = ConnectionMultiplexer.Connect(
    new ConfigurationOptions
    {
        EndPoints = { "localhost:6379" }
    });
var db = cache.GetDatabase();
db.Execute("flushdb");
Type in KEYS * again and verify that it's empty.
Hope this helps anyone looking for it.