I am reading https://redis.io/docs/manual/transactions/, and I am not sure I fully understand the importance of WATCH in a Redis transaction. From https://redis.io/docs/manual/transactions/#usage, I am led to believe EXEC is atomic:
> MULTI
OK
> INCR foo
QUEUED
> INCR bar
QUEUED
> EXEC
1) (integer) 1
2) (integer) 1
But between MULTI and EXEC, the values at keys foo and bar could have been changed (e.g. in response to a request from another client). Is that why WATCH is useful? To error out the transaction if e.g. foo or bar changed after MULTI and before EXEC?
Do I have it correct? Anything I am missing?
I am led to believe EXEC is atomic
YES
Is that why WATCH is useful? To error out the transaction if e.g. foo or bar changed after MULTI and before EXEC?
Not exactly. Redis fails the transaction if any key the client WATCHed before the transaction has been changed (after WATCH and before EXEC). The watched key does not even have to be used in the transaction.
MULTI-EXEC ensures the commands between them run atomically, while WATCH provides check-and-set (CAS) behavior.
With MULTI-EXEC, Redis guarantees that INCR foo and INCR bar run atomically; otherwise, Redis might run some other commands between INCR foo and INCR bar (and those commands may or may not change foo or bar). If you WATCH some key (foo, bar, or even any other key) before the transaction, and the watched key has been modified since the WATCH command, the transaction fails.
UPDATE
The doc you referred to has a good example of using WATCH: implementing INCR with GET and SET. To implement INCR, you need to 1) GET the old counter from Redis, 2) increment it on the client side, and 3) SET the new value in Redis. To make this work, you need to ensure no other client updates the counter after you run step 1; otherwise you would write an incorrect value, and the transaction should fail. For example, two clients both GET the old counter, i.e. 1, and increment it locally; client 1 updates it to 2, then client 2 updates it to 2 again (when the counter should be 3). In this case, WATCH is helpful.
WATCH mykey
val = GET mykey
val = val + 1
MULTI
SET mykey $val
EXEC
If some other client updates the counter after we WATCH it, the transaction fails.
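In a client library this pattern is typically wrapped in a retry loop. A minimal sketch with redis-py, assuming a local Redis and the key mykey from the example above (the helper name incr_with_watch is made up):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def incr_with_watch(key):
    with r.pipeline() as pipe:
        while True:
            try:
                pipe.watch(key)                 # WATCH mykey
                val = int(pipe.get(key) or 0)   # GET mykey (runs immediately while watching)
                pipe.multi()                    # MULTI
                pipe.set(key, val + 1)          # SET mykey $val
                pipe.execute()                  # EXEC; raises WatchError if mykey changed
                return val + 1
            except redis.WatchError:
                continue                        # another client touched the key, try again

incr_with_watch('mykey')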
When are keys no longer watched?
EXEC is called.
UNWATCH is called.
DISCARD is called.
The connection is closed.
SPOP [set] [count] was introduced in Redis v3.2 (https://redis.io/commands/spop), but my Redis version is 2.7.
How can I atomically pop several items from a SET using cli commands?
Is it possible to do something like...?
MULTI
a = SPOP myset //It would be nice if I could store this in a variable?
b = SPOP myset
...
SREM a
SREM b
...
EXEC
Yes, MULTI combined with a sequence of SPOPs would return the results as part of the EXEC call:
each element being the reply to each of the commands in the atomic transaction
source: https://redis.io/commands/exec
MULTI
SPOP myset
SPOP myset
EXEC
As an alternative, you could also use a Lua script, since EVAL was introduced in Redis 2.6: this would allow you to use variables (scoped to the script itself, which runs inside the Redis process) and the like, but it may be more complex and possibly overkill for your scenario.
As a side note, SPOP is already removing items, so there is no need to SREM them.
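If you are driving this from a client library instead of redis-cli, the same MULTI/SPOP idea maps onto a transactional pipeline. A minimal redis-py sketch (the set name myset comes from the question; the connection details are illustrative):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
pipe = r.pipeline(transaction=True)  # wraps the queued commands in MULTI ... EXEC
pipe.spop('myset')
pipe.spop('myset')
a, b = pipe.execute()                # EXEC returns one reply per queued command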
Is it possible to have one Redis Lua script hit more than one database? I currently have information of one type in DB 0 and information of another type in DB 1. My normal workflow is doing updates on DB 1 based on an API call along with meta information from DB 0. I'd love to do everything in one Lua script, but can't figure out how to hit multiple dbs. I'm doing this in Python using redis-py:
lua_script(keys=some_keys,
args=some_args,
client=some_client)
Since the client implies a specific db, I'm stuck. Ideas?
It is usually a bad idea to put related data in different Redis databases. There is almost no benefit compared to defining namespaces by key naming conventions (no extra granularity regarding security, persistence, expiration management, etc.). And a major drawback is that clients have to manually handle the selection of the correct database, which is error prone for clients targeting multiple databases at the same time.
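For instance, the key-prefix alternative keeps both kinds of data in one database; a minimal sketch (the meta:/data: prefixes and key names are purely illustrative):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
# What used to live in DB 0 and DB 1 becomes two namespaces in one database
r.set('meta:item:42', 'meta information')
r.set('data:item:42', 'payload')
# A single Lua script (or MULTI/EXEC block) can now touch both without any SELECT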
Now, if you still want to use multiple databases, there is a way to make it work with redis-py and Lua scripting.
redis-py does not define a wrapper for the SELECT command (normally used to switch the current database), because of the underlying thread-safe connection pool implementation. But nothing prevents you from calling SELECT from a Lua script.
Consider the following example:
$ redis-cli
SELECT 0
SET mykey db0
SELECT 1
SET mykey db1
The following script displays the value of mykey in the 2 databases from the same client connection.
import redis
pool = redis.ConnectionPool(host='localhost', port=6379, db=0)
r = redis.Redis(connection_pool=pool)
lua1 = """
redis.call("select", ARGV[1])
return redis.call("get",KEYS[1])
"""
script1 = r.register_script(lua1)
lua2 = """
redis.call("select", ARGV[1])
local ret = redis.call("get",KEYS[1])
redis.call("select", ARGV[2])
return ret
"""
script2 = r.register_script(lua2)
print r.get("mykey")
print script2( keys=["mykey"], args = [1,0] )
print r.get("mykey"), "ok"
print
print r.get("mykey")
print script1( keys=["mykey"], args = [1] )
print r.get("mykey"), "misleading !!!"
Script lua1 is naive: it just selects a given database before returning the value. Its usage is misleading, because after its execution, the current database associated to the connection has changed. Don't do this.
Script lua2 is much better. It takes the target database and the current database as parameters. It makes sure that the current database is reactivated before the end of the script, so that subsequent commands applied on the connection still run in the correct database.
Unfortunately, there is no command to discover the current database from within the Lua script, so the client has to provide it systematically. Note that the Lua script must reset the current database at the end whatever happens (even after an error), which makes complex scripts cumbersome and awkward.
Could you please explain me following example from "The Little Redis Book":
With the code above, we wouldn't be able to implement our own incr
command since they are all executed together once exec is called. From
code, we can't do:
redis.multi()
current = redis.get('powerlevel')
redis.set('powerlevel', current + 1)
redis.exec()
That isn't how Redis transactions work. But, if we add a watch to
powerlevel, we can do:
redis.watch('powerlevel')
current = redis.get('powerlevel')
redis.multi()
redis.set('powerlevel', current + 1)
redis.exec()
If another client changes the value of powerlevel after we've called
watch on it, our transaction will fail. If no client changes the
value, the set will work. We can execute this code in a loop until it
works.
Why can't we execute the increment inside a transaction that can't be interrupted by other commands? Why do we need to iterate instead, waiting until nobody changes the value before the transaction starts?
There are several questions here.
1) Why can't we execute the increment inside a transaction that can't be interrupted by other commands?
Please note first that Redis "transactions" are completely different from what most people think transactions are in a classical DBMS.
# Does not work
redis.multi()
current = redis.get('powerlevel')
redis.set('powerlevel', current + 1)
redis.exec()
You need to understand what is executed on server-side (in Redis), and what is executed on client-side (in your script). In the above code, the GET and SET commands will be executed on Redis side, but assignment to current and calculation of current + 1 are supposed to be executed on client side.
To guarantee atomicity, a MULTI/EXEC block delays the execution of the Redis commands until EXEC. So the client will only pile up the GET and SET commands in memory, and execute them in one shot, atomically, at the end. Of course, the attempt to assign current to the result of GET, and the increment, occur well before that. Actually, the redis.get method will only return the string "QUEUED" to signal that the command is delayed, so the increment will not work.
In MULTI/EXEC blocks you can only use commands whose parameters can be fully known before the beginning of the block. You may want to read the documentation for more information.
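To make the queuing behavior concrete, here is a minimal redis-py sketch (the key powerlevel comes from the example; the connection details are illustrative). The point is that no value is available to the client until execute() runs:

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
pipe = r.pipeline(transaction=True)  # corresponds to MULTI ... EXEC
pipe.get('powerlevel')               # only queued; no value to compute with at this point
results = pipe.execute()             # the GET actually runs here; results[0] is its reply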
2) Why do we need to iterate instead, waiting until nobody changes the value before the transaction starts?
This is an example of an optimistic concurrency pattern.
If we used no WATCH/MULTI/EXEC, we would have a potential race condition:
# Initial arbitrary value
powerlevel = 10
session A: GET powerlevel -> 10
session B: GET powerlevel -> 10
session A: current = 10 + 1
session B: current = 10 + 1
session A: SET powerlevel 11
session B: SET powerlevel 11
# In the end we have 11 instead of 12 -> wrong
Now let's add a WATCH/MULTI/EXEC block. With a WATCH clause, the commands between MULTI and EXEC are executed only if the value has not changed.
# Initial arbitrary value
powerlevel = 10
session A: WATCH powerlevel
session B: WATCH powerlevel
session A: GET powerlevel -> 10
session B: GET powerlevel -> 10
session A: current = 10 + 1
session B: current = 10 + 1
session A: MULTI
session B: MULTI
session A: SET powerlevel 11 -> QUEUED
session B: SET powerlevel 11 -> QUEUED
session A: EXEC -> success! powerlevel is now 11
session B: EXEC -> failure, because powerlevel has changed and was watched
# In the end, we have 11, and session B knows it has to attempt the transaction again
# Hopefully, it will work fine this time.
So you do not iterate in order to wait until nobody changes the value; rather, you attempt the operation again and again until Redis confirms the values are consistent and signals success.
In most cases, if the "transactions" are fast enough and the probability to have contention is low, the updates are very efficient. Now, if there is contention, some extra operations will have to be done for some "transactions" (due to the iteration and retries). But the data will always be consistent and no locking is required.
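As a side note, redis-py ships a convenience wrapper that encapsulates exactly this watch-and-retry loop; a sketch, reusing the powerlevel key from the example (the callable name is made up):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)

def client_side_incr(pipe):
    # the pipeline is in immediate mode until multi() is called
    current = int(pipe.get('powerlevel') or 0)
    pipe.multi()
    pipe.set('powerlevel', current + 1)

# WATCHes 'powerlevel', runs the callable, and retries on WatchError until EXEC succeeds
r.transaction(client_side_incr, 'powerlevel')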
I want to delete all keys. I want everything wiped out and give me a blank database.
Is there a way to do this in Redis client?
With redis-cli:
FLUSHDB – Deletes all keys from the connection's current database.
FLUSHALL – Deletes all keys from all databases.
For example, in your shell:
redis-cli flushall
Heads up that FLUSHALL may be overkill. FLUSHDB is the one to flush a database only. FLUSHALL will wipe out the entire server. As in every database on the server. Since the question was about flushing a database I think this is an important enough distinction to merit a separate answer.
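From the shell, redis-cli can also target a specific database with its -n option, e.g. redis-cli -n 2 flushdb (the database index 2 here is only an example), so you do not have to wipe the whole server just to clear one database.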
Answers so far are absolutely correct; they delete all keys.
However, if you also want to delete all Lua scripts from the Redis instance, you should follow it by:
SCRIPT FLUSH
The OP asks two questions; this completes the second question (everything wiped).
FLUSHALL
Remove all keys from all databases
FLUSHDB
Remove all keys from the current database
SCRIPT FLUSH
Remove all the scripts from the script cache.
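For completeness, the same three operations are exposed as methods in redis-py; a minimal sketch (connection parameters are illustrative):

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
r.flushdb()        # FLUSHDB: keys in the current database
r.flushall()       # FLUSHALL: keys in every database
r.script_flush()   # SCRIPT FLUSH: cached Lua scripts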
If you're using the redis-rb gem then you can simply call:
your_redis_client.flushdb
You can use flushall in your terminal:
redis-cli> flushall
This method worked for me - it deletes everything from the currently connected database on your Jedis cluster.
public static void resetRedis() {
    jedisCluster = RedisManager.getJedis(); // your JedisCluster instance
    for (JedisPool pool : jedisCluster.getClusterNodes().values()) {
        try (Jedis jedis = pool.getResource()) {
            jedis.flushAll();
        } catch (Exception ex) {
            System.out.println(ex.getMessage());
        }
    }
}
One more option from my side:
In our production and pre-production databases there are thousands of keys. From time to time we need to delete some keys (by some mask), modify others by some criteria, etc. Of course, there is no way to do this manually from the CLI, especially with sharding (512 logical dbs on each physical node).
For this purpose I wrote a small Java client tool that does all this work. For key deletion the utility can be very simple, only one class:
public class DataCleaner {

    public static void main(String args[]) {
        String keyPattern = args[0];
        String host = args[1];
        int port = Integer.valueOf(args[2]);
        int dbIndex = Integer.valueOf(args[3]);
        Jedis jedis = new Jedis(host, port);
        int deletedKeysNumber = 0;
        if (dbIndex >= 0) {
            deletedKeysNumber += deleteDataFromDB(jedis, keyPattern, dbIndex);
        } else {
            int dbSize = Integer.valueOf(jedis.configGet("databases").get(1));
            for (int i = 0; i < dbSize; i++) {
                deletedKeysNumber += deleteDataFromDB(jedis, keyPattern, i);
            }
        }
        if (deletedKeysNumber == 0) {
            System.out.println("No keys matching pattern " + keyPattern + " were found in database with host: " + host);
        }
    }

    private static int deleteDataFromDB(Jedis jedis, String keyPattern, int dbIndex) {
        jedis.select(dbIndex);
        Set<String> keys = jedis.keys(keyPattern);
        for (String key : keys) {
            jedis.del(key);
            System.out.println("The key: " + key + " has been deleted from database index: " + dbIndex);
        }
        return keys.size();
    }
}
I find writing such tools very easy; it takes no more than 5-10 minutes.
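If the server supports SCAN (Redis 2.8+), the same mask-based cleanup can avoid the blocking KEYS command; a minimal redis-py sketch, with the pattern 'cache:*' used purely as an illustration:

import redis

r = redis.Redis(host='localhost', port=6379, db=0)
deleted = 0
# scan_iter walks the keyspace with a cursor instead of blocking the server like KEYS
for key in r.scan_iter(match='cache:*', count=1000):
    r.delete(key)
    deleted += 1
print(deleted, 'keys deleted')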
Use FLUSHALL ASYNC if you are on Redis 4.0.0 or greater; otherwise use FLUSHALL.
https://redis.io/commands/flushall
Note: only keys that existed when FLUSHALL ASYNC was issued are deleted; keys created while the asynchronous flush is running are unaffected.
FLUSHALL deletes all the keys of all existing databases.
For Redis version > 4.0, FLUSHALL ASYNC is supported, which runs in a background thread without blocking the server:
https://redis.io/commands/flushall
FLUSHDB - deletes all the keys in the selected database:
https://redis.io/commands/flushdb
The time complexity of these operations is O(N), where N is the number of keys in the database.
The response from Redis will be a simple string "OK".
Open redis-cli and type:
FLUSHALL
You can use FLUSHALL, which will delete all keys from every database.
Whereas FLUSHDB will delete all keys from your current database.
Stop Redis instance.
Delete RDB file.
Start Redis instance.
redis-cli -h <host> -p <port> flushall
It will remove all data from the Redis instance you are connected to (with the given host and port).
After you start the Redis server using service redis-server start --port 8000 or redis-server.
Use redis-cli -p 8000 to connect to the server as a client in a different terminal.
You can use either
FLUSHDB - Delete all the keys of the currently selected DB. This command never fails. The time-complexity for this operation is O(N), N being the number of keys in the database.
FLUSHALL - Delete all the keys of all the existing databases, not just the currently selected one. This command never fails. The time-complexity for this operation is O(N), N being the number of keys in all existing databases.
Check the documentation for ASYNC option for both.
If you are using Redis through its python interface, use these two functions for the same functionality:
def flushall(self):
"Delete all keys in all databases on the current host"
return self.execute_command('FLUSHALL')
and
def flushdb(self):
"Delete all keys in the current database"
return self.execute_command('FLUSHDB')
Sometimes you can also stop the redis-server and delete the RDB and AOF files, making sure there is no data left to reload.
Then start the redis-server again; it is now new and empty.
You can use FLUSHDB.
For example:
List databases:
127.0.0.1:6379> info keyspace
# Keyspace
List keys
127.0.0.1:6379> keys *
(empty list or set)
Add one value to a key
127.0.0.1:6379> lpush key1 1
(integer) 1
127.0.0.1:6379> keys *
1) "key1"
127.0.0.1:6379> info keyspace
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
Create other key with two values
127.0.0.1:6379> lpush key2 1
(integer) 1
127.0.0.1:6379> lpush key2 2
(integer) 2
127.0.0.1:6379> keys *
1) "key1"
2) "key2"
127.0.0.1:6379> info keyspace
# Keyspace
db0:keys=2,expires=0,avg_ttl=0
List all values in key2
127.0.0.1:6379> lrange key2 0 -1
1) "2"
2) "1"
Do FLUSHDB
127.0.0.1:6379> flushdb
OK
List keys and databases
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> info keyspace
# Keyspace
Your question seems to be about deleting all the keys in a database. In this case you should:
Connect to Redis. You can use the command redis-cli (if running on port 6379); otherwise you will have to specify the port number as well.
Select your database (command select {Index}).
Execute the command flushdb.
If you want to flush the keys in all databases, then you should try flushall.
You can use the following approach in Python:
def redis_clear_cache(self):
try:
redis_keys = self.redis_client.keys('*')
except Exception as e:
# print('redis_client.keys() raised exception => ' + str(e))
return 1
try:
if len(redis_keys) != 0:
self.redis_client.delete(*redis_keys)
except Exception as e:
# print('redis_client.delete() raised exception => ' + str(e))
return 1
# print("cleared cache")
return 0
This works for me: redis-cli KEYS \* | xargs --max-procs=16 -L 100 redis-cli DEL
It lists all keys in Redis, then pipes them via xargs to redis-cli DEL, using at most 100 keys per command but running 16 commands at a time. It is very fast and useful when FLUSHDB or FLUSHALL are not available for security reasons, for example when using Redis from Bitnami in Docker or Kubernetes. Also, it doesn't require any additional programming language, and it is just one line.
If you want to clear Redis on Windows:
find redis-cli in
C:\Program Files\Redis
and run FLUSHALL command.
It's better if you can have RDM (Redis Desktop Manager).
You can connect to your Redis server by creating a new connection in RDM.
Once it's connected, you can inspect the live data and play around with any Redis command.
Opening a CLI in RDM:
1) Right-click on the connection and you will see a console option; click on it and a new console window will open at the bottom of RDM.
Coming back to your question, FLUSHALL is the command; you can simply type FLUSHALL in the Redis CLI.
Moreover, if you want to know about any Redis command and its proper usage, see:
https://redis.io/commands
There are different approaches. If you want to do this remotely, issue flushall to that instance through the command line tool redis-cli, or whatever tool you like (e.g. telnet or a programming language SDK). Or just log in to that server, kill the process, and delete its dump.rdb and appendonly.aof files (back them up before deletion).
If you are using Java (Lettuce), then per the documentation you can use any one of these, based on your use case.
/**
* Remove all keys from all databases.
*
* @return String simple-string-reply
*/
String flushall();
/**
* Remove all keys asynchronously from all databases.
*
* @return String simple-string-reply
*/
String flushallAsync();
/**
* Remove all keys from the current database.
*
* @return String simple-string-reply
*/
String flushdb();
/**
* Remove all keys asynchronously from the current database.
*
* @return String simple-string-reply
*/
String flushdbAsync();
Code:
RedisAdvancedClusterCommands syncCommands = // get sync() or async() commands
syncCommands.flushdb();
Read more: https://github.com/lettuce-io/lettuce-core/wiki/Redis-Cluster
For anyone wondering how to do this in C#: it's the same as the answer provided for Python for this same question.
I am using StackExchange.Redis v2.2.88 for a dot net (core) 5 project. I only need to clear my keys for integration testing and I have no purpose to do this in production.
I checked what is available in intellisense and I don't see a stock way to do this with the existing API. I imagine this is intentional and by design. Luckily the API does expose an Execute method.
I tested this by doing the following:
Opened up a command window. I am using docker, so I did it through docker.
Type in redis-cli which starts the CLI
Type in KEYS * and it shows me all of my keys so I can verify they exist before and after executing the following code below:
//Don't abuse this, use with caution
var cache = ConnectionMultiplexer.Connect(
new ConfigurationOptions
{
EndPoints = { "localhost:6379" }
});
var db = cache.GetDatabase();
db.Execute("flushdb");
Type in KEYS * again and view that it's empty.
Hope this helps anyone looking for it.