Let's suppose we have a list of tasks to be executed and some workers that pop items from that list.
If a worker crashes unexpectedly before finishing the execution of the task then that task is lost.
What kind of mechanism could prevent that so we can reprocess abandoned tasks?
You can use a ZSET to solve this issue:

Pop operation
- Add the item to a ZSET, with its expiry time as the score
- Remove the item from the list

Ack operation
- Remove the item from the ZSET

Worker
- Run a scheduled worker that moves items from the ZSET back to the list once they have expired (see the sketch below)
Read in detail how I did this in Rqueue: https://medium.com/@sonus21/introducing-rqueue-redis-queue-d344f5c36e1b
Github Code: https://github.com/sonus21/rqueue
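A minimal sketch of this pattern in Python with redis-py; the key names, the 30-second visibility timeout, and the helper names are illustrative assumptions, not Rqueue's actual implementation:

import time

import redis

r = redis.Redis()

TASKS = "tasks"                # pending tasks (LIST)
IN_FLIGHT = "executing-tasks"  # claimed tasks (ZSET, score = claim time)
VISIBILITY_TIMEOUT = 30        # seconds before a claimed task is presumed lost

def pop_task():
    """Claim a task: record it in the ZSET, then hand it to the worker."""
    task = r.lpop(TASKS)
    if task is not None:
        # Ideally LPOP + ZADD run inside one Lua script or MULTI/EXEC so a
        # crash between the two calls cannot lose the task.
        r.zadd(IN_FLIGHT, {task: time.time()})
    return task

def ack_task(task):
    """Acknowledge completion: drop the task from the in-flight ZSET."""
    r.zrem(IN_FLIGHT, task)

def requeue_expired():
    """Scheduled worker: push abandoned tasks back onto the pending list."""
    cutoff = time.time() - VISIBILITY_TIMEOUT
    for task in r.zrangebyscore(IN_FLIGHT, 0, cutoff):
        r.zrem(IN_FLIGHT, task)
        r.rpush(TASKS, task)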
There is no EXPIRE for individual set or zset members, and there is no atomic operation to pop from a zset and push to a list. So I ended up writing this Lua script, which runs atomically.
First I add a task to the executing-tasks zset with a timestamp score ((new Date()).valueOf() in JavaScript):
ZADD executing-tasks 1619028226766 task-id
Then I run the script:
EVAL [THE SCRIPT] 2 executing-tasks tasks 1619028196766
If the task is more than 30 seconds old it will be sent to the tasks list. If not, it will be sent back to the executing-tasks zset.
Here is the script:

local source = KEYS[1]
local destination = KEYS[2]
local min_score = tonumber(ARGV[1])

-- ZPOPMIN (Redis 5.0+) returns {member, score} or an empty table;
-- the score comes back as a string, hence the tonumber conversions.
local popped = redis.call('zpopmin', source)
if #popped > 0 then
    local id = popped[1]
    local score = tonumber(popped[2])
    if score < min_score then
        redis.call('rpush', destination, id)
        return { "RESTORED", id }
    else
        redis.call('zadd', source, score, id)
        return { "SENT_BACK", id }
    end
end
return { "NOTHING_DONE" }
This integration test uses ReactiveRedisTemplate list operations to:
- push a List to the cache using the list operation leftPushAll()
- retrieve the List as a Flux from the cache using the list operation range(), reading all elements from index 0 to the last index in the list
public void givenList_whenLeftPushAndRange_thenPopCatalog() {
    // catalog is a List<Catalog> of size 367
    Mono<Long> lPush = reactiveListOps.leftPushAll(LIST_NAME, catalog).log("Pushed");
    StepVerifier.create(lPush).expectNext(367L).verifyComplete();

    Mono<Long> sz = reactiveListOps.size(LIST_NAME);
    Long listsz = sz.block();
    assert listsz != null;
    Assert.assertEquals(listsz.intValue(), catalog.size());

    Flux<Catalog> catalogFlux = reactiveListOps
        .range(LIST_NAME, 0, listsz)
        .map(c -> {
            System.out.println(c.getOffset());
            return c;
        })
        .log("Fetched Catalog");

    List<Catalog> catalogList = catalogFlux.collectList().block();
    assert catalogList != null;
    Assert.assertEquals(catalogList.size(), catalog.size());
}
This test works fine. My question is: how are the EXPIRY and TTL of these List objects stored in the cache controlled?
The reason I ask: on my local Redis server, I notice they remain in the cache for a number of hours, but when I run this code against a Redis server hosted on AWS, the List objects only seem to remain in the cache for 30 minutes.
Is there a configuration property that can be used to control the TTL of list objects?
Thank you, ideally I'd like to have more control over how long these objects remain in the cache.
I've been playing around with Redis to keep track of the rate limit of an external API in a distributed system. I've decided to create a key for each route where a limit is present. The value of the key is how many requests I can still make until the limit resets, and the reset is done by setting the TTL of the key to when the limit will reset.
For that I wrote the following lua script:
if redis.call("EXISTS", KEYS[1]) == 1 then
local remaining = redis.call("DECR", KEYS[1])
if remaining < 0 then
local pttl = redis.call("PTTL", KEYS[1])
if pttl > 0 then
--[[
-- We would exceed the limit if we were to do a call now, so let's send back that a limit exists (1)
-- Also let's send back how much we would have exceeded the ratelimit if we were to ignore it (ramaning)
-- and how long we need to wait in ms untill we can try again (pttl)
]]
return {1, remaining, pttl}
elseif pttl == -1 then
-- The key expired the instant after we checked that it existed, so delete it and say there is no ratelimit
redis.call("DEL", KEYS[1])
return {0}
elseif pttl == -2 then
-- The key expired the instant after we decreased it by one. So let's just send back that there is no limit
return {0}
end
else
-- Great we have a ratelimit, but we did not exceed it yet.
return {1, remaining}
end
else
return {0}
end
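As a usage sketch (my assumption, not part of the question): calling a script like this from Python with redis-py, with the Lua source saved as ratelimit.lua and the key seeded with the route's quota:

import redis

r = redis.Redis()

ROUTE_KEY = "ratelimit:/v1/search"  # illustrative: one key per limited route

# Seed the window: 100 requests allowed, resets in 60 seconds.
# NX so concurrent workers don't clobber an existing window.
r.set(ROUTE_KEY, 100, ex=60, nx=True)

limited = r.register_script(open("ratelimit.lua").read())

reply = limited(keys=[ROUTE_KEY])
if reply[0] == 0:
    print("no rate limit on this route")
elif len(reply) == 3:
    # remaining came back negative: we are over the limit.
    print(f"limit exceeded by {-reply[1]}; retry in {reply[2]} ms")
else:
    print(f"allowed; {reply[1]} requests left in this window")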
Since a watched key can expire in the middle of a MULTI transaction without aborting it, I assumed the same is the case for Lua scripts. Therefore I put in the cases for when the TTL is -1 or -2.
After I wrote that script I looked a bit more in depth at the eval command page and found out that a lua script has to be a pure function.
In there it says:
"The script must always evaluate the same Redis write commands with the same arguments given the same input data set. Operations performed by the script cannot depend on any hidden (non-explicit) information or state that may change as script execution proceeds or between different executions of the script, nor can it depend on any external input from I/O devices."
With this description I'm not sure if my function is a pure function or not.
After Itamar's answer I wanted to confirm that for myself, so I wrote a little Lua script to test it. The script creates a key with a 10 ms TTL and then checks the TTL until it's less than 0:
redis.call("SET", KEYS[1], "someVal","PX", 10)
local tmp = redis.call("PTTL", KEYS[1])
while tmp >= 0
do
tmp = redis.call("PTTL", KEYS[1])
redis.log(redis.LOG_WARNING, "PTTL:" .. tmp)
end
return 0
When I ran this script it never terminated; it just went on spamming my logs until I killed the Redis server. However, time doesn't stand still while the script runs; instead, the TTL just stops counting down once it reaches 0.
So the key ages, it just never expires.
Since a watched key can expire in the middle of a MULTI transaction without aborting it, I assumed the same is the case for Lua scripts. Therefore I put in the cases for when the TTL is -1 or -2.
AFAIR that isn't the case with Lua scripts - time kind of stops (in terms of TTL, at least) while the script is running.
With this description I'm not sure if my function is a pure function or not.
Your script's great (without actually trying to understand what it does), don't worry :)
I need to design a Redis-driven scalable task scheduling system.
Requirements:
Multiple worker processes.
Many tasks, but long periods of idleness are possible.
Reasonable timing precision.
Minimal resource waste when idle.
Should use synchronous Redis API.
Should work for Redis 2.4 (i.e. no features from upcoming 2.6).
Should not use other means of RPC than Redis.
Pseudo-API: schedule_task(timestamp, task_data). Timestamp is in integer seconds.
Basic idea:
Listen for upcoming tasks on list.
Put tasks to buckets per timestamp.
Sleep until the closest timestamp.
If a new task appears with timestamp less than closest one, wake up.
Process all upcoming tasks with timestamp ≤ now, in batches (assuming that task execution is fast).
Make sure that concurrent worker wouldn't process same tasks. At the same time, make sure that no tasks are lost if we crash while processing them.
So far I can't figure out how to fit this in Redis primitives...
Any clues?
Note that there is a similar old question: Delayed execution / scheduling with Redis? In this new question I introduce more details (most importantly, many workers). So far I was not able to figure out how to apply old answers here — thus, a new question.
Here's another solution that builds on a couple of others [1]. It uses the redis WATCH command to remove the race condition without needing the Lua support added in Redis 2.6.
The basic scheme is:
Use a redis zset for scheduled tasks and redis queues for ready to run tasks.
Have a dispatcher poll the zset and move tasks that are ready to run into the redis queues. You may want more than 1 dispatcher for redundancy but you probably don't need or want many.
Have as many workers as you want which do blocking pops on the redis queues.
I haven't tested it :-)
The foo job creator would do:
import json
import time

def schedule_task(queue, data, delay_secs):
    # This calculation for run_at isn't great - it won't deal well with daylight
    # savings changes, leap seconds, and other time anomalies. Improvements
    # welcome :-)
    run_at = time.time() + delay_secs
    # ZSET members must be strings, so serialize the task; redis-py 3+
    # takes a {member: score} mapping.
    member = json.dumps({'queue': queue, 'data': data})
    redis.zadd(SCHEDULED_ZSET_KEY, {member: run_at})
schedule_task('foo_queue', foo_data, 60)
The dispatcher(s) would look like:
from redis import WatchError

while working:
    with redis.pipeline() as pipe:
        try:
            # WATCH so a concurrent dispatcher moving the same task aborts us.
            pipe.watch(SCHEDULED_ZSET_KEY)
            results = pipe.zrangebyscore(
                SCHEDULED_ZSET_KEY, 0, time.time(), start=0, num=1)
            if not results:
                pipe.unwatch()
                time.sleep(1)
            else:  # len(results) == 1
                task = json.loads(results[0])
                pipe.multi()
                pipe.rpush(task['queue'], task['data'])
                pipe.zrem(SCHEDULED_ZSET_KEY, results[0])
                pipe.execute()
        except WatchError:
            pass  # another dispatcher won the race; just retry
The foo worker would look like:
while working:
    task_data = redis.blpop('foo_queue', POP_TIMEOUT)
    if task_data:
        _, payload = task_data  # blpop returns a (key, value) tuple
        foo(payload)
[1] This solution is based on not_a_golfer's, one at http://www.saltycrane.com/blog/2011/11/unique-python-redis-based-queue-delay/, and the redis docs for transactions.
You didn't specify the language you're using. You have at least 3 alternatives for doing this without writing a single line of code, in Python at least.
Celery has an optional redis broker.
http://celeryproject.org/
resque is an extremely popular redis task queue using redis.
https://github.com/defunkt/resque
RQ is a simple and small redis based queue that aims to "take the good stuff from celery and resque" and be much simpler to work with.
http://python-rq.org/
You can at least look at their design if you can't use them.
But to answer your question - what you want can be done with redis. I've actually written more or less that in the past.
EDIT:
As for modeling what you want on redis, this is what I would do:
queuing a task with a timestamp will be done directly by the client - you put the task in a sorted set with the timestamp as the score and the task as the value (see ZADD).
A central dispatcher wakes every N seconds, checks out the first timestamps on this set, and if there are tasks ready for execution, it pushes the task to a "to be executed NOW" list. This can be done with ZREVRANGEBYSCORE on the "waiting" sorted set, getting all items with timestamp<=now, so you get all the ready items at once. pushing is done by RPUSH.
workers use BLPOP on the "to be executed NOW" list, wake when there is something to work on, and do their thing. This is safe since redis is single threaded, and no 2 workers will ever take the same task.
once finished, the workers put the result back in a response queue, which is checked by the dispatcher or another thread. You can add a "pending" bucket to avoid failures or something like that.
so the code will look something like this (this is just pseudo code):
client:
ZADD "new_tasks" <TIMESTAMP> <TASK_INFO>
dispatcher:
while working:
    # this will only take tasks with timestamp lower than or equal to now
    tasks = ZREVRANGEBYSCORE "new_tasks" <NOW> 0
    for task in tasks:
        # do the delete and queue as a transaction
        MULTI
        RPUSH "to_be_executed" task
        ZREM "new_tasks" task
        EXEC
    sleep(1)
I didn't add the response queue handling, but it's more or less like the worker:
worker:
while working:
    task = BLPOP "to_be_executed" <TIMEOUT>
    if task:
        response = work_on_task(task)
        RPUSH "results" response
Edit: a stateless, atomic dispatcher:
while working:
    # atomically pop the first (earliest) task off the sorted set
    MULTI
    ZRANGE "new_tasks" 0 0
    ZREMRANGEBYRANK "new_tasks" 0 0
    task = EXEC
    # this is the only risky place - you can solve it by using Lua internally in 2.6
    SADD "tmp" task
    if task.timestamp <= now:
        MULTI
        RPUSH "to_be_executed" task
        SREM "tmp" task
        EXEC
    else:
        MULTI
        ZADD "new_tasks" task.timestamp task
        SREM "tmp" task
        EXEC
    sleep(RESOLUTION)
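Picking up the "solve it by using Lua in 2.6" comment, here is a hedged sketch (Python with redis-py; the key names follow the pseudocode above) of the risky pop done atomically in one script, so a task can never be popped and then lost between MULTI blocks:

import time

import redis

r = redis.Redis()

# Atomically: look at the earliest task, and only if it is due, move it to
# the work list. The whole script runs as one unit, so no other dispatcher
# can observe the task mid-move.
POP_IF_DUE = """
local task = redis.call('ZRANGE', KEYS[1], 0, 0, 'WITHSCORES')
if #task == 2 and tonumber(task[2]) <= tonumber(ARGV[1]) then
    redis.call('ZREM', KEYS[1], task[1])
    redis.call('RPUSH', KEYS[2], task[1])
    return task[1]
end
return false
"""

pop_if_due = r.register_script(POP_IF_DUE)
moved = pop_if_due(keys=["new_tasks", "to_be_executed"], args=[time.time()])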
If you're looking for a ready-made solution in Java, Redisson is right for you. It allows you to schedule and execute tasks (with cron-expression support) in a distributed way on Redisson nodes, using the familiar ScheduledExecutorService API, backed by a Redis queue.
Here is an example. First, define a task using the java.lang.Runnable interface. Each task can access the Redis instance via an injected RedissonClient object.
public class RunnableTask implements Runnable {

    @RInject
    private RedissonClient redissonClient;

    @Override
    public void run() {
        RMap<String, Integer> map = redissonClient.getMap("myMap");
        long result = 0;
        for (Integer value : map.values()) {
            result += value;
        }
        redissonClient.getTopic("myMapTopic").publish(result);
    }
}
Now it's ready to submit to the ScheduledExecutorService:

RScheduledExecutorService executorService = redisson.getExecutorService("myExecutor");
ScheduledFuture<?> future = executorService.schedule(new RunnableTask(), 10, TimeUnit.MINUTES);
future.get();
// or cancel it
future.cancel(true);
Examples with cron expressions:
executorService.schedule(new RunnableTask(), CronSchedule.of("10 0/5 * * * ?"));
executorService.schedule(new RunnableTask(), CronSchedule.dailyAtHourAndMinute(10, 5));
executorService.schedule(new RunnableTask(), CronSchedule.weeklyOnDayAndHourAndMinute(12, 4, Calendar.MONDAY, Calendar.FRIDAY));
All tasks are executed on Redisson node.
A combined approach seems plausible:
No new task's timestamp may be less than the current time (clamp it if it is). This assumes reliable NTP synchronization.
All tasks go to bucket-lists at keys, suffixed with task timestamp.
Additionally, all task timestamps go to a dedicated zset (each member and its score are both the timestamp itself).
New tasks are accepted from clients via separate Redis list.
Loop: fetch the oldest N expired timestamps via ZRANGEBYSCORE ... LIMIT.
BLPOP with timeout on new tasks list and lists for fetched timestamps.
If we got an old task, process it. If it's new, add it to its bucket and to the zset.
Check whether processed buckets are empty. If so, delete the list and the entry from the zset. Probably do not check very recently expired buckets, to safeguard against time synchronization issues. End loop.
Critique? Comments? Alternatives?
Lua
I made something similar to what's been suggested here, but optimized the sleep duration to be more precise. This solution is good if you have few inserts into the delayed task queue. Here's how I did it with a Lua script:
local laterChannel = KEYS[1]
local nowChannel = KEYS[2]
local currentTime = tonumber(KEYS[3])

local first = redis.call("zrange", laterChannel, 0, 0, "WITHSCORES")
if #first ~= 2 then
    return "2147483647"
end

local event = first[1]
local execTime = tonumber(first[2])

if currentTime >= execTime then
    redis.call("zrem", laterChannel, event)
    redis.call("rpush", nowChannel, event)
    return "0"
else
    return tostring(execTime - currentTime)
end
It uses two "channels": laterChannel is a ZSET and nowChannel is a LIST. Whenever it's time to execute a task, the event is moved from the ZSET to the LIST. The Lua script will respond with how many ms the dispatcher should sleep until the next poll. If the ZSET is empty, sleep forever. If it's time to execute something, do not sleep (i.e. poll again immediately). Otherwise, sleep until it's time to execute the next task.
So what if something is added while the dispatcher is sleeping?
This solution works in conjunction with keyspace events. You basically need to subscribe to the key of laterChannel, and whenever there is an add event, you wake up all the dispatchers so they can poll again.
Then you have another dispatcher that uses the blocking left pop on nowChannel. This means:
You can have the dispatcher across multiple instances (i.e. it scales)
The polling is atomic so you won't have any race conditions or double events
The task is executed by any of the instances that are free
There are ways to optimize this even more. For example, instead of returning "0", you fetch the next item from the zset and return the correct amount of time to sleep directly.
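A hedged sketch of that dispatcher loop in Python with redis-py, assuming the Lua script above has been saved as scheduler.lua and the channel names match:

import time

import redis

r = redis.Redis()

poll = r.register_script(open("scheduler.lua").read())

while True:
    now_ms = int(time.time() * 1000)
    # The script expects currentTime as its third KEYS entry.
    sleep_ms = int(poll(keys=["laterChannel", "nowChannel", now_ms]))
    if sleep_ms > 0:
        # In the full design this sleep would be cut short by a keyspace
        # notification whenever a new task lands in laterChannel.
        time.sleep(sleep_ms / 1000.0)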
Expiration
If you cannot use Lua scripts, you can use keyspace events on expired keys.
Subscribe to the channel and receive the event when Redis evicts the key. Then, grab a lock. The first instance to do so will move it to a list (the "execute now" channel). Then you don't have to worry about sleeps and polling; Redis will tell you when it's time to execute something.
execute_later(timestamp, eventId, event) {
    // EXAT sets an absolute Unix-time expiry (Redis 6.2+); on older
    // versions compute a relative EX from (timestamp - now)
    SET eventId event EXAT timestamp
    SET "lock:" + eventId ""
}

subscribeToEvictions(eventId) {
    // DEL of the lock key doubles as the lock: exactly one instance sees 1
    var deletedCount = DEL "lock:" + eventId
    if (deletedCount == 1) {
        // move to list
    }
}
This however has its own downsides. For example, if you have many nodes, all of them will receive the event and try to grab the lock. But I still think it's overall fewer requests than anything else suggested here.
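For reference, a minimal sketch of the subscription side in Python with redis-py. Enabling expired-key events and the channel format are standard Redis; the db number, list name, and lock scheme here are illustrative assumptions:

import redis

r = redis.Redis()

# Expired-key events are off by default; enable them here or via
# notify-keyspace-events in redis.conf ("E" = keyevent, "x" = expired).
r.config_set("notify-keyspace-events", "Ex")

pubsub = r.pubsub()
pubsub.psubscribe("__keyevent@0__:expired")  # expirations in db 0

for message in pubsub.listen():
    if message["type"] != "pmessage":
        continue
    event_id = message["data"].decode()
    if event_id.startswith("lock:"):
        continue  # ignore the lock keys themselves
    # Race for the lock: only the instance whose DEL returns 1 proceeds.
    if r.delete("lock:" + event_id) == 1:
        r.rpush("execute_now", event_id)  # illustrative list name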
I have a queue interface I want to implement in redis. The trick is that each worker can claim an item for N seconds; after that, it's presumed the worker has crashed and the item needs to be claimable again. It's the worker's responsibility to remove the item when finished. How would you do this in redis? I am using phpredis, but that's kind of irrelevant.
To realize a simple queue in redis that can be used to resubmit crashed jobs, I'd try something like this:
- 1 list "up_for_grabs"
- 1 list "being_worked_on"
- auto-expiring locks
a worker trying to grab a job would do something like this:
timeout = 3600

# Wrap this in a transaction so our cleanup won't kill the task.
# Move the job away from the queue so nobody else tries to claim it.
job = RPOPLPUSH(up_for_grabs, being_worked_on)

# Set a lock and expire it; the value tells us when that job will time out.
# This can be arbitrary though. (SETEX takes key, seconds, value.)
SETEX('lock:' + job, timeout, Time.now + timeout)

# our application logic
do_work(job)

# Remove the finished item from the queue.
LREM(being_worked_on, -1, job)

# Delete the item's lock. If we crash here, the expire will take care of it.
DEL('lock:' + job)
And every now and then, we could just grab our list and check that all jobs that are in there actually have a lock.
If we find any jobs that DON'T have a lock, it means the lock expired and our worker probably crashed.
In this case we would resubmit.
This would be the pseudo code for that:
loop do
  items = LRANGE(being_worked_on, 0, -1)
  items.each do |job|
    if !EXISTS("lock:" + job)
      puts "We found a job that didn't have a lock, resubmitting"
      LREM(being_worked_on, -1, job)
      LPUSH(up_for_grabs, job)
    end
  end
  sleep 60
end
You can set up a standard synchronized locking scheme in Redis with SETNX. Basically, you use SETNX to create a lock that everyone tries to acquire. To release the lock, you can DEL it, and you can also set an EXPIRE so the lock is released even if its holder crashes. There are other considerations here, but nothing out of the ordinary in setting up locking and critical sections in a distributed application.
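A hedged sketch of such a lock in Python with redis-py. SET with NX and EX (Redis 2.6.12+) folds SETNX and EXPIRE into one atomic step; the token check on release is a common refinement (my addition here) so you never delete a lock someone else has since acquired:

import uuid

import redis

r = redis.Redis()

# Compare-and-delete so only the current holder can release the lock.
RELEASE = """
if redis.call('GET', KEYS[1]) == ARGV[1] then
    return redis.call('DEL', KEYS[1])
end
return 0
"""

def acquire_lock(name, ttl_secs=30):
    """Try to take the lock; returns a token on success, None otherwise."""
    token = str(uuid.uuid4())
    if r.set("lock:" + name, token, nx=True, ex=ttl_secs):
        return token
    return None

def release_lock(name, token):
    """Release only if we still hold it, atomically via the Lua script."""
    return r.eval(RELEASE, 1, "lock:" + name, token)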
I want to delete all keys. I want everything wiped out and give me a blank database.
Is there a way to do this in Redis client?
With redis-cli:
FLUSHDB – Deletes all keys from the connection's current database.
FLUSHALL – Deletes all keys from all databases.
For example, in your shell:
redis-cli flushall
Heads up that FLUSHALL may be overkill. FLUSHDB is the one to use to flush a single database only. FLUSHALL will wipe out the entire server, as in every database on the server. Since the question was about flushing a database, I think this is an important enough distinction to merit a separate answer.
Answers so far are absolutely correct; they delete all keys.
However, if you also want to delete all Lua scripts from the Redis instance, you should follow it by:
SCRIPT FLUSH
The OP asks two questions; this completes the second question (everything wiped).
FLUSHALL
Remove all keys from all databases
FLUSHDB
Remove all keys from the current database
SCRIPT FLUSH
Remove all the scripts from the script cache.
If you're using the redis-rb gem then you can simply call:
your_redis_client.flushdb
You can use flushall in your terminal:
redis-cli> flushall
This method worked for me - it deletes everything in the currently connected database on your Jedis cluster.
public static void resetRedis() {
    jedisCluster = RedisManager.getJedis(); // your JedisCluster instance
    for (JedisPool pool : jedisCluster.getClusterNodes().values()) {
        try (Jedis jedis = pool.getResource()) {
            jedis.flushAll();
        } catch (Exception ex) {
            System.out.println(ex.getMessage());
        }
    }
}
One more option from my side:
In our production and pre-production databases there are thousands of keys. From time to time we need to delete some keys (by some mask), modify them by some criteria, etc. Of course, there is no way to do it manually from the CLI, especially with sharding (512 logical DBs in each physical one).
For this purpose I wrote a Java client tool that does all this work. For key deletion the utility can be very simple, with only one class:
public class DataCleaner {

    public static void main(String args[]) {
        String keyPattern = args[0];
        String host = args[1];
        int port = Integer.valueOf(args[2]);
        int dbIndex = Integer.valueOf(args[3]);
        Jedis jedis = new Jedis(host, port);
        int deletedKeysNumber = 0;
        if (dbIndex >= 0) {
            // A non-negative index means: clean just that logical database.
            deletedKeysNumber += deleteDataFromDB(jedis, keyPattern, dbIndex);
        } else {
            // Otherwise iterate over every configured logical database.
            int dbSize = Integer.valueOf(jedis.configGet("databases").get(1));
            for (int i = 0; i < dbSize; i++) {
                deletedKeysNumber += deleteDataFromDB(jedis, keyPattern, i);
            }
        }
        if (deletedKeysNumber == 0) {
            System.out.println("No keys with key pattern: " + keyPattern
                    + " were found in database with host: " + host);
        }
    }

    private static int deleteDataFromDB(Jedis jedis, String keyPattern, int dbIndex) {
        jedis.select(dbIndex);
        // Note: KEYS blocks the server while it scans; prefer SCAN on large databases.
        Set<String> keys = jedis.keys(keyPattern);
        for (String key : keys) {
            jedis.del(key);
            System.out.println("The key: " + key + " has been deleted from database index: " + dbIndex);
        }
        return keys.size();
    }
}
I find writing this kind of tool very easy; it takes no more than 5-10 minutes.
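A hedged alternative sketch in Python with redis-py: the same mask-based deletion done with SCAN, which iterates incrementally instead of blocking the server the way KEYS does on large databases. The pattern and batch size are illustrative:

import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def delete_by_pattern(pattern, batch_size=500):
    """Delete all keys matching pattern using a non-blocking SCAN loop."""
    deleted = 0
    batch = []
    # scan_iter wraps the SCAN cursor; count is a hint, not a guarantee.
    for key in r.scan_iter(match=pattern, count=batch_size):
        batch.append(key)
        if len(batch) >= batch_size:
            deleted += r.delete(*batch)
            batch.clear()
    if batch:
        deleted += r.delete(*batch)
    return deleted

print(delete_by_pattern("some:prefix:*"))  # illustrative pattern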
Use FLUSHALL ASYNC if you are on Redis 4.0.0 or greater; otherwise use FLUSHALL.
https://redis.io/commands/flushall
Note: everything that existed before FLUSHALL ASYNC was executed will be evicted. Changes made while FLUSHALL ASYNC is executing will remain unaffected.
FLUSHALL deletes all the keys of all existing databases.
For Redis versions > 4.0, FLUSHALL ASYNC is supported, which runs in a background thread without blocking the server.
https://redis.io/commands/flushall
FLUSHDB - deletes all the keys in the selected database.
https://redis.io/commands/flushdb
The time complexity of these operations is O(N), where N is the number of keys in the database.
The response from Redis will be the simple string OK.
Open redis-cli and type:
FLUSHALL
You can use FLUSHALL, which will delete all keys from every database, whereas FLUSHDB will delete all keys from the current database.
Stop Redis instance.
Delete RDB file.
Start Redis instance.
redis-cli -h <host> -p <port> flushall
This removes all data from the instance you are connected to (with the given host and port).
After you start the Redis server using service redis-server start --port 8000 or redis-server:
Use redis-cli -p 8000 to connect to the server as a client in a different terminal.
You can use either
FLUSHDB - Delete all the keys of the currently selected DB. This command never fails. The time-complexity for this operation is O(N), N being the number of keys in the database.
FLUSHALL - Delete all the keys of all the existing databases, not just the currently selected one. This command never fails. The time-complexity for this operation is O(N), N being the number of keys in all existing databases.
Check the documentation for ASYNC option for both.
If you are using Redis through its python interface, use these two functions for the same functionality:
def flushall(self):
"Delete all keys in all databases on the current host"
return self.execute_command('FLUSHALL')
and
def flushdb(self):
"Delete all keys in the current database"
return self.execute_command('FLUSHDB')
Sometimes you can also stop the redis-server and delete the RDB and AOF files, making sure no data can be reloaded.
Then start the redis-server again; now it's new and empty.
You can use FLUSHDB. For example:
List databases:
127.0.0.1:6379> info keyspace
# Keyspace
List keys
127.0.0.1:6379> keys *
(empty list or set)
Add one value to a key
127.0.0.1:6379> lpush key1 1
(integer) 1
127.0.0.1:6379> keys *
1) "key1"
127.0.0.1:6379> info keyspace
# Keyspace
db0:keys=1,expires=0,avg_ttl=0
Create other key with two values
127.0.0.1:6379> lpush key2 1
(integer) 1
127.0.0.1:6379> lpush key2 2
(integer) 2
127.0.0.1:6379> keys *
1) "key1"
2) "key2"
127.0.0.1:6379> info keyspace
# Keyspace
db0:keys=2,expires=0,avg_ttl=0
List all values in key2
127.0.0.1:6379> lrange key2 0 -1
1) "2"
2) "1"
Do FLUSHDB
127.0.0.1:6379> flushdb
OK
List keys and databases
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> info keyspace
# Keyspace
Your question seems to be about deleting all the keys in a database. In this case you should try the following:
Connect to Redis. You can use the command redis-cli (if running on port 6379); otherwise you will have to specify the port number as well.
Select your database (command: select {index})
Execute the command flushdb
If you want to flush keys in all databases, then you should try flushall.
You can use the following approach in Python:
def redis_clear_cache(self):
try:
redis_keys = self.redis_client.keys('*')
except Exception as e:
# print('redis_client.keys() raised exception => ' + str(e))
return 1
try:
if len(redis_keys) != 0:
self.redis_client.delete(*redis_keys)
except Exception as e:
# print('redis_client.delete() raised exception => ' + str(e))
return 1
# print("cleared cache")
return 0
This works for me: redis-cli KEYS \* | xargs --max-procs=16 -L 100 redis-cli DEL
It lists all keys in Redis, then passes them via xargs to redis-cli DEL, using at most 100 keys per command and running 16 commands at a time. This is very fast and useful when FLUSHDB or FLUSHALL are unavailable for security reasons, for example when using Redis from Bitnami in Docker or Kubernetes. Also, it doesn't require any additional programming language, and it's just one line.
If you want to clear Redis on Windows, find redis-cli in C:\Program Files\Redis and run the FLUSHALL command.
It's better if you have RDM (Redis Desktop Manager).
You can connect to your Redis server by creating a new connection in RDM.
Once it's connected, you can check the live data, and you can also play around with any Redis command.
To open a CLI in RDM, right-click on the connection; you will see a console option, and clicking it opens a new console window at the bottom of RDM.
Coming back to your question: FLUSHALL is the command; you can simply type FLUSHALL in the Redis CLI.
Moreover, if you want to know about any Redis command and its proper usage, see https://redis.io/commands.
There are different approaches. If you want to do this remotely, issue FLUSHALL to that instance through the command-line tool redis-cli or whatever other tool, e.g. telnet or a programming-language SDK. Or just log in to that server, kill the process, and delete its dump.rdb and appendonly.aof files (back them up before deletion).
If you are using Java, then from the documentation you can use any one of these, based on your use case.
/**
 * Remove all keys from all databases.
 *
 * @return String simple-string-reply
 */
String flushall();

/**
 * Remove all keys asynchronously from all databases.
 *
 * @return String simple-string-reply
 */
String flushallAsync();

/**
 * Remove all keys from the current database.
 *
 * @return String simple-string-reply
 */
String flushdb();

/**
 * Remove all keys asynchronously from the current database.
 *
 * @return String simple-string-reply
 */
String flushdbAsync();
Code:
RedisAdvancedClusterCommands syncCommands = // get sync() or async() commands
syncCommands.flushdb();
Read more: https://github.com/lettuce-io/lettuce-core/wiki/Redis-Cluster
For anyone wondering how to do this in C# it's the same as the answer provided for Python for this same question.
I am using StackExchange.Redis v2.2.88 for a .NET 5 project. I only need to clear my keys for integration testing, and I have no need to do this in production.
I checked what is available in IntelliSense, and I don't see a stock way to do this with the existing API. I imagine this is intentional and by design. Luckily, the API does expose an Execute method.
I tested this by doing the following:
Opened a command window. I am using Docker, so I did it through Docker.
Typed redis-cli, which starts the CLI.
Typed KEYS *, which shows me all of my keys, so I can verify they exist before and after executing the code below:
// Don't abuse this; use with caution
var cache = ConnectionMultiplexer.Connect(
    new ConfigurationOptions
    {
        EndPoints = { "localhost:6379" }
    });
var db = cache.GetDatabase();
db.Execute("flushdb");
Typed KEYS * again and verified that it's empty.
Hope this helps anyone looking for it.