Redis wiped out the copied RDB file during startup

We have encountered a weird Redis issue:
1. I upgraded my Redis from an old version to a new one.
2. I brought up Redis with clean data.
3. I copied the previous RDB file into the data directory.
4. I restarted Redis to load the data.
Then I found that my data had been wiped out in step 4. Has anyone encountered this? What could be the possible root cause?
We suspect that Redis was receiving new requests during this process. Could that be the issue?

Before shutting down, Redis persists its data to disk (unless persistence is completely disabled in the config), so you should not attempt such "hot swapping" of the RDB file while the Redis server is running - it simply overwrote the file on exit in your case. Instead, stop the Redis server first, then replace the RDB file so it gets loaded on the next start (and later saved back properly).
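A minimal sketch of the safe sequence, assuming a systemd-managed service named redis and the default data directory /var/lib/redis (adjust both to your setup):
$ sudo systemctl stop redis                          # stop first, so Redis cannot overwrite the file on exit
$ sudo cp /backup/dump.rdb /var/lib/redis/dump.rdb   # drop in the RDB from the old version
$ sudo chown redis:redis /var/lib/redis/dump.rdb     # Redis must be able to read the file
$ sudo systemctl start redis                         # Redis loads the RDB on startup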

Related

How to restore Redis db?

I am following the documents about how to restore Redis, and I am at a complete loss at this point.
The document says
127.0.0.1:6379> SAVE
OK
This command will create the dump.rdb file in your redis directory.
Which it does; it creates the exact same file for me in /usr/lib/redis, which is alright I guess.
To restore Redis data, just move the Redis backup file (dump.rdb) into your Redis directory and start the server. To find your Redis directory, the CONFIG command can be used: the CONFIG GET command reads the configuration parameters of a running Redis server.
127.0.0.1:6379> CONFIG get dir
1) "dir"
2) "/var/lib/redis/6379"
Here is where it makes no sense to me. The .rdb file for me is already saved in /var/lib/redis/, and I have no subfolder under it. I don't understand what "dir" is doing there and how I can restore my database.
Please enlighten me. I don't seem to be able to save it, or perhaps I cannot find it.
Okay, so basically: the RDB file saved in /var/lib/redis/ is written every time the server stops, and it can be copied to another folder as a backup and used as the restore point whenever the Redis service starts.
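A minimal backup sketch, assuming redis-cli can reach the server (paths are placeholders, adjust to your setup):
$ redis-cli SAVE                                # force a synchronous snapshot to dump.rdb
$ DIR=$(redis-cli CONFIG GET dir | tail -n 1)   # the reply's second line is the directory itself
$ cp "$DIR/dump.rdb" /backup/dump.rdb           # copy the snapshot somewhere safe
To restore, stop the server, copy the backup back into that directory, and start the server again so it loads the file.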

Redis save disabled but rdb files are still generated

I have saving disabled in my conf file:
appendonly no
save ""
but dump.rdb files are still generated.
https://groups.google.com/g/redis-db/c/ILyp4y1em5w/m/PHKlhrh5gQIJ
This thread suggests that these RDB files might be created for master-slave replication.
But they should get removed on restart or flush; however, that doesn't happen.
What could be causing it?
I am using Redis 5.
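One way to check whether replication is what triggers the dumps is to inspect the persistence and replication state; a sketch using standard redis-cli commands:
$ redis-cli CONFIG GET save               # confirm snapshotting is really off (empty value)
$ redis-cli INFO persistence | grep rdb_  # rdb_last_save_time etc. show when the last dump happened
$ redis-cli INFO replication | grep -E 'role|connected_slaves'   # a master with connected replicas forks RDB dumps for full syncs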

Optimize a redis appendonly file

I have an appendonly.aof file that has grown too large (1.5 GB and 29,558,054 lines).
When I try to load Redis, it hangs on "Loading data into memory" for what seems like all day (it still hasn't finished).
Is there anything I can do to optimize this file, as it likely contains many duplicate operations (like deleting the same record)?
Or anything I can do to see progress, so I know whether I'm waiting for nothing, or how long it will take, before I try to restore an older backup?
With Redis 4+ you can use the mixed format to optimize the append-only file by setting aof-use-rdb-preamble to yes.
With this setting in place, Redis dumps the data in RDB format into the AOF file on every BGREWRITEAOF call, which you can verify from the AOF file's contents, which start with the REDIS keyword.
On restart, with this REDIS keyword in the AOF file and aof-use-rdb-preamble enabled, Redis will load the RDB preamble followed by the remaining AOF contents.
You can configure your Redis server based on this.
And if you are using Docker, you should be careful about how frequently your container is being restarted.
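A minimal sketch of enabling this and compacting the file on a live server, assuming Redis 4+ and that you run it from the Redis data directory:
$ redis-cli CONFIG SET aof-use-rdb-preamble yes   # enable the RDB preamble for future rewrites
$ redis-cli BGREWRITEAOF                          # rewrite the AOF from the in-memory dataset, dropping redundant operations
$ head -c 5 appendonly.aof                        # prints "REDIS" once the rewrite has used the RDB preamble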

Redis Server after loading from dump.rdb deleting all keys

I have my Redis server on my local machine, and when I copy its contents (dump.rdb via BGSAVE) and put it on my other machine, everything works fine. But after some inactivity my keys keep getting deleted, and I end up with a 433 KB dump file, my dump file having been replaced. What am I doing wrong? I have 3.0.3 locally and 2.8.4 on my other machine. I am following the steps from this [link][1]. I couldn't figure out this issue. I checked the server logs and there are no errors there, just the BGSAVEs every 900/300 seconds. Please help me.
Most commonly this happens for one of the following reasons (a few diagnostic commands follow the list):
Your Redis instance is open to a public network and isn't using password authentication - attackers can do anything, from deleting your keys to compromising the server.
All your keys are set to expire.
You are using one of the allkeys-* eviction policies (such as allkeys-lru), your maxmemory is set, and you've reached it.
You have a rogue piece of code that deletes them.
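A sketch of how to check each of these, using only standard redis-cli commands:
$ redis-cli CONFIG GET requirepass        # an empty value on a publicly bound instance means it is open
$ redis-cli CONFIG GET maxmemory          # 0 means no memory limit is set
$ redis-cli CONFIG GET maxmemory-policy   # allkeys-* policies evict keys even without a TTL
$ redis-cli INFO stats | grep -E 'expired_keys|evicted_keys'   # counters for expirations and evictions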

Redis / Create new .rdb file while Redis is still running

I use Redis, and today I started to get the following exception:
Can't save in background: fork: Cannot allocate memory
As I understand it, this error appears because my DB is too big and there is not enough memory for the fork.
So I started to delete tables, but the problem is that Redis doesn't succeed in writing this to disk, and in fact the on-disk snapshot doesn't know about these changes.
I decided to create a new .rdb file and change the file path (in /etc/redis.config) to point to the new RDB file:
dbfilename dump_cache_new.rdb
Then I will reload all the data that is critical to me (I can do that - it's data from my file system) and restart the Redis service.
The problem is that I can't create this file, because Redis is currently running with the old path (and Redis has to keep running, because another process takes some critical data from it).
How can I create this dump_cache_new.rdb file while Redis is still running with the old path?
If you want to change the snapshot file name (or most other configuration parameters) on a running instance of Redis, use the CONFIG SET command. Based on that documentation page, it looks like dir and dbfilename are both parameters that can be set on a live instance.
Another option to consider is using the synchronous SAVE command, which doesn't require a fork.
You almost never want to call SAVE in production environments where it will block all the other clients. Instead usually BGSAVE is used. However in case of issues preventing Redis to create the background saving child (for instance errors in the fork(2) system call), the SAVE command can be a good last resort to perform the dump of the latest dataset.
It's a pretty severe operation, but if you're already at the point of deleting data to make the save work, this would at least allow you to take a snapshot first.
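A minimal sketch of both suggestions against the live instance, reusing the asker's filename (all standard commands):
$ redis-cli CONFIG SET dbfilename dump_cache_new.rdb   # future snapshots go to the new file; no restart needed
$ redis-cli BGSAVE                                      # try the forked save first
$ redis-cli SAVE                                        # last resort: synchronous, blocks all clients, but needs no fork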