I have save disabled in my conf file:
appendonly no
save ""
Yet dump.rdb files are still being generated.
https://groups.google.com/g/redis-db/c/ILyp4y1em5w/m/PHKlhrh5gQIJ
This thread suggests that these RDB files might be created for master-slave replication, and that they should be removed on restart or flush. However, that doesn't happen for me. What could be causing it?
I am using Redis 5.
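For what it's worth, a quick way to confirm that the running server actually picked up those settings (a minimal check, assuming a local instance on the default port):

127.0.0.1:6379> CONFIG GET save
1) "save"
2) ""
127.0.0.1:6379> CONFIG GET appendonly
1) "appendonly"
2) "no"

If both are in effect, a replica performing a full resync is a likely remaining culprit: unless diskless replication (repl-diskless-sync) is enabled, the master writes dump.rdb to disk while serving a full sync, even with save "".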
I am following the documentation on how to restore Redis, and I am at a complete loss at this point.
The document says:
127.0.0.1:6379> SAVE
OK
This command will create the dump.rdb file in your redis directory.
Which it does: it creates exactly that file for me, in /usr/lib/redis, which is alright I guess.
To restore Redis data, just move the Redis backup file (dump.rdb) into your Redis directory and start the server. To get your Redis directory, the CONFIG command can be used. The CONFIG GET command is used to read the configuration parameters of a running Redis server.
127.0.0.1:6379> CONFIG get dir
1) "dir"
2) "/var/lib/redis/6379"
Here is where it makes no sense to me. The .rdb file for me is already saved in /var/lib/redis/, and I have no such subfolder. I don't understand what "dir" is doing there and how I can restore my database.
Please enlighten me. Either I am unable to save it, or perhaps I just cannot find it.
Okay, so basically: the RDB file saved in /var/lib/redis/ is rewritten every time the server stops, and a copy of it can be moved to another folder and used as a restore point whenever the Redis service starts.
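To illustrate, a backup along those lines might look like this (a sketch; the paths are assumptions based on the output above):

redis-cli CONFIG GET dir          # where the server reads/writes its RDB file
redis-cli CONFIG GET dbfilename   # usually "dump.rdb"
redis-cli BGSAVE                  # take a fresh snapshot in the background
cp /var/lib/redis/6379/dump.rdb /backup/dump-$(date +%F).rdb

Restoring is the reverse: stop the server, copy the backup into the directory reported by CONFIG GET dir, and start it again.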
I have an appendonly.aof file that has grown too large (1.5 GB and 29,558,054 lines).
When I try to start Redis, it hangs on "Loading data into memory" for what seems like all day (it still hasn't finished).
Is there anything I can do to optimize this file, as it likely contains many duplicate operations (like deleting the same record)?
Or anything I can do to see progress, so I know whether I'm waiting for nothing and how long it will take, before I try to restore an older backup?
With Redis 4+ you can use the mixed format to optimize the append-only file by setting aof-use-rdb-preamble to yes.
With this setting in place, Redis dumps the data in RDB format into the AOF file on every BGREWRITEAOF call, which you can verify from the AOF file's contents: it starts with the REDIS magic string.
On restart, when the AOF file starts with REDIS and aof-use-rdb-preamble is enabled, Redis loads the RDB preamble first, followed by the remaining AOF contents.
You can configure your Redis server accordingly.
And if you are using Docker, be careful about how frequently your container is being restarted.
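As a sketch (assuming Redis 4+ on a local instance; you would also set aof-use-rdb-preamble yes in redis.conf so it survives restarts):

127.0.0.1:6379> CONFIG SET aof-use-rdb-preamble yes
OK
127.0.0.1:6379> BGREWRITEAOF
Background append only file rewriting started

$ head -c 5 appendonly.aof   # prints REDIS once the rewrite has finished

The rewrite compacts all those duplicate operations into a single RDB-format snapshot, which is much faster to load than replaying tens of millions of AOF commands.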
We have encountered a weird Redis issue.
1. I upgrade my Redis from an old version to a new one.
2. I bring up Redis with clean data.
3. I copy the previous RDB file into the data directory.
4. I restart Redis to load the data.
Then, I find that my data is wiped out at step 4. Have any of you encountered this? What could be the possible root cause?
We suspect that Redis is receiving new requests during this process. Could that be a possible cause?
Before shutting down, Redis persists its data to disk (unless persistence is completely disabled in the config), so you should not attempt such "hot swapping" of the RDB file while the Redis server is running - it simply overwrites the file on exit. Instead, stop the Redis server first, then replace the RDB file so it gets loaded (and later saved back properly).
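A sketch of the safe sequence (the service name and paths are assumptions; adjust to your setup):

sudo systemctl stop redis                         # stop first, so Redis cannot overwrite the file on exit
sudo cp /backup/dump.rdb /var/lib/redis/dump.rdb
sudo chown redis:redis /var/lib/redis/dump.rdb
sudo systemctl start redis                        # the copied RDB is loaded on startup

Also make sure appendonly is set to no for this to work, since an existing AOF file takes priority over the RDB at startup.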
RDB files hold snapshot data, and AOF files hold appended commands. So why does Redis not load the RDB file first, and then replay the AOF commands on top of it?
The loading code: loadDataFromDisk
From the Redis docs:
It is possible to combine both AOF and RDB in the same instance. Notice that, in this case, when Redis restarts the AOF file will be used to reconstruct the original dataset since it is guaranteed to be the most complete.
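A quick way to see which path loadDataFromDisk will take on your instance (a minimal check, assuming a local server; output shown for an AOF-enabled instance):

127.0.0.1:6379> CONFIG GET appendonly
1) "appendonly"
2) "yes"

With appendonly yes and an AOF file present, Redis replays only the AOF at startup and ignores dump.rdb. With the RDB preamble enabled, that AOF itself begins with an RDB snapshot, which effectively gives the "load RDB first, then the appended commands" behavior the question asks about.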
I have my Redis server on my local machine. When I copy the dump.rdb produced by BGSAVE to my other machine, everything works fine at first, but after some inactivity my keys keep getting deleted, and my dump file ends up replaced by a 433 KB one. What am I doing wrong? I have 3.0.3 locally and 2.8.4 on the other machine. I am following the steps from this [link][1], but I couldn't figure out the issue. I checked the server logs and there are no errors there, only the BGSAVEs at the 900/300-second save points. Please help me.
Most commonly this happens because of one of the following:
Your Redis instance is open to a public network and isn't using password authentication - crackers can do anything, from deleting your keys to compromising the server.
All your keys are set to expire.
You are using an allkeys-* eviction policy (e.g. allkeys-lru), maxmemory is set, and you've reached it.
You have a rogue piece of code that deletes them.
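Some quick checks to narrow it down (a sketch, assuming a local instance; the key name is a placeholder):

redis-cli CONFIG GET requirepass       # empty string means no authentication
redis-cli CONFIG GET maxmemory
redis-cli CONFIG GET maxmemory-policy  # allkeys-* policies may evict keys without a TTL
redis-cli INFO stats | grep evicted_keys
redis-cli TTL some:key                 # -1 means no expiry is set on the key

Given the old 2.8.4 server, the first point is worth ruling out first if that machine is reachable from the internet.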