Is there any way to schedule redis back-ups at a specific time of day (e.g. 3:00 AM GMT) - preferably via a setting in the accompanying conf file?
I already understand that one can set a save rule in the Redis configuration (e.g. save after X seconds if at least Y keys have changed).
But how does one schedule said backup at a particular time of day? I would love to know something basic but effective. In case it matters, my Redis version is 5.0.3.
As far as I know, this is currently not possible from inside Redis, but it's achievable using crontab. Here is a short example:
Create a backup script file, e.g. /tmp/backup.sh, containing:
#!/bin/sh
echo save | redis-cli >> /tmp/redis-backup.log
If using sockets, the above would be:
echo save | redis-cli -s /var/run/redis.sock >> /tmp/redis-backup.log
The socket location in your system may vary.
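If you would rather take a non-blocking snapshot and keep dated copies, /tmp/backup.sh could instead look something like the sketch below. The dump path, backup directory and 5-second poll interval are assumptions you will need to adapt to your installation:

#!/bin/sh
# Assumed locations -- adjust to your installation.
RDB_PATH=/var/lib/redis/dump.rdb
BACKUP_DIR=/var/backups/redis

mkdir -p "$BACKUP_DIR"

# Remember the timestamp of the last completed save, then ask Redis for a
# background save so clients are not blocked.
LAST=$(redis-cli LASTSAVE)
redis-cli BGSAVE >> /tmp/redis-backup.log

# Wait until LASTSAVE advances, meaning the background save has finished.
while [ "$(redis-cli LASTSAVE)" = "$LAST" ]; do
    sleep 5
done

# Keep a dated copy of the snapshot.
cp "$RDB_PATH" "$BACKUP_DIR/dump-$(date +%Y%m%d-%H%M%S).rdb"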
Next, give execute permission to the script:
chmod +x /tmp/backup.sh
Finally, make an entry in crontab: crontab -e
0 3 * * * /tmp/backup.sh
This will run backup.sh at exactly 3:00 AM every day.
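For reference, the five fields before the command are minute, hour, day of month, month, and day of week:

# minute  hour  day-of-month  month  day-of-week  command
  0       3     *             *      *            /tmp/backup.sh

Note that cron typically uses the server's local timezone, so for a 3:00 AM GMT run the server clock must be on GMT/UTC (or shift the hour field accordingly; some cron implementations also support a CRON_TZ variable).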
In case you want to disable Redis's automatic saving in the conf (without restarting the Redis instance), the best way is to log into redis-cli and issue CONFIG SET save "". Double-check that it worked via CONFIG GET save. Don't forget to change the save settings in the relevant conf file as well, otherwise they will come back after the next restart. Lastly, it's wiser to use BGSAVE instead of SAVE when dealing with a Redis instance in production, since SAVE blocks the server while the snapshot is written, whereas BGSAVE does the work in a forked child.
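From the shell, those runtime changes look roughly like this (the comments describe the expected behaviour on a default setup):

redis-cli CONFIG SET save ""   # disable automatic RDB snapshots at runtime
redis-cli CONFIG GET save      # should now return an empty save rule
redis-cli BGSAVE               # fork a child to write dump.rdb without blocking clients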
For more, check out these links:
How To Back Up and Restore Your Redis Data
Cron Scheduler
How To Start/Stop/Restart Cron Service In Linux
In redis, two of the eviction policies, allkeys-lru and volatile-lru, evict keys based on access time. So, this information must exist somewhere. Is it possible for me to query the access time of a key? Or, better yet, page through a sorted list of keys based on access time?
Look at OBJECT IDLETIME - it gives the time (in seconds) for which the key has been idle, i.e. neither read nor written.
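For example (note that the idle time is approximate, is reset by reads and writes, and is not available when an LFU maxmemory-policy is selected). There is no built-in sorted listing, so the SCAN-based loop below is only a rough sketch and assumes key names contain no whitespace:

redis-cli OBJECT IDLETIME somekey     # seconds since the key was last accessed

# Rough way to list keys ordered by idle time:
redis-cli --scan | while read -r key; do
    echo "$(redis-cli OBJECT IDLETIME "$key") $key"
done | sort -n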
As guided by @Itamar Haber, the way to disable certain commands is by using redis.conf:
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command DEBUG ""
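After a restart with those lines in place, the disabled commands should fail with an unknown-command error (the exact wording varies by Redis version):

redis-cli FLUSHALL
(error) ERR unknown command 'FLUSHALL'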
As you are using Redis as a service on Heroku, you would need admin access to the server's configuration to do this.
Hope this helps!
I have my Redis server on my local machine, and when I copy its contents (the dump.rdb produced by BGSAVE) to my other machine, everything works fine at first, but after some inactivity my keys keep getting deleted and I end up with a 433KB dump file, my original dump file having been replaced. What am I doing wrong? I have 3.0.3 locally and 2.8.4 on my other machine. I am following the steps from this [link][1]. I couldn't figure out this issue. I checked the server logs and there are no errors there, only the bgsaves every 900/300 seconds. Please help me.
Most commonly this happens because of one of the following (see the quick checks sketched after this list):
Your Redis instance is open to a public network and isn't using password authentication - crackers can do anything from deleting your keys to compromising the server.
All your keys are set to expire
You are using an allkeys eviction policy (e.g. allkeys-lru), your maxmemory is set and you've reached it.
You have a rogue piece of code that deletes them.
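A few quick checks covering those cases (run against the affected instance):

redis-cli CONFIG GET requirepass            # empty means no password -- risky on a public network
redis-cli INFO stats | grep -E 'expired_keys|evicted_keys'
redis-cli CONFIG GET maxmemory              # 0 means no memory limit
redis-cli CONFIG GET maxmemory-policy       # allkeys-* policies evict keys with no TTL as well
redis-cli MONITOR | grep -iE 'del|flush'    # watch live traffic briefly -- MONITOR is expensive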
I have been using Redis for the last 12 months without any issue, but for the last 30 days the database has been getting emptied without us knowing how, and we couldn't find any logs regarding this. It even flushes all the data out at random after restoration.
We tried the following steps to resolve this, but with zero results:
We checked the Redis logs
Monitored Redis using the MONITOR command
We tried renaming the critical commands through the config, but Redis is down after the config change; below is an example command:
rename-command FLUSHDB e0cc96ad2eab73c2c347011806a76b73
We have gone mad without knowing what is happening. Any help is appreciated.
Redis Version : 2.8.17
Running under Debian Linux
Renaming the command through the config file will work in this case.
You have to place the same rename-command line inside the config file (redis.conf):
rename-command FLUSHDB e0cc96ad2eab73c2c347011806a76b73
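Note that rename-command is only read from the config file at startup (it cannot be applied with CONFIG SET), so after editing redis.conf restart the server, e.g. (the service name is an assumption and may differ on your system):

sudo service redis-server restart

Once restarted, the old FLUSHDB name should return an unknown-command error, while the renamed one keeps working.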
I have a bash file that contains wget commands to download over 100,000 files totaling around 20gb of data.
The bash file looks something like:
wget http://something.com/path/to/file.data
wget http://something.com/path/to/file2.data
wget http://something.com/path/to/file3.data
wget http://something.com/path/to/file4.data
And there are exactly 114,770 rows of this. How reliable would it be to SSH into a server I have an account on and run this? Would my SSH session time out eventually? Would I have to stay SSH'd in the entire time? What if my local computer crashed or got shut down?
Also, does anyone know how many resources this would take? Am I crazy to want to do this on a shared server?
I know this is a weird question, just wondering if anyone has any ideas. Thanks!
Use
nohup ./scriptname &> logname.log &
This will ensure that:
The process will continue even if the SSH session is interrupted
You can monitor it while it is in action (e.g. by tailing logname.log)
I would also recommend printing a progress marker at regular intervals; it will be good for log analysis, e.g. echo "1000 files copied" - see the sketch below.
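A minimal sketch of such progress markers, assuming the URLs are kept in a plain list (urls.txt is a hypothetical name) rather than as individual wget lines:

#!/bin/sh
# Hypothetical: urls.txt holds one URL per line.
count=0
while read -r url; do
    wget -q "$url"
    count=$((count + 1))
    # Progress marker every 1000 files, handy for later log analysis.
    if [ $((count % 1000)) -eq 0 ]; then
        echo "$count files downloaded"
    fi
done < urls.txt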
As far as resource utilisation is concerned, it depends entirely on the system, and mostly on the network characteristics. Theoretically you can calculate the time from just the data size and bandwidth, but in real life delays, latencies, and data losses come into the picture.
So make some assumptions, do some maths, and you'll get the answer :)
Depends on the reliability of the communication medium, hardware, ...!
You can use screen to keep it running while you disconnect from the remote computer.
You want to disconnect the script from your shell and have it run in the background (using nohup), so that it continues running when you log out.
You also want some kind of progress indicator, such as a log file that records every file that was downloaded, along with all the error messages. By default, nohup sends stdout and stderr into a file (nohup.out) if you don't redirect them yourself.
With such a file, you can pick up broken downloads and aborted runs later on.
Give it a test-run first with a small set of files to see if you got the command down and like the output.
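If you can turn the script into a plain list of URLs (an assumption about your setup), wget can read it directly and write its own log, which also makes resuming a broken run easier:

nohup wget --continue --tries=3 --input-file=urls.txt --output-file=wget.log &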
I suggest you detach it from your shell with nohup.
$ nohup myLongRunningScript.sh > script.stdout 2>script.stderr &
$ exit
The script will run to completion - you don't need to be logged in throughout.
Do check for any options you can give wget to make it retry on failure.
If it is possible, generate MD5 checksums for all of the files and use them to check that they were all transferred correctly.
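For example (checksums.md5 is a hypothetical file you would need to obtain or generate from the source):

# Retry a failed download a few times, waiting between attempts, and resume partial files.
wget --tries=5 --waitretry=30 --continue http://something.com/path/to/file.data

# Verify the downloads afterwards against a list of known checksums.
md5sum -c checksums.md5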
Start it with
nohup ./scriptname &
and you should be fine.
I would also recommend that you log the progress so that you can find out where it stopped, if it does.
wget url >> logfile.log 2>&1
could be enough (the 2>&1 is needed because wget writes its progress messages to stderr).
To monitor progress live you could:
tail -f logfile.log
It may be worth looking at an alternative technology like rsync. I've used it on many projects and it works very, very well.
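A minimal sketch, assuming you have SSH (or rsync daemon) access to the machine holding the files rather than only HTTP, and that the remote path shown is hypothetical:

# -a preserves file attributes, -v is verbose, -P keeps partial files and shows progress.
rsync -avP user@something.com:/path/to/ ./downloads/

If the transfer is interrupted, re-running the same command only fetches what is missing or incomplete.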