We have encountered a weird Redis issue. Here is what I did:
1. I upgraded my Redis from an old version to a new one.
2. I brought up Redis with clean data.
3. I copied the previous RDB file into the data directory.
4. I restarted Redis to load the data.
Then I found that my data had been wiped out in step 4. Has anyone encountered this? What could be the possible root cause?
We suspect that Redis was still getting new requests at the time. Could that be a possible issue?
Before shutting down, Redis persists its data to disk (unless persistence is completely disabled in the config), so you should not try such "hot swapping" of the RDB file while the Redis server is running - on exit it simply overwrites the file you copied in. Instead, stop the Redis server first, replace the RDB file, and then start it again so the file gets loaded (and later saved back properly).
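As a minimal sketch of that sequence (assuming a systemd-managed Redis and the default data directory /var/lib/redis - adjust the service name and paths for your setup):
sudo systemctl stop redis
sudo cp /path/to/backup/dump.rdb /var/lib/redis/dump.rdb
sudo chown redis:redis /var/lib/redis/dump.rdb
sudo systemctl start redis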
Is there any way to schedule Redis backups at a specific time of day (e.g. 3:00 AM GMT) - preferably via a setting in the accompanying conf file?
I already understand that one can set a backup rule in the Redis configuration (e.g. save if at least Y keys have changed in X seconds).
But how does one schedule said backup at a particular time of day? Something basic but effective would be great. In case it matters, my Redis version is 5.0.3.
As far as I know this is currently not possible from inside Redis, but it's achievable using crontab. Here is a short example:
Create a backup script file, /tmp/backup.sh, containing:
#!/bin/sh
echo save | redis-cli >> /tmp/redis-backup.log
If using sockets, the above would be:
echo save | redis-cli -s /var/run/redis.sock >> /tmp/redis-backup.log
The socket location in your system may vary.
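If you're not sure where the socket is, and the TCP port is still reachable, you can ask the running instance (this assumes unixsocket is actually set in the conf):
redis-cli CONFIG GET unixsocket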
Next, give execute permission to the script:
chmod +x /tmp/backup.sh
Finally, make an entry in crontab: crontab -e
0 3 * * * /tmp/backup.sh
This will run backup.sh at exactly 3 AM (in the server's local time zone) every day.
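To confirm the job actually produced a snapshot, you can ask Redis for the Unix timestamp of its last successful save:
redis-cli LASTSAVE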
In case you want to disable the save rules configured in the conf (without restarting the Redis instance), the best way is to log into redis-cli and issue CONFIG SET save "". Double-check that it worked via CONFIG GET save, and don't forget to change the save settings in the relevant conf file as well, or the old rules will come back on the next restart. Lastly, it's wiser to use BGSAVE instead of SAVE when dealing with a Redis instance in production, since SAVE blocks the server while the RDB file is written.
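A quick sketch of those commands (the empty string removes all automatic save rules for the running instance):
redis-cli CONFIG SET save ""
redis-cli CONFIG GET save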
For more, check out these links:
How To Back Up and Restore Your Redis Data
Cron Scheduler
How To Start/Stop/Restart Cron Service In Linux
I made a change to redis.conf, but I don't see the change applied to the running instance. Do I need to restart Redis in order to pick up changes?
Yes, you have to restart the server to pick up changes from the redis.conf file. Alternatively, you can change many settings at runtime using the CONFIG SET command.
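For example, a minimal sketch of changing a setting at runtime (maxmemory is just an illustration - not every parameter can be changed this way, and the change is not written back to redis.conf):
redis-cli CONFIG SET maxmemory 256mb
redis-cli CONFIG GET maxmemory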
Read more about them at the following links:
http://redis.io/topics/config
http://redis.io/commands/config-set
I have a Redis server on my local machine, and I copied its contents (the dump.rdb produced by BGSAVE) over to my other machine. Everything works fine at first, but after some inactivity my keys keep getting deleted, and I end up with a 433KB dump file - my dump file gets replaced. What am I doing wrong? I have 3.0.3 locally and 2.8.4 on my other machine. I am following the steps from this [link][1]. I couldn't figure out this issue. I checked the server logs and there's no error there, only those BGSAVEs every 900/300 seconds. Please help me.
Most commonly this happens because:
Your Redis instance is open to a public network and isn't using password authentication - attackers can do anything from deleting your keys to compromising the server.
All your keys are set to expire.
You are using an allkeys-* eviction policy (e.g. allkeys-lru), maxmemory is set and you've reached it.
You have a rogue piece of code that deletes them.
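A few quick checks against the causes above (these commands only read state, so they're safe to run on the affected instance; INFO keyspace shows, per database, how many keys exist and how many of them have a TTL set):
redis-cli CONFIG GET maxmemory
redis-cli CONFIG GET maxmemory-policy
redis-cli INFO keyspace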
I'm very new to Riak. I have a cluster with 5 nodes and I want to back up the data on the cluster. I ran the following command to back up the data.
[root@PCPRIAK33 local]# riak-admin backup localhost riak /var/local/temp all
However I am getting the following error.
Attempting to restart script through sudo -H -u riak
{"init terminating in do_boot",{{nocatch,{could_not_reach_node,localhost}},[{riak_kv_backup,ensure_connected,1,[{file,"src/riak_kv_backup.erl"},{line,171}]},{riak_kv_backup,backup,3,[{file,"src/riak_kv_backup.erl"},{line,40}]},{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,572}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
init terminating in do_boot ()
Can you please help me out? :)
Thanks.
I'll answer your immediate question (how to use riak-admin backup) first, but see the comments on preferred methods of backing up at the end.
The command is:
riak-admin backup <node name> <erlang cookie> <file name with path> all
The node name you can find in your riak vm.args file (look for the line that looks like -name riak@127.0.0.1). It'll be of the form riak@xx.xx.xx.xx with the IP address. So, on my local machine, a single node is named riak@127.0.0.1.
The Erlang cookie is also found in the vm.args file (the -setcookie line); it will most likely be riak, the default.
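For reference, the relevant vm.args lines would look something like this (the IP and cookie value here are only illustrative - use whatever your file actually contains):
-name riak@10.0.0.5
-setcookie riak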
The file name parameter should be a fully-qualified path to the actual file name (meaning, you can't give it just a directory name). The filename and extension are arbitrary. So, I would use something like cluster_backup.riak.
So, to put it all together, your backup command should look like:
riak-admin backup riak@<your node ip> riak /var/local/temp/cluster_backup.riak all
Now, having said all that, I don't recommend using the riak-admin backup and restore commands to back up your whole cluster, for several reasons. One, it stores every replica of every object. Meaning, if you're running with the default replica value of n=3, you will be storing 3 copies of each object in your backup file.
Two, the code invoked by that command is single-threaded and not connection-pooled, so all in all it's going to be SLOW to back up and restore.
Instead, I recommend one of the following approaches:
Take filesystem level snapshots of the data directories of each node. This is the approach currently recommended by Basho, and detailed here: http://docs.basho.com/riak/latest/ops/running/backups/
If you definitely want a "logical" backup (meaning, an export of the objects contained in the cluster), you can use an experimental standalone tool such as the Riak Data Migrator (but see the limitations in the Readme).
I recommend testing out / timing each of these approaches, to see which one is faster for your situation.
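For the filesystem-snapshot approach, a minimal per-node sketch (this assumes the default data directory /var/lib/riak and that briefly stopping one node at a time is acceptable; LVM or filesystem-level snapshots avoid the downtime but need more setup):
riak stop
tar -czf /backups/riak-$(hostname)-$(date +%F).tar.gz /var/lib/riak
riak start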