Create new .rdb file while Redis is still running

I use Redis, and today I started to get the following exception:
Can't save in background: fork: Cannot allocate memory
As I understand it, this error appears because my DB is too big and there is not enough memory for the background save process to fork.
So I started to delete tables, but the problem is that Redis doesn't manage to write the changes to disk, and in fact it doesn't know about them.
I decided to create a new .rdb file and change the file name in /etc/redis.config to point to the new RDB file:
dbfilename dump_cache_new.rdb
Then I will reload all the data that is critical to me (I can do that - it's data from my file system), and restart the Redis service.
The problem is that I can't create this file, because Redis is currently running with the old path (and Redis has to keep running, because another process reads some critical data from it).
How can I create this dump_cache_new.rdb file, while redis is still running with the old path?

If you want to change the snapshot file name (or most other configuration parameters) on a running instance of Redis, use the CONFIG SET command. Based on that documentation page, it looks like dir and dbfilename are both parameters that can be set on a live instance.
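For example, a minimal redis-cli sketch of that change on the question's setup (dump_cache_new.rdb is the file name from the question; verify the current values with CONFIG GET first):
redis-cli CONFIG GET dir                             # directory the snapshot is written to
redis-cli CONFIG SET dbfilename dump_cache_new.rdb   # switch the snapshot file name on the live instance
redis-cli CONFIG GET dbfilename                      # confirm the change took effect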
Another option to consider is using the synchronous SAVE command, which doesn't require a fork.
You almost never want to call SAVE in production environments where it will block all the other clients. Instead usually BGSAVE is used. However in case of issues preventing Redis to create the background saving child (for instance errors in the fork(2) system call), the SAVE command can be a good last resort to perform the dump of the latest dataset.
It's a pretty severe operation, but if you're already at the point of deleting data just to make the save work, this would at least allow you to take a snapshot first.
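As a rough sketch of that last-resort sequence for this question, assuming the fork keeps failing (note that SAVE blocks every client until the dump finishes):
redis-cli CONFIG SET dbfilename dump_cache_new.rdb   # point the snapshot at the new file
redis-cli SAVE                                       # synchronous dump in the main process, no fork needed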

Related

How to restore Redis db?

I am following the documentation about how to restore Redis, and I am at a complete loss at this point.
The documentation says
127.0.0.1:6379> SAVE
OK
This command will create the dump.rdb file in your redis directory.
Which it does; it creates exactly that file for me in /usr/lib/redis, which is alright I guess.
To restore Redis data, just move the Redis backup file (dump.rdb) into your Redis directory and start the server. To find your Redis directory, the CONFIG command can be used. The CONFIG GET command is used to read the configuration parameters of a running Redis server.
127.0.0.1:6379> CONFIG get dir
1) "dir"
2) "/var/lib/redis/6379"
Here is where it stops making sense to me. The .rdb file for me is already saved in /var/lib/redis/, and I have no subfolder under that. I don't understand what "dir" is doing there or how I can restore my database.
Please enlighten me. Either I am not able to save it, or I simply cannot find it.
Okay, so basically: the .rdb file in /var/lib/redis/ is saved every time the server stops. That file can be copied to another folder as a backup, and moving it back into the directory reported by CONFIG GET dir is what restores the data - the file found there is loaded every time the Redis service starts.
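If in doubt about where your running server reads and writes the snapshot, you can ask it directly; a minimal check looks like this (the returned paths are just examples):
redis-cli CONFIG GET dir          # directory dump.rdb is written to and loaded from
redis-cli CONFIG GET dbfilename   # snapshot file name, usually dump.rdb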

Redis wiped out the copied RDB file during startup

We have encountered a weird redis issue.
1. I upgraded my Redis from an old version to a new one.
2. I brought up Redis with clean data.
3. I copied the previous RDB file into the data directory.
4. I restarted Redis to load the data.
Then I found that my data had been wiped out in step 4. Have any of you encountered this? What could be the possible root cause?
We suspect that Redis was receiving new requests while this was happening. Could that be the issue?
Before shutting down, Redis will persist its data to disk (unless persistence is completely disabled in the config), so you should not try such "hot swapping" of the RDB file while the Redis server is running - it will simply overwrite the file on exit. Instead, stop the Redis server first, then replace the RDB file so that it gets loaded on the next start (and later saved back properly).
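A minimal sketch of the safe order of operations, assuming a service-managed Redis and the usual /var/lib/redis data directory (adjust the service name, paths and ownership for your setup):
sudo systemctl stop redis                                  # stop first, so Redis doesn't overwrite the file on exit
sudo cp /path/to/backup/dump.rdb /var/lib/redis/dump.rdb   # put the backup in place
sudo chown redis:redis /var/lib/redis/dump.rdb             # make sure the redis user can read it
sudo systemctl start redis                                 # the replaced RDB file is loaded on startup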

Change redis db location

To change the location of the Redis DB on Ubuntu 14, is it enough to just copy the DB to another path and create a symlink, or is another approach needed?
dir /var/lib/redis
You can do it by sending Redis CONFIG SET dir /new/path, and making the same change in the configuration file or issuing CONFIG REWRITE. The next dump file, e.g. created with BGSAVE, will use the new path.
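A sketch of that sequence with redis-cli (/new/path is a placeholder for your target directory):
redis-cli CONFIG SET dir /new/path   # change the dump directory on the running instance
redis-cli CONFIG REWRITE             # persist the change back into the config file
redis-cli BGSAVE                     # the next dump is written under /new/path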
Your solution is valid if you can afford downtime on your system during this change in order to maintain data consistency.
Another solution is to set up a second Redis instance on a different port on the same machine that replicates from the first instance, and point your application at that second instance. After a while you can delete the first instance.
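A rough sketch of that second approach, assuming the new instance already runs on port 6380 with its dir set to the new location (on Redis 5+ the command is REPLICAOF instead of SLAVEOF):
redis-cli -p 6380 SLAVEOF 127.0.0.1 6379   # make the new instance replicate from the old one
redis-cli -p 6380 INFO replication         # wait until master_link_status:up and the data is in sync
redis-cli -p 6380 SLAVEOF NO ONE           # after repointing the application, promote the new instance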

Undo zfs create

I have a problem. I created a pool consisting of a single volume backed by one 2.5 TB file, just to fight file duplicates. I copied a folder with photos into it. Some of the photos were not backed up anywhere else. Just now I see that my pool folder is empty. When I checked with 'sudo zfs list' it said 'no datasets available'.
I thought the pool had been detached, and to attach it again I re-ran all of these commands:
sudo zpool create singlepool -f /home/john/zfsvolumes/zfs_single_volume.dat -m /home/share/zfssinglepool
sudo zfs set dedup=on singlepool
sudo zpool get dedupratio singlepool
sudo zfs set compression=lz4 singlepool
sudo chown -R writer:writer /home/share/zfssinglepool
Now I see an empty pool!
Can I get back the folders which I copied into the pool before I ran the create commands again?
Unfortunately, use of zpool create -f will recreate the pool from scratch even if ZFS recognizes that a pool has already been created using that storage:
-f   Forces use of vdevs, even if they appear in use or specify a conflicting replication level. Not all devices can be overridden in this manner.
This is similar to reformatting a partition with other file systems, which will leave whatever data is there written in place, but still erase the references the file system needs to find the data. You may be able to pay an expert to reconstruct your data, but otherwise I'm afraid the data will be very hard to get back from your pool. As in any data recovery mission, I'd advise making a copy of the data ASAP on some external media that you can use to do the recovery from, in case further attempts at recovery accidentally corrupt the data even worse.
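For instance, a minimal sketch of that first step, using the backing file path from the question and an assumed external mount point at /mnt/external:
# copy the pool's backing file to external media before any further recovery attempts
sudo cp --sparse=always /home/john/zfsvolumes/zfs_single_volume.dat /mnt/external/zfs_single_volume.dat.bak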

Backing up RIAK database data

I'm very new to RIAK. I have a cluster with 5 nodes and I want to backup the data on the cluster. I ran the following command to backup data.
[root@PCPRIAK33 local]# riak-admin backup localhost riak /var/local/temp all
However I am getting the following error.
Attempting to restart script through sudo -H -u riak
{"init terminating in do_boot",{{nocatch,{could_not_reach_node,localhost}},[{riak_kv_backup,ensure_connected,1,[{file,"src/riak_kv_backup.erl"},{line,171}]},{riak_kv_backup,backup,3,[{file,"src/riak_kv_backup.erl"},{line,40}]},{erl_eval,do_apply,6,[{file,"erl_eval.erl"},{line,572}]},{init,start_it,1,[]},{init,start_em,1,[]}]}}
init terminating in do_boot ()
Can you please help me out? :)
Thanks.
I'll answer your immediate question (how to use riak-admin backup) first, but see the comments on preferred methods of backing up, at the end.
The command is:
riak-admin backup <node name> <erlang cookie> <file name with path> all
The node name you can find in your riak vm.args file (look for the line that looks like -name riak@127.0.0.1). It'll be of the form riak@xx.xx.xx.xx with the IP address. So, on my local machine, a single node is named riak@127.0.0.1.
The Erlang cookie is also found in the vm.args file; it will most likely be riak.
The file name parameter should be a fully-qualified path to the actual file name (meaning, you can't give it just a directory name). The filename and extension are arbitrary. So, I would use something like cluster_backup.riak.
So, to put it all together, your backup command should look like:
riak-admin backup riak@<your node ip> riak /var/local/temp/cluster_backup.riak all
Now, having said all that, I don't recommend using the riak-admin backup and restore commands to back up your whole cluster. For several reasons. One, it stores every replica of every object. Meaning, if you're running with the default replica value of n=3, you will be storing 3 copies of each object in your backup file.
Two, the code invoked by that command is single-threaded, and not connection pooled. So all in all, it's going to be SLOW to restore and backup.
Instead, I recommend one of the following approaches:
1. Take filesystem-level snapshots of the data directories of each node. This is the approach currently recommended by Basho, and detailed here: http://docs.basho.com/riak/latest/ops/running/backups/ (a rough sketch follows at the end of this answer).
2. If you definitely want a "logical" backup (meaning, an export of the objects contained in the cluster), you can use an experimental standalone tool such as the Riak Data Migrator (but see the limitations in the Readme).
I recommend testing out / timing each of these approaches, to see which one is faster for your situation.
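As a rough illustration of the filesystem-level approach on a single node, assuming a typical /var/lib/riak data directory (check the Basho docs linked above for the exact directories and for whether your storage backend requires stopping the node):
riak stop                                                            # quiesce the node so the data files aren't changing
sudo tar czf /var/local/temp/riak_node_backup.tar.gz /var/lib/riak   # archive the node's data directory
riak start                                                           # bring the node back into the cluster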