I'm using rdb-tools to generate a JSON file from a Redis dump file. For example:
rdb --command json /opt/redis/data/master.rdb --db 8 > /opt/redis/data/latest.json
Is there any way I can generate the Redis JSON data file from a remote server? Something similar to this:
rdb --command json --db 8 --host myhost.com --port 6378 > /opt/redis/data/latest.json
Thanks
Not directly.
You first have to request that a dump be generated on the remote server (with a BGSAVE command). Beware that it is asynchronous, so you have to wait for the dump to complete by checking the output of the INFO command. Then download the file to your local machine (with sftp, scp, netcat, etc.), and finally run the rdb-tools script locally.
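A minimal sketch of that flow, assuming SSH access as a hypothetical user and reusing the host, port, and paths from the question:
redis-cli -h myhost.com -p 6378 BGSAVE
# poll INFO until the background save finishes (rdb_bgsave_in_progress drops back to 0)
until redis-cli -h myhost.com -p 6378 INFO persistence | grep -q 'rdb_bgsave_in_progress:0'; do sleep 1; done
scp user@myhost.com:/opt/redis/data/master.rdb /tmp/master.rdb
rdb --command json /tmp/master.rdb --db 8 > /opt/redis/data/latest.json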
Another way to do it (provided you have the memory available on your client box) is to start a slave redis-server on the client. It will automatically generate and download a dump file from the master, which you can then use with rdb-tools locally.
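A sketch of that approach, assuming a local redis-server on the default port; /var/lib/redis/dump.rdb is an assumption, so check your local dir setting:
redis-cli -p 6379 SLAVEOF myhost.com 6378   # REPLICAOF on Redis 5+
# wait for the initial sync: INFO replication should show master_link_status:up
redis-cli -p 6379 SLAVEOF NO ONE            # detach from the master again
redis-cli -p 6379 SAVE                      # write a fresh local dump.rdb
rdb --command json /var/lib/redis/dump.rdb --db 8 > /opt/redis/data/latest.json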
I have a requirement to dynamically turn on the appendonly setting after Redis has replayed all the data from the backup file dump.rdb. So is there any redis-cli command to know that Redis has loaded all the data from the dump.rdb snapshot?
While Redis is loading an RDB file, it refuses most commands, e.g. PING. So you can send a PING command, i.e. redis-cli ping, to Redis. If it returns PONG, Redis has already loaded all the data; if it is still loading, it returns an error reply instead.
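A sketch of a wait loop built on that behavior, which also flips on AOF once loading is done (the error reply while loading is -LOADING Redis is loading the dataset in memory; the port is assumed):
# block until Redis answers PONG, i.e. the RDB replay has finished
until [ "$(redis-cli -p 6379 ping 2>/dev/null)" = "PONG" ]; do sleep 1; done
redis-cli -p 6379 CONFIG SET appendonly yes   # now safe to enable AOF dynamically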
After running fine for a while, I am getting a write error on my Redis instance:
(error) MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
In the log I see:
9948:C 22 Mar 20:49:32.241 # Failed opening the RDB file root (in server root dir /var/spool/cron) for saving: Read-only file system
However, my redis config file is /etc/redis/redis.conf as confirmed by:
redis-cli -p 6379 info | grep 'config_file'
config_file:/etc/redis/redis.conf
And there I have:
dir /mnt/data/redis
And indeed, there is a snapshot there.
But despite the above, Redis now thinks my data directory is:
redis-cli -p 6379 CONFIG GET dir
1) "dir"
2) "/var/spool/cron"
This corresponds to the error quoted above.
Can anyone tell me why/how my data directory is changing after redis starts, such that it is no longer what is specified in the config file?
So the answer is that the Redis server was hacked and the configuration changed, which turns out to be very easy to do. (I should point out that I had no reason to think it wasn't easy to do; I just assumed security by obscurity was sufficient in this case. Wrong. No matter, this was just a playground, not any sort of production server.)
So don't open your Redis port to the world. Use security groups if on AWS to limit access to the machines that need it, or use AUTH (which is still not great, because then all clients need to know the single password, which is also sent in the clear), or put some middleware in front to control access.
Hacking Redis is easy to do, can compromise your data, and can even enable unauthorized SSH access to your server. (The dir of /var/spool/cron and the dbfilename of root seen in the log above are the signature of the well-known cron-injection variant of this attack.) And that's why you shouldn't leave it exposed.
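For reference, a minimal hardening sketch along those lines in redis.conf (the password is a placeholder; rename-command is one common way to keep a stray client from rewriting dir, as happened here):
bind 127.0.0.1                        # listen only on loopback / a private interface
requirepass some-long-random-secret   # placeholder, generate your own
rename-command CONFIG ""              # make CONFIG unreachable from clients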
I was given a Redis server that is set up remotely.
I can access the data in it and perform CRUD operations against that server.
But I want a replica of the same database on my local machine.
I have Redis Desktop Manager set up locally, and a local redis-server running as well.
Things I have tried:
Using the SAVE command:
I connected to the remote server and executed the SAVE command. It ran successfully and created a dump.rdb file on that server. But I can't access that file, as I don't have FTP permission on the server.
Using BGSAVE:
Same scenario here as well.
Using the redis-cli command:
redis-cli -h <server ip> -p 6379 save > \\<local ip>\dump.rdb
Here I got the error "The network name cannot be found."
Can anyone please suggest how I can copy the .rdb file from the server to my local machine?
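For what it's worth, newer redis-cli builds can also pull the dump themselves over the replication protocol, so no file access on the server is needed at all; a sketch, assuming the server permits a full SYNC:
redis-cli -h <server-ip> -p 6379 --rdb ./dump.rdb   # streams the remote RDB into a local file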
I installed Redis on Ubuntu 16.04. I couldn't find the Redis directory nor the redis.conf file (I tried: sudo find redis.conf).
My application depends on data pulled from third-party APIs. I store the (processed) data in Redis. My problem is that after a reboot I lose the data. I guess I need to specify in the config file that the data should be persisted across reboots, but I couldn't find the config file. Do I need to create the config file? Are there templates to use? My goal is just to have the data persist after a reboot.
Thank you!
Use dpkg -L redis-server | grep redis.conf to find the config file path. It should be located at /etc/redis/redis.conf, as far as I know.
Redis has two methods for persistence: snapshotting and the append-only file:
Snapshotting is enabled by adding (or uncommenting) save X Y in the config file. It means Redis will automatically dump the dataset to disk every X seconds if at least Y keys changed. There can be more than one save option in the config file (see the example below).
The append-only file is enabled by adding (or uncommenting) appendonly yes in the config file.
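For instance, the snapshot rules shipped in the classic stock redis.conf look like this (the long-standing defaults; adjust to your write rate):
save 900 1      # dump if at least 1 key changed within 900 seconds
save 300 10     # dump if at least 10 keys changed within 300 seconds
save 60 10000   # dump if at least 10000 keys changed within 60 seconds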
You should turn on RDB or AOF persistence.
See https://redis.io/topics/persistence
Add this to the config file:
appendonly yes
This appends each write to a log as it happens, which gives you durability across restarts.
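If a restart is inconvenient, the same switch can be flipped at runtime and written back to the config file; a quick sketch, assuming the default local port:
redis-cli CONFIG SET appendonly yes   # start writing the AOF immediately
redis-cli CONFIG REWRITE              # persist the change back into redis.conf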
I've got:
My laptop
A remote server I can SSH into, which has a Docker volume containing some files I'd like to copy to my laptop.
What is the best way to copy these files over? Bonus points for using things like rsync, etc., which are fast, can resume, and show progress, without writing any temporary files.
Note: my user on the remote server does not have permission to simply scp the data straight out of the volume mount in /var/lib/docker, although I can run any containers on there.
Having this problem, I created dvsync, which uses ngrok to establish a tunnel that rsync then uses to copy data, even if the machine is in a private VPC. To use it, first start the dvsync-server locally, pointing it at the source directory:
docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
--mount source=MY_DIRECTORY,target=/data,readonly \
quay.io/suda/dvsync-server
Note: you need the NGROK_AUTHTOKEN, which can be obtained from the ngrok dashboard. Then start the dvsync-client on the target machine:
docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
--mount source=MY_TARGET_VOLUME,target=/data \
quay.io/suda/dvsync-client
The DVSYNC_TOKEN can be found in the dvsync-server output; it's a base64-encoded private key plus tunnel info. Once the data has been copied, the client will exit.
I'm not sure about the best way of doing this, but if I were you I would run a container sharing the same volume (mounted read-only, as it seems you just want to download the files within the volume) and download them from there.
This container could run rsync, as you wish.
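A simpler variant of that idea with plain tar streamed over SSH (no temporary files on the server, though no resume either; MY_VOLUME and user@remote are hypothetical):
mkdir -p ./local-copy
ssh user@remote "docker run --rm --mount source=MY_VOLUME,target=/data,readonly alpine tar -C /data -cf - ." | tar -xf - -C ./local-copy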