Redis - Default snapshotting configuration

I am playing with Redis cluster as an experiment and I used an existing script to create and run the cluster: Create Redis Cluster
Everything runs just fine and I don't specify any "save" parameters on the command line while starting the Redis cluster. However, when I specify --save 60 10000 in this script, I see a dump.rdb file being created when I write some data to the database.
Then I happened to look at the code [server.c], where the function below is being called:
line 2323: appendServerSaveParams(60,10000); /* save after 1 minute and 10000 changes */
As per my understanding, the rdb file should have been created even when I didn't specify any "--save" option on the command line while starting the Redis server.
Can someone explain this behavior? I am trying to understand the default snapshotting configuration when no "--save" option is specified while starting Redis instances from the command line (hence not using redis.conf).
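One quick way to see what the instance actually ended up with (a small sketch, assuming the node is reachable on localhost and the default port) is to ask the running server for its effective save setting:
redis-cli -p 6379 config get save
If the returned value is an empty string, RDB snapshotting is disabled; otherwise it lists the seconds/changes pairs currently in effect.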

Related

redis.exceptions.ResponseError: MISCONF Redis is configured to save RDB snapshots

I'm having this problem when I try to save to Redis. It produces the message below.
MISCONF Redis is configured to save RDB snapshots, but it's currently unable to persist to disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error.
The redis log file displays this:
Background saving started by pid 73
Write error saving DB on disk: Function not implemented
Has anyone ever experienced this?
I found the answer. You need WSL 2. To find out which version you are running, use the command below in PowerShell:
wsl -l -v
If it is version 1, run the command below (replacing Ubuntu with your distribution name if it differs) and open Ubuntu again:
wsl --set-version Ubuntu 2
More information: https://learn.microsoft.com/en-us/windows/wsl/install

How to configure multiple Redis instances on Debian

I've got a Debian server running Redis and I'd like to run a second copy using a different port. There are plenty of guides explaining how to do it on Ubuntu and other flavours of Linux but I'm having a hard time translating those to Debian.
So far I've created a copy of /etc/redis/redis.conf and renamed it /etc/redis/redis_6380.conf. In the new file I've changed the name of the PID file, the location of the log file, and the listening port (to 6380) so that they do not conflict with the existing instance of Redis.
The problem I have is knowing which changes to make so that systemd can start the new instance.
I've made a duplicate of /lib/systemd/system/redis-server.service, called it redis-server-6380.service, and changed the ExecStart and PIDFile lines to point to the new files:
ExecStart=/usr/bin/redis-server /etc/redis/redis_6380.conf
PIDFile=/var/run/redis/redis-server_6380.pid
Doing:
systemctl enable redis-server-6380.service
results in:
Failed to enable unit: File /etc/systemd/system/redis.service already exists and is a symlink to /lib/systemd/system/redis-server.service
How do I fix this? I'm guessing that I've missed a vital step, but I'm not that familiar with configuring systemd-supervised processes on Debian.
The end of the redis-server unit file says:
[Install]
WantedBy=multi-user.target
Alias=redis.service
Either remove that Alias, or make sure it is unique.
Also: make sure your Redis instances each have their own database and log files.
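For illustration only (the 6380 naming is just taken from the question), the [Install] section of the duplicated unit could end up looking like this, with either a unique alias or no alias at all:
[Install]
WantedBy=multi-user.target
Alias=redis-6380.service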

Flink job started from another program on YARN fails with "JobClientActor seems to have died"

I'm a new Flink user and I have the following problem.
I use Flink on a YARN cluster to transfer related data extracted from an RDBMS to HBase.
I write a Flink batch application in Java with multiple ExecutionEnvironments (one per RDB table, to transfer table rows in parallel) and transfer the tables sequentially (because the call to env.execute() is blocking).
I start YARN session like this
export YARN_CONF_DIR=/etc/hadoop/conf
export FLINK_HOME=/opt/flink-1.3.1
export FLINK_CONF_DIR=$FLINK_HOME/conf
$FLINK_HOME/bin/yarn-session.sh -n 1 -s 4 -d -jm 2048 -tm 8096
Then I run my application on the started YARN session via a shell script, transfer.sh. Its content is:
#!/bin/bash
export YARN_CONF_DIR=/etc/hadoop/conf
export FLINK_HOME=/opt/flink-1.3.1
export FLINK_CONF_DIR=$FLINK_HOME/conf
$FLINK_HOME/bin/flink run -p 4 transfer.jar
When I start this script manually from the command line it works fine - jobs are submitted to the YARN session one by one without errors.
Now I need to be able to run this script from another Java program.
For this I use
Runtime.exec("transfer.sh");
(Maybe there are better ways to do this? I have looked at the REST API, but there are some difficulties because the job manager is proxied by YARN.)
At the beginning it works as usual - the first several jobs are submitted to the session and finish successfully. But the following jobs are not submitted to the YARN session.
In /opt/flink-1.3.1/log/flink-tsvetkoff-client-hadoop-dev1.log I see this error (and no other errors are found at DEBUG level):
The program execution failed: JobClientActor seems to have died before the JobExecutionResult could be retrieved.
I have tried to analyse this problem myself and found out that the error occurs in the JobClient class while sending a ping request with a timeout to the JobClientActor (i.e. the YARN cluster).
I tried increasing several heartbeat and timeout options such as akka.*.timeout, akka.watch.heartbeat.* and yarn.heartbeat-delay, but it doesn't solve the problem - new jobs are not submitted to the YARN session from CliFrontend.
The environment for both cases (manual call and call from another program) is the same. When I call
$ ps axu | grep transfer
it will give me output
/usr/lib/jvm/java-8-oracle/bin/java -Dlog.file=/opt/flink-1.3.1/log/flink-tsvetkoff-client-hadoop-dev1.log -Dlog4j.configuration=file:/opt/flink-1.3.1/conf/log4j-cli.properties -Dlogback.configurationFile=file:/opt/flink-1.3.1/conf/logback.xml -classpath /opt/flink-1.3.1/lib/flink-metrics-graphite-1.3.1.jar:/opt/flink-1.3.1/lib/flink-python_2.11-1.3.1.jar:/opt/flink-1.3.1/lib/flink-shaded-hadoop2-uber-1.3.1.jar:/opt/flink-1.3.1/lib/log4j-1.2.17.jar:/opt/flink-1.3.1/lib/slf4j-log4j12-1.7.7.jar:/opt/flink-1.3.1/lib/flink-dist_2.11-1.3.1.jar:::/etc/hadoop/conf org.apache.flink.client.CliFrontend run -p 4 transfer.jar
I also tried updating Flink to the 1.4.0 release and changing the parallelism of the job (even to -p 1), but the error still occurs.
I have no idea what could be different. Is there any workaround?
Thank you for any help.
Finally I found out how to resolve the error.
Just replace Runtime.exec(...) with new ProcessBuilder(...).inheritIO().start().
I really don't know why the call to inheritIO helps in this case, because as I understand it, it just redirects the IO streams of the child process to the parent process.
But I have checked that if I comment out that call, the program starts to fail again.

how to import dump.rdb file to redis local server

Hi, I'm trying to import a dump.rdb file into my local Redis. I'm using Ubuntu 14.04.
I've tried this solution:
Back up the data from the server using the SAVE command
Locate the location to put the dump.rdb file
Since I installed Redis using this tutorial, I copied the imported dump.rdb to my Redis root directory and then started the Redis server like this:
src/redis-server
and then connected the client using:
src/redis-cli
But when I tried to get all the keys using KEYS * I got (empty list or set). Where did I go wrong? I've been at this for hours. Any help? Thank you.
If you have followed the steps correctly it will work fine.
1) Make sure the imported dump.rdb contains your data.
2) Stop the Redis server.
3) Copy the file into the correct directory (inside the Redis bin directory, next to redis-server).
4) Make sure the copied file still contains your data (if your server was still running, it will have replaced your dump.rdb).
5) Start your Redis server; you will surely find the values.
If it still doesn't work, check the dbfilename in your redis.conf file. It must be dbfilename dump.rdb. If the location has been changed, place the file in the correct directory.
Hope this works.
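A rough shell sketch of those steps, assuming a source build run from the Redis directory and a backup kept at ~/backup/dump.rdb (both paths are just examples):
src/redis-cli shutdown nosave    # stop the server without letting it overwrite dump.rdb
cp ~/backup/dump.rdb ./dump.rdb  # place the backup in the server's working directory
src/redis-server redis.conf      # start again; the data is loaded from ./dump.rdb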
I found the problem with my steps. In the Redis quick start documentation:
using src/redis-server, Redis is started without any explicit configuration file, so I need to start the server with a configuration file to make it read my dump.rdb file, like this:
src/redis-server redis.conf
Now I can get all the imported data.

How to recover redis data from snapshot(rdb file) copied from another machine?

I transferred my Redis snapshot (dump.rdb file) using scp to a remote server. I need to run a Redis server on this remote machine and recover the data from the dump.rdb file. How can I do that?
For databases where the appendonly flag is set to no, you can do the following:
Stop Redis (because Redis overwrites the current rdb file when it exits).
Copy your backup rdb file to the Redis working directory (this is the dir option in your Redis config). Also, make sure your backup filename matches the dbfilename config option.
Start Redis.
If, on the other hand, you need to restore an rdb file to the appendonly database, you should do something along the lines of:
Stop Redis (because Redis overwrites the current rdb file when it exits).
Copy your backup rdb file to the Redis working directory (this is the dir option in your Redis config). Also, make sure your backup filename matches the dbfilename config option.
Change the Redis config appendonly flag to no (otherwise Redis will ignore your rdb file when it starts).
Start Redis.
Run redis-cli BGREWRITEAOF to create a new appendonly file.
Restore the Redis config appendonly flag to yes.
Specifically, this is the relevant bit of documentation from the Redis config file comments:
# Note that you can have both the async dumps and the append only file if you
# like (you have to comment the "save" statements above to disable the dumps).
# >> Still if appendonly mode is enabled Redis will load the data from the
# >> log file at startup ignoring the dump.rdb file.
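A rough shell sketch of the append-only variant, assuming a packaged install with the config at /etc/redis/redis.conf, the data directory at /var/lib/redis and the backup at /tmp/dump.rdb (all of these paths are assumptions):
sudo systemctl stop redis                # stop Redis so it does not overwrite the rdb on exit
sudo cp /tmp/dump.rdb /var/lib/redis/dump.rdb
sudo sed -i 's/^appendonly yes/appendonly no/' /etc/redis/redis.conf   # load from the rdb file
sudo systemctl start redis
redis-cli BGREWRITEAOF                   # rebuild the append-only file from the loaded dataset
sudo sed -i 's/^appendonly no/appendonly yes/' /etc/redis/redis.conf   # re-enable AOF afterwards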
There is nothing specific to do. Just install the redis server on the new machine, and edit the configuration file. You just need to change the following parameters to point to the location of the dump file you have just copied.
# The filename where to dump the DB
dbfilename mydump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# Also the Append Only File will be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /data/mydirectory/
Finally, the redis server can be started in the normal way.
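For example (the path is only an illustration):
redis-server /etc/redis/redis.conf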
Assuming that you run Redis 2.6 or higher, your Redis snapshot filename is dump.rdb, and it exists in the directory /home/user/dbs, the following command would do the trick:
redis-server --dbfilename dump.rdb --dir /home/user/dbs
Relevant section from the official documentation: Passing arguments via the command line
Or you can:
Stop your redis server / instance, eg., service redis6379 stop
Copy the dump.rdb file to the right location, eg., cp /path/to/dump-6379.rdb /var/lib/redis/dump-6379.rdb. Give it the right permissions (user:group should be redis:redis and mode 644)
Start your redis server / instance, eg., service redis6379 start
It is important that you stop the redis server before copying the file to the right location, because Redis saves a snapshot before terminating, so it will replace your file.
Besides, you might want to back up the existing dump.rdb file first.
I would like to add a tiny detail that has not been mentioned; I will not use a config file but instead specify everything on the command line.
When both a mydump.rdb and an appendonly.aof file are present when starting redis-server, it is the appendonly.aof file that wins, i.e. the data from appendonly.aof gets loaded. For example:
redis-server --dbfilename mydump001.rdb --dir /data --appendonly yes
The above invocation will look in the /data directory for mydump001.rdb or appendonly.aof. In this case redis-server will load the contents of appendonly.aof. If appendonly.aof does not exist, it will create an empty /data/appendonly.aof and the server will start empty.
If you want to load a specific dump file, you can do:
redis-server --dbfilename mydump001.rdb --dir /data
I added this answer because, in the presence of two backup files, it is not obvious which one wins, and this is often not mentioned.
Start Redis on your second server, like so:
$ redis-server /path/to/my/redis/configuration/file/redis.conf
When Redis starts, it will find your rdb file because it looks for the file name and path in the configuration file (redis.conf) that you supply when starting the server, as above.
To supply the file name and path, just edit two lines in the redis.conf template (supplied in the root directory of the Redis source) and save your revised version as redis.conf in the location that you supply when starting the server.
You will find the settings you need in the redis.conf template in the top-level source directory, at lines 127 and 137 (Redis version 2.6.9):
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory
dir ./
As you can see, defaults are provided for both settings, so just change the first of these two lines (127) to identify your rdb file, and in the second (137) replace the default "./" with the actual path to your snapshot rdb file; save redis.conf with your changes and start Redis passing in this new conf file.
This solution works with Redis Cluster, but should also work with a standalone Redis.
Install this dependency: https://github.com/sripathikrishnan/redis-rdb-tools
pip install rdbtools python-lzf
After that, execute this:
rdb -c protocol /path/to/dump.rdb | redis-cli -h host -p port --pipe
If this is a cluster, the port should be a master's port.
Try setting appendonly no.
In my case the *.aof file was empty (0 bytes); I had to set appendonly no to make Redis load the dump.rdb.
Install https://github.com/leonchen83/redis-rdb-cli
rmt -s ./your-dump.rdb -m redis://host:port -r