Redis cluster instance takes long time to load RDB - redis

I have a Redis dump.rdb file of 177MB in size. It is a snapshot dump of one of the instances of a testing Redis cluster. When I restart the instance, it takes about 25 seconds to load the file into memory. But when I start a standalone Redis server with the same dump, it only takes about 3.7 seconds to load.
The questions are:
Does anyone know what causes the difference?
Is there a way to shorten the db loading time for the Redis cluster instance?

Related

Starting a Redis cluster with an RDB file

I'm trying to create a Redis cluster using an RDB file taken from a single-instance Redis server. Here is what I've tried:
#! /usr/bin/env bash
for i in 6000 6001 6002 6003
do
redis-server --port $i --cluster-config-file "node-$i.cconf" --cluster-enabled yes --dir "$(pwd)" --dbfilename dump.rdb &
done
That script starts up 4 Redis processes that are cluster enabled. It also initializes each node with the dump file.
Then I run redis-trib.rb so that the 4 nodes can find each other:
redis-trib.rb create 127.0.0.1:6000 127.0.0.1:6001 127.0.0.1:6002 127.0.0.1:6003
I get the following error:
>>> Creating cluster
[ERR] Node 127.0.0.1:6000 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
I've also tried a variant where only the first node/process is initialized with the RDB file and the others are empty. I can join the 3 empty nodes into a cluster but not the one that's pre-populated with data.
What is the correct way to import a pre-existing RDB file into a Redis cluster?
In case this is an "X-Y problem" this is why I'm doing this:
I'm working on a migration from a single-instance Redis server to an Elasticache Redis Cluster (cluster mode enabled). Elasticache can easily import an RDB file on cluster startup if you upload it to S3. But it takes a long time for an Elasticache cluster to start. To reduce the feedback loop as I test my migration code, I'd like to be able to start a cluster locally also.
I've also tried using the create-cluster utility, but that doesn't appear to have any options to pre-populate the cluster with data.
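One approach that should work locally (a sketch, untested here; the ports and paths are illustrative, borrowed from the script above) is to form the cluster from empty nodes first, then import the data from a standalone instance that was started from the RDB file, using redis-trib.rb's import command:

```shell
#! /usr/bin/env bash
# Start four EMPTY cluster-enabled nodes (no --dbfilename this time,
# so redis-trib.rb won't complain that the nodes are not empty).
for i in 6000 6001 6002 6003
do
  redis-server --port $i --cluster-config-file "node-$i.cconf" \
    --cluster-enabled yes --dir "$(pwd)" &
done

# Start a standalone source instance from the existing dump.rdb.
redis-server --port 7000 --dir "$(pwd)" --dbfilename dump.rdb &

sleep 1

# Form the cluster from the empty nodes...
redis-trib.rb create 127.0.0.1:6000 127.0.0.1:6001 127.0.0.1:6002 127.0.0.1:6003

# ...then move the keys from the standalone instance into the cluster.
redis-trib.rb import --from 127.0.0.1:7000 127.0.0.1:6000
```

As far as I know, the import command uses MIGRATE under the hood, so keys are moved (not copied) out of the source instance; on newer Redis versions the same operation is available as `redis-cli --cluster import`.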

How to do a redis FLUSHALL without initiating a sentinel failover?

We have a redis configuration with two redis servers. We also have 3 sentinels to monitor the two instances and initiate a fail over when needed.
We currently have a process where we periodically have to do a FLUSHALL on the redis server. This is a blocking operation that takes longer than the time we have allotted for the sentinels to timeout. In other words, we have our sentinel configuration with:
sentinel down-after-milliseconds OurMasterName 5000
and doing a redis-cli FLUSHALL on the server takes > 5000 milliseconds, so the sentinels initiate a fail over.
We acknowledge that doing a FLUSHALL isn't great, and we also know that we could increase down-after-milliseconds, but for the purposes of this question assume that neither of these is an option.
The question is: how can we do a FLUSHALL (or equivalent operation) WITHOUT having our sentinels initiate a fail over due to the FLUSHALL blocking for greater than 5000 milliseconds? Has anyone encountered and solved this problem?
You could just create new instances: if you are using something like AWS or Azure, then you have an API for creating a new Redis cluster. Start it, load it with data, and once it's ready just modify the DNS, again with an API call - all of this can be handled by some part of your application. On premises things can get more complex, because it will require some automation with Ansible/Chef/Puppet.
The next best option you currently have is to delete keys in batches to reduce the amount of work done at once. You can build a list of keys, assuming you don't have one, using SCAN, then delete in whatever batch size works for you.
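As a concrete sketch of the SCAN-plus-batched-delete approach (assuming the default host/port, and key names without spaces or newlines, which would break xargs), redis-cli can do it on its own:

```shell
# Stream all keys with SCAN (cursor-based, non-blocking) and delete
# them in batches of 1000, so no single DEL call blocks for long.
redis-cli --scan | xargs -r -n 1000 redis-cli del
```

Each DEL only touches 1000 keys at a time, so the server stays responsive between batches and the sentinels never see it as down.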
Edit: as you are not interested in keeping the data, disable persistence, delete the RDB file, then just restart the instance. This way you don't have to update Sentinel like you would if you provisioned new hosts.
Out of curiosity, if you're just going to be flushing all the time and don't care about the data as you'll be wiping it, why bother with sentinel?

how to handle memory leaks in amazon web services t1.micro?

I have a t1.micro instance in Amazon Web Services running a virtual image (specifically a formhub image), and sometimes I get a memory allocation error, which I solve by rebooting the instance. Any clues?
Is it possible to reboot the instance automatically every day?
The micro instances are quite constrained, with only 600MB or so of RAM. You may solve the problem by moving up to a small or medium instance, or even one of the new T2 instances - even the smallest one has 1GB of RAM.
If this is not an option for you, you can add a cron job to restart the instance at a particular time of day.
ssh in to the instance and type the command:
sudo crontab -e
Enter a line like:
0 5 * * * /sbin/reboot
to restart the system at 5am each day. This is for an Ubuntu system - the reboot command may be elsewhere in other distributions. Run the command which reboot to check.

Is it recommended to run redis using Supervisor

Is it a good practice to run redis in production with Supervisor?
I've googled around, but haven't seen many examples of doing so. If not, what is the proper way of running redis in production?
I personally just use Monit on Redis in production. If Redis crashes, Monit will restart it, but more importantly Monit can monitor (and alert when a threshold is reached) the amount of RAM that Redis currently uses (which is the biggest issue).
The configuration could be something like this (if maxmemory were set to 1GB in Redis):
check process redis
  with pidfile /var/run/redis.pid
  start program = "/etc/init.d/redis-server start"
  stop program = "/etc/init.d/redis-server stop"
  if 10 restarts within 10 cycles then timeout
  if failed host 127.0.0.1 port 6379 then restart
  if memory is greater than 1GB for 2 cycles then alert
Well... it depends. If I were to use Redis under daemon control I would use runit. I do use Monit, but only for monitoring. I like to see the green light.
However, for Redis to exploit its true power, you don't run Redis as a daemon, especially a master. If a master goes down, you will have to switch a slave to a master. Quite simply, I just shoot the node in the head and have a Chef recipe bring up a new node.
But then again... it also depends on how often you snapshot. I do not snapshot, thus no need for daemon control.
People use Redis for brute-force speed. That means not writing to disk and keeping all data in RAM. If a node goes down and you don't snapshot, data is lost.

Configure Redis slave to stop saving data to file

Can I configure a Redis slave to stop saving dumps? I have omitted all save directives in the config file, but the slave is still writing dumps.
So I assume you have checked in the configuration file of the slave that RDB is deactivated (all save lines commented out), and the slave has been restarted after the configuration file has been changed (so this configuration is active).
At this point the background dump operation of the slave is deactivated, but that does not prevent the slave from writing a dump file. Actually, the slave has to write a dump file at startup time: this is how it retrieves the data from the master in bulk mode.
When the slave starts, it sends a SYNC request to the master:
The master starts accumulating Redis commands.
The master performs a background dump.
The master sends the dump file to the slave in bulk mode.
The slave reads the dump file from the master and writes it to disk.
When the transfer is complete, the slave loads the dump file from disk.
The slave starts processing the Redis commands accumulated by the master.
Eventually, the slave catches up and is in sync with the master.
That's why you can find dump files on slave side even if RDB is deactivated for the slaves.
A good reading is http://redis.io/topics/persistence
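For completeness, fully disabling RDB snapshots on the slave (rather than just commenting out the save lines) can be made explicit with an empty save directive - a config sketch:

```
# redis.conf on the slave: an empty argument to save disables RDB snapshots
save ""
```

The same can be applied at runtime with redis-cli CONFIG SET save "", though the replication-time dump described above will still occur.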
Redis has two kinds of persistence; you should disable AOF too:
Append-only file
Snapshotting is not very durable. If your computer running Redis
stops, your power line fails, or you accidentally kill -9 your
instance, the latest data written on Redis will get lost. While this
may not be a big deal for some applications, there are use cases for
full durability, and in these cases Redis was not a viable option. The
append-only file is an alternative, fully-durable strategy for Redis.
It became available in version 1.1.
You can turn on the AOF in your configuration file:
appendonly yes
From now on, every time Redis receives a command that changes the
dataset (e.g. SET) it will append it to the AOF. When you restart
Redis it will re-play the AOF to rebuild the state.
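Putting both answers together, a slave with persistence fully disabled would have both mechanisms switched off in its config file - a sketch:

```
# redis.conf: disable both persistence mechanisms
save ""          # no RDB snapshots
appendonly no    # no AOF (this is the default, but worth making explicit)
```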