Redis SLAVEOF for single database

I'm hoping to run a SLAVEOF command from a new Redis box to migrate data from an ElastiCache node to a normal EC2 box running Redis. Ideally I would run something like SLAVEOF IP DB_INDEX so that I'm only pulling data from DB_INDEX on the master instead of all available databases. Is this possible?

No, you cannot replicate just a single "database" in Redis; it is easier to think of these as "keyspaces" than as individual databases. Further, according to the ElastiCache documentation, the way to import data is to upload a snapshot (RDB file), not via a replication command.
Since you are just doing a migration you could:
Replicate to a clean instance
Iterate over all the databases you don't want and run FLUSHDB on each (do NOT run FLUSHALL).
Then, if you want the data to be in DB 0 and it is not already there, you can use the MOVE command on each key to put it in the default "0" database.
This leaves your new instance with just the data you want, in the "0" database - if you chose to move the keys.
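A minimal sketch of that sequence with redis-py, assuming the data you want lives in logical database 3 and that the instance uses the default 16 databases; hostnames and ports are placeholders:

import time
import redis

MASTER_HOST, MASTER_PORT = "elasticache-host", 6379   # placeholder master address
KEEP_DB = 3                                           # the DB_INDEX you want to keep

new = redis.Redis(host="127.0.0.1", port=6379)
new.slaveof(MASTER_HOST, MASTER_PORT)   # replication always copies the whole instance

# Wait until the initial sync has completed.
while new.info("replication").get("master_link_status") != "up":
    time.sleep(1)

new.slaveof()                           # no arguments sends SLAVEOF NO ONE

# Drop every database except the one you want (16 is the default count).
for db in range(16):
    if db != KEEP_DB:
        redis.Redis(host="127.0.0.1", port=6379, db=db).flushdb()

# Optionally move the surviving keys into the default "0" database.
keep = redis.Redis(host="127.0.0.1", port=6379, db=KEEP_DB)
for key in keep.scan_iter():
    keep.move(key, 0)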

Related

Reduce Redis cluster to single GCP memorystore

I have three Redis instances. One is the master and the other two are slaves. I have connected to the master node and fetched info via redis-cli with the INFO command. I can see the parameter cluster_enabled:0 and:
# Replication
role:master
connected_slaves:2
slave0:ip=xxxxx,port=6379,state=online,offset=15924636776,lag=1
slave1:ip=xxxxx,port=6379,state=online,offset=15924636776,lag=0
And in the keyspace, each node has different DBs. I need to migrate all the data to a single Memorystore instance in GCP, but I don't know how. Can anyone help me?
Since the two nodes are slaves and clustering is not enabled, you only need to replicate the master node. RIOT is a great tool for migrating data in and out of Redis.
However, if by "DB by node" you mean the Redis DBs that you access with SELECT, then you'll need to prefix the keys, as there may be overlap between the keysets of the DBs.
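For illustration, a sketch of that key-prefixing merge using redis-py with SCAN + DUMP/RESTORE; the "db<n>:" prefix scheme, the hostnames, and the 16-database default are assumptions, not part of RIOT or Memorystore:

import redis

target = redis.Redis(host="memorystore-host", port=6379)

for db in range(16):                        # scan every logical DB on the master
    src = redis.Redis(host="master-host", port=6379, db=db)
    for key in src.scan_iter():
        ttl = src.pttl(key)                 # remaining TTL in ms; negative = none
        prefixed = ("db%d:" % db).encode() + key
        target.restore(prefixed, max(ttl, 0), src.dump(key), replace=True)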
I think setting up another Redis cluster in a single node configuration is the least of your worries.
The real challenge for you would be migrating all your records over to the new setup. This is not a simple question to answer and would depend heavily on multiple factors:
The total size of your data being migrated
Is this a live database in production?
Do you want to keep the two DB schemas in your new configuration separate?
OK, I believe your Redis instances are currently hosted on Google Compute Engine.
And you are looking to migrate to Memorystore for Redis.
As mentioned here, you can leverage Redis snapshots for this. It provides you step-wise instructions on how to achieve this, leveraging GCS buckets as transient storage.
You can import data into Cloud Memorystore instances using RDB (Redis Database Backup) snapshots, as well as back up data from existing Redis instances.

Redis replication at key level

We are using Redis. We have two sets of data. One set (assume it uses the prefix redis:local:, e.g. redis:local:key1) is used by the main application and does not need replication.
The other set (prefix redis:replicate:, e.g. redis:replicate:key2) is also used by the main application and should be replicated to slave Redis instances.
I have two questions.
Is it possible to configure Redis to replicate only keys with the prefix redis:replicate:?
If that is not possible, is it possible to configure Redis to replicate only one database? We would store the first set of data in database 0 and the second set in database 1, so that only database 1 has to be replicated.
Currently, we are running two instances of Redis to solve the issue.
Redis only supports replication of whole instances. Limiting replication to a key prefix or database is not possible.
Running two instances of Redis is the simplest and most reliable option.
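For example, a sketch of how an application might route writes by key prefix across the two instances; the hosts and ports are placeholders, and only the second instance would have a replica attached (via SLAVEOF/REPLICAOF in the usual whole-instance way):

import redis

# Instance A holds local-only data and has no replica attached.
local = redis.Redis(host="127.0.0.1", port=6379)

# Instance B holds the data that must be replicated; a replica is
# attached to it in the usual whole-instance way.
replicated = redis.Redis(host="127.0.0.1", port=6380)

def client_for(key):
    # Route each key to the instance whose replication policy matches.
    return replicated if key.startswith("redis:replicate:") else local

client_for("redis:local:key1").set("redis:local:key1", "not replicated")
client_for("redis:replicate:key2").set("redis:replicate:key2", "replicated")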
Another way would be to write a custom replication program, which is difficult and failure-prone in comparison.
There is also another question concerning replication of only one database: Replicate a single Redis database from an instance that has multiple databases

How to restore all the data from redis?

I want to restore all the data I saved using the Redis BGSAVE command. It saves the data to its default location, /var/lib/redis/6379/dump.rdb. The data contains hashmaps and key-value pairs. How do I get the data back into Redis from the dump.rdb file?
I am using the RESTORE command but it is not solving the problem!
Just restart the server. On startup it will read the dump. It never has to read the dump during its operation, so there's no command for it.
RESTORE can be useful, but it's a per-key command. That means you have to parse the dump yourself, extract the key names and their serialized values, and only then call RESTORE for each key. Also, it was implemented to support migrating keys between two running servers - not exactly your use case.
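To make that concrete, here is a sketch of the per-key DUMP/RESTORE flow between two running servers using redis-py; hosts and ports are placeholders:

import redis

src = redis.Redis(host="127.0.0.1", port=6379)
dst = redis.Redis(host="127.0.0.1", port=6380)

for key in src.scan_iter():          # iterate keys without blocking the server
    payload = src.dump(key)          # serialized value in RESTORE-compatible form
    ttl = src.pttl(key)              # remaining TTL in ms; negative means none
    if payload is not None:          # key may have expired between SCAN and DUMP
        dst.restore(key, max(ttl, 0), payload, replace=True)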
Restarting the server is easier, isn't it? :)

how to split one redis instance into two?

I have a Redis instance as big as 4 GB on a machine with 4 GB of memory. I want to split it into two 2 GB Redis instances so that I can run them on two different machines.
How can I do that?
Thanks.
AFAIK there is no easy way to do it.
One way to do it is to use the redis-rdb-tools package from Sripathi Krishnan. The procedure is:
choose a strategy to shard your data (i.e., a function which distributes the keys over the instances)
write a Python script to parse a Redis dump file, connect to several instances, and apply the commands to insert the data on the correct instances
dump the Redis instance
flush the instance
create and start the second instance
run the script on the dump of the first instance
See more information at https://github.com/sripathikrishnan/redis-rdb-tools
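As an alternative to parsing the dump offline, here is a sketch of the same idea done live with redis-py, using SCAN + DUMP/RESTORE; the hostnames and the CRC-based shard function are just examples of a sharding strategy:

import zlib
import redis

source = redis.Redis(host="old-host", port=6379)
targets = [
    redis.Redis(host="new-host-1", port=6379),
    redis.Redis(host="new-host-2", port=6379),
]

def shard_for(key):
    # Any deterministic key -> instance mapping works as a sharding strategy.
    return targets[zlib.crc32(key) % len(targets)]

for key in source.scan_iter(count=1000):
    ttl = source.pttl(key)           # remaining TTL in ms; negative means none
    shard_for(key).restore(key, max(ttl, 0), source.dump(key), replace=True)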
You can also use Redis Cluster:
Redis Cluster provides a way to run a Redis installation where data is
automatically sharded across multiple Redis nodes.
Every Redis Cluster node requires two TCP connections open.
http://redis.io/topics/cluster-tutorial

The faster method to move redis data to MySQL

We have a big shopping and product-dealing system. We have faced lots of problems with MySQL, so after some R&D we planned to use Redis, and we have started integrating Redis into our system.
The following data, which previously hit the database directly, has now been moved to Redis:
User shopping cart details
Affiliate click tracking records
Product dealing user data
Other site stats
I am not only storing the data in Redis; I have also written crons which move the Redis data into MySQL at intervals. This is the main point where I am facing issues.
I am looking for solutions to the points below:
Is there any other way to dump big data from Redis to MySQL?
If Redis fails, our data is stored in a file, so is it possible to store that data directly into the MySQL database?
Does Redis have any trigger system, like a queue system, that I can use to avoid the crons?
Is there any other way to dump big data from Redis to MySQL?
Redis can generate a dump of the data in a non-blocking and consistent way (using BGSAVE).
https://github.com/sripathikrishnan/redis-rdb-tools
You could use Sripathi Krishnan's well-known package to parse a Redis dump file (RDB) in Python and populate the MySQL instance offline. Or you can convert the Redis dump to JSON format and write scripts in any language you want to populate MySQL.
This solution is only interesting if you want to copy the complete data of the Redis instance into MySQL.
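For illustration, a sketch of that offline approach, assuming the RdbParser/RdbCallback interface from redis-rdb-tools; the table and column names are invented, and a real exporter would execute the statements against MySQL instead of printing them:

from rdbtools import RdbParser, RdbCallback

class MySqlExporter(RdbCallback):
    # Emit INSERT statements for string and hash entries found in the dump.
    def __init__(self):
        super(MySqlExporter, self).__init__(string_escape=None)

    def set(self, key, value, expiry, info):
        print("INSERT INTO kv (k, v) VALUES ('%s', '%s');" % (key, value))

    def hset(self, key, field, value):
        print("INSERT INTO kv_hash (k, field, v) VALUES ('%s', '%s', '%s');"
              % (key, field, value))

parser = RdbParser(MySqlExporter())
parser.parse('/var/lib/redis/6379/dump.rdb')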
Does Redis have any trigger system, like a queue system, that I can use to avoid the crons?
Redis has no trigger concept, but nothing prevents you from posting events to Redis queues each time something must be copied to MySQL. For instance, instead of:
# Add an item to a user shopping cart
RPUSH user:<id>:cart <item>
you could execute:
# Add an item to a user shopping cart
MULTI
RPUSH user:<id>:cart <item>
RPUSH cart_to_mysql <id>:<item>
EXEC
The MULTI/EXEC block makes it atomic and consistent. Then you just have to write a little daemon waiting on items of the cart_to_mysql queue (using BLPOP commands). For each dequeued item, the daemon has to fetch the relevant data from Redis, and populate the MySQL instance.
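A minimal sketch of such a daemon with redis-py, where save_cart_item() is a hypothetical stand-in for the actual MySQL insert:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)

def save_cart_item(user_id, item):
    # Hypothetical placeholder: perform the actual MySQL INSERT here.
    print("MySQL <-", user_id, item)

while True:
    # BLPOP blocks until an event arrives on the queue.
    _queue, payload = r.blpop("cart_to_mysql")
    user_id, item = payload.decode().split(":", 1)
    save_cart_item(user_id, item)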
If Redis fails, our data is stored in a file, so is it possible to store that data directly into the MySQL database?
I'm not sure I understand the question here. But if you use the above solution, the latency between Redis updates and MySQL updates will be quite limited, so if Redis fails, you will only lose the very last operations (contrary to a solution based on cron jobs). It is of course not possible to have 100% consistency in the propagation of the data, though.