Is there any reliable backup & restore method, tool, ... for a Redis DB?
I've been googling around for a couple of hours and found nothing beyond copying the dump file /var/lib/redis/dump.rdb, or some scripts that use MIGRATE (I don't know if that even counts as a backup).
OK, let's say there is a Redis DB (a big one) running on the Windows port of Redis:
github.com/MicrosoftArchive/redis
We need a copy of this DB at another branch of the company that uses the official Linux version of Redis, because the Windows port is outdated and its performance is not as good as the Linux version's.
All keys and values are encrypted and stored in binary format in Redis. So, is there any reliable backup & restore method for a Redis DB?
There are solutions around that help you automate this process. Making a Redis backup is quite simple, though:
Review / update your Redis configuration.
Create a snapshot for your Redis data (known as a "dump"/"rdb file").
Save the RDB file to a remote location.
To create the snapshot (step 2), you'll need to use the redis-cli SAVE or BGSAVE command.
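For example, a minimal sketch of a non-blocking snapshot, assuming a local Redis on the default port (polling LASTSAVE is just one way to detect that the background save finished):

#!/bin/sh
# Record the timestamp of the last completed save, trigger a background
# save, then poll until LASTSAVE advances.
LAST=$(redis-cli LASTSAVE)
redis-cli BGSAVE
while [ "$(redis-cli LASTSAVE)" = "$LAST" ]; do
  sleep 1
done
echo "Snapshot written to the configured dir (see: redis-cli CONFIG GET dir)"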
You'll then need to create a little script (you can find one here) to transfer your .rdb file to remote storage (like AWS S3).
You can then automate all of that using cron.
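As a sketch, assuming the snapshot lives at /var/lib/redis/dump.rdb and the AWS CLI is configured (the bucket name and script path below are hypothetical):

#!/bin/sh
# backup-redis.sh -- copy the latest snapshot to S3 under a timestamped name.
aws s3 cp /var/lib/redis/dump.rdb \
  "s3://my-backup-bucket/redis/dump-$(date +%Y%m%d-%H%M%S).rdb"

and a crontab entry to run it nightly at 03:00:

# m h dom mon dow command
0 3 * * * /usr/local/bin/backup-redis.sh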
Of course, you can now use services like SimpleBackups to get all of that done for you.
Assuming we want to automate the process of creating RDB files (and don't want to use a Redis server for this purpose), what options are available?
The current process involves importing (with redis-cli) a set of RESP files into a Redis server and then saving an RDB file to disk (all of that in a stateless Redis container, where the RDB file is not persistent and is difficult to access automatically). The imported dictionaries are too large for automated data ingestion via a remote Redis Python client (we have to import from files).
If the question's restrictions are liberalized somewhat to not running a local redis-server (as opposed to no dependency on any Redis server), it becomes possible to save (or, more precisely, download) a remote Redis server's database to a local (client-side) RDB file by connecting from a local client (redis-cli) to a remote redis-server instance (as pointed out by Itamar Haber in a comment to this answer), like this:
redis-cli -h <REMOTE_URL> -p <REMOTE_PORT> --rdb <LOCAL_PATH>/dump.rdb
It is equally possible to use redis-cli to first ingest the data from local RESP files to a remote Redis server (in order to later re-export the data to a local RDB file, as described above).
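For that ingestion step, redis-cli's pipe mode is the usual tool for mass-loading pre-formatted RESP files; a sketch (host, port, and file name are placeholders):

# Mass-insert a file of RESP-encoded commands into the remote server,
# then pull the resulting database back down as a local RDB file.
cat data.resp | redis-cli -h <REMOTE_URL> -p <REMOTE_PORT> --pipe
redis-cli -h <REMOTE_URL> -p <REMOTE_PORT> --rdb ./dump.rdb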
I am dealing with the infrastructure for a new project. It is a standard Laravel stack: PHP, an SQL server, and Nginx. For the PHP + Nginx part, we are using a Kubernetes cluster, so scaling and blue/green deployments are taken care of.
When it comes to the database, I am a bit unsure. We don't want to run SQL on Kubernetes, so the current idea is to go for the Google Cloud SQL managed service (are the competitors better for blue/green deployment of SQL?). The question is: can it sync the data between old and new versions of the database nodes?
Let's say that we have 3 active Pods and at least 2 active database nodes (and a load balancer).
So the standard deployment should look like this:
Pod with the new code is created.
New database node is created with current data.
The new Pod gets new environment variables to connect to the new database.
Database migrations are run on the new database node.
Health check for the new Pod is run, if it passes Pod starts to receive traffic.
One of the old Pods is taken offline.
It should keep iterating like this until all of the Pods and database nodes are replaced.
The question is: can this work with the database? Let's imagine a user on the website is writing data through the last OLD database node; when they are switched to the NEW database node, the data is simply not there until that last database node is upgraded. Can the nodes be synced behind the scenes? Does the Google Cloud SQL managed service provide that?
Or is there a completely different and better solution to this problem?
Thank you!
I'm not 100% sure if this is what you are looking for, but to my understanding, Cloud SQL replicas would be a better solution. You can have read replicas [1], which are copies of the master instance, with different options available [2]:
A read replica is a copy of the master that reflects changes to the master instance in almost real time. You create a replica to offload read requests or analytics traffic from the master. You can create multiple read replicas for a single master instance.
or a failover replica [3], so that if the master goes down, the data continues to be available there:
If an instance configured for high availability experiences an outage or becomes unresponsive, Cloud SQL automatically fails over to the failover replica, and your data continues to be available to clients. This is called a failover.
You can combine those if you need.
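As a sketch, a read replica can be created with gcloud (the instance names are placeholders; check the flags against your SDK version):

# Create a read replica of an existing Cloud SQL master instance.
gcloud sql instances create my-replica \
  --master-instance-name=my-master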
Is it possible to persist the Ignite cache on the local file system?
I need to perform cache operations like insert, update, and delete on my look-up data.
But this has to be persisted on the local file system of the respective nodes so that the data survives even after a restart of the Ignite cluster.
Alternatively, I was able to persist the data in a MySQL database.
But I'm looking for a persistence solution that works independently of databases and HDFS.
Ignite, since version 2.1, has its own Native Persistence. Moreover, it has advantages over integration with 3rd-party databases.
You can read about it here: https://apacheignite.readme.io/docs/distributed-persistent-store
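As a sketch for recent Ignite 2.x versions (the config file name is a placeholder): persistence is enabled by setting persistenceEnabled on the default data region in the node's XML configuration (see the docs link above), and because a cluster with native persistence starts inactive, it has to be activated once all nodes are up:

# Start a node with a configuration where persistenceEnabled is true.
bin/ignite.sh config/persistent-store.xml

# Activate the cluster once; data is then written to each node's work
# directory and reloaded automatically after a restart.
bin/control.sh --activate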
How do I disable SAVE for some DBs and allow it for the others in Redis?
You cannot. An RDB snapshot is a single file that contains the data of all DBs.
You can send a FLUSHDB to the DBs you do not want to restore after the RDB is loaded.
If you use a dedicated Redis process for each DB, you can configure each one differently with a dedicated redis.conf file; the SAVE and BGSAVE commands will only create a snapshot of the Redis process they are issued on.
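A sketch of that two-process setup (ports, paths, and file names are illustrative):

# redis-a.conf: port 6379, dir /var/lib/redis-a, save 900 1   (snapshots on)
# redis-b.conf: port 6380, dir /var/lib/redis-b, save ""      (snapshots off)
redis-server /etc/redis/redis-a.conf
redis-server /etc/redis/redis-b.conf

# BGSAVE against port 6379 snapshots only that process's data.
redis-cli -p 6379 BGSAVE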
I'm using Google Compute Engine instances to run several websites and am looking for a way to automatically back up / snapshot instances so that I can roll back in the event of a major error.
Assuming that your servers will come back up after a reboot, a snapshot of your VM disks makes it easy to restore to a previous backup, or to clone a server onto a second host if you want to pre-stage some changes before making them live.
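A sketch with gcloud (disk, zone, and policy names are placeholders; check the syntax against your SDK version):

# Take a one-off snapshot of a persistent disk.
gcloud compute disks snapshot my-disk \
  --zone=us-central1-a \
  --snapshot-names=my-disk-backup-1

# Or create a daily snapshot schedule and attach it to the disk.
gcloud compute resource-policies create snapshot-schedule daily-backup \
  --region=us-central1 \
  --max-retention-days=14 \
  --daily-schedule \
  --start-time=03:00
gcloud compute disks add-resource-policies my-disk \
  --zone=us-central1-a \
  --resource-policies=daily-backup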