So, I have a three-node Gluster cluster sharing a single volume, with two clients connecting to that volume. I'm installing my backup agent on all the nodes and clients. I would like to reduce duplication of backups, both to save space and to cut network overhead. This is not mission-critical data. Would it be sufficient to back up just the brick on the first Gluster node, plus maybe one of the two clients, or even the brick alone? My backup software would just be doing a standard file-system backup. I know this is a subjective question, but I would like to get some feedback.
Thanks, Chris
Backing up a brick alone is not a good idea: depending on the volume type, a single brick may not hold a complete copy of the data, and you would be bypassing Gluster itself.
To keep it simple, you can run rsync (or any backup tool) from a client machine, against the mounted volume, to wherever your destination is.
Alternatively, you can use Gluster geo-replication to do the backup. Note that in this case the backup destination must itself be a Gluster volume.
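For illustration, a minimal sketch of both options; the volume name (gv0), mount point, and backup host are placeholders rather than anything from your setup:

# Option 1: plain rsync from a client that has the volume mounted
rsync -aAX --delete /mnt/gv0/ backupuser@backuphost:/backups/gv0/

# Option 2: geo-replication to a second Gluster volume (run on one of the cluster nodes)
gluster system:: execute gsec_create
gluster volume geo-replication gv0 backuphost::backupvol create push-pem
gluster volume geo-replication gv0 backuphost::backupvol start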
I'm looking for an alternative to Hyperoo, one of the best backup solutions for VM backup.
I have tried many programs, like Veeam, Iperius, Altaro, Acronis, etc., but they all use Microsoft checkpoints and create AVHDX files. Sometimes the backup runs into problems and the AVHDX remains open, and I find myself forced to merge that checkpoint, hoping everything goes well.
All these programs make what is only nominally an incremental backup.
With every small modification the VHDX changes a little; the backup program detects that the virtual machine has changed and makes a full backup.
Hyperoo, by contrast, creates one full VHDX file and then many rollback files, one file per day.
I understand that you want safety and performance for your Hyper-V VM backups; backup and restore is a stressful experience. As you mentioned, those solutions all use Hyper-V checkpoint technology, and I don't know of anything else.
We tested a lot of backup tools and ended up with Veeam. Backup and restore usually work. Unfortunately, it puts a lot of load on the infrastructure during backup (storage is slow), and sometimes backups fail because of this. To avoid that, we schedule backups in fixed windows outside working hours. Keep in mind that we use the backup only for server VMs (not VDI).
I would recommend Veeam as a backup solution, but maybe you can also take a look at Commvault (https://documentation.commvault.com/commvault/v11/article?p=31493.htm).
Greetings.
I have data on several machines that I want to back up in a way that lets me restore to certain points in time.
From what I read, Snapshot Replication achieves this (as opposed to a backup that clobbers previous results).
The main motivation is ransomware: if the data files are ransacked and encrypted, a plain backup can leave me in a state where the backed-up files are also encrypted.
One way to do this is by using 2 Synology NAS machines where I can have:
rsync processes to back up files from multiple machines to NAS1 (sketched below)
apply Snapshot Replication from NAS1 to NAS2
In this way, if the data is hijacked at some point, I can restore it to the last good state by rolling NAS2 back to a previous point in time.
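As a rough sketch of the first step (host names, paths, and users are placeholders; it assumes rsync over SSH is enabled on the NAS):

# run via cron on each source machine; pushes its data into its own folder on NAS1
rsync -a --delete /home/user/data/ backupuser@nas1:/volume1/backups/machine1/

The NAS1-to-NAS2 part would then be configured in DSM's Snapshot Replication package rather than scripted.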
I would like to know if:
Snapshot Replication is the way to go, or there are other solutions?
are there other ways to achieve Snapshot Replication, e.g. with a single NAS?
I have an older Synology 2-Bay NAS DS213j.
Assuming that I buy a second, newer NAS (e.g. DS220j), are the two NAS machines expected to work together?
Thanks
I found out that Hyper Backup can keep multiple point-in-time versions, so I'm using it instead of Snapshot Replication.
I have a VPS with OVH. There are two options there, Automated Backup and Snapshot. What is the difference between the two, and which one should I enable so I don't lose the data and configuration on the server? It took me quite some time to optimize my server, so I don't want to go through that pain again. Plus, there's like 30GB of data uploaded; I don't want to risk that either.
This explains it: https://www.ovh.com/world/vps/backup-vps.xml
So basically the automated backup is done automatically every day and replicated across three different sites to ensure nothing is lost.
With snapshots, it seems you have a maximum of two, and you have to take them yourself (like a VM snapshot).
Being relatively new to GCE, but not to other virtualization tools like VMware or VirtualBox, I'm unable to find a concrete way in GCE to get a full snapshot of a live machine.
I'm guessing it's my fault or my poor knowledge, but does GCE really not save the "system state", i.e. dump memory into the snapshot?
I've found many scripts and examples on how to flush buffers to disk before creating the snapshot, but no way to obtain the complete state of the machine, including what the machine itself is running at THAT point.
Let me say that, if this is correct, the GCE snapshot IS NOT a snapshot.
Thanks in advance for your help.
That's effectively a disk image, not a machine snapshot: it does not include the contents of RAM or the processor state. In GCE, a snapshot is a point-in-time copy of a persistent disk.
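For completeness, the usual disk-level flow looks something like this (the disk, zone, and snapshot names are made up for the example):

# inside the VM: flush filesystem buffers so the disk contents are consistent
sync
# from any machine with the Cloud SDK: snapshot the persistent disk
gcloud compute disks snapshot my-disk --zone=us-central1-a --snapshot-names=my-disk-snap

Note that this still captures only the disk, not RAM or CPU state.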
http://vcloud.vmware.com/uk/using-vcloud-air/tutorials/working-with-snapshots
Here's an example of a cloud platform saving true snapshots: portraits of a specific second of a working machine.
Let me add a thought:
I don't know whether vCloud quiesces the machine in a particular state, gains privileged access to the disks for a limited time to avoid inconsistency, or makes a temporary duplicate of the working disk on another volume.
I'm still reading around, trying to get INTO the problem.
BUT... it does dump memory into the snapshot.
That is the point, and I'm wondering why this seems not to be possible in GCE.
I am using Redis version 2.8.3. I want to build a Redis cluster, but with multiple masters: I need multiple nodes that have write access and that propagate their writes to all the other nodes.
I could build a cluster with one master and multiple slaves. I just configured the slaves' redis.conf files and added this:
slaveof myMasterIp myMasterPort
That's all. Then I tried writing something into the DB via the master; it was replicated to all slaves, and I really like that.
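For illustration, that behavior can be checked with redis-cli like this (host names are placeholders):

redis-cli -h myMasterIp -p 6379 set greeting hello
redis-cli -h mySlaveIp -p 6379 get greeting        # returns "hello" once replicated
redis-cli -h mySlaveIp -p 6379 info replication    # shows role:slave and the master link status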
But when I tried to write via a slave, it told me that slaves have no right to write. So I set the slave's read-only flag to false in redis.conf (slave-read-only no), and then I could write to the DB.
But I realized that those writes are not replicated back to my master, and therefore not to any of the other slaves either.
This means I could not build an active-active cluster this way.
I tried to find out whether Redis has active-active cluster capability, but I could not find an exact answer.
Is it possible to build an active-active cluster with Redis?
If it is, how can I do it?
Thank you!
Redis v2.8.3 does not support multi-master setups. The real question, however, is why do you want to set one up? Put differently, what challenge/problem are you trying to solve?
It looks like the challenge you're trying to solve is how to reduce the network load (more on that below) by eliminating over-the-net reads. Since Redis isn't multi-master (yet), the only way to do it is by setting up each app server with a master and a slave (of the other master) - i.e. a grand total of 4 Redis instances (and twice the RAM); a sketch of this layout follows below.
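As a sketch, with app1/app2 and the ports as made-up placeholders: each app server runs a local master on 6379 and a local slave of the other server's master on 6380, so each app writes to its local master and can read from either local instance.

# redis.conf for the master instance on each app server
port 6379

# redis.conf for the slave instance on app1 (mirrors app2's master)
port 6380
slaveof app2 6379

# redis.conf for the slave instance on app2 (mirrors app1's master)
port 6380
slaveof app1 6379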
The simple scenario is when each app updates only a mutually-exclusive subset of the database's keys. In that scenario this kind of setup may actually be beneficial (at least in the short term). If, however, both apps can touch all keys, or if even just one key is "shared" for writes between the apps, then you'll need to bake locking/conflict-resolution/etc. logic into your apps to consolidate local master and slave differences (and that may be a bit of overkill). In either case, however, you'll end up with too many (i.e. more than 1) Redises, which means more admin effort at the very least.
Also note that by colocating app and database on the same server you're setting yourself up for near-certain scalability failure. What will happen when you need more compute resources for your apps or for Redis? How will you add yet another app server to the mix?
Which brings me back to the actual problem you are trying to solve - network load. Why exactly is that an issue? Are your apps so throughput-heavy, or is the network so thin, that you are willing to go to such lengths? Or maybe latency is the issue you want to resolve? Be that as it may, I recommend that you consider a time-proven design instead, namely separating Redis from the apps and putting it on its own resources. True, the network will hit you in the face and you'll have to work around/with it (which is what everybody else does). On the other hand, you'll have more flexibility and control over your much simpler setup, and that, in my book, is a huge gain.
Redis Enterprise has had this feature for quite a while, but if you are looking for an open-source solution, KeyDB is a fork with active-active support (called Active Replica).
Setting it up is just a little more work than standard replication:
Both servers must have "active-replica yes" in their respective configuration files
On server B execute the command "replicaof [A address] [A port]"
Server B will drop its database and load server A's dataset
On server A execute the command "replicaof [B address] [B port]"
Server A will drop its database and load server B's dataset (including the data it just transferred in the prior step)
Both servers will now propagate writes to each other. You can test this by writing to a key on server A and ensuring it is visible on B, and vice versa, as sketched below.
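Putting the steps together with redis-cli (host names here are placeholders, and both servers are assumed to already have active-replica yes in their configs):

redis-cli -h serverB replicaof serverA 6379
redis-cli -h serverA replicaof serverB 6379
redis-cli -h serverA set k1 written-on-A
redis-cli -h serverB get k1    # -> "written-on-A"
redis-cli -h serverB set k2 written-on-B
redis-cli -h serverA get k2    # -> "written-on-B"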
https://github.com/JohnSully/KeyDB/wiki/KeyDB-(Redis-Fork):-Active-Replica-Support