At my company we have decided to move part of our backups to the cloud and, as the title suggests, we have set up Wasabi for this.
The first priority is to move the backups from our on-premises Proxmox hosts to Wasabi, but searching the documentation and online I can't find a way to move backups from Proxmox to Wasabi.
Do you have any suggestions or advice?
We're looking to accomplish something similar with Proxmox and Wasabi. After some digging this afternoon, the most mature way of doing this seems to be Veeam with Agent Backup. Veeam does not officially support the Proxmox kernel, as explained by staff here, and it doesn't seem like they have any intention of doing so. This means you cannot (reliably) back up the VMs/CTs at the hypervisor level. But it seems that you can leverage the Agent Backup instead and use the VBS (Veeam Backup Server) to push incremental backups to Wasabi. I use Veeam and Wasabi together with some clientele on ESXi for a 3-2-1 backup scheme with Agent Backups, and it works great. This is the approach we're going to take with Proxmox as well. Although it's more expensive than some cheap workaround, this backup method scales very well, considering you can use VEM to manage other VBSs.
EDIT: Here are a few links to Veeam resources to check out:
Veeam Agent Backup (Linux version, but they make a Windows and Mac agent too.)
General VBR Resource Page
How can I backup/restore the data on Cumulocity?
Is there any backup feature or service on this platform?
This feature is especially needed when we use the Edge solution and something goes wrong with the edge server, e.g. the hard disk fails.
Depending on where you are hosting the Cumulocity Edge VM, you can export snapshots, e.g. on Hyper-V hosts. The snapshot can then be archived/stored anywhere else.
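For the Hyper-V case, the export can be scripted. Here is a minimal sketch driven from Python via PowerShell's Export-VMSnapshot cmdlet; the VM name, checkpoint name, and export path are placeholders of mine, and it assumes the script runs on the Hyper-V host itself:

```python
# Hypothetical sketch: export a Hyper-V checkpoint of the Edge VM so it
# can be archived elsewhere. Assumes this runs on the Hyper-V host with
# the Hyper-V PowerShell module available; names below are placeholders.
import subprocess

vm_name = "cumulocity-edge"   # placeholder VM name
checkpoint = "pre-upgrade"    # placeholder checkpoint name
export_path = r"D:\exports"   # destination for the exported snapshot

subprocess.run(
    [
        "powershell", "-NoProfile", "-Command",
        f"Export-VMSnapshot -VMName '{vm_name}' "
        f"-Name '{checkpoint}' -Path '{export_path}'",
    ],
    check=True,
)
```

The exported folder can then be copied to whatever archive location you use.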
I downloaded the Redis server and CLI to my local machine and they are working well.
I just want to know whether I can also use it on a production server:
Are there any critical limitations? For example, can I use 100 GB for free? (It will be on my own computer.)
I know that Redis Labs costs money per month, but if I download Redis to my own machine and don't use Redis Labs, is it free? (So the only cost would be the storage of the machine I'm using.)
Redis is open source software, licensed under the BSD license. That basically means you can do anything you want with it, without owing anyone anything.
Redis Labs, the home of open source Redis and the provider of commercial products that leverage it, offers a wide spectrum of solutions: hosted, as-a-service, downloadable, remotely managed, and so forth. You can (and sometimes should) use them, but that's definitely not a requirement.
Disclaimer: I work at Redis Labs and with the open source project.
I am looking for a "free" IaaS service as an alternative to EC2 which will let me SSH into a system with full user permissions (create/delete files, install services, libraries and applications from the repository).
I tried OpenShift but ended up leaving due to its strict SSH permission policy. Heroku, dotCloud, CloudFoundry.com, and Stackato are PaaS providers. Rackspace and Linode might have what I need but are not free.
Are my own home server and EC2 the only two options I have? For the curious, I want to deploy my entire .vim folder and .vimrc file so I can develop in the cloud from any computer when I am not at home.
It seems like you want something for free that is not provided anywhere for free. I know it's a shame, but it is reasonable that companies would charge for such a thing. Given that you want it for free, I am guessing that you don't need much power or anything large scale. In that case I would look into the cheaper end of virtual private servers or a micro instance on EC2. VPS plans start at around $20 a month and a micro instance starts at $14. Of course, for the micro instance you will have to pay a little extra for bandwidth and probably an EBS volume. Additionally, AWS offers a free tier which pretty much allows you to run a micro instance with EBS for the first year.
I have a Python/Redis service running on my desk that I want to move to my Blue Domino-hosted site. I've got Python available on the server, but not Redis. They don't give me root access to my Debian VM, so I can't fetch, extract, and install it myself from a Unix prompt.
Their tech support might do the install for me, but they need me to point them to the server requirements, which I don't see on the Redis download page.
I could probably FTP binaries to the site if they were available, but that's dicey.
Has anyone dealt with this?
Installing Redis from source is actually quite easy. It doesn't have any dependencies, so just download the tarball, unzip it, and follow the install instructions. I'm always wary of doing that sort of thing, but with Redis it really was a breeze. If you don't dare to do it yourself, their tech support should be able to.
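If it helps the conversation with tech support, the whole build boils down to a handful of steps. A minimal sketch in Python (the tarball URL is the stable release location Redis publishes; gcc and make on the target machine are assumptions):

```python
# Minimal sketch: fetch and build Redis from source in the current
# directory. Assumes gcc and make are available; no root access is
# needed because nothing is installed system-wide.
import subprocess
import urllib.request

TARBALL = "https://download.redis.io/redis-stable.tar.gz"

urllib.request.urlretrieve(TARBALL, "redis-stable.tar.gz")
subprocess.run(["tar", "xzf", "redis-stable.tar.gz"], check=True)
subprocess.run(["make"], cwd="redis-stable", check=True)
# The server binary ends up in redis-stable/src/redis-server;
# copy it wherever you like and run it with your own redis.conf.
```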
If it is an Intel/AMD server, you can compile Redis somewhere else (a 32-bit build, for example) and upload the binary. Then start it from Python. I did this myself a couple of weeks ago.
For the port you will need to use something above 1024 (lower ports require root), and I don't recommend using the default port anyway. Remember to change the loglevel too. Daemonizing works fine as non-root as well.
Some hosts block all external ports, so you will not be able to connect to Redis from outside, but that is only a problem if you connect from a different machine. From the same machine it should be OK, since the connection is "internal".
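Putting those points together, starting the uploaded binary on a non-default port and talking to it locally could look something like this sketch (all paths and the port number are placeholders of mine, and it assumes the redis-py client is installed):

```python
# Minimal sketch: write a small config, start the uploaded redis-server
# binary as a daemon on a non-default port, then connect locally.
# Assumes redis-py is available; all paths below are placeholders.
import subprocess
import time
import redis

CONF = """
port 6400
loglevel warning
daemonize yes
logfile /home/myuser/redis/redis.log
dir /home/myuser/redis
"""

with open("/home/myuser/redis/redis.conf", "w") as f:
    f.write(CONF)

subprocess.run(
    ["/home/myuser/redis/redis-server", "/home/myuser/redis/redis.conf"],
    check=True,
)
time.sleep(0.5)  # give the daemon a moment to start

# Local ("internal") connection; no external port needs to be open.
r = redis.Redis(host="127.0.0.1", port=6400)
r.set("hello", "world")
print(r.get("hello"))  # b'world'
```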
However, I am not sure how the hosting administrator will react when he sees the process running :) I personally would kill it immediately.
There is another option as well: check out a service like Redis4you.com. But their free account is small, so you will probably need to spend some money for more RAM.
Is your hosting provider asking for a minimum set of system requirements for running Redis? This is indeed not listed on the Redis website, probably because there aren't many exotic requirements, and it also depends a lot on your use case. Basically, what you need to run Redis is:
Operating system: Unix-like; Linux is recommended (one reason to favor Linux I've heard of is the performance of its TCP/IP stack).
Tools: GCC, make, (git).
Memory: lots (no, seriously, this depends on your use case, but because Redis keeps everything in memory you need at least more RAM than the size of your dataset; see the sketch after this list).
Disk: disk access for making snapshots.
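As a rough sanity check of the memory point, you can compare the dataset size Redis reports against the machine's total RAM. A sketch using redis-py (the client library and the local default-port target are my assumptions):

```python
# Rough sketch: compare Redis memory usage against total system RAM.
# Assumes a local Redis on the default port and the redis-py client;
# os.sysconf works on Linux, which matches the recommendation above.
import os
import redis

r = redis.Redis(host="127.0.0.1", port=6379)
used = r.info("memory")["used_memory"]                            # bytes used by Redis
total = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")  # total RAM in bytes

print(f"Redis uses {used / 2**20:.1f} MiB of {total / 2**30:.1f} GiB RAM")
```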
The problem seems to be dealing with something non-traditional in my Blue Domino hosting. Since this project is a new venture, I think the best course for me is to rent a small Linux VM from Rackspace and forget about the BD hosting.
We are looking for a backup solution for our Xen servers that meets the following requirements:
makes backups while machines are running
has easy-to-use disaster recovery that does not depend on complex infrastructure in case of a disaster
can back up all kinds of Linux and Windows machines
sends some kind of message if something is not working; we don't want to monitor everything manually
We tried Acronis Backup & Recovery 10 Virtual Edition, but it is not compatible with Linux VMs. Bacula does not seem to have good disaster recovery, as far as we know.
My question:
What are good backup solutions for our requirements?
Thanks in advance for your answers.
Cheers
Arne
moved to https://serverfault.com/questions/230062/good-backup-solution-for-xen-virtual-machines