Google Compute Engine virtual machine backup strategy

Does Google Compute Engine provide automatic, scheduled backups for virtual machines? I want to back up the entire disk, exactly as I can do manually. I couldn't find this in their documentation. If it isn't available, what other strategies have you used for scheduling backups?

You can use the Compute Engine API to create snapshots programmatically. A similar question has been answered in this thread.
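For instance, a minimal sketch using the google-api-python-client library, assuming application default credentials are configured; the project, zone, disk, and snapshot names below are placeholders:

    # A sketch only: create a snapshot of a persistent disk through the
    # Compute Engine API. All resource names are placeholder assumptions.
    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")

    # disks().createSnapshot() returns a zone operation you can poll until done.
    operation = compute.disks().createSnapshot(
        project="my-project",
        zone="us-central1-a",
        disk="my-disk",
        body={"name": "my-disk-nightly-snapshot"},
    ).execute()
    print(operation["name"])

Run this from cron (or a Cloud Function) and you have scheduled backups.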

Yes: GCP recently announced snapshot schedules, which do exactly what you want. You can refer to this link to set up your schedule, either through the GCP console or with a gcloud command.
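If you would rather drive this through the API than the console or gcloud, snapshot schedules are backed by resource policies. A hedged sketch with google-api-python-client; the names, schedule, and retention values are illustrative assumptions:

    # A sketch only: create a daily snapshot schedule (04:00 UTC, 14-day
    # retention) and attach it to a disk. Names are placeholders.
    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")

    # A resource policy holds the schedule and the retention rule.
    compute.resourcePolicies().insert(
        project="my-project",
        region="us-central1",
        body={
            "name": "daily-backup",
            "snapshotSchedulePolicy": {
                "schedule": {
                    "dailySchedule": {"daysInCycle": 1, "startTime": "04:00"},
                },
                "retentionPolicy": {"maxRetentionDays": 14},
            },
        },
    ).execute()

    # Attach the schedule to a disk so snapshots happen automatically.
    compute.disks().addResourcePolicies(
        project="my-project",
        zone="us-central1-a",
        disk="my-disk",
        body={
            "resourcePolicies": [
                "https://www.googleapis.com/compute/v1/projects/my-project"
                "/regions/us-central1/resourcePolicies/daily-backup"
            ],
        },
    ).execute()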

Related

Proxmox backup to wasabi

My company has decided to move part of our backups to the cloud and, as the title suggests, we have set up Wasabi for backup.
The first priority is to move the backups from our in-house Proxmox hosts to Wasabi, but looking through the documentation and online I can't find a way to do it.
Do you have any suggestions or advice?
We're looking to accomplish something similar with Proxmox and Wasabi. After some digging this afternoon, the most mature way of doing this would be to use Veeam with Agent Backup. Veeam does not officially support the Proxmox kernel, as explained by staff here, and it doesn't seem like they have any intention of doing so. This means you cannot (reliably) back up the VMs/CTs from the hypervisor level. But it seems you can leverage Agent Backup instead and use the VBS (Veeam Backup Server) to push incremental backups to Wasabi. I use Veeam and Wasabi together with some clientele on ESXi for a 3-2-1 backup scheme with Agent Backups, and it works great. This is the approach we're going to take with Proxmox as well. Although it's more expensive than some cheaper workarounds, this backup method scales very well, considering you can use VEM (Veeam Enterprise Manager) to manage other VBSs.
EDIT: Here are a few links to Veeam resources to check out:
Veeam Agent Backup (Linux version, but they make a Windows and Mac agent too.)
General VBR Resource Page
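Separately from the Veeam route above: since Wasabi speaks the S3 API, you can also push Proxmox's vzdump archives to a bucket directly. A minimal boto3 sketch; the bucket name, credentials, and the archive path are placeholder assumptions (Wasabi also has region-specific endpoints):

    # A sketch only: upload one Proxmox backup archive to a Wasabi bucket
    # over the S3-compatible API. All names and paths are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.wasabisys.com",  # Wasabi's S3-compatible endpoint
        aws_access_key_id="WASABI_ACCESS_KEY",
        aws_secret_access_key="WASABI_SECRET_KEY",
    )

    # Push an archive from Proxmox's default dump directory to the bucket.
    s3.upload_file(
        "/var/lib/vz/dump/vzdump-qemu-100.vma.zst",
        "proxmox-backups",
        "vzdump-qemu-100.vma.zst",
    )

A cron job looping over the dump directory would give you scheduled offsite copies without any agent at all, at the cost of doing your own retention management.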

Configuration Management for Installation of OS

I am looking for a uniform configuration management tool (similar to Puppet/Chef) for remotely installing an OS on remote servers, with wide platform support. I think PXE/kickstart can be used for remote installation, but I am not sure whether it can install the OS on multiple servers in parallel. Another option is to spin up EC2 instances from AWS and pay Amazon for the usage. Is there any better option for this requirement?
Regards
Bubunia
You can consider Ansible a strong candidate for this.
Some of its features:
Open source with a large development community
A large number of modules that help you build flexible solutions
Cloud-focused modules
An inventory that lets you automate things end to end, based on tags on your instances
Agentless
Playbooks in an easy-to-write, easy-to-read YAML format
Pre-built modules for many installation tasks, available from the open-source community
Works with multiple operating systems
It is efficient as well; I have been using it for the past year and have found it very good.
Sparrow6 CM supports quite a range of platforms/OSes. You can use Sparrowdo to run configuration jobs in a push manner over SSH.

Questions About Using Amazon Web Services (AWS) For Remote Development

We are a very small mobile company (building an application for the iPhone), and we are currently considering hosting services. We are leaning towards Amazon's hosting/web services. Accordingly, I have some questions:
1) Can I create an admin account on AWS and assign user accounts to developers that should have access to most (but not all) features?
2) Do we need to learn / use AWS APIs in the development of our product? I don't like the idea of having to create hooks into a hosting service.
3) It looks like the pricing for AWS scales with usage. So, since we are in development and only developers are accessing the server right now, am I right that the cost will be quite low, if anything?
4) How does AWS do version management? We have several developers scattered throughout the country. Each will need to check out the most recent build from the server for development on his local box. Basically, something like SVN. Is this possible?
5) I am guessing we need something like a dev, SVN, and production server? Is this right? If so, how do I set this up and find out the associated costs?
6) We are considering a few database options, among them NoSQL and Neo4j - will we be able to do this using AWS? The server language will be Java.
Thanks for your time.
To answer your questions:
Yes, kind of. There is Identity and Access Management offered by AWS, but it's not the easiest solution to use. Having said that, it can allow you to lock down some of the access activities on an account so that you have some control over your users. I would say that AWS is still very much a single-user environment for server administrators.
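As an illustration of the IAM approach, here is a minimal sketch using the modern boto3 library (which postdates this thread); the user name and the choice of managed policy are assumptions:

    # A sketch only: create a developer IAM user and attach a read-only
    # EC2 policy, so the developer can see but not change resources.
    import boto3

    iam = boto3.client("iam")

    # Each developer gets an IAM user rather than sharing the root account.
    iam.create_user(UserName="dev-alice")

    # An AWS-managed policy allowing read-only access to EC2.
    iam.attach_user_policy(
        UserName="dev-alice",
        PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
    )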
You could get away with using only the management console. Scripting may only be required if you want to run batch or periodic activities (e.g. taking a snapshot of all machines at 2am every night).
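For that snapshot example, a sketch of the kind of script you would schedule from cron, using the modern boto3 library; the region and description are placeholder assumptions:

    # A sketch only: start a snapshot of every EBS volume in one region,
    # the sort of job you would run nightly at 2am.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Page through all volumes and kick off a snapshot of each one.
    for page in ec2.get_paginator("describe_volumes").paginate():
        for volume in page["Volumes"]:
            ec2.create_snapshot(
                VolumeId=volume["VolumeId"],
                Description="nightly backup",
            )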
Costs for EC2 are low, especially for the Micro machine sizes. But keep in mind that the idea of cloud computing is the availability of on-demand resources for short-term use. If you run dev machines needlessly overnight, you will still be paying! And if someone launches an Extra Large machine (or 30 machine instances), you will suddenly find yourself with bigger bills than expected.
(This covers 5 and 6 as well.) Amazon EC2 is really about issuing you the boxes; what you do thereafter is fully up to you. You can create daily snapshots of your machines, deploy SVN, NoSQL, and so on.
I've been seriously into EC2 for a while now, and lots of companies are starting to look at the idea you propose. There are benefits to giving staff on-demand compute power without having to manage any infrastructure in-house. But I will reiterate my first point that EC2 is very much a single-user, server-administration environment, which doesn't lend itself to being used as a dev playground without additional tools. (Or at least it becomes a challenging task if you have several devs spread around the company.)
I own a business that helps companies use EC2 for dev/lab/playground type of environments. I won't directly flog it here, but will show a quick demo we just put on DropBox: http://dl.dropbox.com/u/16347737/RequestEC2Machines.html Feel free to request a machine to see how adding process to EC2 can help meet your goals.
I run/develop a website using Amazon EC2 & SimpleDB, and I have some comments for you on your questions.
1) Can I create an admin account on AWS and assign user accounts to developers that should have access to most (but not all) features?
In my experience, there doesn't seem to be a direct correspondence between Amazon users and users on a single instance. An instance's root account is connected to the Amazon account indirectly, through a key pair. I must say, though, that I haven't explored this question in detail.
2) Do we need to learn / use AWS APIs in the development of our product? I don't like the idea of having to create hooks into a hosting service.
I manage everything through their web console and Eclipse IDE plugins. I've never had to touch the API yet for development and deployment.
3) It looks like the pricing for AWS scales with usage. So, since we are in development and only developers are accessing the server right now, am I right that the cost will be quite low, if anything?
Micro instances are the cheapest, and the cost is very reasonable if you're just starting an instance for a couple of hours and then stopping it. I never think twice about starting a micro instance to try out something new.
4) How does AWS do version management? We have several developers scattered throughout the country. Each will need to check out the most recent build from the server for development on his local box. Basically, something like SVN. Is this possible?
I haven't seen this feature offered directly by Amazon. You can, of course, keep an instance always on for your repository, with backups.
5) I am guessing we need something like a dev, SVN, and production server? Is this right? If so, how do I set this up and find out the associated costs?
EC2 Pricing - http://aws.amazon.com/ec2/pricing/
Amazon Simple Monthly Calculator - http://calculator.s3.amazonaws.com/calc5.html
6) We are considering a few database options, among them NoSQL and Neo4j - will we be able to do this using AWS? The server language will be Java.
Amazon instances can be whatever you want them to be: you can either use a pre-configured AMI to launch an instance, or start with a bare-bones Ubuntu Server or Windows Server image and build the system you want. You can then save a snapshot of that system to launch more instances in the future, or to re-launch if your instance crashes.
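To illustrate that "save a snapshot of the system" step, a minimal sketch with the modern boto3 library (which postdates this thread); the instance id and image name are placeholders:

    # A sketch only: bake a configured instance into a reusable AMI.
    import boto3

    ec2 = boto3.client("ec2")

    # create_image registers a new AMI from an existing instance; launching
    # from it later recreates the same configured system.
    response = ec2.create_image(
        InstanceId="i-0123456789abcdef0",
        Name="configured-server-v1",
    )
    print(response["ImageId"])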

Amazon EC2 Windows AMI with shared S3 storage

I've currently got a base Windows 2008 Server AMI that I created on Amazon EC2. I use it to create 20-30 EBS-based EC2 instances at a time for processing large amounts of data into PDFs for a client. However, once the data processing is complete, I have to manually connect to each machine and copy off the files. This takes a lot of time and effort, and so I'm trying to figure out the best way to use S3 as a centralised storage for the outputted PDF files.
I've seen a number of third party (commercial) utilities that can map S3 buckets to drives within Windows, but is there a better, more sensible way to achieve what I want? Having not used S3 before, only EC2, I'm not sure of what options are available, and I've not been able to find anything online addressing the issue of using S3 as centralised storage for multiple EC2 Windows instances.
Update: Thanks for the suggestions of command-line tools for using S3. I was hoping for something a little more integrated and less ad hoc. Seeing as EC2 is closely related to S3 (S3 used to be the default storage mechanism for AMIs, etc.), I thought there might be something neater/easier I could do. Perhaps even around private cloud networks and EC2-backed S3 servers, or something along those lines (an area I know nothing about). No other ideas?
I'd probably look for a command-line tool. A quick search on Google led me to a .NET tool:
http://s3.codeplex.com/
And a Java one:
http://www.beaconhill.com/opensource/s3cp.html
I'm sure there are others out there as well.
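Along the same lines, a small script on each instance could push the finished PDFs to one shared bucket rather than mapping a drive. A minimal sketch with the modern boto3 library (which postdates this thread); the bucket name and output directory are assumptions:

    # A sketch only: run on each Windows instance after processing to push
    # the finished PDFs into one central bucket.
    import os
    import boto3

    s3 = boto3.client("s3")
    output_dir = r"C:\output"

    # Upload every PDF the job produced.
    for filename in os.listdir(output_dir):
        if filename.lower().endswith(".pdf"):
            s3.upload_file(
                os.path.join(output_dir, filename),
                "central-pdf-output",
                filename,
            )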
You could use an EC2 instance with an EBS volume exported through Samba, which could act as centralized storage that the Windows instances map as a drive.
This sounds very much like a Hadoop/Amazon MapReduce job to me. Unfortunately, Hadoop is best deployed on Linux:
Hadoop on Windows Server
I assume the software you use for PDF processing is Windows-only?
If that is not the case, I'd seriously consider porting your solution to Linux.

EC2 automation tools / strategies?

What tools or strategies are you using for automation of EC2 activities?
I need to be able to bring up a number of EC2 instances, provision various software onto them (primarily Python packages), interact with S3 (primarily downloading data), and run various jobs. I'll be doing this both on demand and on a scheduled basis.
I'm trying to decide if I should:
Create an AMI with all my software loaded on it
or
Launch a plain vanilla Linux AMI instance and scp my software to it
For the provisioning and automation, Boto looks pretty good. Or I could write something with Paramiko. Would you recommend either, or is there anything else I should be looking at?
Basically, I'm looking for advice and success stories; let me know what's working for you.
To answer your bullets about selecting AMIs, I would say that it depends on how much software you're installing.
I have been successful with a hybrid approach, where I build an AMI and load my heavyweight, more stable software onto it. This is the stuff that needs to run an installer or takes considerable time to install (remember that if you re-install a package every time as part of your startup process, you're paying for that install every time). Then I upload the small and volatile software at provisioning/startup time; into this bucket goes most of the application code, data, etc. That way, I can change my app without having to touch the AMI (see the sketch after the lists below).
The benefits of this approach:
Don't have to pay for running the same software install thousands of times.
AMI can stay fairly stable over time.
Can use software that requires intervention or GUI interaction to install.
Major drawbacks:
Your AMI's OS version will become stale over time.
Your AMI may not be flexible as to the instance type/architecture it will run on. For instance, you may create it on a 32-bit OS and thereby prevent it from running on the High CPU instance types, or vice versa. So you may lock yourself into a pricing scheme.
I don't use Python, so I can't comment on either of the APIs you referenced.
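For completeness, a minimal sketch of the hybrid approach using the modern boto3 library (which postdates this answer); the AMI id, instance type, and bucket are placeholder assumptions, and the bootstrap script presumes the AWS CLI is already baked into the AMI:

    # A sketch only: launch from a pre-baked AMI and pull the small,
    # volatile application code at boot via user data.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    # The heavyweight software is already on the AMI; only the app changes.
    startup = """#!/bin/bash
    aws s3 cp s3://my-deploy-bucket/app.tar.gz /tmp/app.tar.gz
    tar -xzf /tmp/app.tar.gz -C /opt
    """

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        UserData=startup,
    )
    print(instances[0].id)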
AWS just released the Systems Manager suite, which includes an Automation service that will (among other things) handle your use cases around AMIs.
This question was asked some time ago now, but I believe my answer could be useful to other users. I believe the best automation tools available on the market are provided by cloud management platforms. For example, they offer auto-scaling, configuration management integration (Chef/Puppet), database replication, DNS management...
The most popular cloud management platforms are Scalr (disclaimer: I work there), RightScale, and enStratus. Scalr is open source and released under the Apache 2 license.
Regarding your specific question on AMIs, cloud management platforms usually provide pre-configured AMIs (at Scalr, we call them roles). If you want to create your own AMI from an existing instance, you can create snapshots and use them as a template for future instances.