This question is for anyone who has actually used Amazon EC2. I'm looking into what it would take to deploy a server there.
It looks like I can start in VirtualBox, set up my server, and then export the image using the provided ec2-tools.
What gets tricky is that if I make configuration changes to the running server, they will not be persistent.
I have some PHP code that I need to be able to deploy (and redeploy) to the system, so I was thinking that EBS would be a good choice there.
I have a massive amount of data that I need stored, but it just so happens that latency is not an issue, so I was thinking something like s3fs might work.
So my question is... What would you do? What does your configuration look like? What have been particular challenges that perhaps you didn't see coming?
We have deployed a large-scale commercial app in the AWS environment.
There are three basic approaches to keeping your changes under control once the server is running, all of which we use in different situations:
Keep the changes in source control. Have a script that is part of your original image that can pull down the latest and greatest. You can pull down PHP code, Apache settings, whatever you need. If you need to restart your instance from your AMI (Amazon Machine Image), just run your script to get the latest code and configuration, and you're good to go.
Use EBS (Elastic Block Storage). EBS is like a big external hard drive that you can attach to your instance. Even if your instance goes away, EBS survives. If you later need two (or more) identical instances, you can give each one of them access to what you save in EBS. See https://stackoverflow.com/a/3630707/141172
Burn a new AMI after each change. There's a tool to create a new AMI from a running instance. If EBS is like having an external hard drive, creating a new AMI is like having a DVD-R. You can save the current state of your machine to it. Next time you have to start a new instance, base it on that new AMI. Good to go.
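A minimal sketch of approaches 2 and 3 using the modern AWS CLI (the original ec2-api-tools had equivalent commands); the volume, instance and device names here are hypothetical:

```bash
# Approach 2: attach a persistent EBS volume to a (hypothetical) instance.
aws ec2 attach-volume \
    --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 \
    --device /dev/sdf

# Approach 3: burn a new AMI from the running instance.
# --no-reboot avoids a restart but risks filesystem inconsistency.
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "my-server-$(date +%Y%m%d)" \
    --no-reboot
```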
I recommend storing your PHP code in a repository such as SVN, and writing a script that checks the latest code out of the repository and redeploys it when you want to upgrade. You could also have this script run on instance startup so that you get the latest code whenever you spin up a new instance; saves on having to create a new AMI every time.
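As a rough sketch of such a script (the repository URL and paths are assumptions, not anything from the original setup):

```bash
#!/bin/sh
# Pull the latest PHP code out of SVN and redeploy it.
set -e
REPO=https://svn.example.com/myapp/trunk   # hypothetical repository
DEST=/var/www/myapp

if [ -d "$DEST/.svn" ]; then
    svn update "$DEST"            # existing working copy: just update
else
    svn checkout "$REPO" "$DEST"  # fresh instance: full checkout
fi

# Pick up any changed Apache configuration without dropping connections.
sudo apachectl graceful
```

Hooking this into instance startup (rc.local or user-data) gets you the "latest code on boot" behaviour described above.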
The main challenge that I didn't see coming with EC2 is instance startup time - especially with Windows. Linux instances take 5 to 10 minutes to launch, but I've seen Windows instances take up to 40 minutes; this can be an issue if you want to do dynamic load balancing and start up new instances when your load increases.
I'd suggest the best bet is to simply 'try it'. The charges to run a small instance are not high and data transfer rates are very low - I have moved quite a few GB and my data fees are still less than a dollar(!) in my first month. I suspect you will end up paying mostly for instance hours rather than data transfer.
I haven't deployed yet, but I have run up an instance, migrated it from Ubuntu 8.04 to 8.10, tried different port security settings, seen what sort of access attempts unknown people have made (mostly looking for phpMyAdmin), run some testing against it, and generally experimented with the configuration and restarting of the components I'm deploying. It has been a good prelude to my eventual deployment. I won't be starting with a big DB, so I will initially stick with the standard EC2 instance storage.
The only negative I have heard is that spammers have gotten some of the IP ranges onto spam blocklists - but I have not yet confirmed that.
I suggest you take your VirtualBox approach only after you are more familiar with the EC2 infrastructure. Start by opening an account and following Amazon's EC2 getting-started guide. That guide will give you enough of an overview of everything (EBS, IPs, connections, and so on) to get you started. We are currently using EC2 in production, and the way we started was exactly as I describe here.
I hope you become a cloud expert soon.
Per timbo's concern, I was able to nab an IP that, so far, hasn't legitimately shown up on any spam lists. You will have a few hiccups, since many blacklists are technically whitelists: they will have every IP on their list until notified that a mail server is running on that IP. It's really easy to get removed; most of them have automated removal-request forms, and every one that doesn't has been very cooperative in removing me from their lists. Just be professional: ask if they can give a time and reason for the block, and what steps you should take to remove your IP. None of the services I emailed asked me to jump through any hoops; within two or three business days they all informed me my IP had been removed.
Still, if you plan on running a mail server, I would recommend reserving Elastic IPs now. They cost 1 cent for every hour they are not bound to an instance, which works out to about $7 a month. I went ahead and reserved an extra one, as I plan on starting up another instance soon.
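For what it's worth, reserving and binding an address looks something like this with the modern AWS CLI (the instance ID and address are hypothetical; older EC2-Classic accounts use --public-ip, while VPC accounts use --allocation-id instead):

```bash
# Reserve an Elastic IP; the response includes the allocated address.
aws ec2 allocate-address

# Bind it to an instance so the hourly idle fee stops accruing.
aws ec2 associate-address \
    --instance-id i-0123456789abcdef0 \
    --public-ip 198.51.100.7
```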
I have deployed some simple stuff to EC2 Win2k3 instances. Here's my advice:
Find a tutorial. Sign up for the service. Just spend an afternoon setting up your first server. It's pretty darned easy, though there will be obstacles to overcome.
When I was fooling with EC2 I think I spent like $2.00 setting up a server and playing with it for a while.
Only some of your data will persist on the instance itself, but you can connect S3 to EC2 as well for durable storage.
Just go for it!
With regards to the concerns about blacklisting of mail servers, you can also use Amazon's Simple Email Service (SES), which obviates the need to run the mail server on the EC2 instances.
I had trouble with this as well, but posted a note here in their forums - https://forums.aws.amazon.com/thread.jspa?threadID=80158&tstart=0
I signed up for Google Cloud the other day using their free trial promotion. I love it so far. I've got a couple of questions that are probably generic to cloud computing, which I'm new to. I have my test virtual machine up without any issues, using Ubuntu Linux.
My questions about cloud concepts are:
- How do I scale an instance? Can you scale from micro to small (and vice versa)?
- If scaling isn't done that way, and it's about using instance groups, how do load balancing and instance groups work?
This is the concept I'm most confused about... how would I push a code update if I had 3 instances behind the load balancer?
Thanks for your help!
First question: How do you vertically scale an instance? Answer: you must re-create the instance and destroy the old one; you can't just make an existing instance smaller or larger. Luckily, you can script the whole setup. GCE allows you to pass a flag called --metadata-from-file. If you are using systemd, I recommend something to the effect of --metadata-from-file user-data=cloud-config.yaml. Since you are using Ubuntu, and Ubuntu's support for systemd is sketchy at best, you probably just want to do something like --metadata-from-file startup-script=my-startup-script.sh. Scripting your deployment allows you to scale, re-create and document your deployment, and is a best practice in cloud computing.
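A minimal sketch of that scripted setup (the instance name, zone and machine type here are assumptions, not anything from the question):

```bash
# Create an instance whose configuration lives entirely in a startup
# script, so it can be destroyed and re-created at a new size at will.
gcloud compute instances create my-instance \
    --zone us-central1-a \
    --machine-type g1-small \
    --metadata-from-file startup-script=my-startup-script.sh
```

To "resize", delete the instance and re-run the same command with a different --machine-type.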
Second question: How do instance groups and load balancing groups work? Answer: Instance groups in GCE are almost always of the "managed" variety. This allows you to create a template that defines how you want your instances to work. Then you can horizontally scale them (i.e. add more or take some away) behind a load balancer. You can even leverage preemptible instances to save you some cash.
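Sketched with hypothetical names, the template-plus-managed-group workflow looks roughly like this:

```bash
# Define once how every instance in the group should look...
gcloud compute instance-templates create my-template \
    --machine-type g1-small \
    --metadata-from-file startup-script=my-startup-script.sh

# ...then run three identical copies from it,
gcloud compute instance-groups managed create my-group \
    --zone us-central1-a --template my-template --size 3

# and scale horizontally by simply resizing the group.
gcloud compute instance-groups managed resize my-group \
    --zone us-central1-a --size 5
```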
Third question: How do I push an update? This depends on how you deploy. But in general I would say:
If you use Docker, push a new image to GCR and have your instances pull it.
If you use CM tools (like Salt or Ansible), just use them normally; they work fine on GCE.
If you use startup scripts, do something like gcloud compute instances add-metadata myinstance --metadata-from-file startup-script=newScript.sh (and restart afterwards).
If everything is contained in a managed instance template, update your template (a sketch of this flow follows below).
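For that last option, one plausible flow (all names hypothetical) is to point the group at the new template and then recreate its instances so they pick it up:

```bash
# Swap in the updated template for the managed group...
gcloud compute instance-groups managed set-instance-template my-group \
    --zone us-central1-a --template my-template-v2

# ...then recreate the instances (here one named my-group-abcd) so
# each comes back built from the new template.
gcloud compute instance-groups managed recreate-instances my-group \
    --zone us-central1-a --instances my-group-abcd
```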
We're thinking about moving to the Elastic Load Balancer on Amazon. However, it turns out that since we use more than one domain name, we would have to rename some of our applications to limit to a single ELB. Another issue is we currently use free level one certificates, whereas moving to ELB would require moving up to level 2, although that's not a huge deal. Another issue is we don't have a lot of volume at this point, and don't really have a need for load-balancing in terms of traffic alleviation. Also, in the case of a failure of an amazon instance, which seems to be quite rare (have not experienced in several years), we can quickly be up and running by creating another instance and restoring.
On the other hand, according to everything I've read about it, people are generally happy with it and recommend it, due to the ease of setup and the value it brings.
Given the above, is it worth it?
since we use more than one domain name, we would have to rename some of our applications to limit to a single ELB
What makes you say this? There's nothing preventing you from launching multiple ELBs if you really want to. And if your application already manages multiple domains properly, then there's no reason a single ELB can't handle that either. We currently have one ELB fronting an application on a bunch of EC2 instances that 11 different domains all point to.
Another issue is we currently use free level one certificates, whereas moving to ELB would require moving up to level 2, although that's not a huge deal.
Not sure what you mean by "level one" and "level 2". If you're using a self-signed SSL certificate, then you'll need to switch to a certificate signed by a third-party Certificate Authority, which will indeed cost you some money. Amazon supports all manner of certificates, including simple certs, EV certs, SAN certs, etc. You'll find more information on ELB and SSL certs in the AWS documentation.
Also, in the case of a failure of an amazon instance, which seems to be quite rare (have not experienced in several years), we can quickly be up and running by creating another instance and restoring.
Consider yourself lucky. We've had Amazon instances fail from time to time, and we also regularly get notifications from Amazon that instances need to be rebooted in order to migrate them off of faulty/old hardware.
If you really don't care about being down for a while and feel like you don't need the capacity that a load balancer and multiple appservers provide, then there's no reason for you to move to using an ELB. However, if you want the reliability of multiple appservers, then moving to an ELB is indeed a good idea.
And if you anticipate your traffic level growing then you might want to consider using Amazon's Auto Scaling tools. Using Auto Scaling you basically tell Amazon the minimum number of application servers you want running behind an ELB, and some parameters to indicate when they should automatically launch additional instances if/when load increases.
Our Amazon account rep actually recommended that, for any single instance whose downtime we wanted to minimize (like a monitoring server, etc.), we should create an Auto Scaling group limited to exactly 1 instance. That way, if the instance ever does die for any reason whatsoever, Amazon will automatically spin up a new replacement instance.
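As a sketch of that "group of exactly one" trick (the launch configuration, group name and zone are hypothetical):

```bash
# An Auto Scaling group pinned to exactly one instance: if the
# instance dies, Amazon replaces it automatically.
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name monitor-asg \
    --launch-configuration-name monitor-lc \
    --min-size 1 --max-size 1 --desired-capacity 1 \
    --availability-zones us-east-1a
```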
I agree with Bruce; I just want to add my two cents about Auto Scaling groups (ASG) and "Amazon will automatically spin up a new replacement instance."
This is a really cool way to get a robust hosting solution, but it takes some work to create the CloudFormation template and the bash install script (called from the template) that installs all the server software and deploys your app code.
So if you have 2 instances and an ASG with Min/Max = 2, then if an instance crashes, the ASG will recreate it automatically, with all software installed and code deployed, ready to go.
Also, if you need to handle periodic traffic spikes automatically, you can change the ASG to (Min=2, Max=5) and create 2 CloudWatch alarms:
1. CPU usage above 90% for 5 or 10 minutes
2. CPU usage below 30% for 5 or 10 minutes
Then attach alarm 1 to a scaling policy that adds 1 instance, and alarm 2 to a policy that removes any additional instance created by the first.
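Roughly, with the AWS CLI and hypothetical names, the scale-up half of that pair looks like this (the scale-down half is symmetric, with a scaling adjustment of -1 and a low-CPU alarm):

```bash
# Policy: add one instance to the group when triggered.
POLICY_ARN=$(aws autoscaling put-scaling-policy \
    --auto-scaling-group-name my-asg \
    --policy-name scale-up \
    --adjustment-type ChangeInCapacity \
    --scaling-adjustment 1 \
    --query PolicyARN --output text)

# Alarm: average CPU at or above 90% for two 5-minute periods
# fires the policy above.
aws cloudwatch put-metric-alarm \
    --alarm-name my-asg-high-cpu \
    --namespace AWS/EC2 --metric-name CPUUtilization \
    --dimensions Name=AutoScalingGroupName,Value=my-asg \
    --statistic Average --period 300 --evaluation-periods 2 \
    --threshold 90 --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions "$POLICY_ARN"
```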
I've deployed a single micro-instance Redis on Compute Engine using the (very convenient) click-to-deploy feature.
I would now like to update this configuration to have a couple of instances, so that I can benchmark how this increases performance.
Is it possible to modify the config while it's running?
The other option would be to add a whole new Redis deployment, bleed traffic onto it over time and eventually shut down the old one. Not only does this sound like a pain in the butt, but I also can't see any way in the web UI to click-to-deploy multiple clusters.
I've got my learner's license with all this, so I would also appreciate any general 'good-to-knows'.
I'm on the Google Cloud team working on this feature and wanted to chime in. Sorry no one replied to this for so long.
We are working on some of the features you describe that would surely make the service more useful and powerful. Stay tuned on that.
I admit that, to date, there really is not a good solution for modifying an existing deployment, unless you launch a new cluster and migrate your data over / redirect reads and writes to the new cluster. This is a limitation we are working to fix.
As a workaround for creating two deployments using Click to Deploy with Redis, you could create a separate project.
Also, if you wanted to migrate to your own template using the Deployment Manager API https://cloud.google.com/deployment-manager/overview, keep in mind Deployment Manager does not have this limitation, and you can create multiple deployments from the same template in the same project.
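For instance (the config file name here is hypothetical), two Redis deployments from one template in the same project would just be:

```bash
# Deployment Manager happily creates several deployments
# from the same configuration in one project.
gcloud deployment-manager deployments create redis-a --config redis.yaml
gcloud deployment-manager deployments create redis-b --config redis.yaml
```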
Chris
I am scripting the creation and manipulation of an EC2 instance. During testing all is well, except that I actually launch the instance, which is fairly costly in the long run.
I have been searching for a test endpoint where I can verify that the syntax of the calls I make is OK, but I have not been able to find one.
Is there any way I can send EC2 API requests, for example running new instances, and get responses without actually launching the instance?
I see a few ways. The cheapest, I think, is (as @stivlo suggested) to run up one of the free instances.
Maybe a bit of overkill, but you could run a local version of Eucalyptus for testing. See more at http://open.eucalyptus.com/. When I looked at it (about 6-9 months ago) it worked with the ec2 tools.
The third (and possibly most suitable) is to write a script that stops/terminates an EC2 instance. That way you run one up and, once the call is confirmed, turn it off. The cost involved would be pence.
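A hedged sketch of that third option, using the modern AWS CLI rather than the old ec2-* tools (the AMI ID is hypothetical):

```bash
#!/bin/sh
# Launch the cheapest instance, confirm the call went through,
# then terminate immediately so the test costs pennies.
set -e
INSTANCE_ID=$(aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t1.micro \
    --query 'Instances[0].InstanceId' --output text)
echo "Call succeeded, launched $INSTANCE_ID; cleaning up."
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"
```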
My company is about to write a new public-facing website in SharePoint (so Windows Server 2008 R2, SQL Server 2008 R2, etc.) and we're looking at using Amazon EC2 to host it. I've read and been told that instances can disappear (often through user error, but also in batches), so I'm skeptical that EC2 is the best idea for us.
I've done research on the Amazon AWS site, but must confess that most of the terminology used is confusing, and Googling my questions often brought me here, so I thought I'd ask my questions here too and see if people can advise me.
1) It's critical that our website be available to the public as much as possible (the usual 99.9% uptime targets apply). The Amazon EC2 Service Level Agreement commitment is 99.95% availability, which is fine, but what happens if we hit that 0.05% scenario? Would our EC2 instance be lost? Can these be recovered? If so, what would we need to do to ensure that we recover to a not-too-old version of our site?
2) I've read about Amazon Elastic Block Store (EBS), and how it persists independently of the lifetime of the instance. If I understand right, EBS is like having a hard drive: if the instance is lost, we can start a new instance using our EBS volume to recover the latest version, while the 'local instance store' is lost along with the instance. Is that right?
3) Are 'reserved instances' a more stable option? i.e. are they less likely to disappear? If they do still disappear, what recovery benefits do they offer, if any?
I know these questions are kinda vague, but hopefully you'll be able to offer a newbie some basic info - enough to point me in the right direction for further, deeper research at least.
Many thanks.
Kevin
We rely on AWS for our webservers. I won't use anything else. They're highly scalable, easily configurable and have an absurd uptime. I've never experienced downtime with them. We've been with them for two years.
Reserved instances are cheaper. Get them if you're planning on having that instance for a while. It's simply a cost/budgeting issue.
Never heard of people losing an EC2 instance.
Not terribly knowledgeable about EBS, but S3 is a good way to back up data.
HTH
EDIT:
Came across some links that might be helpful. Cheers.
http://techblog.netflix.com/2010/12/four-reasons-we-choose-amazons-cloud-as.html
http://techblog.netflix.com/2010/12/5-lessons-weve-learned-using-aws.html
http://www.codinghorror.com/blog/2011/04/working-with-the-chaos-monkey.html
One of the main design goals of AWS is to build fault-tolerant services, that is, services that can recover from failures. They design all of their services with the assumption that something will fail in some way at some point, but that there will be redundancies and other mechanisms in place to recover from those inevitable failures.
In the case of storage services like S3 and SimpleDB, this is achieved primarily by replicating your data across multiple nodes (machines) in multiple data centers. So when one node experiences a hardware failure or one data center experiences a power outage, there's no real downtime, as the replicas can still service the requests. As a consumer, you aren't even aware of the down nodes or data centers.
EC2 is designed to work similarly, but it is not quite as encapsulated as S3 and SimpleDB, so you'll need to plan for a bit of the work yourself. For example, if you need a web service with guaranteed uptime and availability, you'll want to look into the AWS ELB (Elastic Load Balancing) service. That way, if an instance is down, requests will automatically be routed to other healthy instances. For your data, you can either store it in other AWS services (like S3, SimpleDB and EBS) which have built-in redundancy, or you can build your own solution using similar redundancy techniques.
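To make that concrete, a minimal classic-ELB setup might look like the sketch below; the load balancer name, zones and instance IDs are all hypothetical:

```bash
# Create a load balancer listening on port 80...
aws elb create-load-balancer \
    --load-balancer-name my-elb \
    --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
    --availability-zones us-east-1a us-east-1b

# ...and put two instances behind it; unhealthy ones stop
# receiving traffic automatically.
aws elb register-instances-with-load-balancer \
    --load-balancer-name my-elb \
    --instances i-0aaa1111 i-0bbb2222
```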
The SLA amounted to nothing when we found out that:
- Instances and EBS volumes DID get lost
- It took Amazon more than 2 days to recover from a disaster, and even then not to the full extent
We were the lucky ones that managed to get back on our feet in less than 2 days. Other companies got stuck with no recovery option.
And what does Amazon recommend? "Don't trust our reliability. Pay for 2 or 3 more copies of your system in different regions, and then you will be safe".
More information can be found here:
http://www.zdnet.com/blog/saas/lightning-strike-zaps-ec2-ireland/1382
tl;dr: AWS is very reliable if you know what you're doing, and a bad idea if you don't.
As you're unfamiliar with the terms, here's a very quick glossary:
AZ - Availability Zone; there are several availability zones per region (e.g. 3 in Ireland). They are physically isolated datacentres with different power grids, flood plains etc., but with internal-network-quality connections between them. It's possible, even likely, that an AZ will become unavailable at some point; I don't think all the AZs in a region have ever been down at once, though.
EBS/Instance Store - These are the two main types of storage available to an instance. Instance store is the equivalent of an HDD plugged into your motherboard via SATA: it's very fast. But what happens if you shut down your instance (or the motherboard fails) and want to start instantly on another board? (Amazon completely hides the physical hardware setup.) Obviously you aren't going to wait for an engineer to unplug a drive from one server and plug it into another, so they don't even offer this. Instance store is fast but temporary and tied to the physical machine; DO NOT store anything important on it. EBS is the alternative: a very low latency network drive that any server can connect to as though it were local. You can shut down a server, change its size and restart on a completely different machine on the other side of the datacentre (again, the physical layer is hidden), and it doesn't matter: your EBS volume hasn't gone anywhere (by default they're also replicated across multiple physical discs).
Commodity cloud hardware - My interpretation of all the 'cloud hardware fails all the time, it's really risky and unreliable' talk is that yes, AWS hardware is not as reliable as enterprise-level components in a managed datacentre. This doesn't mean it's unreliable; it just means you should build failure in as an option in your design.
The first very important thing to note about the SLA is that Amazon states very clearly that it ONLY applies if one or more AZs go down. So if you do not understand how their service works, build only one server in one AZ, and a generator or router fails, it's your own fault.
As for recovery, that depends. Is your entire application state stored on one server? If it is, don't bother with the cloud. If, however, you can cluster your state across multiple servers, store it in RDS or some other persistent DB, or your content changes so infrequently that you can use periodic copies to S3 storage, you'll be fine. Your failure strategy (in order of preference) could be clustered, failover, or auto repair. For the first, you have clustered servers sharing state: it doesn't matter if you lose a server or an AZ. For the second, you have only one live server, but if it goes down you have a failover standing by with the same content. Finally, with auto repair there are two possible situations: if your data is only on one EBS drive, you could start another instance with the same drive and carry on; but if the EBS drive or the AZ fails, you will need to be ready with a snapshot in S3 that a completely fresh instance can copy and start up with.
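The periodic-copy part of the auto-repair strategy can be as simple as a cron'd snapshot (the volume ID here is hypothetical); EBS snapshots land in S3 automatically:

```bash
# Nightly EBS snapshot; a fresh instance can be restored from it
# if the original volume or AZ fails.
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "nightly backup $(date +%F)"
```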
Reserved instances are no more reliable - they're the same hardware; you're just entering into a contract saying you'll have x machines for y years. That allows AWS to plan better, which makes it cheaper for you.