There is documentation stating that Standard Load Balancers have monitoring metrics: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-monitor-log
But I need to understand why the Basic ones do not have any monitoring metrics. Is it because of pricing? If so, is there any official document confirming that?
No, Basic Load Balancers don’t support metrics. Azure Monitor multi-dimensional metrics are only available for Standard Load Balancer, which is also the recommended SKU by Microsoft.
Azure offers a Basic SKU and a Standard SKU with different functional, performance, security, and health tracking capabilities. These differences are explained in the SKU comparison article.
Upgrading to the Standard SKU is recommended for any production workloads to take advantage of the robust set of Load Balancer metrics.
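For example, here is a minimal sketch of pulling one of those Standard-only metrics with the Azure Monitor SDK for Python. It assumes the azure-identity and azure-mgmt-monitor packages; the subscription ID, resource group, and load balancer name are placeholders. The same query against a Basic Load Balancer simply returns no metric data:

```python
# Minimal sketch: query a Standard Load Balancer metric via Azure Monitor.
# Assumes azure-identity and azure-mgmt-monitor; IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"  # placeholder
lb_resource_id = (                     # placeholder load balancer resource ID
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Network/loadBalancers/<lb-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# VipAvailability (data path availability) is one of the multi-dimensional
# metrics the Standard SKU exposes; a Basic LB has no metrics to return.
response = client.metrics.list(
    lb_resource_id,
    timespan="2024-01-01T00:00:00Z/2024-01-02T00:00:00Z",
    interval="PT1H",
    metricnames="VipAvailability",
    aggregation="Average",
)

for metric in response.value:
    for series in metric.timeseries:
        for point in series.data:
            print(point.time_stamp, point.average)
```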
I use VMs in Compute Engine on Google Cloud Platform. When I create an instance, Google tells me it will cost one price (around $5), but in reality it charges me more.
In the detailed billing, I found that the instances cost about $2 per two weeks, plus about $5 more for load balancing.
I know what load balancing is (only in general terms), but where is it used if I only run one VM at a time? Do I really need it? How can I avoid it?
As you mentioned, GCE load balancers are useful for distributing load across a set of instances. They can also provide other advantages, such as autoscaling.
If you are working with only one VM, you can certainly have the VM connected to the internet through an external IP. The only thing to keep in mind is that external IPs can change if they are defined as ephemeral. To avoid that scenario, which would mean reconfiguring your applications to point at the new IP, you can define it as a static IP instead.
^Be careful with static IPs, though: do not reserve one unless you will actually use it. I read somewhere that a reserved but unused static IP address is charged something like a cent, or a fraction of a cent, per hour.
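If you do go the static IP route, a minimal sketch with the google-cloud-compute Python client might look like this (the project, region, and address name are placeholders); per the comment above, release the address if you stop using it:

```python
# Minimal sketch: reserve a regional static external IP on GCP.
# Assumes the google-cloud-compute package; names below are placeholders.
from google.cloud import compute_v1

project = "my-project"   # placeholder project ID
region = "us-central1"   # placeholder region

client = compute_v1.AddressesClient()

# Reserve a new static address; unlike an ephemeral IP, it will not change
# when the VM it is attached to is stopped and restarted.
operation = client.insert(
    project=project,
    region=region,
    address_resource=compute_v1.Address(name="my-static-ip"),
)
operation.result()  # block until the reservation completes

reserved = client.get(project=project, region=region, address="my-static-ip")
print(reserved.address, reserved.status)  # e.g. 203.0.113.7 RESERVED
```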
Currently, I am doing some research about load balancers.
Refer to this Wikipedia article: http://en.wikipedia.org/wiki/Load_balancing_(computing).
It says: "Usually load balancers are implemented in high-availability pairs which may also replicate session persistence data if required by the specific application."
Besides that, I have also searched for articles about the reasons and the cases where two load balancers are needed in a system, but I did not find any good information.
So I want to ask: why do we need two load balancers in most cases, and in which cases do we need two or more load balancers instead of one?
Nowadays applications need to be highly available, so the load balancer itself should be deployed as a highly available pair.
If you use a single load balancer server/node, there is a chance it may go down or need to be taken offline for maintenance. That would cause application downtime, or all requests would have to be redirected directly to a single server, which would severely affect performance.
To avoid this, it is always recommended that load balancers be deployed in highly available pairs, so that the load-balancing layer remains continuously operational for a desirably long length of time, or all the time.
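To make the idea concrete, here is a toy Python sketch (the hostnames and the /health endpoint are made up) of a client falling back from a primary load balancer to its standby:

```python
# Toy sketch: fall back from a primary load balancer to a standby when the
# primary stops answering its health endpoint. All URLs are hypothetical.
import urllib.request

LB_PAIR = ["https://lb-primary.example.com", "https://lb-standby.example.com"]

def healthy(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if the load balancer answers its health-check endpoint."""
    try:
        with urllib.request.urlopen(base_url + "/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, HTTP error...
        return False

def pick_load_balancer() -> str:
    # With a single load balancer, the exception below is your outage.
    # With a pair, traffic simply shifts to the standby during repairs.
    for lb in LB_PAIR:
        if healthy(lb):
            return lb
    raise RuntimeError("no load balancer available")

print(pick_load_balancer())
```

In real deployments the failover usually happens through a floating virtual IP (e.g. VRRP/keepalived) rather than in the client, but the effect is the same: losing one load balancer does not take the application down.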
Does Google BigQuery guarantee a minimum availability for its service?
Given that any component of Google's infrastructure will eventually fail, could I lose some or all of the information I have uploaded?
How can Google ensure data availability even if a failure occurs?
I mean, what happens if a node (server) goes down? What happens to the data stored on it? And if 10 or 100 nodes fail? What would have to happen for the service to become unavailable?
I am researching the availability of this platform and the mechanisms it has to be fault-tolerant.
Thanks
BigQuery has a 99.9% monthly uptime SLA.
Check https://developers.google.com/bigquery/docs/sla for details.
The whole system is built on a highly replicated, fault-tolerant architecture, but not all of its details are made public.
If you also need 24x7 fast phone and email support, you can get it. Details at https://cloud.google.com/support/packages.
My company is about to write a new public-facing website in SharePoint (so Windows Server 2008 R2, SQL Server 2008 R2, etc.) and we're looking at using Amazon EC2 to host it. I've read and been told that instances can disappear (often through user error, but also in batches), so I'm skeptical that EC2 is the best idea for us.
I've done research on the Amazon AWS site, but must confess that most of the terminology used is confusing, and Googling my questions often brought me here, so I thought I'd ask my questions here too and see if people can advise me.
1) It's critical that our website be available to the public as much as possible (the usual 99.9% uptimes apply). The Amazon EC2 Service Level Agreement commitment is 99.95% availability, which is fine, but what happens if we hit that 0.05% scenario? Would our EC2 instance be lost? Can instances be recovered? If so, what would we need to do to ensure that we recover to a not-too-old version of our site?
2) I've read about Amazon Elastic Block Store (EBS), and how it persists independently of the lifetime of the instance. If I understand right, EBS is like having a hard drive: if the instance is lost we can start a new instance using our EBS volume to recover the latest version, whereas the 'local instance store' would be lost along with the instance. Is that right?
3) Are 'reserved instances' a more stable option? i.e. are they less likely to disappear? If they do still disappear, what recovery benefits do they offer, if any?
I know these questions are kinda vague, but hopefully you'll be able to offer a newbie some basic info - enough to point me in the right direction for further, deeper research at least.
Many thanks.
Kevin
We rely on AWS for our webservers. I won't use anything else. They're highly scalable, easily configurable, and have absurdly good uptime. I've never experienced downtime with them. We've been with them for two years.
Reserved instances are cheaper. Get them if you're planning on having that instance for a while. It's simply a cost/budgeting issue.
Never heard of people losing an EC2 instance.
Not terribly knowledgeable about EBS, but S3 is a good way to back up data.
HTH
EDIT:
Came across some links that might be helpful. Cheers.
http://techblog.netflix.com/2010/12/four-reasons-we-choose-amazons-cloud-as.html
http://techblog.netflix.com/2010/12/5-lessons-weve-learned-using-aws.html
http://www.codinghorror.com/blog/2011/04/working-with-the-chaos-monkey.html
One of the main design goals of AWS is to build fault-tolerant services, that is, services that can recover from failures. They design all of their services with the assumption that something will fail in some way at some point, and with redundancies and other mechanisms in place to recover from those inevitable failures.
In the case of storage services like S3 and SimpleDB, this is achieved primarily by replicating your data across multiple nodes (machines) in multiple data centers. So when one node experiences a hardware failure, or one data center experiences a power outage, there's no real downtime; the replicas can still service the requests. As a consumer, you aren't even aware of the down nodes or data centers.
EC2 is designed to work similarly, but it is not quite as encapsulated as S3 and SimpleDB, so you'll need to plan for a bit of the work yourself. For example, if you need a web service with guaranteed uptime and availability, you'll want to look into the AWS Elastic Load Balancing (ELB) service. That way, if an instance goes down, requests will automatically be routed to other healthy instances. For your data, you can either store it in other AWS services (like S3, SimpleDB, and EBS) which have built-in redundancy, or build your own solution using similar redundancy techniques.
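As a minimal sketch of that pattern with today's boto3 API (target groups postdate the original question; the ARN and instance IDs below are placeholders):

```python
# Minimal sketch: put instances behind an AWS load balancer target group so
# that only targets passing health checks receive traffic. IDs are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/web/abc123"  # placeholder

# Register two instances; the load balancer health-checks each one and stops
# routing requests to any target that fails its checks.
elbv2.register_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": "i-0123456789abcdef0"}, {"Id": "i-0fedcba9876543210"}],
)

# Inspect which targets are currently healthy (i.e. receiving traffic).
health = elbv2.describe_target_health(TargetGroupArn=TARGET_GROUP_ARN)
for desc in health["TargetHealthDescriptions"]:
    print(desc["Target"]["Id"], desc["TargetHealth"]["State"])
```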
The SLA amounts to nothing. We found out that:
Instances and EBS volumes DID get lost.
It took Amazon more than 2 days to recover from a disaster, and even then not to the full extent.
We were the lucky ones that managed to get back on our feet in less than 2 days. Other companies were stuck with no recovery option.
And what does Amazon recommend? "Don't trust our reliability. Pay for 2 or 3 more copies of your system in different regions, and then you will be safe".
More information can be found here:
http://www.zdnet.com/blog/saas/lightning-strike-zaps-ec2-ireland/1382
tldr: AWS is very reliable if you know what you're doing, a bad idea if you don't.
As you're unfamiliar with the terms, here's a very quick glossary:
AZ - Availability zone; there are several availability zones per region (e.g. 3 in Ireland). They are physically isolated datacentres with different power grids, flood plains, etc., but with internal-network-quality, high-speed connections between them. It's possible, even likely, that an AZ will become unavailable at some point; I don't think all the AZs in a region have ever been down at once, though.
EBS/Instance Store - These are the two main types of storage available to an instance. The best way to describe them: instance store is the equivalent of an HDD plugged into your motherboard via SATA; it's very fast. But what happens if you shut down your instance (or if the motherboard fails) and want to instantly start on another board? (Amazon completely hides the physical hardware setup.) Obviously you aren't going to wait for an engineer to unplug a drive from one server and move it to another, so Amazon doesn't even offer this. Instance store is fast but temporary and tied to the physical machine; DO NOT store anything important on it (the sketch after this glossary shows how to check which of an instance's drives are which). EBS is the alternative: a very low latency network drive that any server can connect to as though it were local. You can shut down a server, change its size, and restart on a completely different server on the other side of the datacentre (again, the physical side is hidden); it doesn't matter, your EBS volume hasn't gone anywhere (by default volumes are also replicated across multiple physical discs).
Commodity cloud hardware - My interpretation of all the 'cloud hardware fails all the time; it's really risky and unreliable' talk is that, yes, AWS hardware is not as reliable as enterprise-level components in a managed datacentre. That doesn't mean it's unreliable; it just means you should build failure in as an option in your design.
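Here is the sketch mentioned in the EBS/Instance Store entry above: a small boto3 script (the instance ID is a placeholder) that shows whether an instance's root device is EBS-backed and whether each attached EBS volume survives termination. Instance-store volumes don't appear in this listing at all:

```python
# Minimal sketch: inspect an instance's storage with boto3. The instance ID
# is a placeholder. Only EBS volumes show up in BlockDeviceMappings here;
# instance-store drives are invisible to this call (and die with the host).
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

resp = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])  # placeholder
instance = resp["Reservations"][0]["Instances"][0]

print("Root device type:", instance["RootDeviceType"])  # 'ebs' or 'instance-store'
for mapping in instance.get("BlockDeviceMappings", []):
    ebs = mapping.get("Ebs", {})
    print(mapping["DeviceName"],
          "VolumeId =", ebs.get("VolumeId"),
          "DeleteOnTermination =", ebs.get("DeleteOnTermination"))
```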
The first very important thing to note when talking about SLAs is that Amazon states very clearly that the SLA ONLY applies if one or more AZs go down. So if you don't understand how their service works, build only one server in one AZ, and a generator or router fails, it's your own fault.
As for recovery, that depends: is your entire application state stored on one server? If it is, don't bother with the cloud. If, however, you can cluster your state across multiple servers, store it in RDS or some other persistent DB, or your content changes so infrequently that you can rely on periodic copies to S3 storage, you'll be fine. Your failure strategy (in order of preference) could be clustered, failover, or auto repair. For the first, you have clustered servers sharing state: it doesn't matter if you lose a server or an AZ. For the second, you only have one live server, but if it goes down you have a failover standing by with the same content. Finally, with auto repair there are two possible situations: if your data is only on one EBS drive, you could start another instance with the same drive and carry on. But if the EBS drive or the AZ fails, you will need to be ready with a snapshot in S3 that a completely fresh instance can copy and start up with.
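Here is a minimal boto3 sketch of that last "auto repair" option (volume, snapshot, and instance IDs are placeholders; error handling and snapshot pruning are omitted):

```python
# Minimal sketch: snapshot an EBS volume, then restore it into a new volume
# in a healthy AZ after a failure. All resource IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Snapshots are stored in S3-backed storage, so they survive the loss of the
# original volume or even the whole availability zone.
snap = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder volume holding your state
    Description="periodic backup of web content",
)
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# Recovery: materialise the snapshot as a fresh volume in a healthy AZ and
# attach it to a replacement instance.
volume = ec2.create_volume(SnapshotId=snap["SnapshotId"], AvailabilityZone="eu-west-1b")
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0fedcba9876543210",  # placeholder replacement instance
    Device="/dev/sdf",
)
```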
Reserved instances are no more reliable; they're the same hardware. You're just entering into a contract saying you'll have x machines for y years, which allows AWS to plan better, and that's cheaper for you.