Are AWS ELB IP addresses unique to an ELB? - ssl

Does anyone know how AWS ELB with SSL works behind the scenes? Running an nslookup on my ELB's domain name, I get 4 unique IP addresses. If my ELB is SSL-enabled, is it possible for AWS to share these same IPs with other SSL-enabled ELBs (not necessarily owned by me)?
As I understand it, for an HTTPS request the hostname is inside the encrypted request. If that's the case, does AWS have to give each SSL-enabled ELB unique IP addresses that are never shared with anyone else's SSL ELB instance? Put another way -- does AWS give 4 unique IP addresses for every SSL ELB you've requested?

Does anyone know how AWS ELB with SSL works behind the scenes? [...] Put another way --
does AWS give 4 unique IP addresses for every SSL ELB you've
requested?
Elastic Load Balancing (ELB) is itself a scalable architecture: the number of unique IP addresses assigned to your ELB varies with its capacity needs and the corresponding scaling activities. See the section Scaling Elastic Load Balancers within Best Practices in Evaluating Elastic Load Balancing (which provides a pretty detailed explanation of the Architecture of the Elastic Load Balancing Service and How It Works):
The controller will also monitor the load balancers and manage the
capacity [...]. It increases
capacity by utilizing either larger resources (resources with higher
performance characteristics) or more individual resources. The Elastic
Load Balancing service will update the Domain Name System (DNS) record
of the load balancer when it scales so that the new resources have
their respective IP addresses registered in DNS. The DNS record that
is created includes a Time-to-Live (TTL) setting of 60 seconds,[...]. By default, Elastic Load Balancing will return multiple IP
addresses when clients perform a DNS resolution, with the records
being randomly ordered [...]. As the traffic
profile changes, the controller service will scale the load balancers
to handle more requests, scaling equally in all Availability Zones. [emphasis mine]
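You can observe this yourself by resolving the ELB's DNS name and listing the A records it currently returns; the set (and its size) changes as the controller scales the load balancer. A minimal Python sketch, using a made-up ELB hostname as a placeholder:

```python
# Resolve an ELB DNS name and print every A record it currently returns.
# The hostname below is a hypothetical placeholder, not a real ELB.
import socket

hostname = "my-elb-1234567890.us-east-1.elb.amazonaws.com"
infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
ips = sorted({info[4][0] for info in infos})

print(f"{hostname} currently resolves to {len(ips)} address(es):")
for ip in ips:
    print(" ", ip)
```

Running this repeatedly over time (respecting the 60-second TTL) is a simple way to watch the address set change as the ELB scales.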
This is further detailed in section DNS Resolution, including an important tip for load testing an ELB setup:
When Elastic Load Balancing scales, it updates the DNS record with the
new list of IP addresses. [...] It is critical that you factor this
changing DNS record into your tests. If you do not ensure that DNS is
re-resolved or use multiple test clients to simulate increased load,
the test may continue to hit a single IP address when Elastic Load
Balancing has actually allocated many more IP addresses. [emphasis mine]
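In practice this means a load-test client should keep re-resolving the DNS name rather than caching a single address. A rough sketch of that idea, assuming a made-up hostname and plain HTTP for brevity:

```python
# Re-resolve the load balancer's DNS name on every iteration so the test
# follows newly added IPs as ELB scales, instead of hammering one cached address.
import random
import socket
import http.client

HOSTNAME = "my-elb-1234567890.us-east-1.elb.amazonaws.com"  # made-up example

def one_request() -> int:
    infos = socket.getaddrinfo(HOSTNAME, 80, proto=socket.IPPROTO_TCP)
    ip = random.choice([info[4][0] for info in infos])    # pick one of the current IPs
    conn = http.client.HTTPConnection(ip, 80, timeout=5)
    conn.request("GET", "/", headers={"Host": HOSTNAME})  # keep the original Host header
    status = conn.getresponse().status
    conn.close()
    return status

for _ in range(100):
    print(one_request())
```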
The entire topic is explored in much more detail in Shlomo Swidler's excellent analysis The “Elastic” in “Elastic Load Balancing”: ELB Elasticity and How to Test it, which now also refers to the aforementioned AWS document Best Practices in Evaluating Elastic Load Balancing; the AWS document essentially confirms his analysis but lacks the illustrative step-by-step samples Shlomo provides.

Related

What can we do when the load balancer becomes the bottleneck?

I just started learning about load balancers. Taking a server-side application (HTTP/HTTPS) load balancer as an example, I assume it listens on a specific IP address and then forwards the HTTP requests to available servers based on its algorithm.
So is it possible for a load balancer to become a bottleneck? Because it's listening on a specific IP address, all requests will first go to the single load balancer. So I think there could be a scenario where the amount of traffic is beyond the limit/capacity of the load balancer.
When it becomes a bottleneck, what can we do? Can we use multiple load balancers?
I think one possible solution is to use multiple load balancers and expose all their IPs to clients. (This sounds like client-side load balancing.) When a client wants to send a request, it can pick an IP from the pool and send the request to that load balancer. (For example, ZooKeeper could be used here.) Is this a working solution? Is there any other way to use multiple load balancers?
Thanks.
Ethan
Your last suggestion works, with a little twist: the usual approach is to publish the load balancer IP addresses under the same domain name.
This is called DNS load balancing. Clients will ask for the IP resolution of your load balancer's domain name and will get different IP addresses back in a round-robin fashion.
To configure DNS load balancing you have to add multiple A records for your load balancer's domain name to your DNS configuration. Here you can find an example guide for that.
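With Route 53, for example, publishing several load balancer addresses under one name could look roughly like the boto3 sketch below; the hosted zone ID, domain name, and 203.0.113.x addresses are all placeholders:

```python
# Publish several load balancer IPs as one round-robin A record set in Route 53.
# All identifiers below are hypothetical placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Comment": "Round-robin pool of load balancer addresses",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "lb.example.com.",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [
                    {"Value": "203.0.113.10"},
                    {"Value": "203.0.113.11"},
                    {"Value": "203.0.113.12"},
                ],
            },
        }],
    },
)
```

A short TTL (60 seconds here) keeps clients re-resolving often, so changes to the pool take effect quickly.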

AWS - NLB Performance Issue

I am using a Network Load Balancer in front of a private VPC in API Gateway. Basically, for the APIs in the gateway, the endpoint is the Network Load Balancer's DNS name.
The issue is, performance sucks (5+ seconds). If I use the IP address of the EC2 instance instead of the NLB DNS name, the response is very good (less than 100 ms).
Can somebody point me to what the issue is? Is there any configuration I screwed up while creating the NLB?
I have been researching for the past 2 days and couldn't find any solution.
Appreciate your response.
I had a similar issue that was due to failing health checks. When all health checks fail, the targets are tried randomly (typically a target in each AZ); however, at that stage I had only configured an EC2 instance in one of the AZs. The solution was to fix the health checks: they require the security group on the EC2 instances to allow traffic from the entire VPC CIDR range (or at least on the port the health checks use).
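If you want to script that fix, opening the health-check port to the whole VPC CIDR could look roughly like this with boto3; the security group ID, port, and CIDR are placeholders for your own values:

```python
# Allow NLB health checks to reach the instances: open the health-check port
# to the whole VPC CIDR range. IDs and values below are hypothetical.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,   # port the target group's health check uses
        "ToPort": 80,
        "IpRanges": [{
            "CidrIp": "10.0.0.0/16",
            "Description": "VPC CIDR for NLB health checks",
        }],
    }],
)
```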

AWS Route 53 Redirect to Status Page

First question, so if I get this wrong somehow be kind.
We are using Route 53 with Amazon and have our primary front end servers behind an ELB. Our app also routes all requests through HTTPS. We are utilizing an offsite status page via statuspage.io.
What I am trying to accomplish is if the primary site goes down I'd like to have R53 redirect both the SSL and non-SSL traffic to our status page.
I originally tried setting up a static page in S3 but still had issues with HTTPS requests made to our site.
Has anyone done this successfully? I imagine it has to be possible, but it's definitely outside my realm of expertise.
Thank you very much for your time and help.
You are right, S3 website hosting doesn't support HTTPS. However, CloudFront does [1]. What you can do is fail over to CloudFront and have your origin be your S3 website or your statuspage.io page.
Steps:
Create a distribution and set the CNAMEs to match your DNS entries.
Upload and associate your SSL cert with your distribution
Update the failover target to be your CloudFront distribution and set it as an alias (a sketch of this record follows below).
[1] http://aws.amazon.com/about-aws/whats-new/2014/03/05/amazon-cloudront-announces-sni-custom-ssl/
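For reference, the Route 53 side of step 3 could look roughly like the boto3 sketch below; the hosted zone ID, record name, and distribution domain are placeholders, and Z2FDTNDATAQYW2 is the hosted zone ID AWS documents for CloudFront alias targets:

```python
# Secondary (failover) alias record pointing at the CloudFront distribution.
# Zone ID, record name, and distribution domain are hypothetical placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.example.com.",
            "Type": "A",
            "SetIdentifier": "status-page-failover",
            "Failover": "SECONDARY",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",   # CloudFront alias hosted zone ID
                "DNSName": "d111111abcdef8.cloudfront.net.",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```

The corresponding PRIMARY record (pointing at your ELB, with a health check attached) is what Route 53 falls back from when the site goes down.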
Route 53 manages DNS, which is not what you want to use here (even if you changed the DNS record, it would take the TTL to propagate). What you should do is use a combination of auto-scaling policies and health checks. These health checks are performed by the ELB every 30 seconds, and if two consecutive checks fail the instance is marked out of service and traffic is no longer directed to it (the ELB directs traffic to your instances in a round-robin manner).
Having more than one instance and using auto-scaling rules is the key: it enables AWS to terminate the unhealthy instance and spin up a new one instead (in the same ASG, with the same AMI, etc.).
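For a classic ELB, a health check matching that description (a probe every 30 seconds, two consecutive failures marking the instance out of service) could be configured roughly like this with boto3; the load balancer name and ping target are placeholders:

```python
# Configure a classic ELB health check: probe the app every 30 seconds and mark
# an instance out of service after 2 consecutive failures. Names are hypothetical.
import boto3

elb = boto3.client("elb")

elb.configure_health_check(
    LoadBalancerName="my-load-balancer",
    HealthCheck={
        "Target": "HTTP:80/healthcheck",  # protocol:port/path the ELB probes
        "Interval": 30,
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 10,
    },
)
```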

Does each cloud instance have its own IP

For a cloud instance that runs Apache, I'm guessing the cloud has an IP address.
One of the benefits of using a cloud is scaling, but I'm not sure how that scaling happens. I thought that new instances are created automatically to accommodate a rise in traffic. If that's correct (correct me if I'm wrong), does that mean that each new instance would have its own IP? Because if that's the case, it would complicate matters a lot when pointing a domain to a cloud.
The cloud sits behind a load balancer which is able to redirect traffic to different spawned instances of Apache servers. In that sense you can grow and shrink to any number of servers based on how much traffic you are receiving.

Round robin server setup

From what I understand, if you have multiple web servers, then you need some kind of load balancer that will split the traffic amongst your web servers.
Does this mean that the load balancer is the main connecting point on the network? ie. the load balancer has the IP address of the domain name?
If this is the case, it makes it really easy to add new hardware since you don't have to wait for any DNS propagation, right?
There are several solutions to this "problem".
You could round-robin at the DNS level, i.e. have www.yourdomain.com point to several IP addresses (well, all your servers).
This doesn't give you any intelligence in the load balancing; the load will be more or less randomly distributed, and you wouldn't be resilient to hardware failures, as recovering from them would still require changes to DNS.
On the other hand, you could use a proxy or a load-balancing proxy that has a single IP but distributes the traffic to several back-end boxes. This gives you a single point of failure (the proxy; you could of course have several proxies to mitigate that problem) and also gives you the added bonus of being able to use some metric to divide the load more evenly and intelligently than with plain round-robin DNS.
This setup can also handle hardware failure in the back-end pretty seamlessly. The end user never sees the back-end, just the front-end.
There are other issues to think about as well, if your page uses sessions or other smart logic, you can run into synchronisation problems when your user (potentially) hits different servers on every access.
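As a toy illustration of the proxy approach (one front-end address, several back-end boxes), here is a minimal round-robin forwarding sketch in Python; the back-end addresses are made up, and a real deployment would use something like nginx, HAProxy, or a hardware balancer instead:

```python
# Minimal round-robin HTTP forwarding proxy: one listening address in front,
# requests handed to the back-end pool in turn. Back-end addresses are hypothetical.
import http.client
import http.server
import itertools

BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080"]  # assumed back-end servers
pool = itertools.cycle(BACKENDS)

class ProxyHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(pool)                       # round-robin choice
        upstream = http.client.HTTPConnection(backend, timeout=5)
        upstream.request("GET", self.path,
                         headers={"Host": self.headers.get("Host", "")})
        resp = upstream.getresponse()
        body = resp.read()
        upstream.close()
        self.send_response(resp.status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    http.server.HTTPServer(("", 8000), ProxyHandler).serve_forever()
```

Note that this toy does nothing about sessions, which is exactly the synchronisation concern raised above.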
It does (in general). It depends on what server OS and software you are using, but in general, you'll hit the load balancer for each request, and the load balancer will then farm out the work according to the scheme you have in place (round robin, least busy, session controlled, application controlled, etc...)
andy has part of the answer, but for true load balancing and high availability you would want to use a pair of hardware load balancers like F5 BIG-IPs in an active-passive configuration.
Yes, your domain's IP would be hosted on these devices, and traffic would connect to them first. BIG-IPs offer a lot of added functionality, including multiple ways of load balancing, some great URL rewriting, SSL acceleration, etc. They also allow you to run your web servers on a separate, non-routable address scheme and even run multiple sites on different ports, with the F5s handling the translation.
Once you introduce load balancing you may have some other considerations to take into account for your application(s), like sticky sessions and session state, but that is a different subject.