Routing traffic from an F5 BIG-IP LB to both EC2 instances and physical servers together - Apache

Is there a way to make a physical F5 BIG-IP LB route traffic to both EC2 instances (in an Auto Scaling group) and physical machines? I came across this article https://devcentral.f5.com/articles/using-big-ip-gtm-to-integrate-with-amazon-web-services but it seems to route traffic to an entire AWS zone, not to a couple of EC2 instances behind an ELB.

Yes, you can route traffic to any resources from BIG-IP, whether they are locally defined on the same L3 network or remote; you just need to make sure you have routes defined on BIG-IP pointing in the right direction. If you are trying to cloudburst, you can define priority levels in the pool so that your physical servers get the traffic unless the minimum threshold of available members is crossed, at which point the remote servers (another datacenter or cloud servers, it doesn't matter) will be automatically engaged.
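To illustrate the priority-group idea, here is a minimal Python sketch of the selection logic; the member addresses, priorities, and minimum-active threshold are made-up values, and this models the concept rather than BIG-IP's actual implementation:

    # Conceptual sketch of priority group activation: prefer high-priority
    # (physical) members; engage lower-priority (cloud) members only when
    # the number of available higher-priority members drops below a minimum.
    def available_members(pool, min_active=2):
        # Walk distinct priorities from highest to lowest.
        priorities = sorted({m["priority"] for m in pool}, reverse=True)
        active = []
        for p in priorities:
            active += [m for m in pool if m["priority"] == p and m["healthy"]]
            if len(active) >= min_active:  # threshold met: stop engaging lower tiers
                break
        return active

    pool = [
        {"addr": "192.0.2.10:80", "priority": 10, "healthy": True},   # physical
        {"addr": "192.0.2.11:80", "priority": 10, "healthy": False},  # physical, down
        {"addr": "203.0.113.5:80", "priority": 5, "healthy": True},   # EC2 / cloud
    ]

    print([m["addr"] for m in available_members(pool)])
    # With only one healthy physical member and min_active=2, the cloud
    # member is engaged as well.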
You can also add orchestration so that your cloud servers aren't up and active unless you are getting close to a threshold, at which point BIG-IP can trigger an action to spin up those servers and then add them to the pool dynamically. There are many options available to you with BIG-IP.
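As a rough sketch of such orchestration, assuming boto3 for the Auto Scaling call and F5's iControl REST API for the pool update (the ASG name, BIG-IP management address, pool path, credentials, and port are all placeholders):

    # Rough orchestration sketch: when load crosses a threshold, grow the
    # Auto Scaling group, then register the new instances as BIG-IP pool
    # members via the iControl REST API. All names and hosts are placeholders.
    import boto3
    import requests

    ASG_NAME = "web-asg"                    # placeholder ASG name
    BIGIP = "https://bigip.example.com"     # placeholder management address
    POOL = "~Common~web_pool"               # placeholder pool path

    def burst_to_cloud(desired_capacity):
        autoscaling = boto3.client("autoscaling")
        autoscaling.set_desired_capacity(
            AutoScalingGroupName=ASG_NAME,
            DesiredCapacity=desired_capacity,
        )

    def add_pool_member(instance_ip, port=80):
        # iControl REST call to add a member to an LTM pool.
        requests.post(
            f"{BIGIP}/mgmt/tm/ltm/pool/{POOL}/members",
            json={"name": f"{instance_ip}:{port}"},
            auth=("admin", "secret"),       # placeholder credentials
            verify=False,                   # sketch only; verify certs in production
        )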

Related

AWS - NLB Performance Issue

I am using a Network Load Balancer in front of a private VPC in API Gateway. Basically, for the APIs in the gateway, the endpoint is the Network Load Balancer's DNS name.
The issue is that performance is terrible (over 5 seconds). If I use the IP address of the EC2 instance instead of the NLB's DNS name, the response is very good (less than 100 ms).
Can somebody point me to the issue? Did I screw up some configuration while creating the NLB?
I have been researching for the past two days and couldn't find any solution.
Appreciate your response.
I had a similar issue that was due to failing health checks. When all health checks fail, the targets are tried randomly (typically a target in each AZ); however, at that stage I had only configured an EC2 instance in one of the AZs. The solution was to fix the health checks: they require the security group (on the EC2 instances) to allow the entire VPC CIDR range (or at least the port the health checks are using).
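For reference, here is a boto3 sketch of that security group fix; the group ID, port, and VPC CIDR are placeholders for your own values:

    # Allow NLB health checks to reach the targets: open the health check
    # port to the whole VPC CIDR on the instances' security group.
    import boto3

    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",    # placeholder: instances' security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 80,                # placeholder: health check port
            "ToPort": 80,
            "IpRanges": [{
                "CidrIp": "10.0.0.0/16",   # placeholder: VPC CIDR range
                "Description": "NLB health checks",
            }],
        }],
    )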

GCE: Block direct HTTP/S access; instances should only be reachable through the network load balancer

We are in the process of configuring and hosting our services on Google Cloud. We are using a few GCE instances, and we also have a network load balancer. What I want to do is block all direct HTTP/S requests to the individual instances and make them available only via the network load balancer.
Also, the network load balancer will be mapped to a DNS name.
By default, GCE will not allow any ports to be accessible from outside the network unless you create a firewall rule allowing it.
One way is to remove the external IP from all the instances and use a single server with an external IP as a gateway instance to reach all the others. Make sure you add a firewall rule allowing access from the intended sources to the gateway instance. This way, only the gateway instance is exposed to the intended sources (or the external world), based on the firewall rule.
Then you can create a network load balancer, adding the instances that have no direct external access.
This is one way; there are other ways to achieve it.
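As an illustration, here is a hedged sketch using the google-api-python-client library to create such a firewall rule; the project ID, source range, and target tag are placeholders:

    # Create a firewall rule that allows HTTP/S only from an intended source
    # range to instances tagged "web". Project, range, and tag are placeholders.
    from googleapiclient import discovery

    compute = discovery.build("compute", "v1")

    firewall_body = {
        "name": "allow-http-from-lb-only",
        "network": "global/networks/default",
        "sourceRanges": ["203.0.113.0/24"],  # placeholder: intended source range
        "targetTags": ["web"],               # applies only to tagged instances
        "allowed": [{"IPProtocol": "tcp", "ports": ["80", "443"]}],
    }

    request = compute.firewalls().insert(project="my-project", body=firewall_body)
    print(request.execute())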

Are AWS ELB IP addresses unique to an ELB?

Does anyone know how AWS ELB with SSL works behind the scenes? Running an nslookup on my ELB's domain name, I get 4 unique IP addresses. If my ELB is SSL-enabled, is it possible for AWS to share these same IPs with other SSL-enabled ELBs (not necessarily owned by me)?
As I understand it, the hostname in a web request is inside the encrypted payload of an HTTPS request. If that is the case, does AWS have to give each SSL-enabled ELB unique IP addresses that are never shared with anyone else's SSL ELB instance? Put another way: does AWS give 4 unique IP addresses to every SSL ELB you've requested?
Does anyone know how AWS ELB with SSL works behind the scenes? [...] Put another way: does AWS give 4 unique IP addresses to every SSL ELB you've requested?
Elastic Load Balancing (ELB) employs a scalable architecture in itself, meaning the number of unique IP addresses assigned to your ELB does in fact vary depending on the capacity needs and respective scaling activities of your ELB; see the section Scaling Elastic Load Balancers within Best Practices in Evaluating Elastic Load Balancing (which provides a pretty detailed explanation of the architecture of the Elastic Load Balancing service and how it works):
The controller will also monitor the load balancers and manage the capacity [...]. It increases capacity by utilizing either larger resources (resources with higher performance characteristics) or more individual resources. The Elastic Load Balancing service will update the Domain Name System (DNS) record of the load balancer when it scales so that the new resources have their respective IP addresses registered in DNS. The DNS record that is created includes a Time-to-Live (TTL) setting of 60 seconds, [...]. By default, Elastic Load Balancing will return multiple IP addresses when clients perform a DNS resolution, with the records being randomly ordered [...]. As the traffic profile changes, the controller service will scale the load balancers to handle more requests, scaling equally in all Availability Zones. [emphasis mine]
This is further detailed in section DNS Resolution, including an important tip for load testing an ELB setup:
When Elastic Load Balancing scales, it updates the DNS record with the new list of IP addresses. [...] It is critical that you factor this changing DNS record into your tests. If you do not ensure that DNS is re-resolved or use multiple test clients to simulate increased load, the test may continue to hit a single IP address when Elastic Load Balancing has actually allocated many more IP addresses. [emphasis mine]
The entire topic is explored in much more detail in Shlomo Swidler's excellent analysis The "Elastic" in "Elastic Load Balancing": ELB Elasticity and How to Test It, which meanwhile refers to the aforementioned Best Practices in Evaluating Elastic Load Balancing from AWS as well, basically confirming his analysis, though the AWS document lacks the illustrative step-by-step samples Shlomo provides.
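To see this in practice, here is a minimal Python sketch that re-resolves an ELB's DNS name during a test; the hostname is a placeholder, and the sleep interval matches the 60-second TTL mentioned above:

    # Re-resolve the ELB hostname periodically to observe the (randomly
    # ordered, changing) set of IP addresses behind it as ELB scales.
    import socket
    import time

    ELB_HOSTNAME = "my-elb-1234567890.us-east-1.elb.amazonaws.com"  # placeholder

    for _ in range(5):
        _, _, addresses = socket.gethostbyname_ex(ELB_HOSTNAME)
        print(sorted(addresses))  # the address set may grow as ELB scales out
        time.sleep(60)            # wait out the 60-second TTL before re-resolving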

Does each cloud instance have its own IP

For a cloud instance that runs Apache, I'm guessing the cloud has an IP address.
One of the benefits of using a cloud is scaling, but I'm not sure how that scaling happens. I thought that new instances are created automatically to accommodate a rise in traffic. If that's correct (correct me if I'm wrong), does that mean that each new instance has its own IP? Because if that's the case, it would complicate matters a lot when pointing a domain to a cloud.
The cloud sits behind a load balancer, which is able to redirect traffic to the different spawned instances of Apache servers. In that sense, you can grow and shrink to any number of servers based on how much traffic you are receiving.

Round robin server setup

From what I understand, if you have multiple web servers, you need some kind of load balancer that will split the traffic amongst your web servers.
Does this mean that the load balancer is the main connecting point on the network? I.e., the load balancer holds the IP address of the domain name?
If this is the case, it makes it really easy to add new hardware, since you don't have to wait for any DNS propagation, right?
There are several solutions to this "problem".
You could round-robin at the DNS level, i.e. have www.yourdomain.com point to several IP addresses (well, all your servers).
This doesn't give you any intelligence in the load balancing, so the load will be more or less randomly distributed. It also leaves you less resilient to hardware failures, since taking a dead server out of rotation still requires a change to DNS.
On the other hand, you could use a proxy, or a load-balancing proxy, that has a single IP but distributes the traffic to several back-end boxes. This gives you a single point of failure (the proxy; you could of course run several proxies to mitigate that problem), but also the added bonus of being able to use some metric to divide the load more evenly and intelligently than with plain round-robin DNS.
This setup can also handle hardware failure in the back end pretty seamlessly; the end user never sees the back end, just the front end. A minimal sketch of such a proxy follows.
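Here is a minimal round-robin HTTP proxy sketch in Python using only the standard library; the backend addresses are hypothetical, and a real deployment would use something like HAProxy, nginx, or an F5 instead:

    # Minimal round-robin HTTP proxy: each incoming GET is forwarded to the
    # next backend in rotation and the response body is relayed back.
    import itertools
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    BACKENDS = itertools.cycle(["http://10.0.0.11:8080", "http://10.0.0.12:8080"])

    class RoundRobinProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            backend = next(BACKENDS)  # pick the next backend in rotation
            with urllib.request.urlopen(backend + self.path) as resp:
                body = resp.read()
                status = resp.status
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("0.0.0.0", 8000), RoundRobinProxy).serve_forever()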
There are other issues to think about as well: if your site uses sessions or other stateful logic, you can run into synchronisation problems when your user (potentially) hits different servers on every access.
It does (in general). It depends on what server OS and software you are using, but in general you'll hit the load balancer for each request, and the load balancer will then farm out the work according to the scheme you have in place (round robin, least busy, session-controlled, application-controlled, etc.).
andy has part of the answer, but for true load balancing and high availability you would want to use a pair of hardware load balancers, such as F5 BIG-IPs, in an active-passive configuration.
Yes, your domain's IP would be hosted on these devices, and traffic would connect first to those devices. BIG-IPs offer a lot of added functionality, including multiple load-balancing methods, some great URL rewriting, SSL acceleration, and more. They also allow you to run your web servers on a separate, non-routable address scheme and even run multiple sites on different ports, with the F5s handling the translations.
Once you introduce load balancing, you may have some other considerations to take into account for your application(s), like sticky sessions and session state, but that is a different subject.