AWS Fargate health checks fail when specifying IP addresses in Security Group inbound rule

I've set up an Application Load Balancer that points to a Fargate cluster's service (via a target group). I'm using a security group for both the ALB and the service. When I specify specific IP addresses in the inbound rules, the target group health checks fail. It works fine when I include the health checker's IPs (2 of them), but that's unsustainable for obvious reasons.
I've also tried using 2 different SGs: one for the ALB and the other for the service. The ALB's SG has the specific IP inbound rules, and the SG for the service allows all inbound traffic (any IP).
Unfortunately, that doesn't work either. Does anyone have any suggestions on how to set this up properly?
Thanks in advance!

Related

AWS - NLB Performance Issue

I am using a Network Load Balancer in front of a private VPC with API Gateway. For the APIs in the gateway, the endpoint is the Network Load Balancer's DNS name.
The issue is that performance is very poor (5+ seconds). If I use the IP address of the EC2 instance instead of the NLB DNS name, the response is very fast (less than 100 ms).
Can somebody point me to what the issue is? Did I mess up any configuration while creating the NLB?
I have been researching for the past 2 days and couldn't find any solution.
Appreciate your response.
I had a similar issue that was due to failing health checks. When all health checks fail, the targets are tried randomly (typically a target in each AZ); however, at that stage I had only configured an EC2 instance in one of the AZs. The solution was to fix the health checks: they require the security group (on the EC2 instances) to allow traffic from the entire VPC CIDR range (or at least on the port the health checks are using).
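For example, here is a rough boto3 sketch of that fix (untested; the security group ID, VPC CIDR, and health check port are placeholders to replace with your own values):

```
import boto3

ec2 = boto3.client("ec2")

# Placeholders -- substitute the service's security group ID, your VPC CIDR,
# and the port your target group health checks use.
SERVICE_SG_ID = "sg-0123456789abcdef0"
VPC_CIDR = "10.0.0.0/16"
HEALTH_CHECK_PORT = 80

# Allow the load balancer's health checkers, which originate from inside the VPC,
# to reach the targets on the health check port.
ec2.authorize_security_group_ingress(
    GroupId=SERVICE_SG_ID,
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": HEALTH_CHECK_PORT,
            "ToPort": HEALTH_CHECK_PORT,
            "IpRanges": [
                {"CidrIp": VPC_CIDR, "Description": "Load balancer health checks from within the VPC"}
            ],
        }
    ],
)
```

For the Fargate question at the top, an alternative that avoids opening a whole CIDR range is to reference the ALB's security group as the source of the service SG's ingress rule instead of an IP range.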

limit access to AWS Elastic IP to US region

If I host a website on AWS EC2 with an Elastic IP and I want to limit access to this website to US users only, is there any easy way to do this? The website is powered by Apache.
According to this link, .htaccess could be an option, but I didn't find a way to exclusively lock down my website to US users only.
I will limit my answer to Amazon services.
Being able to block access by world location is an important issue today. With all of the various government regulations regarding where content is located / stored, controlling access may be a legal requirement in some situations.
Amazon has three services that support geolocation: Route53, CloudFront, and WAF (Web Application Firewall). No service is completely bulletproof, but given the size of Amazon's network, all of the certifications, government compliance, etc., I tend to believe Amazon's geolocation would be better than a homebrew setup.
Your question specifies an Elastic IP address. I am not aware of an Amazon service that supports geolocation blocking for an EIP directly. Instead, you will want to use Route53 and create a resource record set (RRS), commonly called a domain name or subdomain name, pointing to that EIP. Then put the server either in a private subnet, or put the front-end service (CloudFront and/or ALB) in the same security group to limit who can access the EIP. Note: private subnets do not support EIPs and are not required for an ALB.
Configure geolocation as part of the setup for Route53, CloudFront, or WAF (better yet, a combination of these services). You can select the parts of the world (e.g. the United States) to accept traffic from and block everybody else.
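As a rough illustration of the Route53 piece (an untested boto3 sketch; the hosted zone ID, record name, and IP addresses are placeholders), a geolocation record for US visitors plus a default record for everyone else might look like this:

```
import boto3

route53 = boto3.client("route53")

# Placeholder values for illustration only.
HOSTED_ZONE_ID = "Z0000000000000000000"
RECORD_NAME = "www.example.com"
ELASTIC_IP = "203.0.113.10"

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Serve the EIP only to US-geolocated resolvers",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "SetIdentifier": "us-only",
                    "GeoLocation": {"CountryCode": "US"},
                    "TTL": 300,
                    "ResourceRecords": [{"Value": ELASTIC_IP}],
                },
            },
            {
                # Default record for every other location; pointing it at a block page
                # (or omitting it entirely) is what keeps non-US traffic away.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "SetIdentifier": "default",
                    "GeoLocation": {"CountryCode": "*"},
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "192.0.2.1"}],  # placeholder block-page IP
                },
            },
        ],
    },
)
```

Keep in mind that Route53 geolocation works at DNS resolution time, so it steers resolvers rather than filtering individual client IPs.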
If I were building a small setup that did not require auto-scaling, I would use Route53 and CloudFront in front of my server. For higher fault tolerance and high availability, I would put the servers into a private subnet, add a load balancer with an ASG (Auto Scaling Group) behind CloudFront and Route53, and add WAF to CloudFront (or the ALB).
Amazon VPCs, via NACLs and Security Groups, do not support any form of geolocation. Security Groups and NACLs are just very fast firewalls with a specific feature set. A VPN could be used if the customer base is tightly controlled (e.g. a group of developers or business partners), but it would be untenable for a publicly accessed web server (e.g. a customer portal). One might think that usernames or SSH keys could be used, but these control authentication, not geography: a user could still access a server in France from Russia. If the requirement is geolocation, then the three Amazon services in this thread are good choices for geolocation-based policies.
Route53:
Geolocation Routing
CloudFront:
Restricting the Geographic Distribution of Your Content
Amazon WAF:
Working with Geographic Match Conditions
You could use CloudFront geo-blocking to block all countries but the US. You will not be able to block 100% of unwanted traffic, since IP addresses and locations can be spoofed, but it's a start.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
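For reference, the relevant fragment of a CloudFront distribution config might look roughly like this (shown in isolation as a sketch; an actual UpdateDistribution call has to submit the full distribution config along with its current ETag):

```
# Hypothetical fragment of a CloudFront DistributionConfig showing a
# whitelist-style geo restriction that only allows the United States.
geo_restriction = {
    "Restrictions": {
        "GeoRestriction": {
            "RestrictionType": "whitelist",  # or "blacklist" to block the listed countries
            "Quantity": 1,
            "Items": ["US"],
        }
    }
}
```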
There are several cloud-native options available in AWS that can be used to restrict users to a particular region:
Using AWS CloudFront Geo Restrictions
Using AWS CloudFront + AWS WAF with Geo Matching Conditions (where you can do the geo restriction and other IP-based whitelisting).
If you plan to use Auto Scaling and load balancing (with an Application Load Balancer), then you can attach AWS WAF to the load balancer with geo matching conditions configured, as sketched below.
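As a rough sketch of the WAF option using the current WAFv2 API (the web ACL name, region, and load balancer ARN below are placeholders; the classic WAF geo match conditions linked above work along the same lines):

```
import boto3

# For an ALB association, Scope must be "REGIONAL" and the client region must match
# the ALB's region; for CloudFront, use Scope="CLOUDFRONT" against us-east-1.
wafv2 = boto3.client("wafv2", region_name="us-east-1")

response = wafv2.create_web_acl(
    Name="us-only-web-acl",          # placeholder name
    Scope="REGIONAL",
    DefaultAction={"Block": {}},     # block anything no rule explicitly allows
    Rules=[
        {
            "Name": "allow-us",
            "Priority": 0,
            "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}},
            "Action": {"Allow": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "allow-us",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "us-only-web-acl",
    },
)

# Attach the web ACL to the (placeholder) Application Load Balancer.
wafv2.associate_web_acl(
    WebACLArn=response["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/1234567890abcdef",
)
```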

Windows NLB not balanced

I set up an NLB cluster with two servers (Windows Server 2008 R2). Each server has one NIC, which I configured with a static IP address. I assigned the cluster an internet name (MyCluster) and a static IP address. A third box is acting as a client, sending TCP data (over WCF) to the cluster IP I configured (the static IP). I am observing the NLB cluster from the NLB manager on one of the nodes; both nodes are green, i.e. started. However, I only see traffic coming in to one of the NLB servers. When I suspend it, I see traffic going to the other NLB server, and so on. I was expecting traffic to be split equally between them. I can't figure out what I missed. Any tips, please?
If you need more detailed information, please ask; I'm not sure how much detail to put in here.
Thanks.
By default, a port rule created with a filtering mode of Multiple Host will use Single affinity. In other words, multiple requests from the same client will be directed to the same host. To see traffic going to both hosts, try accessing the cluster from multiple clients. You could also set the affinity to None, but this can lead to other problems.
There's good information on the affinity parameter and how to use it in the NLB help file.

IIS7.5 Application Request Routing (ARR) proxy to multiple ports

I have an unusual scenario, where I am trying to scale a WCF service that isn't thread safe. I have four instances of the service running on a single 4-core server, in four separate IIS web sites, with CPU affinity enabled. The sites are bound to ports 8022, 8023, 8024 and 8025.
My question is: can I use Application Request Routing (ARR) to load balance requests to a single port (80) across these four sites?
As far as I know, you can't use the Web Farm Framework for balancing between different ports on one server, perhaps because it doesn't make sense from a failure-safety perspective.
A workaround is to add some additional IPs to your web server and configure your four web sites' bindings to listen on the same port but on different IPs.
That way you can set up a web farm with four different IPs as servers, which are in fact located on the same physical machine.
Hope this helps.
Best regards,
Peter

Are AWS ELB IP addresses unique to an ELB?

Does anyone know how AWS ELB with SSL works behind the scenes? Running an nslookup on my ELB's domain name, I get 4 unique IP addresses. If my ELB is SSL-enabled, is it possible for AWS to share these same IPs with other SSL-enabled ELBs (not necessarily owned by me)?
As I understand it, the hostname in a web request is inside the encrypted payload of an HTTPS request. If this is the case, does AWS have to give each SSL-enabled ELB unique IP addresses that are never shared with anyone else's SSL ELB instance? Put another way -- does AWS give 4 unique IP addresses for every SSL ELB you've requested?
Does anyone know how AWS ELB with SSL works behind the scenes? [...] Put another way -- does AWS give 4 unique IP addresses for every SSL ELB you've requested?
Elastic Load Balancing (ELB) employs a scalable architecture in itself, meaning the number of unique IP addresses assigned to your ELB does in fact vary depending on the capacity needs and respective scaling activities of your ELB; see the section Scaling Elastic Load Balancers within Best Practices in Evaluating Elastic Load Balancing (which provides a pretty detailed explanation of the architecture of the Elastic Load Balancing service and how it works):
The controller will also monitor the load balancers and manage the capacity [...]. It increases capacity by utilizing either larger resources (resources with higher performance characteristics) or more individual resources. The Elastic Load Balancing service will update the Domain Name System (DNS) record of the load balancer when it scales so that the new resources have their respective IP addresses registered in DNS. The DNS record that is created includes a Time-to-Live (TTL) setting of 60 seconds, [...]. By default, Elastic Load Balancing will return multiple IP addresses when clients perform a DNS resolution, with the records being randomly ordered [...]. As the traffic profile changes, the controller service will scale the load balancers to handle more requests, scaling equally in all Availability Zones. [emphasis mine]
This is further detailed in section DNS Resolution, including an important tip for load testing an ELB setup:
When Elastic Load Balancing scales, it updates the DNS record with the new list of IP addresses. [...] It is critical that you factor this changing DNS record into your tests. If you do not ensure that DNS is re-resolved or use multiple test clients to simulate increased load, the test may continue to hit a single IP address when Elastic Load Balancing has actually allocated many more IP addresses. [emphasis mine]
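As a tiny illustration of that last point (plain Python standard library; the ELB hostname is a placeholder), repeatedly re-resolving the load balancer's DNS name shows how the returned set of IP addresses rotates and can grow as the ELB scales:

```
import socket
import time

ELB_HOSTNAME = "my-load-balancer-1234567890.us-east-1.elb.amazonaws.com"  # placeholder

# Re-resolve roughly every 60 seconds (matching the record's 60-second TTL)
# and print whatever set of A records DNS currently returns.
for _ in range(10):
    _, _, addresses = socket.gethostbyname_ex(ELB_HOSTNAME)
    print(sorted(addresses))
    time.sleep(60)
```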
The entire topic is explored in much more detail in Shlomo Swidler's excellent analysis The “Elastic” in “Elastic Load Balancing”: ELB Elasticity and How to Test it, which meanwhile refers to the aforementioned Best Practices in Evaluating Elastic Load Balancing from AWS as well, basically confirming his analysis while lacking the illustrative step-by-step samples Shlomo provides.