Limit access to AWS Elastic IP to US region - Apache

If I host a website on AWS EC2 with an Elastic IP and want to limit access to this website to US-region users only, is there any easy way to do this? The website is powered by Apache.
According to this link, .htaccess could be an option, but I didn't find a way to lock my website down exclusively to US-region users.

I will limit my answer to Amazon services.
Being able to block access by world location is an important issue today. With all of the various government regulations regarding where content is located / stored, controlling access may be a legal requirement in some situations.
Amazon has three services that support geolocation: Route53, CloudFront, and WAF (Web Application Firewall). No service is completely bulletproof, but given the size of Amazon's network, all of the certifications, government compliance, etc., I tend to believe Amazon's geolocation would be better than a homebrew setup.
Your question specifies an Elastic IP address. I am not aware of an Amazon service that supports geolocation blocking for an EIP. Instead, you will want to use Route53 and create a resource record set (RRS), commonly called a domain name or subdomain name, pointing to that EIP. Then put the server either in a private subnet, or put the front-end service (CloudFront and/or an ALB) in the same security group, to limit who can access the EIP. Note: private subnets do not support EIPs and are not required for an ALB.
Configure geolocation as part of the setup for Route53, CloudFront, or WAF (better, a combination of these services). You can select the parts of the world (e.g. the United States) to accept traffic from and block everybody else.
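For illustration, a Route53 geolocation record set could be created along these lines (a minimal boto3 sketch, my tool choice rather than anything the question requires; the hosted zone ID, record names, and IP addresses are hypothetical placeholders):

```python
# Minimal boto3 sketch: answer US queries with the real EIP and send the rest
# of the world to a block/landing page (or omit the default record entirely so
# non-US clients get no answer). Zone ID, names, and IPs are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",          # hypothetical hosted zone
    ChangeBatch={"Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "us-only",
                "GeoLocation": {"CountryCode": "US"},   # only US clients get this answer
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],  # the EIP
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "default",
                "GeoLocation": {"CountryCode": "*"},    # everyone else
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.99"}],  # block page
            },
        },
    ]},
)
```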
If I were building a small setup that did not require auto-scaling, I would use Route53 and CloudFront in front of my server. For higher fault tolerance and high availability, I would put the servers into a private subnet, add a load balancer with an ASG (Auto Scaling Group) behind CloudFront and Route53, and add WAF to CloudFront (or the ALB).
Amazon VPCs, via NACLs and Security Groups, do not support any form of geolocation; Security Groups and NACLs are just very fast firewalls with a specific feature set. A VPN could be used if the customer base is tightly controlled (e.g. a group of developers or business partners) but would be untenable for a publicly accessed web server (e.g. a customer portal). One might think that usernames or SSH keys could be used, but these control authentication, not geography: a user could still access a server in France from Russia. If the requirement is geolocation, then the three Amazon services mentioned in this thread are good choices for geolocation-based policies.
Route53:
Geolocation Routing
CloudFront:
Restricting the Geographic Distribution of Your Content
Amazon WAF:
Working with Geographic Match Conditions

You could use CloudFront geoblocking: block all countries but the US. You will not be able to block 100% of unwanted traffic, since IP addresses and locations can be spoofed, but it's a start.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/georestrictions.html
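For reference, a whitelist geo restriction can be applied to an existing distribution roughly like this (a boto3 sketch; the distribution ID is a hypothetical placeholder):

```python
# Boto3 sketch: CloudFront requires fetching the current distribution config
# (and its ETag) before updating it. The distribution ID is a placeholder.
import boto3

cloudfront = boto3.client("cloudfront")

resp = cloudfront.get_distribution_config(Id="E1EXAMPLE123")
dist_config = resp["DistributionConfig"]

# Whitelist the US; CloudFront answers requests from every other country with a 403
dist_config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "whitelist",
        "Quantity": 1,
        "Items": ["US"],
    }
}

cloudfront.update_distribution(
    Id="E1EXAMPLE123",
    IfMatch=resp["ETag"],      # optimistic-locking token required by the API
    DistributionConfig=dist_config,
)
```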

There are several cloud-native options available in AWS that can be used to restrict access to users from a particular region:
Using AWS CloudFront Geo Restrictions
Using AWS CloudFront + AWS WAF with geo match conditions (where you can do geo restriction and other IP-based whitelisting).
If you plan to use Auto Scaling and load balancing (with an Application Load Balancer), you can attach AWS WAF to the load balancer with geo match conditions configured, as sketched after this list.
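As a sketch of the WAF option: the linked docs describe the classic geo match conditions, while the current WAFv2 API expresses the same idea as a GeoMatchStatement. All names below are hypothetical placeholders.

```python
# Boto3 sketch (WAFv2): a web ACL that blocks by default and allows only US
# traffic. Scope="REGIONAL" is for an ALB in the client's region; use
# Scope="CLOUDFRONT" (called in us-east-1) for a distribution instead.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

wafv2.create_web_acl(
    Name="us-only-acl",
    Scope="REGIONAL",
    DefaultAction={"Block": {}},          # block everything not explicitly allowed
    Rules=[{
        "Name": "allow-us",
        "Priority": 0,
        "Statement": {"GeoMatchStatement": {"CountryCodes": ["US"]}},
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "allow-us",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "us-only-acl",
    },
)
```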


Restrict Lightsail machine to be accessed from CloudFront

I have a website (https://www.cakexpo.com) hosted on Lightsail. A few days ago, we faced a DDoS attack on the IP, which forced me to onboard my website to CloudFront.
I moved my website to CloudFront, yet my IP address is still publicly available, making it vulnerable to more attacks.
I am trying to understand how I can hide my IP from public access.
I found that you can get the list of CloudFront IPs and whitelist them in a security group, which I tried.
It worked for some time, but later on I realised that CloudFront uses lots of IPs that are not in that list and thus were not whitelisted in my security group.
This made my site intermittently unavailable.
nslookup shows a different IP, which is not in the list above, and this link says there are 190+ IP ranges associated with CloudFront, which a security group cannot handle, IMO: https://ip-ranges.amazonaws.com/ip-ranges.json
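A rough count of those ranges with a few lines of Python (nothing here is specific to my setup) shows the scale of the problem:

```python
# Count the CloudFront IPv4 ranges published in ip-ranges.json
import json
import urllib.request

URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"
with urllib.request.urlopen(URL) as resp:
    data = json.load(resp)

cloudfront = [p["ip_prefix"] for p in data["prefixes"] if p["service"] == "CLOUDFRONT"]
print(len(cloudfront), "CloudFront IPv4 ranges")  # far beyond a security group's rule limit
```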
Finally, I ended up reverting the config and making my IP public again.
Is there any other way to hide the lightsail machines from public access?
You can do this in two ways.
Easy way: Create an nginx reverse proxy instance in Lightsail and allow access to your main Lightsail instance only from that reverse proxy instance. Then create a Lightsail distribution (which is CloudFront for Lightsail) and point its origin at the reverse proxy instance.
Hard way: Set up VPC peering to AWS; from there, create a CloudFront distribution and allow access only from it.

How to set up an architecture of scalable custom domains & auto-SSL on Google Kubernetes Engine

We are researching the best solution to allow customers to use their domain names with our hosting services. The hosting services are based on Google App Engine standard. The requirements are:
Customers can point their domain name to our server via CNAME or A record
Our server should be able to generate SSL certs for them automatically using Let's Encrypt
Our server should be able to handle custom SSL certs uploaded by customers
Should be robust and reliable when adding new customers (new confs, SSL certs etc.) into our servers
Should be scalable, and can handle a large number of custom domains and traffic (e.g. from 0 to 10000)
Minimum operation costs (the less time needed for maintaining the infrastructure, the better)
It seems Google Kubernetes Engine (formerly known as Google Container Engine) would be the direction to go. Is there a specific, proven way to set it up? Any suggestions/experiences sharing would be appreciated.
I would recommend going through this link to get started with setting up a GKE cluster.
For your purpose of SSL on GKE, I would recommend creating an Ingress as specified in this link, which automatically creates a load balancer resource in GCP if you use the default GLBC ingress controller. The resulting LB's configuration (ports, host/path rules, certificates, backend services, etc.) is defined by the configuration of the Ingress object itself. You can point the domain name at the IP of the load balancer.
If you want to configure your Ingress (and consequently the resulting LB) to use certs created by Let's Encrypt, you would modify the configuration in the YAML of the Ingress.
For actually integrating Let's Encrypt with Kubernetes, you can use a tool called cert-manager to automate the process of obtaining TLS/SSL certificates and storing them in secrets.
This link shows how to use cert-manager with GKE.
If you want to use self-managed SSL certificates, please see this link for more information. GKE itself is scalable via the cluster autoscaler, which automatically resizes clusters based on the demands of the workloads you want to run.
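To make the Ingress plus cert-manager combination concrete, here is a minimal sketch using the official kubernetes Python client; it assumes cert-manager is already installed with a ClusterIssuer named letsencrypt-prod, and the host, secret, and service names are hypothetical placeholders:

```python
# Minimal sketch: an Ingress whose TLS cert is obtained by cert-manager from
# Let's Encrypt and stored in a secret. Assumes cert-manager is installed and a
# ClusterIssuer "letsencrypt-prod" exists; all other names are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(
        name="customer-ingress",
        annotations={"cert-manager.io/cluster-issuer": "letsencrypt-prod"},
    ),
    spec=client.V1IngressSpec(
        tls=[client.V1IngressTLS(
            hosts=["customer.example.com"],
            secret_name="customer-example-com-tls",  # cert-manager fills this secret
        )],
        rules=[client.V1IngressRule(
            host="customer.example.com",
            http=client.V1HTTPIngressRuleValue(paths=[client.V1HTTPIngressPath(
                path="/",
                path_type="Prefix",
                backend=client.V1IngressBackend(
                    service=client.V1IngressServiceBackend(
                        name="web-service",
                        port=client.V1ServiceBackendPort(number=80),
                    ),
                ),
            )]),
        )],
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```

Creating one such Ingress per customer domain keeps each cert in its own secret, which fits the requirement of onboarding new custom domains without touching existing configuration.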

AWS - NLB Performance Issue

I am using a Network Load Balancer in front of a private VPC with API Gateway. Basically, for APIs in the gateway, the endpoint is the Network Load Balancer's DNS name.
The issue is, performance sucks (5+ seconds). If I use the IP address of the EC2 instance instead of the NLB DNS name, the response is very good (less than 100 ms).
Can somebody point me to what the issue is? Is there any configuration screw-up I made while creating the NLB?
I have been researching for the past 2 days and couldn't find any solution.
Appreciate your response.
I had a similar issue that was due to failing health checks. When all health checks fail, the targets are tried randomly (typically a target in each AZ); however, at that stage I had only configured an EC2 instance in one of the AZs. The solution was to fix the health checks: they require the security group (on the EC2 instances) to allow the entire VPC CIDR range (or at least the port the health checks are using).
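For example, opening the health-check port to the VPC CIDR could look like this (a boto3 sketch; the security group ID, CIDR, port, and region are hypothetical placeholders):

```python
# Boto3 sketch: allow NLB health checks to reach the instances by opening the
# health-check port to the whole VPC CIDR, so checks succeed from any AZ the
# NLB uses. All identifiers below are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 80,
        "ToPort": 80,
        "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "NLB health checks"}],
    }],
)
```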

One domain name "load balanced" over multiple regions in Google Compute Engine

I have a service running on Google Compute Engine. I've got a few instances in Europe in a target pool and a few instances in the US in a target pool. At the moment I have a domain name, which is hooked up to the Europe target pool's IP, and it can load balance between those instances very nicely.
Now, can I configure the Compute Engine load balancer so that the one domain name is connected to both regions? All load-balancing rules seem to be tied to a single region, and I don't know how I could get all the instances involved.
Thanks!
You can point one domain name (A record) at multiple IP addresses, e.g. mydomain.com -> 196.240.7.22 and 204.80.5.130, but this setup will send half the users to the US and the other half to Europe.
What you probably want to look for is a service that provides geo-aware or geo-located DNS. A few examples include loaddns.com, Dyn, or geoipdns.com, and it also looks like there are patches to do the same thing with BIND.
You should configure this in your DNS service. Google does not have a DNS service as part of their offering at the moment. You can use Amazon's Route 53 to route your requests: it has a nice feature called latency-based routing, which allows you to route clients to different IP addresses (in your case, target pools) based on latency. You can find more information here - http://aws.amazon.com/about-aws/whats-new/2012/03/21/amazon-route-53-adds-latency-based-routing/
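As an illustration, latency-based records for the two target pool IPs could be created along these lines (a boto3 sketch; the zone ID, domain, and IPs are hypothetical placeholders, and each record is tagged with the AWS region whose measured latency stands in for it):

```python
# Boto3 sketch of Route 53 latency-based routing: clients are answered with
# the record whose associated AWS region has the lowest latency to them.
import boto3

r53 = boto3.client("route53")

def latency_record(region, set_id, ip):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "www.mydomain.com",
            "Type": "A",
            "SetIdentifier": set_id,
            "Region": region,        # latency is measured against this AWS region
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

r53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={"Changes": [
        latency_record("us-east-1", "us-pool", "204.80.5.130"),  # US target pool IP
        latency_record("eu-west-1", "eu-pool", "196.240.7.22"),  # Europe target pool IP
    ]},
)
```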
With Google's HTTP load balancing, you can balance traffic over these VMs in different regions by exposing them via one IP address. Google eliminates the need for geo-DNS. Have a look at the doc:
https://developers.google.com/compute/docs/load-balancing/
Hope it helps.

Are AWS ELB IP addresses unique to an ELB?

Does anyone know how AWS ELB with SSL works behind the scenes? Running an nslookup on my ELB's domain name, I get 4 unique IP addresses. If my ELB is SSL-enabled, is it possible for AWS to share these same IPs with other SSL-enabled ELBs (not necessarily owned by me)?
As I understand it, the hostname in a web request is inside the encrypted request for HTTPS. If this is the case, does AWS have to give each SSL-enabled ELB unique IP addresses that are never shared with anyone else's SSL ELB instance? Put another way -- does AWS give 4 unique IP addresses to every SSL ELB you've requested?
Does anyone know how AWS ELB with SSL work behind the scenes? [...] Put another way -- does AWS give 4 unique IP addresses for every SSL ELB you've requested?
Elastic Load Balancing (ELB) employs a scalable architecture in itself, meaning the number of unique IP addresses assigned to your ELB does in fact vary depending on the capacity needs and respective scaling activities of your ELB; see the section Scaling Elastic Load Balancers within Best Practices in Evaluating Elastic Load Balancing (which provides a pretty detailed explanation of the architecture of the Elastic Load Balancing service and how it works):
The controller will also monitor the load balancers and manage the capacity [...]. It increases capacity by utilizing either larger resources (resources with higher performance characteristics) or more individual resources. The Elastic Load Balancing service will update the Domain Name System (DNS) record of the load balancer when it scales so that the new resources have their respective IP addresses registered in DNS. The DNS record that is created includes a Time-to-Live (TTL) setting of 60 seconds, [...]. By default, Elastic Load Balancing will return multiple IP addresses when clients perform a DNS resolution, with the records being randomly ordered [...]. As the traffic profile changes, the controller service will scale the load balancers to handle more requests, scaling equally in all Availability Zones. [emphasis mine]
This is further detailed in section DNS Resolution, including an important tip for load testing an ELB setup:
When Elastic Load Balancing scales, it updates the DNS record with the new list of IP addresses. [...] It is critical that you factor this changing DNS record into your tests. If you do not ensure that DNS is re-resolved or use multiple test clients to simulate increased load, the test may continue to hit a single IP address when Elastic Load Balancing has actually allocated many more IP addresses. [emphasis mine]
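A quick way to observe this during a test is to re-resolve the load balancer's DNS name over time and watch the returned set of addresses change (a minimal sketch using only the standard library; the hostname is a hypothetical placeholder):

```python
# Sketch: repeatedly resolve an ELB hostname; as the ELB scales, the set of
# returned IPs grows and changes. The hostname below is a placeholder.
import socket
import time

ELB_HOST = "my-elb-123456789.us-east-1.elb.amazonaws.com"

for _ in range(5):
    # Each resolution may return a different, larger set of IPs as ELB scales
    ips = sorted({ai[4][0] for ai in socket.getaddrinfo(ELB_HOST, 443, proto=socket.IPPROTO_TCP)})
    print(ips)
    time.sleep(60)  # the ELB DNS record's TTL is 60 seconds, so wait before re-resolving
```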
The entire topic is explored in much more detail in Shlomo Swidler's excellent analysis The “Elastic” in “Elastic Load Balancing”: ELB Elasticity and How to Test It, which meanwhile refers to the aforementioned Best Practices in Evaluating Elastic Load Balancing by AWS as well, basically confirming his analysis but lacking the illustrative step-by-step samples Shlomo provides.