AWS Route 53 Redirect to Status Page - SSL

First question, so if I get this wrong somehow be kind.
We are using Route 53 with Amazon and have our primary front end servers behind an ELB. Our app also routes all requests through HTTPS. We are utilizing an offsite status page via statuspage.io.
What I am trying to accomplish is if the primary site goes down I'd like to have R53 redirect both the SSL and non-SSL traffic to our status page.
I originally had tried setting up a static page in S3 but still had issues with HTTPS requests made on our site.
Has anyone done this successfully? I imagine it has to be possible, but it's definitely outside my realm of expertise.
Thank you very much for your time and help.

You are right, S3 static website hosting doesn't support HTTPS. However, CloudFront does [1]. What you can do is fail over to CloudFront and have your origin be your S3 website or your statuspage.io page.
Steps:
Create a distribution and set the CNAMEs to match your DNS entries.
Upload and associate your SSL certificate with the distribution.
Update your Route 53 failover record's target to be the CloudFront distribution and set it as an alias (a CLI sketch of that record follows below).
[1] http://aws.amazon.com/about-aws/whats-new/2014/03/05/amazon-cloudfront-announces-sni-custom-ssl/
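For the last step, here is a minimal sketch of what the secondary (failover) alias record could look like with the AWS CLI. The hosted zone ID, record name and distribution domain are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront alias targets, and the matching PRIMARY record (not shown) needs a Route 53 health check attached.

# Secondary failover record pointing at the CloudFront distribution in front of the status page
aws route53 change-resource-record-sets \
  --hosted-zone-id ZEXAMPLE12345 \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com.",
        "Type": "A",
        "SetIdentifier": "www-secondary",
        "Failover": "SECONDARY",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d111111abcdef8.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'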

Route 53 manages DNS, which is not the right tool for this (even if you changed the DNS record, the change would only take effect after the TTL expired). What you should do is use a combination of auto-scaling policies and health checks. These health checks are performed by the ELB every 30 seconds, and if two consecutive checks fail it marks the instance as out-of-service and stops directing traffic to it (the ELB directs traffic to your instances in a round-robin manner).
Having more than one instance and using auto-scaling rules is the key: it enables AWS to terminate the unhealthy instance and spin up a new one in its place (in the same ASG, with the same AMI, etc.).
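As a rough illustration of those settings with a classic ELB (the load balancer and ASG names are placeholders, and the health-check target is an assumption):

# Check the instances every 30 seconds; two consecutive failures mark an instance out-of-service
aws elb configure-health-check \
  --load-balancer-name my-elb \
  --health-check Target=HTTPS:443/,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2

# Let the Auto Scaling group replace instances that the ELB reports as unhealthy
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --health-check-type ELB \
  --health-check-grace-period 300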


Is it possible to use a dynamic route in the nginx ingress controller?

Our services sit behind a K8s service with a reverse proxy that receives requests for multiple domains and routes them to our services; additionally, we manage SSL certificates issued by Let's Encrypt for every user that configures their domain in our service. In short, I have multiple .conf files in nginx, one for every configured domain. It works really well.
But now we need to increase our levels of security and availability, so we are ready to configure an Ingress in K8s to handle this problem for us, because that is what it is built for.
Everything looked fine until we discovered that every time I need to configure a new domain as a host in the Ingress, I have to edit the config file and re-apply it.
So that's the problem: I want to apply the same concept I already have running, but with the nginx ingress controller. Is that possible? I have more than 10k domains up and running; I can't configure them all in my Ingress resource file.
Any thoughts?
In terms of Kubernetes scaling, 10k domains should be fine to configure in an Ingress resource. You might want to check how much storage you have on the etcd nodes to make sure you can store enough data there.
The default etcd storage is 2 GB; if you keep adding domains, that's something to keep in mind.
You can also refer to the K8s best practices for building large clusters.
Another practice you can use is to apply (rather than create) when changing the Ingress resource, so that the changes are incremental. Furthermore, if you are using K8s 1.18 or later you can take advantage of Server-Side Apply, as sketched below.
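For example (ingress.yaml is a placeholder for your Ingress manifest):

# Incrementally merge changes into the existing Ingress instead of recreating it (K8s 1.18+)
kubectl apply --server-side -f ingress.yaml

# A plain apply also patches the resource in place and works on older versions
kubectl apply -f ingress.yaml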

Load balancing on SSL session ID behind AWS CloudFront

Our application needs end-to-end SSL encryption, and here is the architecture:
Browser (HTTPS) -> AWS CloudFront (HTTPS) -> on-prem F5 load balancer (HTTPS) -> web server.
AWS CloudFront has its origin pointing to the on-prem load balancer (F5).
The on-prem load balancer is configured to do sticky sessions based on the SSL session ID (not the application session ID).
Since the CloudFront domain name maps to dynamic IPs and the SSL handshake happens per edge-location IP, the SSL session ID changes even when the requests belong to the same application session, and this causes session data loss for the user.
It's not an option for us to change the load balancer to do session affinity based on the application session ID, nor can we do SSL termination at the load balancer. Can someone please help me with how to do session affinity in this scenario?
What you are attempting cannot be accomplished with Amazon CloudFront.
CloudFront is designed for performance, which means a single viewer connection can use multiple back-end connections in parallel and multiple viewers can also make sequential requests over a single back-end connection.
TLS through CloudFront is not end-to-end -- that would be impossible. CloudFront needs to decrypt and re-encrypt the traffic since it operates at the HTTP layer.

How do I send requests for a specific domain to Apache without it serving that domain yet?

I have an AWS EC2 server that hosts 3 domains with Apache 2. This server sits behind an AWS ELB load balancer which sends it requests. If I want to update this server, instead of taking the server down, I can create a new identical EC2 server and install all the software using the same scripts that built the first server and when it is ready I can add the new server to the ELB and then remove the old server. This gives me zero downtime which is great.
But before I remove the old server, how do I test the new server to prove everything is working and that it is serving those 3 domains? DNS points to the ELB for these domains, the ELB sends the requests to the server, and the Apache install on the server routes the traffic to the appropriate site depending on which subdomain was requested. Is there a way to make a request to the new server via its IP address (since that is the only way to address it before it is behind the ELB) but tell it I want to make a request to a specific subdomain? If not, how else can I prove all 3 sites are running and working properly without just adding it to the ELB, removing the old server, and crossing my fingers?
P.S. Sorry for the poor title. Please edit it if you can think of a better one that better represents what I am asking.
Use the ELB health check to perform the check. I recommend enabling Apache's server-status module (mod_status). Point the health check at /server-status; if it returns 200 for a certain period of time, the ELB will mark the instance as active and healthy.
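A minimal sketch of that setup, assuming a Debian/Ubuntu-style Apache install and a classic ELB (the load balancer name is a placeholder):

# On the web server: enable mod_status so /server-status answers with 200
sudo a2enmod status
sudo systemctl restart apache2

# Point the ELB health check at /server-status
aws elb configure-health-check \
  --load-balancer-name my-elb \
  --health-check Target=HTTP:80/server-status,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2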

HTTPS not working (on AWS Elastic Beanstalk based site)

The site works perfectly fine over HTTP; however, it does not work over HTTPS.
I've followed all the steps on this page to create a self-signed certificate and add it to my Elastic Beanstalk environment.
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https.html
I'm also getting a successful certificate response back from IAM using the following command:
aws iam get-server-certificate --server-certificate-name
After updating Elastic Beanstalk with the certificate, I've also added a rule to the security group that allows inbound traffic from 0.0.0.0/0 to port 443.
Finally, I've also validated that my load-balancer listener has HTTPS set up correctly.
In spite of all that, my calls over HTTPS are not resolving, while HTTP is working perfectly fine.
Any other thoughts on this? Any help would be much appreciated.
Please let me know if you need any more information. Desperately looking for some insight/help into this.
Anyway, not being able to resolve this issue with my site/code, I tried to set up HTTPS on the sample site provided on Elastic Beanstalk. Interestingly enough, even that is not working.
I want to provide an update that I was finally able to resolve the issue.
The root cause was that I had missed setting up an inbound rule for the security group of the load balancer.
For whatever reason, when I read the documentation, I understood that the Inbound Rule needed to be set up for the Security Group of the Instance (and not the Load Balancer). Only after I started tracing the Load Balancer did I realize that I should perhaps try setting up the Rule for the Security Group of the Load Balancer. So, the problem is resolved. Below is the setting I used.
Load balancer protocol: HTTPS, load balancer port: 443, instance protocol: HTTP, instance port: 80, SSL certificate: <name of the certificate>
I'd have to say that the documentation could more clearly identify that the change is required on the security group of the load balancer (and not the instance).
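For reference, a minimal sketch of that inbound rule with the AWS CLI (the security group ID is a placeholder for the load balancer's security group, not the instance's):

# Allow HTTPS from anywhere into the load balancer's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 443 \
  --cidr 0.0.0.0/0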
Amazon recently released AWS Certificate Manager:
Go to https://console.aws.amazon.com/acm/home
Add your domain and validate it by email
After the certificate is issued, deploy it to your Elastic Load Balancers following the steps (and easily set up your security groups)
It's even better from a performance point of view:
Because ELB supports SSL offload, deploying a certificate to a load balancer (rather than to the EC2 instances behind it) will reduce the amount of encryption and decryption work that the instances need to handle.
Follow the doc for more information:
https://aws.amazon.com/fr/blogs/aws/new-aws-certificate-manager-deploy-ssltls-based-apps-on-aws/
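A rough CLI equivalent of those steps, assuming a classic ELB and email validation (the domain, load balancer name and certificate ARN are placeholders):

# Request the certificate; the validation mail goes to the domain's registered contacts
aws acm request-certificate \
  --domain-name example.com \
  --validation-method EMAIL

# Once the certificate is issued, attach it to an HTTPS listener on the load balancer
aws elb create-load-balancer-listeners \
  --load-balancer-name my-elb \
  --listeners Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=arn:aws:acm:us-east-1:123456789012:certificate/example-cert-id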
I can't believe this... but it goes to show how confusing the AWS console is: I had to scroll down and click "Apply" on an easy-to-miss button, while the "Pending create" status shown after adding the certificate made me think it was already working... facepalm.
Hello, I had the same issue, and the following steps worked for me:
Generate the certificate
The first thing is to request a certificate in AWS Certificate Manager (ACM).
Take a look at this video to create a new one: https://youtu.be/bWPTq8z1vFY
Configure Elastic Beanstalk
Go to Configuration -> Load balancer
Create a new listener.
Create a new record in Route 53
I use Route 53 to host my site.
Go to Route 53 -> select your hosted zone and create a new record.
Choose the alias option, route traffic to Elastic Beanstalk (in your case), and select your region and the name of your application.
This works if you use Route 53 and EB, but I think it would work with other hosting providers too.
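If you prefer the CLI for the listener part, a hedged sketch using the classic load balancer option settings (the environment name, ports and certificate ARN are placeholders; an environment with an application load balancer uses different namespaces):

aws elasticbeanstalk update-environment \
  --environment-name my-env \
  --option-settings \
    Namespace=aws:elb:listener:443,OptionName=ListenerProtocol,Value=HTTPS \
    Namespace=aws:elb:listener:443,OptionName=InstancePort,Value=80 \
    Namespace=aws:elb:listener:443,OptionName=SSLCertificateId,Value=arn:aws:acm:us-east-1:123456789012:certificate/example-cert-id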

Are AWS ELB IP addresses unique to an ELB?

Does anyone know how AWS ELB with SSL works behind the scenes? Running nslookup on my ELB's domain name, I get 4 unique IP addresses. If my ELB is SSL-enabled, is it possible for AWS to share these same IPs with other SSL-enabled ELBs (not necessarily owned by me)?
As I understand it, the hostname in a web request is inside the encrypted request when HTTPS is used. If this is the case, does AWS have to give each SSL-enabled ELB unique IP addresses that are never shared with anyone else's SSL ELB instance? Put another way -- does AWS give 4 unique IP addresses for every SSL ELB you've requested?
Does anyone know how AWS ELB with SSL works behind the scenes? [...] Put another way -- does AWS give 4 unique IP addresses for every SSL ELB you've requested?
Elastic Load Balancing (ELB) employs a scalable architecture in itself, meaning the number of unique IP addresses assigned to your ELB does in fact vary depending on the capacity needs and respective scaling activities of your ELB; see the section Scaling Elastic Load Balancers within Best Practices in Evaluating Elastic Load Balancing (which provides a pretty detailed explanation of the Architecture of the Elastic Load Balancing Service and How It Works):
The controller will also monitor the load balancers and manage the capacity [...]. It increases capacity by utilizing either larger resources (resources with higher performance characteristics) or more individual resources. The Elastic Load Balancing service will update the Domain Name System (DNS) record of the load balancer when it scales so that the new resources have their respective IP addresses registered in DNS. The DNS record that is created includes a Time-to-Live (TTL) setting of 60 seconds, [...]. By default, Elastic Load Balancing will return multiple IP addresses when clients perform a DNS resolution, with the records being randomly ordered [...]. As the traffic profile changes, the controller service will scale the load balancers to handle more requests, scaling equally in all Availability Zones. [emphasis mine]
This is further detailed in section DNS Resolution, including an important tip for load testing an ELB setup:
When Elastic Load Balancing scales, it updates the DNS record with the new list of IP addresses. [...] It is critical that you factor this changing DNS record into your tests. If you do not ensure that DNS is re-resolved or use multiple test clients to simulate increased load, the test may continue to hit a single IP address when Elastic Load Balancing has actually allocated many more IP addresses. [emphasis mine]
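You can observe this behaviour yourself by resolving the ELB's DNS name repeatedly (the hostname below is a placeholder); the set of A records returned, and their order, changes as the ELB scales:

# Returns the current set of A records for the load balancer (TTL 60 seconds)
dig +short my-elb-1234567890.us-east-1.elb.amazonaws.com

# or, equivalently
nslookup my-elb-1234567890.us-east-1.elb.amazonaws.com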
The entire topic is explored in much more detail within Shlomo Swidler's excellent analysis The “Elastic” in “Elastic Load Balancing”: ELB Elasticity and How to Test it, which meanwhile refers to the aforementioned Best Practices in Evaluating Elastic Load Balancing by AWS as well, basically confirming his analysis but lacking the illustrative step by step samples Shlomo provides.