I'm following https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md to set up an AWS EKS cluster, and I've managed to successfully set up the cluster and get a test nginx service running.
My domain is configured on Cloudflare and is used for different things: domain.com serves a static website, while api.domain.com, app.domain.com, xyz.domain.com, etc. all currently point to an IP address (a LoadBalancer) on DigitalOcean Kubernetes that handles everything and serves API and other requests accordingly.
How can I point multiple subdomains to AWS, using an IP or some other way? Do I need to deploy external-dns multiple times (once per subdomain), or can I deploy it once and use it for all subdomains? The problem is that part of the external-dns config specifies a Route 53 hosted zone ID, which currently corresponds to a single subdomain:
- --txt-owner-id=my-hostedzone-identifier
Okay, I got the answer. First, as documented in the external-dns documentation, run the command:
$ aws route53 create-hosted-zone --name "my-domain-here.com." --caller-reference "external-dns-mydomain-$(date +%s)"
Then I copied the NS records from Route 53 for this new hosted zone and added them as NS records in Cloudflare's DNS section. After that, my Kubernetes service at foo.my-domain-here.com started working!
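If you don't want to dig through the console, the zone's name servers can also be listed with the CLI; a quick sketch (the zone ID is a placeholder):

$ aws route53 get-hosted-zone --id /hostedzone/ZEXAMPLE123 \
    --query 'DelegationSet.NameServers'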
So moving forward, I won't need to register anything else in Route 53; I can just add a record for bar.my-domain-here.com to point more subdomains to EKS applications, even though the domain itself is registered on Cloudflare and is used for marketing and other subdomains.
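For completeness: a single external-dns deployment scoped to the parent zone handles all of its subdomains, so there is no need to deploy it once per subdomain. A sketch of the container args, roughly following the tutorial (the domain filter and owner ID are placeholders):

- --source=service
- --source=ingress
- --domain-filter=my-domain-here.com
- --provider=aws
- --policy=upsert-only
- --registry=txt
- --txt-owner-id=my-hostedzone-identifier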
I have a website (https://www.cakexpo.com) hosted on Lightsail. A few days ago we faced a DDoS attack on its IP, which forced me to move the website behind CloudFront.
I moved the website to CloudFront, yet my IP address is still publicly available, leaving it vulnerable to further attacks. I am trying to understand how I can hide my IP from public access.
I found that you can get the list of CloudFront IPs and whitelist them in the security group, which I tried.
It worked for some time, but later I realised that CloudFront uses many IPs that are not in that list and thus were not whitelisted in my security group.
This made my site intermittently unavailable.
nslookup shows a different IP that is not in that list, and https://ip-ranges.amazonaws.com/ip-ranges.json shows there are 190+ IP ranges associated with CloudFront, which a security group cannot handle, IMO.
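For reference, the scale of the problem is easy to see by counting the CloudFront prefixes in that file; a quick sketch, assuming curl and jq are installed:

$ curl -s https://ip-ranges.amazonaws.com/ip-ranges.json \
    | jq -r '.prefixes[] | select(.service=="CLOUDFRONT") | .ip_prefix' \
    | wc -l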
Finally, I ended up reverting the config, leaving my IP public.
Is there any other way to hide the Lightsail machine from public access?
You can do this in two ways.
Easy way: Create an nginx reverse proxy instance in Lightsail, and allow access to your main Lightsail instance only from that reverse proxy instance. Then create a Lightsail distribution (which is CloudFront for Lightsail) and point its origin at the reverse proxy instance. A config sketch follows.
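A minimal sketch of the reverse proxy's nginx config, assuming the main instance's private IP is 172.26.0.10 (a placeholder):

server {
    listen 80;
    server_name www.cakexpo.com;

    location / {
        # forward everything to the main instance over the private network
        proxy_pass http://172.26.0.10;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}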
Hard way: Set up VPC peering into AWS, create a CloudFront distribution there, and allow access only from it.
I have a domain purchased at 1and1, pointed at an AWS EC2 instance running Apache with SSL.
Even though the domain points to the correct IP (I can see it with nslookup), the site works from some places and not from others.
For example, from my workplace the domain does not reach the EC2 server at all.
I launched a Windows EC2 instance at AWS to test, and from there everything is correct: the page loads and the SSL is valid.
From my client's computer there is yet another behavior: it reaches the EC2 server, but says the SSL is invalid.
Has anyone faced the same problem?
The first thing you need to do is get an Elastic IP. The instance's public IP can change when it is stopped and started, but Elastic IPs are static, so make sure you create one and assign it to your running instance.
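A sketch with the CLI (the instance and allocation IDs are placeholders):

$ aws ec2 allocate-address --domain vpc
$ aws ec2 associate-address --instance-id i-0123456789abcdef0 \
    --allocation-id eipalloc-0a1b2c3d4e5f67890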
Create Hosted Zone and Record Sets
Documentation is here - https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingHostedZone.html
Create a record set and add the values, for example:
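A sketch of creating an A record with the CLI (the zone ID and name are placeholders; 203.0.113.10 is a documentation address standing in for your Elastic IP):

$ aws route53 change-resource-record-sets --hosted-zone-id ZEXAMPLE123 \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "www.example.com",
          "Type": "A",
          "TTL": 300,
          "ResourceRecords": [{ "Value": "203.0.113.10" }]
        }
      }]
    }'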
Add the Amazon name servers in the control panel of your domain provider.
Import the SSL certificate into AWS Certificate Manager (optional). Documentation is here - https://docs.aws.amazon.com/acm/latest/userguide/import-certificate-api-cli.html#import-certificate-api
A self-signed certificate will not work.
Deploy the SSL certificate into the Apache server and configure it to serve traffic over HTTPS.
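A minimal sketch of the Apache side, assuming mod_ssl is enabled (the certificate paths are placeholders):

<VirtualHost *:443>
    ServerName www.example.com
    DocumentRoot /var/www/html

    SSLEngine on
    SSLCertificateFile      /etc/ssl/certs/example.crt
    SSLCertificateKeyFile   /etc/ssl/private/example.key
    SSLCertificateChainFile /etc/ssl/certs/example-chain.crt
</VirtualHost>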
Open the inbound traffic ports in AWS. Documentation is here - https://aws.amazon.com/premiumsupport/knowledge-center/connect-http-https-ec2/
I have a domain registered with SiteGround and have created a subdomain as well.
After that, I created an AWS Lightsail WordPress site, which gave me an IP.
Now my question is: is there a way to point my subdomain to AWS Lightsail?
Create a static IP in Lightsail, and attach it to your instance.
Update the DNS in SiteGround by creating an A record that maps the subdomain you want to the static IP in Lightsail.
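The first step can also be done with the CLI; a sketch (the names are placeholders):

$ aws lightsail allocate-static-ip --static-ip-name my-static-ip
$ aws lightsail attach-static-ip --static-ip-name my-static-ip \
    --instance-name WordPress-1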
In our current configuration we have one AWS EC2 instance. On it we serve an API server developed in Laravel and a front end developed in Angular, both behind Apache. For DNS resolution we use GoDaddy. We have one domain; let's call it example.com.
What I actually need is to serve the Angular application from example.com and the Laravel application from apis.example.com.
I don't want to use the AWS Route 53 service, as it would be yet another paid service.
So is there any way to achieve this without using Route 53, and if not, how should it be solved using Route 53?
The steps would be the same using GoDaddy or Route 53. There is really no reason to think that Route 53 would be required in this case.
Assign an Elastic IP to your EC2 instance
Create A records in GoDaddy (or any other DNS service you want to use) for both example.com and apis.example.com that point to the Elastic IP
Configure Apache on your EC2 instance to serve requests for example.com
Configure Apache on your EC2 instance to send requests for apis.example.com to your Laravel app (see the sketch after these steps)
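A sketch of the two Apache virtual hosts, assuming the Angular build is deployed to /var/www/angular/dist and the Laravel app lives in /var/www/laravel (both paths are assumptions):

<VirtualHost *:80>
    ServerName example.com
    # Angular build output
    DocumentRoot /var/www/angular/dist
</VirtualHost>

<VirtualHost *:80>
    ServerName apis.example.com
    # Laravel's front controller lives in public/
    DocumentRoot /var/www/laravel/public
    <Directory /var/www/laravel/public>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>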
First question, so if I get something wrong, please be kind.
We are using Route 53 with Amazon and have our primary front-end servers behind an ELB. Our app also routes all requests through HTTPS. We are utilizing an offsite status page via statuspage.io.
What I am trying to accomplish: if the primary site goes down, I'd like Route 53 to redirect both the SSL and non-SSL traffic to our status page.
I originally tried setting up a static page in S3, but still had issues with the HTTPS requests made to our site.
Has anyone done this successfully? I imagine it has to be possible, but it's definitely outside my realm of expertise.
Thank you very much for your time and help.
You are right: S3 website hosting doesn't support HTTPS. However, CloudFront does [1]. What you can do is fail over to CloudFront and have your origin be your S3 website or your statuspage.io page.
Steps:
Create a distribution and set the CNAMEs to match your DNS entries.
Upload and associate your SSL cert with your distribution.
Update your failover record's target to be your CloudFront distribution, set as an alias (a CLI sketch follows the footnote).
[1] http://aws.amazon.com/about-aws/whats-new/2014/03/05/amazon-cloudront-announces-sni-custom-ssl/
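A sketch of what the secondary failover record might look like via the CLI (the zone ID, record name, and distribution domain are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront aliases):

$ aws route53 change-resource-record-sets --hosted-zone-id ZEXAMPLE123 \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "www.example.com",
          "Type": "A",
          "SetIdentifier": "failover-secondary",
          "Failover": "SECONDARY",
          "AliasTarget": {
            "HostedZoneId": "Z2FDTNDATAQYW2",
            "DNSName": "d1234example.cloudfront.net",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'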
Route 53 manages DNS, which is not the layer you want to act on (even if you changed the DNS, it would take the TTL to propagate). What you should do instead is use a combination of auto-scaling policies and health checks. These health checks are performed by the ELB every 30 seconds, and if two consecutive checks fail, it marks the instance as out-of-service and stops directing traffic to it (the ELB directs traffic to your instances in a round-robin manner).
Having more than one instance and using auto-scaling rules is the key: it enables AWS to terminate the unhealthy instance and spin up a new one instead (in the same ASG, with the same AMI, etc.).
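A sketch of configuring such a health check on a classic ELB (the load balancer name and health-check path are placeholders):

$ aws elb configure-health-check --load-balancer-name my-load-balancer \
    --health-check Target=HTTPS:443/health,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2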