Are there any configurations needed for my Route 53 service when adding an SSL certificate to my CloudFront distribution?

I have built a Git-backed static site that lives in an S3 bucket and is updated with a CodePipeline. The site is fully hosted on AWS. The Route 53 name servers point to the S3 bucket, but I have recently created a CloudFront distribution that points to the S3 bucket so I am able to have an SSL certificate. The problem is that I believe the site's URL still points to the S3 bucket and not the CloudFront distribution. Could this be due to a Route 53 config issue?
The SSL certificates in ACM are active, issued in US East (N. Virginia), and have been added as the custom SSL certificate in the CloudFront distribution.
The CloudFront distribution origin is the S3 bucket, i.e. "domainname.s3.amazonaws.com" (there are two distributions, one for domainname.com and one for www.domainname.com, pointing to each bucket respectively).
I know a common fix for this is to wait for CloudFront to find the bucket, so I have waited 24 hours before asking the question.
If there is any more information I need to provide, please let me know. I have tried to provide as much as possible, but there is probably something I am overlooking.

Seems like you have to update your Route 53 configuration.
As the docs say:
If you want to use your own domain name, use Amazon Route 53 to create
an alias record that points to your CloudFront distribution. An alias
record is a Route 53 extension to DNS. It's similar to a CNAME record,
but you can create an alias record both for the root domain, such as
example.com, and for subdomains, such as www.example.com. (You can
create CNAME records only for subdomains.) When Route 53 receives a
DNS query that matches the name and type of an alias record, Route 53
responds with the domain name that is associated with your
distribution.
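For reference, here is a minimal sketch of creating such an alias record with the AWS CLI. The hosted zone ID Z1EXAMPLE and the distribution domain d1234abcd.cloudfront.net are placeholders for your own values; Z2FDTNDATAQYW2 is the fixed hosted zone ID used for CloudFront alias targets:
# Z1EXAMPLE and d1234abcd.cloudfront.net are placeholders; substitute
# your hosted zone ID and your distribution's domain name.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "domainname.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d1234abcd.cloudfront.net",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'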
You can also check your domain with nslookup or dig and see what the domain resolves to; that way you can confirm whether it is pointing to your CloudFront distribution:
nslookup yourdomain.com
The result of the dig/nslookup should show something like <hash>.cloudfront.net., resolving to multiple IP addresses.

Related

How to use one GCP load balancer for two subdomains?

I want to create a LAMP site that also has a separate bucket on a subdomain. Basically, mysite.com and downloads.mysite.com. And, I want to put them both on the same load balancer and SSL certificate.
I know how to create the http(s) load balancer for the main site, using an instance group for the backend service and adding an SSL cert, but I can't seem to figure out how to add the downloads subdomain to that load balancer and cert.
I thought to create an additional backend bucket for downloads, but I'm not sure how to set the Host and Path Rules. I've tried:
Host                     Path                     Backend
---------------------------------------------------------------
All unmatched (default)  All unmatched (default)  main-backend
downloads.mysite.com     /*                       bucket-backend
And for the certificate, I tried using mysite.com & downloads.mysite.com, as well as www.mysite.com & downloads.mysite.com, but I always get the error FAILED_NOT_VISIBLE.
And then there's the DNS settings. In the case of just the main LAMP site, I would add an A record with the load balancer's IP address. Not sure if I need to add another A record for the downloads subdomain or not.
Thanks for your help.
Regarding your comment: you already have an A record pointing to your load balancer IP address for the downloads domain, and that is enough in most cases, but you can read here about other reasons why you are getting a FAILED_NOT_VISIBLE error. Check that your downloads domain is visible on the internet: can you ping it successfully? It should respond with your LB IP address. Fix this first before you try again to create the additional certificate. Also note that there is a per-project quota for certificates; verify you are not reaching it.
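You can also check the provisioning status of a managed certificate from the CLI; a quick sketch, assuming a certificate named my-cert (a placeholder name):
# The managed.domainStatus field in the output shows the per-domain
# status, e.g. FAILED_NOT_VISIBLE or ACTIVE.
gcloud compute ssl-certificates describe my-cert --global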
You can create a global certificate for several domains using a
gcloud command like this example:
gcloud compute ssl-certificates create my-cert \
--domains=one.com,two.com,www.three.com \
--global
You can use URL maps, combined if needed with path matchers, to direct traffic to your different backends. You can read here about these concepts, and here you will find how to use them.
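As a sketch of that last step, assuming a hypothetical URL map named web-map and the backend bucket bucket-backend from above, a host rule for the downloads subdomain could be added like this:
# web-map is a placeholder URL map name; requests for
# downloads.mysite.com are routed to the backend bucket by default.
gcloud compute url-maps add-path-matcher web-map \
  --path-matcher-name=downloads-matcher \
  --default-backend-bucket=bucket-backend \
  --new-hosts=downloads.mysite.com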

How to use CloudFront and S3 with alternate domain?

Let's say I have an S3 bucket named example.com and I want to serve its content through CloudFront using an alternate domain example.com.
I've added a CNAME record to direct example.com to the CloudFront endpoint, and secured the domain using an AWS SSL Certificate.
In CloudFront, when I go to select the Origin, it shows my bucket. For example: example.com.s3.amazonaws.com
If I choose this origin, and I browse to https://example.com/my-bucket-item.jpg, I get redirected to https://example.com.s3-us-east-2.amazonaws.com/my-bucket-item.jpg and a "Connection not secure" SSL error appears.
If I set the origin to just the domain example.com then I get a 403 Bad Request error from CloudFront.
From what I understand, my bucket has to share the name of my domain, otherwise I get a "bucket does not exist" error.
I've followed the AWS documentation on this. What am I doing wrong here?
Update
I successfully got CloudFront to recognize my alternate domain by changing my origin request policy to Managed-CORS-S3Origin.
New problem: even though I've selected 'Yes' for 'Restrict Bucket Access', I'm still able to access files via the S3 URL. Do I need to turn off public access to my bucket? If I do this, it seems to override my CloudFront policy...
I had to change my origin request policy to Managed-CORS-S3Origin - this solved the general problem for me.
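On the "Restrict Bucket Access" follow-up: once an origin access identity (OAI) is in place, the bucket no longer needs public read access; the usual approach is a bucket policy that grants s3:GetObject to the OAI only. A hedged sketch with the AWS CLI, where the OAI ID EXAMPLEID and the bucket name are placeholders:
# EXAMPLEID is a placeholder OAI ID; with public access blocked, only
# CloudFront (via the OAI) can read objects from the bucket.
aws s3api put-bucket-policy --bucket example.com --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLEID"},
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::example.com/*"
  }]
}'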

Point non www domain to existing cloudfront distribution

I'm using AWS S3 and CloudFront to host a website (e.g. www.company.com). I additionally want the naked domain (without the www) to point to the same content. I initially created a redirect in DNS, but https://company.com didn't work.
I can create an apex record for the naked domain in the DNS, but can I point it to the same CloudFront CNAME used for www.company.com, or do I have to create a new S3 bucket and a new CloudFront distribution?
S3 to CloudFront distribution:
1. Create two CloudFront distributions.
2. Request certificates from AWS Certificate Manager.
3. Create records with Route 53 and point the alias target to the respective distributions.
4. Create an origin in both distributions pointing to that S3 bucket.
Hope it helps.
Another solution uses only one CloudFront distribution, if redirecting company.com to www.company.com is acceptable (usually it's preferred):
1. Create an S3 bucket named company.com.
2. Configure the bucket for static website hosting. Choose "Redirect requests for an object" and enter www.company.com.
3. Update your DNS A record to point to the bucket's website endpoint.
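The same redirect can also be configured from the CLI; a minimal sketch, assuming the company.com bucket already exists:
# Redirect every request hitting the company.com bucket's website
# endpoint to www.company.com over HTTPS.
aws s3api put-bucket-website --bucket company.com \
  --website-configuration '{
    "RedirectAllRequestsTo": {
      "HostName": "www.company.com",
      "Protocol": "https"
    }
  }'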

Routing Zone Apex Domain to Amazon Cloudfront

Is it true that to route a zone apex to CloudFront, I must use Amazon's Route 53 DNS service?
This is a pretty surprising limitation. If there's no alternative, I have to move DNS services and change SSL certs.
For example:
dev.myapp.com ---- CNAME ----> s3 location // works great
stage.myapp.com -- CNAME ----> Cloudfront Location // works great
myapp.com -------- ALIAS ----> Cloudfront Location // Issa no worky so good
If you're using Amazon Route 53 as your DNS service, you can create an alias resource record set instead of a CNAME. With an alias resource record set, you don't pay for Route 53 queries. In addition, you can create an alias resource record set for a domain name at the zone apex (example.com).
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/CNAMEs.html
CloudFront dist on top level domain
Are there any alternatives besides using Amazon Route 53?
The helpful recommendation I got from Darrin at DNSimple is this:
Hi,
The trouble with an ALIAS record and CDNs is that it will resolve to an endpoint closest to our name servers, rather than the normal behavior, which is resolving to an endpoint closest to the client.
You might get a little better performance using our Anycast network, since our name servers are distributed closer to the client already. That said, I would probably recommend against using an apex record with a CDN in any case.
If you're using a CDN I would probably use a URL redirect from the apex to the CNAME "www".
So the full setup would be:
dev.myapp.com ---- CNAME ----> s3 location
stage.myapp.com -- CNAME ----> Cloudfront Location
www.myapp.com ---- CNAME ----> Cloudfront Location
myapp.com ----- REDIRECT ----> www.myapp.com
I have concerns about the performance implications but I guess we can measure those and react.

How to point a domain to serve static site from Amazon S3? (not sub-domain)

I see several people describing how to do this for a custom domain with a sub-domain, but no one talking about how to do it without one.
Example: Setting foobar.com and www.foobar.com to point to my Amazon S3–hosted site
I personally do not want the www prefix. Is there no way to make this happen? It seems crazy that Amazon would set it up to allow static sites and custom domains, then lock it down to prefixed domains.
Thanks in advance,
For historical reasons any URL needs to resolve to a subdomain, which you already know how to handle: Create a CNAME record with your DNS provider, pointing www to your S3-hosted subdomain. There are details to get right, described nicely elsewhere.
You nevertheless want to support users who, used to their browsers autocompleting http:// and .com and such, type a naked domain like domain.com and expect it to resolve to your default subdomain, such as www.domain.com.
The easiest way to accomplish this is to use www as your default subdomain, and point your DNS provider's A record at wwwizer.com (174.129.25.170). They automatically redirect any naked domain to the same domain with www in front.
You get the fastest turnaround on development, and your visitors get the fastest DNS resolution, if you use Amazon Route 53 to provide your DNS services. Route 53 can point its A records to wwwizer.com. However, you may want to create a micro Amazon EC2 instance and start programming it. In the '50s everyone rebuilt their own cars. In the '80s everyone pushed a shopping cart down the aisle at Fry's and built their own computer. Now you want to be able to build your own computer in the cloud, for many reasons you will discover with time, and Amazon EC2 is the best choice. For now, your cloud computer will simply handle naked domains for you. Later, email, generating the static site, ...
Install the Apache web server (the A in LAMP; a LAMP server will do the trick) and configure a virtual host for each of your domains. Then point an Elastic IP address at your EC2 instance, and update Route 53 so that your A records point to this Elastic IP address. Amazon doesn't support pointing multiple Elastic IPs at the same EC2 instance, but you can use the same Elastic IP for multiple domains' A records and let Apache resolve the host within your EC2 instance.
This takes some fiddling and experimenting, as there's lots of conflicting advice on the details. I used the ami-ad36fbc4 instance image (US East, 64 bit EBS-backed Ubuntu 10.04 LTS), as I'm familiar with Ubuntu, there's plenty of online help with Ubuntu, and this image will be supported for years. I edited /etc/apache2/httpd.conf to have the contents
NameVirtualHost *
<VirtualHost *>
    ServerName first.net
    Redirect permanent / http://www.first.net/
</VirtualHost>
<VirtualHost *>
    ServerName second.net
    Redirect permanent / http://www.second.net/
</VirtualHost>
then checked for errors using
sudo /usr/sbin/apache2ctl configtest
then restarted the Apache server using
sudo /etc/init.d/apache2 restart
Apache is standard across Linux flavors, but details such as file locations may vary; e.g. /etc/apache2/httpd.conf could be /etc/httpd.conf. For example, it might be necessary to put a Listen 80 in httpd.conf, but Apache throws an error if that directive already appears somewhere else. So read web instructions with a grain of salt, and be prepared to Google any error messages.
As I'd already been using Amazon Route 53 for days to point to wwwizer.com, this worked immediately once I updated Route 53 to point to my elastic IP. Before switching to Route 53, each change took days for me to verify, as the information propagated across the web. Once everyone knows to look to Amazon, Amazon can propagate its internal changes much more quickly.
Unfortunately you cannot point foobar.com to an Amazon S3 bucket, and the reason for this has to do with how DNS works.
DNS does not allow the root of a domain (called the zone apex) to point to another DNS name (you cannot have foobar.com set up as a CNAME; only subdomain.foobar.com can be a CNAME).
Since this question was asked, things have changed. It is now possible to host your site on S3 with a root domain.
Instead of just having one bucket named "www.yourserver.com", you have to create another bucket with the naked (root) domain name, e.g. "yourserver.com".
After that you will have to use Amazon's DNS service, Route 53. Create an A record (an alias to the S3 website endpoint) for the naked domain and a CNAME for the "www" hostname.
Note that you will need to move the DNS management of your domain to Amazon Route 53 completely.
See the detailed walk-through here: http://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
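As a sketch of the Route 53 step, assuming a placeholder hosted zone ID Z1EXAMPLE and a bucket in us-east-1 (Z3AQBSTGFYJSTF is the fixed alias hosted zone ID for S3 website endpoints in that region):
# Z1EXAMPLE is a placeholder; note that the alias target is the
# regional S3 website endpoint, not the bucket itself.
aws route53 change-resource-record-sets \
  --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "yourserver.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'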