Our application has two domains:
http://www.example.org
and
https://secure.example.org
We are planning to decommission https://secure.example.org and keep just one secure domain name: https://www.example.org.
But we want to make sure any old URL still works and gets redirected to the new URL:
http://www.example.org/my-url should redirect you to https://www.example.org/my-url.
https://secure.example.org/my-url should redirect you to https://www.example.org/my-url.
The question is: should the redirect be done at the CDN or at the WAF? We could also do it at the Apache web server, but we would like to avoid extra hops. What is the best approach, and what are the pros and cons of each?
AWS CloudFront does not support redirects natively, but this can be achieved using Lambda@Edge or an S3 bucket configured for redirects. Is there any concern with using the WAF for redirects?
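For reference, the Lambda@Edge approach would look roughly like this. This is a minimal sketch, assuming a viewer-request trigger on a distribution fronting the old hostnames; the domain names are ours, everything else is illustrative:

```typescript
import { CloudFrontRequestEvent, CloudFrontRequestResult } from "aws-lambda";

// Viewer-request trigger: answer every request with a permanent redirect
// to the new domain, preserving the original path and query string.
export const handler = async (
  event: CloudFrontRequestEvent
): Promise<CloudFrontRequestResult> => {
  const request = event.Records[0].cf.request;
  const query = request.querystring ? `?${request.querystring}` : "";
  return {
    status: "301",
    statusDescription: "Moved Permanently",
    headers: {
      location: [
        { key: "Location", value: `https://www.example.org${request.uri}${query}` },
      ],
    },
  };
};
```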
I'm not sure why you need a CDN for this, and I'm fairly certain redirects are not a feature of AWS WAF. If your domain names are managed inside AWS (Route 53), you can simply create an alias record that points the old name at the new one.
If your domain names are managed outside of AWS, try migrating them to Route 53. If you were going to use CloudFront (the AWS CDN) to do this, you could put it in front of your old URL, but it would still require that you place an alias on the CDN. With CloudFront you can configure HTTP-to-HTTPS redirects, if that is your interest in using the CDN.
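For completeness, if you did put CloudFront in front of the old hostname, the HTTP-to-HTTPS redirect is a per-behavior setting. A minimal sketch using the AWS CDK (assuming aws-cdk-lib v2; the hostname is from the question, and the surrounding stack is assumed to exist elsewhere):

```typescript
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import { Stack } from "aws-cdk-lib";

declare const stack: Stack; // an existing stack, defined elsewhere

// A distribution fronting the site; plain-HTTP viewers get a 301 to HTTPS
// before the request ever reaches the origin.
new cloudfront.Distribution(stack, "RedirectingDist", {
  defaultBehavior: {
    origin: new origins.HttpOrigin("www.example.org"),
    viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.REDIRECT_TO_HTTPS,
  },
});
```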
I have a static website that is currently hosted on Apache servers. I have an Akamai server which routes requests for my site to those servers. I want to move my static websites to Amazon S3, to get away from having to host those static files on my own servers.
I created an S3 bucket in Amazon and gave it the appropriate policies. I also set up the bucket for static website hosting. It told me that I can access the site at
http://my-site.s3-website-us-east-1.amazonaws.com
I modified my Akamai properties to point to this URL as my origin server. When I go to my website, I get HTTP 504 errors.
What am I missing here?
S3 buckets don't support HTTPS?
Buckets support HTTPS, but not directly in conjunction with the static website hosting feature.
See Website Endpoints in the S3 Developer Guide for discussion of the feature set differences between the REST endpoints and the web site hosting endpoints.
Note that if you try to directly connect to your web site hosting endpoint with your browser, you will get a timeout error.
The REST endpoint https://your-bucket.s3.amazonaws.com will work for providing HTTPS between the bucket and the CDN, as long as there are no dots in the name of your bucket.
Or, if you need the website hosting features (index documents and redirects), you can place CloudFront between Akamai and S3, so that the traffic is encrypted as it leaves the AWS network on its way to Akamai (it would still be in the clear from S3 to CloudFront, but that is internal traffic on the AWS network). CloudFront automatically provides HTTPS support on the dddexample.cloudfront.net hostname it assigns to each distribution.
I admit it sounds a bit silly, initially, to put CloudFront behind another CDN, but it's really pretty sensible: CloudFront was designed in part to augment the capabilities of S3. CloudFront also provides Lambda@Edge, which allows injection of logic at four trigger points in the request-processing cycle (before and after the CloudFront cache, during the request and during the response), where you can modify request and response headers, generate dynamic responses, and make external network requests if needed to implement processing logic.
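As an illustration of one of those trigger points, here is a minimal origin-response sketch; the Cache-Control value is just an example, not something required by this setup:

```typescript
import { CloudFrontResponseEvent, CloudFrontResponseResult } from "aws-lambda";

// Origin-response trigger: runs after the origin replies and before
// CloudFront caches the object, so the added header is stored with it.
export const handler = async (
  event: CloudFrontResponseEvent
): Promise<CloudFrontResponseResult> => {
  const response = event.Records[0].cf.response;
  response.headers["cache-control"] = [
    { key: "Cache-Control", value: "public, max-age=86400" },
  ];
  return response;
};
```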
I faced this problem recently, and as mentioned by Michael - sqlbot, putting CloudFront between Akamai and the S3 bucket could be a workaround, but that means using one CDN behind another. I strongly recommend configuring the redirects, and customizing the response on origin errors, directly in Akamai (using the REST API endpoint of your bucket). You'll need to create three rules. First, go to CDN > Properties and select your property, click Edit New Version based on the last one, and click Add Rule in the Property Configuration Settings section. The first rule is responsible for redirecting empty paths to index.html.
builtin.AK_PATH is an Akamai built-in variable. The next rule is responsible for redirecting paths other than the static ones (html, ico, json, js, css, jpg, png, gif, etc.) to /index.html.
The last rule is responsible for customizing the error response when the origin returns an HTTP error code (much like CloudFront's Error Pages). When the origin returns a 404 or 403 HTTP status code, Akamai will call the Failover Hostname Edge Server (which is inside the Akamai network) with the /index.html path. This setup is triggered when refreshing pages in the browser and when the application has redirection links (which open new tabs, for example). In the Property Hostnames section, add a new hostname that will work as the Failover Hostname Edge Server; the name must have fewer than 16 characters, and you then add the -a.akamaihd.net suffix to it (that's the Akamai pattern). For example: failover-a.akamaihd.net.
Finally, create a new empty rule and enter the hostname you just created in the Alternate Hostname in This Property field.
Since you are already using Akamai as a CDN, you could simply use their NetStorage product line to achieve this.
All you would need to do is move the content from S3 to NetStorage, and it would take care of the rest (hosting, distribution, scaling, security, redundancy).
The origin settings in the Luna control panel could simply point to the NetStorage FTP location. This also removes the network latency otherwise present when accessing the S3 bucket from the Akamai network.
I am using a static site generator for my site, which means my entire site is static. All my resources and HTML files are referenced with the domain name prefixed, so that a CDN could be used.
But due to SEO concerns I disabled non-www access and redirect it to the www.domain.com variant. Now, apparently, I cannot use a CDN, because the origin server needs to be different from the supername (the public hostname served by the CDN).
Can a CDN be used for HTML files?
How can I deliver content through www.domain.com and use a CDN?
Can I give the CDN access to static.domain.com as an origin server, but deny access to other clients? Seems clumsy!
Any ideas?
I am using Apache 2.2 and trying to use the Level 3 CDN through my hosting company's site.
Depending on what you are able to set on the CDN via your hosting company, the best way would be to override the Host header in the CDN settings.
So, first let's look at your DNS settings:
www should point to the CDN
origin should point to your web server.
Now, on the CDN, you set your origin to origin.yourdomain.com and add (I can't tell you if this is possible in your setup) an "HTTP Host header override" of www.yourdomain.com. In some setups it's implemented the other way around, so you would "force IP-Host" to origin.yourdomain.com.
In both cases, what you want to achieve is this:
When an end user requests www.yourdomain.com, it is resolved to the CDN.
The CDN needs to fetch the content from your server, so it establishes a session on port 80 (assuming HTTP) to origin.yourdomain.com.
Once the connection is open, the CDN sends (amongst others) an HTTP Host header of www.yourdomain.com (this is the name-based virtual host Apache sees and evaluates).
That way you can set up your web server in exactly the same way as you would without a CDN.
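To make the mechanics concrete, this is roughly what the CDN's origin fetch does, sketched in Node/TypeScript (the hostnames are the placeholders used above):

```typescript
import * as http from "http";

// The TCP connection goes to the origin host, but the Host header carries
// the public name, so Apache's name-based virtual host for www matches.
const req = http.request(
  {
    host: "origin.yourdomain.com", // where the connection actually goes
    port: 80,
    path: "/",
    headers: { Host: "www.yourdomain.com" }, // what Apache evaluates
  },
  (res) => {
    console.log(res.statusCode, res.headers["content-type"]);
  }
);
req.end();
```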
I need to host some JavaScript files on Amazon S3 and would like to serve them over HTTPS through the Amazon S3 URL, not through my own domain:
https://my.website.s3.amazonaws.com/js/custom.js
Is this offered for free, or do I need to buy an SSL certificate?
It's free. Basically, Amazon provides the certificate for https://*.s3.amazonaws.com. If you are serving over HTTPS from your own domain name, then you need your own certificate.
Keep in mind that you might have to make cross-domain JavaScript requests unless all of your site is served from https://my.website.s3.amazonaws.com.
To do cross-domain JavaScript you need to use JSONP. You can read more about it here: Make cross-domain ajax JSONP request with jQuery
Or you can look at something like easyXDM
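If you'd rather see the mechanism itself, JSONP is just script-tag injection. A minimal browser-side sketch (the URL and callback parameter name are illustrative, and the S3 object would have to be stored as a script that invokes the callback for this to work):

```typescript
// JSONP in a nutshell: load a <script> whose body calls a globally visible
// callback with the data, sidestepping the same-origin policy.
function jsonp(url: string, onData: (data: unknown) => void): void {
  const cbName = "jsonp_cb_" + Date.now();
  (window as any)[cbName] = (data: unknown) => {
    onData(data);
    delete (window as any)[cbName]; // clean up the global callback
    script.remove();
  };
  const script = document.createElement("script");
  const sep = url.includes("?") ? "&" : "?";
  script.src = `${url}${sep}callback=${cbName}`;
  document.head.appendChild(script);
}

// Usage: the object at this URL must be JavaScript that invokes the callback.
jsonp("https://my.website.s3.amazonaws.com/js/data.js", (data) =>
  console.log(data)
);
```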
[Edit]
If for some reason your bucket has one or more dots in its name, for example mybucket.is.great, you can use the old path-style URL:
https://s3.amazonaws.com/mybucket.is.great/<object key>
I am not sure if this exactly qualifies for Stack Overflow, but since I need to do this programmatically, and I figure lots of people on SO use CloudFront, I think it does... so here goes:
I want to hide public access to my custom origin server.
CloudFront pulls from the custom origin; however, I cannot find documentation, or any sort of example, on preventing direct requests from users to my origin when it is proxied behind CloudFront, unless my origin is S3... which isn't the case with a custom origin.
What technique can I use to identify/authenticate that a request is being proxied through CloudFront instead of being directly requested by the client?
The CloudFront documentation only covers this case when used with an S3 origin. The AWS forum post that lists CloudFront's IP addresses has a disclaimer that the list is not guaranteed to be current and should not be relied upon. See https://forums.aws.amazon.com/ann.jspa?annID=910
I assume that anyone using CloudFront has some sort of way to hide their custom origin from direct requests / crawlers. I would appreciate any sort of tip to get me started. Thanks.
I would suggest using something similar to Facebook's robots.txt in order to prevent crawlers from accessing sensitive content on your website.
https://www.facebook.com/robots.txt (you may have to tweak it a bit)
After that, just point your app (e.g. Rails) to be the custom origin server.
Now rewrite all the URLs on your site to be absolute URLs like:
https://d2d3cu3tt4cei5.cloudfront.net/hello.html
Basically, all URLs should point to your CloudFront distribution. Now if someone requests a file from https://d2d3cu3tt4cei5.cloudfront.net/hello.html and the distribution does not have hello.html cached, it can fetch it from your server (over an encrypted channel like HTTPS) and then serve it to the user.
So even if the user views the page source, they do not see your origin server; they only see your CloudFront distribution.
More details on setting this up here:
http://blog.codeship.io/2012/05/18/Assets-Sprites-CDN.html
Create a custom CNAME that only CloudFront uses. On your own servers, block any request for static assets not coming from that CNAME.
For instance, if your site is http://abc.mydomain.net, then set up a CNAME for http://xyz.mydomain.net that points to the exact same place, and put that new domain in CloudFront as the origin pull server. Then, on incoming requests, you can tell whether they came via CloudFront or not and do whatever you want.
The downside is that this is security through obscurity. The client never sees the requests for http://xyz.mydomain.net, but that doesn't mean they won't have some way of figuring it out.
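The server-side check itself can be tiny. A sketch using Express (the hostname is the placeholder from above; whether you block with a 403 or serve alternate content is up to you):

```typescript
import express from "express";

const app = express();

// Serve static assets only when the request arrived via the CNAME that
// CloudFront uses as its origin; everyone else gets a 403.
app.use("/static", (req, res, next) => {
  if (req.hostname !== "xyz.mydomain.net") {
    res.status(403).send("Forbidden");
    return;
  }
  next();
});
app.use("/static", express.static("public"));

app.listen(80);
```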
[I know this thread is old, but I'm answering it for people like me who see it months later.]
From what I've read and seen, CloudFront does not consistently identify itself in requests. But you can get around this problem by overriding robots.txt at the CloudFront distribution.
1) Create a new S3 bucket that only contains one file: robots.txt. That will be the robots.txt for your CloudFront domain.
2) Go to your distribution settings in the AWS Console and click Create Origin. Add the bucket.
3) Go to Behaviors and click Create Behavior:
Path Pattern: robots.txt
Origin: (your new bucket)
4) Set the robots.txt behavior at a higher precedence (lower number).
5) Go to invalidations and invalidate /robots.txt.
Now abc123.cloudfront.net/robots.txt will be served from the bucket and everything else will be served from your domain. You can choose to allow/disallow crawling at either level independently.
Another domain/subdomain will also work in place of a bucket, but why go to the trouble?
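If you are scripting this rather than clicking through the console, steps 2-4 map to a single extra cache behavior. A sketch using the AWS CDK (assuming aws-cdk-lib v2; the names are illustrative, and the distribution and bucket are assumed to be defined elsewhere in the stack):

```typescript
import * as cloudfront from "aws-cdk-lib/aws-cloudfront";
import * as origins from "aws-cdk-lib/aws-cloudfront-origins";
import * as s3 from "aws-cdk-lib/aws-s3";

declare const distribution: cloudfront.Distribution; // existing distribution
declare const robotsBucket: s3.Bucket; // bucket holding only robots.txt

// Route just /robots.txt to the bucket; everything else keeps the default
// behavior pointing at your own domain.
distribution.addBehavior("/robots.txt", new origins.S3Origin(robotsBucket));
```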
I have a website hosted on Amazon S3; the URL is something like www.foobar.com.s3-website-us-east-1.amazonaws.com. I would like to configure my domain registrar (NameCheap) to redirect both foobar.com and www.foobar.com to this website. Setting the record
www.foobar.com CNAME www.foobar.com.s3-website-us-east-1.amazonaws.com.
works fine for the www subdomain. However, I can’t figure out how to configure the no-subdomain version of the site. I tried
foobar.com CNAME www.foobar.com.s3-website-us-east-1.amazonaws.com.
but that doesn't seem to work. (And did I read somewhere that CNAMEs aren't supposed to be used at the zone apex like this?) I know that the S3 bucket name is supposed to be identical to the fully-qualified DNS name, but does this really mean that I need to mirror my website contents in two different buckets? I feel like there must be a better solution than that.
You cannot serve "domain apex" content from S3-hosted websites. The easiest solution is to configure your domain such that foobar.com is redirected to www.foobar.com. Unfortunately this requires you to run an HTTP server with a rewrite rule to enforce it.
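The redirecting server itself can be trivial. A sketch in Node/TypeScript (the domain is from the question):

```typescript
import * as http from "http";

// Listen on the apex and bounce everything to the www hostname,
// preserving the requested path.
http
  .createServer((req, res) => {
    res.writeHead(301, {
      Location: "http://www.foobar.com" + (req.url ?? "/"),
    });
    res.end();
  })
  .listen(80);
```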
I can't vouch personally for the service, but http://www.wwwizer.com/ offers a free apex-to-www redirect service.
It looks like AWS S3 added bucket redirection around the end of 2012, which allows supporting both www.foobar.com and foobar.com.