How to use Akamai in front of S3 buckets? - ssl

I have a static website that is currently hosted on Apache servers. An Akamai server routes requests for my site to those servers. I want to move my static website to Amazon S3, to get away from having to host those static files on my own servers.
I created an S3 bucket in Amazon and gave it the appropriate policies. I also set up the bucket for static website hosting. It told me that I can access the site at
http://my-site.s3-website-us-east-1.amazonaws.com
I modified my Akamai properties to point to this URL as my origin server. When I go to my website, I get HTTP 504 errors.
What am I missing here?
Thanks
K

S3 buckets don't support HTTPS?
Buckets support HTTPS, but not directly in conjunction with the static web site hosting feature.
See Website Endpoints in the S3 Developer Guide for discussion of the feature set differences between the REST endpoints and the web site hosting endpoints.
Note that if you try to connect directly to your website hosting endpoint with your browser over HTTPS, you will get a timeout error, because the website endpoints don't listen for TLS.
The REST endpoint https://your-bucket.s3.amazonaws.com will work for providing HTTPS between the bucket and the CDN, as long as there are no dots in your bucket name (dots break the wildcard match on the *.s3.amazonaws.com TLS certificate).
Or, if you need the website hosting features (index documents and redirects), you can place CloudFront between Akamai and S3, so that the traffic is encrypted as it leaves the AWS network on its way to Akamai (it would still be in the clear from S3 to CloudFront, but that is internal traffic on the AWS network). CloudFront automatically provides HTTPS support on the dddexample.cloudfront.net hostname it assigns to each distribution.
I admit, it initially sounds a bit silly to put CloudFront behind another CDN, but it's really quite sensible -- CloudFront was designed in part to augment the capabilities of S3. CloudFront also provides Lambda@Edge, which allows injection of logic at four trigger points in the request processing cycle (before and after the CloudFront cache, during the request and during the response), where you can modify request and response headers, generate dynamic responses, and make external network requests if needed to implement processing logic.
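For reference, here is a minimal boto3 sketch of what that middle CloudFront distribution could look like, assuming a bucket named my-site hosted in us-east-1 (both placeholders); a real setup would also tune cache behaviors for your content:

    import time
    import boto3

    cf = boto3.client("cloudfront")

    response = cf.create_distribution(DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "S3 website origin behind Akamai",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "s3-website-origin",
                # The website hosting endpoint, NOT the REST endpoint:
                "DomainName": "my-site.s3-website-us-east-1.amazonaws.com",
                # Website endpoints only speak HTTP, so CloudFront must
                # fetch over plain HTTP; it still serves HTTPS outward.
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "http-only",
                },
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-website-origin",
            "ViewerProtocolPolicy": "redirect-to-https",
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "MinTTL": 0,
        },
    })

    # Point the Akamai origin at this hostname:
    print(response["Distribution"]["DomainName"])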

I faced this problem recently, and as Michael - sqlbot mentioned, putting CloudFront between Akamai and the S3 bucket is a workaround, but it means running one CDN behind another. I strongly recommend configuring the redirects, and customizing the response on origin errors, directly in Akamai (using the REST API endpoint for your bucket). You'll need to create three rules. First, go to CDN > Properties and select your property, click Edit New Version based on the last one, and click Add Rule in the Property Configuration Settings section. The first rule is responsible for redirecting empty paths to index.html; create it just like the image below:
builtin.AK_PATH is an Akamai variable. The next rule is responsible for redirecting paths other than the static ones (html, ico, json, js, css, jpg, png, gif, etc.) to /index.html:
The last rule is responsible for customizing the error response when the origin returns an HTTP error code (much like CloudFront's Error Pages). When the origin returns a 404 or 403 HTTP status code, Akamai will call the Failover Hostname Edge Server (which is inside the Akamai network) with the /index.html path. This setup is triggered when refreshing pages in the browser and when the application follows redirect links (ones that open new tabs, for example). In the Property Hostnames section, add a new hostname that will act as the Failover Hostname Edge Server; the name must be fewer than 16 characters, and you then add the -a.akamaihd.net suffix to it (that's the Akamai pattern). For example: failover-a.akamaihd.net:
Finally, create a new empty rule just like the image below (type the hostname you just created in the Alternate Hostname in This Property section):

Since you are already using Akamai as a CDN, you could simply use their NetStorage product line to achieve this in a simplified manner.
All you would need to do is move the content from S3 to Akamai, and it would take care of the rest (hosting, distribution, scaling, security, redundancy).
The origin settings on the Luna control panel would simply point to the NetStorage FTP location. This also removes the network latency otherwise present when accessing the S3 bucket from the Akamai network.
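If you go this route, the upload to NetStorage can be scripted over FTP; a rough sketch using Python's standard ftplib, where the host, credentials, and CP-code directory are all placeholders you'd take from your NetStorage configuration in Luna:

    import os
    from ftplib import FTP
    from pathlib import Path

    # All values are placeholders; use the FTP host, login, and CP-code
    # directory from your own NetStorage configuration.
    HOST = "example.ftp.upload.akamai.com"
    USER = "netstorage-user"
    REMOTE_DIR = "/123456/static"  # CP code plus target path
    LOCAL_DIR = Path("./site")

    with FTP(HOST) as ftp:
        ftp.login(USER, os.environ["NETSTORAGE_PASSWORD"])
        ftp.cwd(REMOTE_DIR)
        # Flat upload of top-level files; nested folders would also
        # need ftp.mkd() calls.
        for path in LOCAL_DIR.iterdir():
            if path.is_file():
                with path.open("rb") as f:
                    ftp.storbinary(f"STOR {path.name}", f)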

Related

CloudFront or S3 ACL: This XML file does not appear to have... AccessDenied / Failed to contact the origin

I'm using a custom domain and CloudFront with an S3 static hosting site to serve HTTPS.
It works fine when I open pages through the app's internal buttons or links,
but if I enter a URL directly in the address bar, or click the browser's refresh button, I get the
"This XML file does not appear to have any style information associated with it. The document tree is shown below..." Access Denied error screen.
I searched related answers and tried setting /index.html as the Default Root Object in the CloudFront general settings, but it didn't work. (Before this try, it was index.html.)
When I set it to /index.html, even the domain itself stopped working.
I have another S3 static hosting site, without CloudFront or a certificate, just for testing.
That site works fine even when I enter a direct URL or click the refresh button.
Both S3 buckets have the same settings (the root object is index.html and the error document is also index.html).
After this, I changed the CloudFront Origin Domain Name from the REST endpoint to the website endpoint, following this doc (https://aws.amazon.com/premiumsupport/knowledge-center/s3-website-cloudfront-error-403/).
But now I get this error when I refresh the screen.
All the objects in S3 are owned by the bucket owner and have public access.
This app is built with React and uses react-router-dom.
Could you give me any hint or advice?
Thanks.
Solved...
My S3 bucket's region requires . instead of - when I use the website endpoint for CloudFront.
And FYI...
In my case, there are some small differences from the documentation and some tutorials. My CloudFront distribution doesn't need a Default Root Object, and the individual objects in S3 have no public access, but the bucket does.
There are specific endpoints to be used for website hosting buckets, which are listed in the Amazon Simple Storage Service endpoints and quotas document. For example, when hosting in eu-west-1, CloudFront will prepopulate the dropdown with example.s3.eu-west-1.amazonaws.com, but if you look at the bucket settings, the Static website hosting section shows the correct URL: example.s3-website-eu-west-1.amazonaws.com
Carefully read the table! The URL scheme is not fully consistent, e.g. s3-website.us-east-2.amazonaws.com but s3-website-us-east-1.amazonaws.com - just to make your day a bit more joyful.
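If you'd rather not eyeball the table, a small helper can build the website endpoint; note the dash-style region set below is my own reading of the table (the older regions), so verify it against the endpoints document before relying on it:

    import boto3

    # Older regions put a dash before the region in the website endpoint;
    # newer ones use a dot. This set is my reading of the endpoints table --
    # double-check it before relying on it.
    DASH_REGIONS = {
        "us-east-1", "us-west-1", "us-west-2", "eu-west-1",
        "ap-southeast-1", "ap-southeast-2", "ap-northeast-1", "sa-east-1",
    }

    def website_endpoint(bucket: str) -> str:
        s3 = boto3.client("s3")
        # get_bucket_location returns None for us-east-1, an old API quirk
        region = s3.get_bucket_location(Bucket=bucket)["LocationConstraint"] or "us-east-1"
        sep = "-" if region in DASH_REGIONS else "."
        return f"{bucket}.s3-website{sep}{region}.amazonaws.com"

    print(website_endpoint("example"))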
I had the exact same issue and was able to resolve it by taking the S3 bucket website endpoint from the Properties of the S3 bucket and pasting it into the Origin Domain field in the CloudFront Origins section. Remove the scheme from the beginning of the endpoint: for example, from "http://website.com.s3-website.us-east-2.amazonaws.com" you would just remove the "http://", paste the rest into the CloudFront origin domain, and click save. That should solve the problem!
I tried all kinds of different options, such as making sure every object in the S3 bucket was public. Make sure your S3 bucket itself is also publicly available.
Certain regions do have different endpoints for your S3 buckets. Here is a link that shows more of that: https://aws.amazon.com/premiumsupport/knowledge-center/s3-rest-api-cloudfront-error-403/

AWS S3 static hosting: Routing rules don't work with CloudFront

I am using AWS S3 static web hosting for my Vue.js SPA. I have set up routing rules in S3, and they work perfectly when I access the site via the S3 static hosting URL. But I have also configured CloudFront to use it with my custom domain. Since single-page apps need to be routed through index.html, I set up a custom error page in CloudFront to redirect 404 errors to index.html. So now the routing rules I set up in S3 no longer work.
What is the best way to get S3 routing rules to work along with a CloudFront custom error page setup for an SPA?
I think I am a bit late, but here goes anyway.
Apparently you can't do that if you are using the S3 REST API endpoint (example-bucket.s3.amazonaws.com) as the origin for your CloudFront distribution; you have to use the S3 website URL provided by S3 as the origin (example-bucket.s3-website-[region].amazonaws.com). Also, objects must be public; you can't lock the bucket down to the distribution with an origin access identity.
So,
Objects must be public.
The S3 bucket's website option must be turned on.
The distribution origin has to be the S3 website URL, not the REST API endpoint.
EDIT:
I was mistaken; you can actually do it with the REST API endpoint too. You only have to create a Custom Error Response in your CloudFront distribution, probably just for the 404 and 403 error codes: set the "Customize Error Response" option to "yes", the Response Page Path to "/index.html", and the HTTP Response Code to "200". You can find that option in your distribution under the Error Pages tab if you are using the console.
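For anyone scripting this instead of clicking through the console, here is a boto3 sketch of the same edit; the distribution ID is a placeholder:

    import boto3

    cf = boto3.client("cloudfront")
    DIST_ID = "E2EXAMPLE"  # placeholder distribution ID

    # Updates must send the current config back along with its ETag.
    resp = cf.get_distribution_config(Id=DIST_ID)
    config, etag = resp["DistributionConfig"], resp["ETag"]

    # Turn origin 403/404 responses into a 200 serving /index.html,
    # so the SPA router can handle the path on the client side.
    config["CustomErrorResponses"] = {
        "Quantity": 2,
        "Items": [
            {"ErrorCode": code,
             "ResponsePagePath": "/index.html",
             "ResponseCode": "200",
             "ErrorCachingMinTTL": 10}
            for code in (403, 404)
        ],
    }

    cf.update_distribution(Id=DIST_ID, DistributionConfig=config, IfMatch=etag)

The 403 is covered as well because a non-public bucket behind the REST endpoint reports missing keys as 403 AccessDenied rather than 404.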

CloudFront - Editing Origin - Restrict Bucket Access

I have a situation that I can't easily understand, and I am unable to find any documentation on it either.
I have done the following:
Created an S3 bucket
Given it public access
Enabled it for static website hosting
Created a CloudFront distribution for it
Enabled HTTPS at CloudFront
Now I am trying to restrict access to the S3 bucket to CloudFront only.
I tried the steps presented at
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
Unfortunately, when I try to edit the origin, I don't see all the options in the UI; in particular, Restrict Bucket Access is missing.
I only see options to edit Origin Domain Name, Origin Path, Origin ID (grayed out), and Origin Custom Headers - there is no option to enter an OAI or set Restrict Bucket Access.
Is it because of enabling HTTPS?
S3 masters, please help!
Origin access identities are only applicable when using the S3 REST endpoint (e.g. example-bucket.s3.amazonaws.com) for the bucket -- not when you are using the website hosting endpoint (e.g. example-bucket.s3-website.us-east-2.amazonaws.com), because website hosting endpoints do not support authenticated requests -- they are only for public content... but OAI is an authentication mechanism.
When using the website endpoint, CloudFront does not treat the origin as an S3 Origin -- it is treated as a Custom Origin, and these options are not available, because if they were available, they wouldn't work anyway (for the reason mentioned above).
For those who have since changed some S3 settings so the bucket is no longer public and intend it to be retrievable only via CloudFront: the option is still there, just hidden. Cut the value from the Origin Domain Name field in the origins tab, then paste it back in (for the same bucket), and the UI will render the Restrict Bucket Access options.
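For completeness, once the OAI exists, the bucket policy that locks the bucket down to it looks roughly like this; the bucket name and OAI ID are placeholders:

    import json
    import boto3

    BUCKET = "example-bucket"  # placeholder
    OAI_ID = "E1XXXXXXXXXXXX"  # placeholder OAI ID from CloudFront

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                "AWS": ("arn:aws:iam::cloudfront:user/"
                        f"CloudFront Origin Access Identity {OAI_ID}")
            },
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        }],
    }

    boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))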

Serving Angular JS HTML templates from S3 and CloudFront - CORS problems

I'm having a doozy of a time trying to serve static HTML templates from Amazon CloudFront.
I can perform a jQuery.get on Firefox for my HTML hosted on S3 just fine. The same request against CloudFront returns an OPTIONS 403 Forbidden. And I can't perform an Ajax GET for either the S3 or the CloudFront files on Chrome. I assume Angular is hitting the same problem.
I don't know exactly how it fetches remote templates, but it returns the same error as the jQuery.get. My CORS config is fine according to Amazon tech support, and as I said, I can get the files directly from S3 on Firefox, so it works in at least one case.
My question is: how do I get this working in all browsers, with CloudFront, and with an Angular templateUrl?
For people coming from Google, a bit more detail:
It turns out Amazon actually does support CORS over SSL when the CORS settings are on the S3 bucket. The trouble starts when CloudFront caches the headers of the CORS response. If you're fetching from an origin that can be reached over both http and https, you'll hit the case where the cached Access-Control-Allow-Origin from CloudFront says http but you want https, which of course makes the browser blow up. To make matters worse, CloudFront caches slightly different versions if you accept compressed content, so if you try to debug this with curl, you may think all is well and then find it isn't in the browser (try passing --compressed to curl).
One, admittedly frustrating, solution is to just ditch CloudFront entirely and serve directly from the S3 bucket.
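If you stay on CloudFront instead, the S3 side of the fix is a CORS rule that allows both schemes of your origin; a boto3 sketch, where the bucket and origins are placeholders (you'd also want CloudFront to forward the Origin header so cached responses don't get mixed up between origins):

    import boto3

    # Placeholders throughout; allow both schemes of the site origin so
    # whichever Access-Control-Allow-Origin value CloudFront caches is
    # still acceptable to the browser.
    boto3.client("s3").put_bucket_cors(
        Bucket="example-bucket",
        CORSConfiguration={
            "CORSRules": [{
                "AllowedOrigins": ["https://www.example.com",
                                   "http://www.example.com"],
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }]
        },
    )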
It looks like Amazon does not currently support SSL together with CORS on CloudFront or S3, which is the crux of the problem. Other CDNs like Limelight or Akamai let you attach your SSL cert to a CNAME, which circumvents the problem, but Amazon does not allow that either, and other CDNs are cost-prohibitive. The best alternative seems to be serving the HTML from your own server on your own domain. Here is a solution for Angular and Rails: https://stackoverflow.com/a/12180837/256066

How can I hide a custom origin server from the public when using AWS CloudFront?

I am not sure if this exactly qualifies for Stack Overflow, but since I need to do this programmatically, and I figure lots of people on SO use CloudFront, I think it does... so here goes:
I want to hide public access to my custom origin server.
CloudFront pulls from the custom origin; however, I cannot find documentation, or any sort of example, on preventing direct requests from users to my origin when it is proxied behind CloudFront, unless my origin is S3... which isn't the case with a custom origin.
What technique can I use to identify/authenticate that a request is being proxied through CloudFront instead of being directly requested by the client?
The CloudFront documentation only covers this case when used with an S3 origin, and the AWS forum post that lists CloudFront's IP addresses carries a disclaimer that the list is not guaranteed to be current and should not be relied upon. See https://forums.aws.amazon.com/ann.jspa?annID=910
I assume that anyone using CloudFront has some way of hiding their custom origin from direct requests and crawlers. I would appreciate any tips to get me started. Thanks.
I would suggest using something similar to Facebook's robots.txt to prevent crawlers from accessing sensitive content on your website.
https://www.facebook.com/robots.txt (you may have to tweak it a bit)
After that, just point your app (e.g. Rails) to be the custom origin server.
Now rewrite all the URLs on your site to be absolute URLs, like:
https://d2d3cu3tt4cei5.cloudfront.net/hello.html
Basically, all URLs should point to your CloudFront distribution. Now if someone requests a file from https://d2d3cu3tt4cei5.cloudfront.net/hello.html and CloudFront does not have hello.html cached, it can fetch it from your server (over an encrypted channel like HTTPS) and then serve it to the user.
So even if the user does a view-source, they won't see your origin server; they'll only see your CloudFront distribution.
More details on setting this up here:
http://blog.codeship.io/2012/05/18/Assets-Sprites-CDN.html
Create a custom CNAME that only CloudFront uses. On your own servers, block any request for static assets that doesn't come via that CNAME.
For instance, if your site is http://abc.mydomain.net, set up a CNAME http://xyz.mydomain.net that points to the exact same place, and put that new domain in CloudFront as the origin pull server. Then, on each request, you can tell whether it came via CloudFront or not and act accordingly.
The downside is that this is security through obscurity. The client never sees requests for http://xyz.mydomain.net, but that doesn't mean they won't have some way of figuring it out.
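As a sketch of that origin-side check, here is framework-agnostic WSGI with the hostname as a placeholder:

    # Minimal WSGI middleware sketch: only serve requests that arrived via
    # the CloudFront-only CNAME. The hostname is a placeholder, and as
    # noted above this is obscurity, not real authentication.
    ALLOWED_HOST = "xyz.mydomain.net"

    class CloudFrontOnly:
        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            host = environ.get("HTTP_HOST", "").split(":")[0]
            if host != ALLOWED_HOST:
                start_response("403 Forbidden",
                               [("Content-Type", "text/plain")])
                return [b"Forbidden"]
            return self.app(environ, start_response)

    # Usage: wrap your WSGI app, e.g. app = CloudFrontOnly(app)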
[I know this thread is old, but I'm answering it for people like me who see it months later.]
From what I've read and seen, CloudFront does not consistently identify itself in requests. But you can get around this problem by overriding robots.txt at the CloudFront distribution.
1) Create a new S3 bucket that only contains one file: robots.txt. That will be the robots.txt for your CloudFront domain.
2) Go to your distribution settings in the AWS Console and click Create Origin. Add the bucket.
3) Go to Behaviors and click Create Behavior:
Path Pattern: robots.txt
Origin: (your new bucket)
4) Set the robots.txt behavior at a higher precedence (lower number).
5) Go to invalidations and invalidate /robots.txt.
Now abc123.cloudfront.net/robots.txt will be served from the bucket and everything else will be served from your domain. You can choose to allow/disallow crawling at either level independently.
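Step 5 can also be scripted; a boto3 sketch with a placeholder distribution ID:

    import time
    import boto3

    # Flush the cached /robots.txt so the new bucket-served copy takes over.
    boto3.client("cloudfront").create_invalidation(
        DistributionId="E2EXAMPLE",  # placeholder
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/robots.txt"]},
            "CallerReference": str(time.time()),  # any unique string
        },
    )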
Another domain or subdomain will also work in place of a bucket, but why go to the trouble?