Is it possible to serve gzipped content with the correct headers on Akamai NetStorage?

As an Akamai NetStorage customer I would like to upload static files and have NetStorage serve them as gzipped content with the correct content-encoding header.
Preferably it would encode these files from the originals and serve either gzipped or uncompressed content depending on the client's Accept-Encoding header.
In Amazon S3 you do this by adding metadata, but I'm unable to locate a similar process for NetStorage.

The short answer is no, it is not possible. Akamai is a CDN, and its value is in content delivery via its edge network. NetStorage is intended to serve as an origin for that edge network; optimizing delivery directly from NetStorage would undermine the very business model.
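For comparison, the S3 approach referred to in the question is roughly: compress the file yourself and attach Content-Encoding metadata at upload time. A minimal sketch using boto3; the bucket, key, and file names are hypothetical, and note that an object uploaded this way is returned gzipped to every client regardless of its Accept-Encoding header.

# Sketch of the S3 approach mentioned in the question: upload a
# pre-compressed file and set Content-Encoding/Content-Type metadata.
# Bucket, key, and file names are hypothetical.
import gzip
import shutil

import boto3

s3 = boto3.client("s3")

# Compress the original file locally first.
with open("app.js", "rb") as src, gzip.open("app.js.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

# Upload the gzipped body, telling S3 (and thus clients) how it is encoded.
s3.upload_file(
    "app.js.gz",
    "my-static-bucket",          # hypothetical bucket name
    "assets/app.js",             # served under the original name
    ExtraArgs={
        "ContentType": "application/javascript",
        "ContentEncoding": "gzip",
        "CacheControl": "max-age=86400",
    },
)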

Related

Is there a way to reupload a file to S3 without having to reset a custom MIME type?

I have two xml files that need (or really want) to be served with specific MIME types that S3 doesn't serve by default. The files are sitemap.xml and rss.xml served as application/xml and application/rss+xml respectively.
I am able to set the Content-Type header for these files no problem.
The problem is that every time my site changes, these files change. I should say that my site is completely static from a web server's perspective. My site is updated by me building the files locally and uploading them to S3. When I upload my updated sitemap.xml and rss.xml files, though, S3 nukes my custom Content-Type settings.
Is there a way to get it to associate these settings with the name of the file as opposed to the instance of the file?
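One common way to avoid losing the setting (a sketch, not from the thread) is to pass the Content-Type explicitly with every upload, since S3 replaces an object's metadata whenever the object is overwritten. The bucket name and local paths below are hypothetical.

# Sketch: set the Content-Type on every upload so an overwrite does not
# lose it. Bucket name and local paths are hypothetical.
import boto3

s3 = boto3.client("s3")

FILES = {
    "sitemap.xml": "application/xml",
    "rss.xml": "application/rss+xml",
}

for name, mime in FILES.items():
    s3.upload_file(
        name,
        "my-site-bucket",                    # hypothetical bucket
        name,
        ExtraArgs={"ContentType": mime},
    )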

How to use Akamai infront of S3 buckets?

I have a static website that is currently hosted on Apache servers. I have an Akamai server which routes requests for my site to those servers. I want to move my static websites to Amazon S3, to get away from having to host those static files on my own servers.
I created an S3 bucket in Amazon and gave it the appropriate policies. I also set up my bucket for static website hosting. It told me that I can access the site at
http://my-site.s3-website-us-east-1.amazonaws.com
I modified my Akamai properties to point to this URL as my origin server. When I go to my website, I get HTTP 504 errors.
What am I missing here?
S3 buckets don't support HTTPS?
Buckets support HTTPS, but not directly in conjunction with the static web site hosting feature.
See Website Endpoints in the S3 Developer Guide for discussion of the feature set differences between the REST endpoints and the web site hosting endpoints.
Note that if you try to connect directly to your web site hosting endpoint over HTTPS with your browser, you will get a timeout error.
The REST endpoint, https://your-bucket.s3.amazonaws.com, will work for providing HTTPS between the bucket and the CDN, as long as there are no dots in the name of your bucket.
Or, if you need the web site hosting features (index documents and redirects), you can place CloudFront between Akamai and S3, encrypting the traffic inside CloudFront as it leaves the AWS network on its way to Akamai (it would still be in the clear from S3 to CloudFront, but that is internal traffic on the AWS network). CloudFront automatically provides HTTPS support on the dddexample.cloudfront.net hostname it assigns to each distribution.
I admit, it sounds a bit silly at first to put CloudFront behind another CDN, but it's really pretty sensible -- CloudFront was designed in part to augment the capabilities of S3. CloudFront also provides Lambda@Edge, which allows injection of logic at four trigger points in the request processing cycle (viewer request, origin request, origin response, and viewer response -- before and after the CloudFront cache, on both the request and the response) where you can modify request and response headers, generate dynamic responses, and make external network requests if needed to implement processing logic.
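As an illustration (not from the original answer), a minimal Lambda@Edge viewer-response handler in Python that injects a response header might look like this; the header name and value are only examples.

# Minimal Lambda@Edge viewer-response handler (Python) that injects a
# header before CloudFront returns the response to the client.
# The header and its value here are only illustrative.
def lambda_handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    headers = response["headers"]

    # CloudFront expects each header as a list of {key, value} dicts,
    # keyed by the lowercased header name.
    headers["x-served-by"] = [{"key": "X-Served-By", "value": "lambda-at-edge"}]

    return response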
I faced this problem recently, and as mentioned by Michael - sqlbot, putting CloudFront between Akamai and the S3 bucket could be a workaround, but that means running a CDN behind another CDN. I strongly recommend configuring the redirects and customizing the origin-error response directly in Akamai (using the REST API endpoint of your bucket as the origin). You'll need to create three rules. First, go to CDN > Properties and select your property, Edit New Version based on the last one, and click Add Rule in the Property Configuration Settings section. The first rule is responsible for redirecting empty paths to index.html; create it just like the image below:
builtin.AK_PATH is an Akamai variable. The next rule is responsible for redirecting paths other than the static ones (html, ico, json, js, css, jpg, png, gif, etc.) to /index.html:
The last rule is responsible for customizing the error response when the origin throws an HTTP error code (just like the CloudFront Error Pages). When the origin returns a 404 or 403 HTTP status code, Akamai will call the Failover Hostname Edge Server (which is inside the Akamai network) with the /index.html path. This setup is triggered when refreshing pages in the browser and when the application has redirection links (which open new tabs, for example). In the Property Hostnames section, add a new hostname that will work as the Failover Hostname Edge Server; the name should have fewer than 16 characters, then add the -a.akamaihd.net suffix to it (that's the Akamai pattern). For example: failover-a.akamaihd.net:
Finally, create a new empty rule just like the image below (type the hostname that you just created in the Alternate Hostname in This Property section):
Since you are already using Akamai as a CDN, you could simply use their NetStorage product line to achieve this in a simplified manner.
All you would need to do is move the content from S3 to Akamai, and it would take care of the rest (hosting, distribution, scaling, security, redundancy).
The origin settings in the Luna control panel could then simply point to the NetStorage FTP location. This would also remove the network latency otherwise present when accessing the S3 bucket from the Akamai network.
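As an illustration of the "move the content" step, pushing the static files to a NetStorage FTP location might look roughly like this; the host, credentials, and paths below are hypothetical placeholders.

# Sketch: push local static files to a NetStorage FTP location.
# Host, credentials, and remote path are hypothetical placeholders.
import os
from ftplib import FTP

LOCAL_DIR = "build"
REMOTE_DIR = "/12345/static"                 # hypothetical CP-code path

ftp = FTP("example.ftp.upload.akamai.com")   # hypothetical NetStorage host
ftp.login("example-user", "example-password")
ftp.cwd(REMOTE_DIR)

for name in os.listdir(LOCAL_DIR):
    path = os.path.join(LOCAL_DIR, name)
    if os.path.isfile(path):
        with open(path, "rb") as fh:
            ftp.storbinary(f"STOR {name}", fh)

ftp.quit()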

How to enable Keep Alive connection in AWS S3 or CloudFront?

How do I enable Keep-Alive connections in AWS S3 or CloudFront? I uploaded images to S3 and found that the URLs don't use a keep-alive connection. They cannot be cached by the client application even though I added Cache-Control headers to each image file.
From the tag wiki for Keep-Alive:
A feature of HTTP where the same connection is used for multiple requests, speeding up downloading of web pages with multiple resources.
I'm not aware of any relation that this has to cache behavior. I usually see mentions of Keep-Alive headers in relation to long-polling, which wouldn't make any sense to enable on S3.
I think you are incorrectly linking keep-alive headers with your browser's ability to cache static content. The cache-control headers should be all that is needed for caching of static content in the browser.
Are you verifying that the response from CloudFront includes the cache-control headers you have set on the S3 objects? Perhaps you need to invalidate the CloudFront cache after you updated the headers.
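A quick way to check what actually comes back from CloudFront is to request an object and inspect the headers; a small sketch, with a hypothetical URL.

# Sketch: inspect the headers CloudFront actually returns for an object,
# to confirm the Cache-Control set on the S3 object is coming through.
# The URL is hypothetical.
import requests

resp = requests.head("https://d111111abcdef8.cloudfront.net/images/logo.png")
print(resp.status_code)
print(resp.headers.get("Cache-Control"))
print(resp.headers.get("X-Cache"))  # e.g. "Hit from cloudfront" / "Miss from cloudfront"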
Related to your question, I think the problem is in setting a correct TTL (> 0) on your origins/behaviors in CloudFront.
Also, AWS CloudFront (since 30 March 2017) lets you set custom read and keep-alive timeouts for custom origins.
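For reference, those timeouts are fields on the distribution's custom origin configuration. A sketch of the relevant fragment as it would appear in a boto3 DistributionConfig; the values and origin details are only illustrative, and a real update_distribution call needs the full configuration plus the distribution's current ETag.

# Fragment of a CloudFront DistributionConfig showing where the custom
# read and keep-alive timeouts are set for a custom origin (values are
# illustrative; a real update_distribution call needs the full config
# plus the distribution's current ETag).
origin = {
    "Id": "my-custom-origin",             # hypothetical origin id
    "DomainName": "www.example.com",
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "https-only",
        "OriginReadTimeout": 60,          # response timeout, seconds
        "OriginKeepaliveTimeout": 30,     # keep-alive idle timeout, seconds
    },
}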

Is it possible to enable CORS on AWS CloudFront without S3?

I'm using CloudFront CDN to simply cache my static contents in "Origin Pull" mode. The CloudFront origin is my website.
However I've encountered a CORS problem. My browser doesn't let my web pages load my font files from CloudFront ... The ironic thing about it is that those fonts were fetched and cached from my website in the first place :(
After googling this matter a bit, I noticed that all blogs/tutorials explain how to enable CORS on an S3 bucket used as the origin for CloudFront, and letting CloudFront forward the Access-Control-Allow-XXX headers from S3 to the client.
I don't need an S3 bucket and would like to keep it that way for the sake of simplicity, if possible.
Is it possible to enable CORS on CloudFront? Even a quick and dirty solution, such as setting the access control header on all responses, would be good enough.
Or what other alternatives do I have on CloudFront? If the easiest alternative is indeed to use an S3 bucket, what are the drawbacks (modifications to my website, service performance, and cost)?
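As an illustration of the quick-and-dirty approach described above (having the origin add the access control header to every response, which CloudFront then caches and forwards), here is a minimal sketch; Flask is used purely as a stand-in origin server, and the paths are hypothetical.

# Sketch of the "quick and dirty" approach: have the origin attach
# Access-Control-Allow-Origin to every response, so CloudFront simply
# caches and forwards it. Flask is only a stand-in origin server here.
from flask import Flask, send_from_directory

app = Flask(__name__)

@app.after_request
def add_cors_header(response):
    # "*" is the bluntest option; echoing back a whitelisted Origin
    # header is usually preferable on a real site.
    response.headers["Access-Control-Allow-Origin"] = "*"
    return response

@app.route("/fonts/<path:filename>")
def fonts(filename):
    # Directory name is hypothetical.
    return send_from_directory("static/fonts", filename)

if __name__ == "__main__":
    app.run()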

Serving Angular JS HTML templates from S3 and CloudFront - CORS problems

I'm having a doozy of a time trying to serve static HTML templates from Amazon CloudFront.
I can perform a jQuery.get on Firefox for my HTML hosted on S3 just fine. The same thing for CloudFront returns an OPTIONS 403 Forbidden. And I can't perform an ajax get for either S3 or CloudFront files on Chrome. I assume that Angular is having the same problem.
I don't know how it fetches remote templates, but it's returning the same error as a jQuery.get. My CORS config is fine according to Amazon tech support and as I said I can get the files directly from S3 on Firefox so it works in one case.
My question is, how do I get it working in all browsers and with CloudFront and with an Angular templateUrl?
For people coming from Google, a bit more detail:
Turns out Amazon actually does support CORS via SSL when the CORS settings are on an S3 bucket. The bad part comes in when CloudFront caches the headers for the CORS response. If you're fetching from an origin that could be mixed HTTP and HTTPS, you'll run into the case where the allowed origin cached by CloudFront says http but you want https. That of course causes the browser to blow up. To make matters worse, CloudFront will cache slightly differing versions if you accept compressed content. Thus if you try to debug this with curl, you'll think all is well and then find it isn't in the browser (try passing --compressed to curl).
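To make that pitfall concrete, a small sketch (not from the original answer) that requests the same CloudFront URL with and without compression and compares the CORS header that comes back; the URL and origin are hypothetical.

# Sketch: reproduce the debugging pitfall described above. Request the
# same CloudFront URL with and without compression, sending an explicit
# Origin header, and compare the CORS header returned in each case.
# The URL and origin are hypothetical.
import requests

url = "https://d111111abcdef8.cloudfront.net/templates/widget.html"

for accept_encoding in ("identity", "gzip"):
    resp = requests.get(
        url,
        headers={
            "Origin": "https://www.example.com",
            "Accept-Encoding": accept_encoding,
        },
    )
    print(accept_encoding, resp.headers.get("Access-Control-Allow-Origin"))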
One, admittedly frustrating, solution is just to ditch the entire CloudFront thing and serve directly from the S3 bucket.
It looks like Amazon does not currently support SSL and CORS on CloudFront or S3, which is the crux of the problem. Other CDNs like Limelight or Akamai allow you to add your SSL cert to a CNAME, which circumvents the problem, but Amazon does not allow that either, and other CDNs are cost prohibitive. The best alternative seems to be serving the HTML from your own server on your domain. Here is a solution for Angular and Rails: https://stackoverflow.com/a/12180837/256066
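For completeness, when serving directly from the S3 bucket, the CORS configuration lives on the bucket itself. A sketch of setting it with boto3; the bucket name and allowed origin are hypothetical.

# Sketch: apply a CORS configuration to the S3 bucket itself, which is
# what governs cross-origin requests when serving straight from S3.
# Bucket name and allowed origin are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="my-templates-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://www.example.com"],
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)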