I need to host some JavaScript files on Amazon S3 and would like to serve them over HTTPS through the Amazon S3 URL, not through my own domain.com:
https://my.website.s3.amazonaws.com/js/custom.js
Is this offered for free or do I need to buy an SSL certificate?
It's free. Basically, Amazon provides the certificate for https://*.s3.amazonaws.com. If you are using your own DNS entry over https://, then you need your own certificate.
Keep in mind that you might have to make cross-domain JavaScript requests unless all of your site is served from https://my.website.s3.amazonaws.com.
To make cross-domain JavaScript requests you need to use JSONP. You can read more about it here: Make cross-domain ajax JSONP request with jQuery
Or you can look at something like easyXDM.
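As a rough illustration (a minimal sketch; the bucket URL, file name, and callback name are placeholders, not from the question), a jQuery JSONP request against a static file on S3 could look like this. Because S3 only serves static content, the file itself has to call a fixed callback name:

```javascript
// Hypothetical example: the S3 object js/data.jsonp contains a line such as
//   handleData({"greeting": "hello from S3"});
// i.e. the payload is pre-wrapped in the fixed callback, since S3 cannot
// echo a dynamic callback parameter the way a server-side JSONP endpoint would.
$.ajax({
  url: "https://my.website.s3.amazonaws.com/js/data.jsonp",
  dataType: "jsonp",
  jsonp: false,                 // don't append a dynamic callback=? parameter
  jsonpCallback: "handleData",  // must match the function name used in the file
  cache: true,                  // avoid the "_=" cache-busting parameter
  success: function (data) {
    console.log(data.greeting);
  },
  error: function (xhr, status, err) {
    console.error("JSONP request failed:", status, err);
  }
});
```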
[Edit]
If for some reason your bucket name contains one or more dots, for example mybucket.is.great, you can use the old-style (path-style) URL:
https://s3.amazonaws.com/mybucket.is.great/<bucket object>
Our application has 2 domains.
http://www.example.org
and
https://secure.example.org
We are planning to decommission https://secure.example.org and have just one secure domain name: https://www.example.org
But we want to make sure any old URL still works and gets redirected to the new URL.
http://www.example.org/my-url should redirect you to https://www.example.org/my-url
https://secure.example.org/my-url should redirect you to https://www.example.org/my-url.
The question is: should the redirect be done at the CDN or at the WAF? We could also do it at the Apache web server, but we would like to avoid extra hops. What is the best approach, and what are the pros and cons of each?
AWS CloudFront does not support redirects natively, but this can be achieved by using Lambda or by using S3. Is there any concern if we use the WAF for redirects?
I'm not sure why you need a CDN for this and I'm fairly certain this is not a feature of AWS WAF. If your domain names are managed inside AWS (Route53) you can simply create an alias record that points the old record at the new one.
If your domain names are managed outside of AWS, try migrating them to Route 53. If you were going to use CloudFront (the AWS CDN) to do this, you could put it in front of your old URL, but it would still require that you place an alias on the CDN. With CloudFront you can configure HTTP-to-HTTPS redirects if that is your interest in using the CDN.
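If you do end up routing secure.example.org through CloudFront, one possible way to issue the host redirect at the edge is a small Lambda@Edge viewer-request function, as hinted at in the question. This is a hedged sketch only: the hostnames come from the question, while the handler wiring and trigger choice are assumptions.

```javascript
'use strict';

// Hypothetical Lambda@Edge viewer-request handler for a distribution serving
// the old secure.example.org hostname: every request is answered with a 301
// to the same path (and query string) on https://www.example.org.
exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;
  const location = 'https://www.example.org' + request.uri +
    (request.querystring ? '?' + request.querystring : '');

  const response = {
    status: '301',
    statusDescription: 'Moved Permanently',
    headers: {
      location: [{ key: 'Location', value: location }],
    },
  };

  callback(null, response);
};
```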
I have a static website that is currently hosted on Apache servers. I have an Akamai server which routes requests for my site to those servers. I want to move my static websites to Amazon S3, to get away from having to host those static files on my own servers.
I created an S3 bucket in Amazon and gave it the appropriate policies. I also set up the bucket for static website hosting. It told me that I can access the site at
http://my-site.s3-website-us-east-1.amazonaws.com
I modified my Akamai properties to point to this URL as my origin server. When I go to my website, I get HTTP 504 errors.
What am I missing here?
S3 buckets don't support HTTPS?
Buckets support HTTPS, but not directly in conjunction with the static web site hosting feature.
See Website Endpoints in the S3 Developer Guide for discussion of the feature set differences between the REST endpoints and the web site hosting endpoints.
Note that if you try to connect directly to your website hosting endpoint with your browser over HTTPS, you will get a timeout error.
The REST endpoint https://your-bucket.s3.amazonaws.com will work for providing HTTPS between the bucket and the CDN, as long as there are no dots in the name of your bucket.
Or, if you need the website hosting features (index documents and redirects), you can place CloudFront between Akamai and S3, with CloudFront encrypting the traffic as it leaves the AWS network on its way to Akamai (it would still be in the clear from S3 to CloudFront, but that is internal traffic on the AWS network). CloudFront automatically provides HTTPS support on the dddexample.cloudfront.net hostname it assigns to each distribution.
I admit, it sounds a bit silly, initially, to put CloudFront behind another CDN, but it's really pretty sensible -- CloudFront was designed in part to augment the capabilities of S3. CloudFront also provides Lambda@Edge, which allows injection of logic at four trigger points in the request processing cycle (before and after the CloudFront cache, during the request and during the response), where you can modify request and response headers, generate dynamic responses, and make external network requests if needed to implement processing logic.
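To give a feel for what such a trigger looks like (a minimal sketch, not part of the original answer; the header value is an arbitrary placeholder), an origin-response Lambda@Edge function that tweaks what S3 returns before CloudFront caches it and hands it to Akamai might look like this:

```javascript
'use strict';

// Hypothetical Lambda@Edge origin-response handler: adds a Cache-Control
// header to the object returned by S3 before CloudFront caches it. The
// max-age value is an arbitrary placeholder.
exports.handler = (event, context, callback) => {
  const response = event.Records[0].cf.response;

  response.headers['cache-control'] = [
    { key: 'Cache-Control', value: 'public, max-age=86400' },
  ];

  callback(null, response);
};
```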
I faced this problem recently and, as mentioned by Michael - sqlbot, putting CloudFront between Akamai and the S3 bucket could be a workaround, but then you are using one CDN behind another CDN. I strongly recommend configuring the redirects, and customizing the response on origin errors, directly in Akamai (using the REST API endpoint of your bucket). You'll need to create three rules. First, go to CDN > Properties and select your property, Edit New Version based on the last one, and click on Add Rule in the Property Configuration Settings section. The first rule is responsible for redirecting empty paths to index.html; create it just like the image below:
builtin.AK_PATH is an Akamai variable. The next rule is responsible for redirecting paths other than the static ones (html, ico, json, js, css, jpg, png, gif, etc.) to /index.html:
The last rule is responsible for customizing the error response when the origin throws an HTTP error code (just like CloudFront's Error Pages). When the origin returns a 404 or 403 HTTP status code, Akamai will call the Failover Hostname Edge Server (which is inside the Akamai network) with the /index.html path. This setup is triggered when refreshing pages in the browser and when the application has redirection links (which open new tabs, for example). In the Property Hostnames section, add a new hostname that will work as the Failover Hostname Edge Server; the name should have fewer than 16 characters, and then add the -a.akamaihd.net suffix to it (that's the Akamai pattern). For example: failover-a.akamaihd.net:
Finally, create a new empty rule just like the image below (type the hostname that you just created in the Alternate Hostname in This Property section):
Since you are already using Akamai as a CDN, you could simply use their NetStorage product line to achieve this in a simplified manner.
All you would need to do is move the content from S3 to Akamai, and it would take care of the rest (hosting, distribution, scaling, security, redundancy).
The origin settings on the Luna control panel could simply point to the NetStorage FTP location. This will also remove the network latency otherwise present when accessing the S3 bucket from the Akamai network.
We are considering moving our file delivery to Cloudfront.
Currently, we generate "secure" URLs for our file delivery on an individual basis, which look like:
http://downloads.xxxxx.com/1/2005-01-01_2006-01-01.csv?AWSAccessKeyId=012NFZM3D44FSG20CP82&Expires=1495287427&response-cache-control=No-cache&response-content-disposition=attachment%3B%20filename%3D2005-01-01_2006-01-01.csv&Signature=tWAeES3rhAlv2SQoZkqyYJEexH0%3D
Is there an easy way to apply CloudFront to the URL above, or do we need to configure them from scratch using signed URLs? Would the S3 authenticated URL "pass through" using CloudFront if I create a simple distribution there?
It is theoretically possible to configure CloudFront to pass through an S3 signed URL, but doing so would defeat all caching, so... no, it's not a viable solution.
CloudFront uses an entirely different algorithm for signed URLs, so there's not a way to simply transform an existing S3 signed URL into a CloudFront signed URL.
Note also that you'll need to embed the existing response-* parameters in the URL before signing it. CloudFront should still pass them through so that S3 can modify its response as indicated.
Unrelated: one feature you might find interesting is that with CloudFront signed URLs, you actually have the optional ability to embed the client's IP address in the URL in such a way that the URL can only be used from a single IP address. This isn't something that can be done in a straightforward way with an S3 signed URL.
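For illustration only, here is a hedged sketch of generating such an IP-restricted CloudFront signed URL with a custom policy. It assumes the @aws-sdk/cloudfront-signer package from the AWS SDK for JavaScript v3; the distribution hostname, key pair ID, private key path, and IP address are placeholders, not values from the question.

```javascript
// Hypothetical sketch: a CloudFront signed URL locked to one client IP via a
// custom policy. All identifiers below are placeholders/assumptions.
const fs = require("fs");
const { getSignedUrl } = require("@aws-sdk/cloudfront-signer");

// Include the response-* parameters before signing, as noted above.
const url =
  "https://d111111abcdef8.cloudfront.net/1/2005-01-01_2006-01-01.csv" +
  "?response-content-disposition=attachment%3B%20filename%3D2005-01-01_2006-01-01.csv";

// Custom policy: valid for one hour and only usable from the given client IP.
const policy = JSON.stringify({
  Statement: [
    {
      Resource: url,
      Condition: {
        IpAddress: { "AWS:SourceIp": "203.0.113.45/32" },
        DateLessThan: { "AWS:EpochTime": Math.floor(Date.now() / 1000) + 3600 },
      },
    },
  ],
});

const signedUrl = getSignedUrl({
  url,
  keyPairId: "K2JCJMDEHXQW5F",                               // CloudFront public key ID (placeholder)
  privateKey: fs.readFileSync("cf-private-key.pem", "utf8"), // matching private key (placeholder)
  policy,                                                    // custom policy instead of a plain expiry
});

console.log(signedUrl);
```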
I am trying to have links in the emails from my application register as secure SSL/HTTPS links. This helps deliverability and other things email clients may do when treating links as HTTP vs. HTTPS.
Our application uses SendGrid to send emails, which also supports click tracking on our links for us. In order to do this, SendGrid (and most other email sending services) replaces the original link we put in, which was an https://blahblah.com link, with their own link, such as http://clicktrack.sendgrid.net, which is not HTTPS but rather HTTP.
SendGrid supports "white labeling" the click tracking link with something like http://subdomain.blahblah.com, and also an HTTPS version if we set it up properly. SendGrid's requirements for an HTTPS/SSL link are shown here:
https://sendgrid.com/docs/Classroom/Build/Add_Content/content_delivery_networks.html
Basically they are asking us to set up a CDN or other server that will host our SSL certificates, terminate the SSL, and then forward the request on to their servers. Once that is in place, they can "turn on" SSL on their end for our email links.
I tried setting this up in AWS CloudFront with the origin as sendgrid.net, the distribution having our SSL certificate, and a Route 53 CNAME pointing to our distribution. So subdomain.blahblah.com points to the CloudFront distribution, the distribution points to SendGrid, and all should work.
Testing this, though, it does NOT work. If I go to the HTTP version of the subdomain it does work; the CDN forwards properly. AWS support suggested it was an issue related to Host headers and the CDN not being able to validate the origin when I had a second CNAME for the origin on my subdomain2.blahblah.com. That led me to remove the second CNAME and directly put SendGrid as the origin, but that hasn't worked and they haven't provided a solution yet. I get an error like this:
ERROR
The request could not be satisfied.
CloudFront wasn't able to connect to the origin.
Generated by cloudfront (CloudFront)
Request ID: pl1bS3OObC6mUd2vyyhM6bNFt3xyLsfzVIqNmiPkEO7mQgJyQCn_pA==
Any ideas welcome or a different way to do this?
The issue was in the cache behaviors: I was forwarding all headers. You should NOT forward the "Host" header in this situation, or the SSL call to the origin will break because the host won't match what the origin expects. AWS support did finally figure this out and recommended it to me :)
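For reference, here is a hedged sketch of making that change programmatically. It assumes the distribution still uses the legacy ForwardedValues settings rather than a cache policy, and uses the @aws-sdk/client-cloudfront package from the AWS SDK for JavaScript v3; the distribution ID is a placeholder.

```javascript
// Hypothetical sketch: remove "Host" from the forwarded-headers whitelist of a
// CloudFront distribution so the origin (e.g. sendgrid.net) sees its own hostname.
const {
  CloudFrontClient,
  GetDistributionConfigCommand,
  UpdateDistributionCommand,
} = require("@aws-sdk/client-cloudfront");

const client = new CloudFrontClient({ region: "us-east-1" });

async function dropHostHeader(distributionId) {
  // Fetch the current config along with the ETag required for updates.
  const { DistributionConfig, ETag } = await client.send(
    new GetDistributionConfigCommand({ Id: distributionId })
  );

  const forwarded = DistributionConfig.DefaultCacheBehavior.ForwardedValues;
  if (forwarded && forwarded.Headers) {
    const items = (forwarded.Headers.Items || []).filter(
      (name) => name.toLowerCase() !== "host"
    );
    forwarded.Headers = { Quantity: items.length, Items: items };
  }

  await client.send(
    new UpdateDistributionCommand({
      Id: distributionId,
      IfMatch: ETag,
      DistributionConfig,
    })
  );
}

dropHostHeader("E1A2B3C4D5E6F7").catch(console.error); // placeholder distribution ID
```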
I'm having a doozy of a time trying to serve static HTML templates from Amazon CloudFront.
I can perform a jQuery.get on Firefox for my HTML hosted on S3 just fine. The same thing for CloudFront returns an OPTIONS 403 Forbidden. And I can't perform an ajax get for either S3 or CloudFront files on Chrome. I assume that Angular is having the same problem.
I don't know how it fetches remote templates, but it's returning the same error as a jQuery.get. My CORS config is fine according to Amazon tech support, and as I said, I can get the files directly from S3 on Firefox, so it works in one case.
My question is, how do I get it working in all browsers and with CloudFront and with an Angular templateUrl?
For people coming from Google, a bit more detail:
It turns out Amazon actually does support CORS over SSL when the CORS settings are on an S3 bucket. The bad part comes in when CloudFront caches the headers of the CORS response. If you're fetching from an origin that could be mixed HTTP and HTTPS, you'll run into the case where the allowed origin cached by CloudFront says http but you want https. That, of course, causes the browser to blow up. To make matters worse, CloudFront will cache slightly different versions if you accept compressed content. Thus, if you try to debug this with curl, you'll think all is well and then find that it isn't in the browser (try passing --compressed to curl).
One, admittedly frustrating, solution is to just ditch CloudFront entirely and serve directly from the S3 bucket.
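For completeness, a minimal sketch of applying the kind of bucket CORS configuration mentioned above, using the AWS SDK for JavaScript v3 (the bucket name and allowed origins are placeholders, not values from the question):

```javascript
// Hypothetical example: set a CORS rule on the bucket so browsers can fetch the
// templates cross-origin. Bucket name and origins are placeholders.
const { S3Client, PutBucketCorsCommand } = require("@aws-sdk/client-s3");

const s3 = new S3Client({ region: "us-east-1" });

async function applyCors() {
  await s3.send(
    new PutBucketCorsCommand({
      Bucket: "my-template-bucket",
      CORSConfiguration: {
        CORSRules: [
          {
            // List both schemes if clients may arrive over http and https,
            // since the Access-Control-Allow-Origin value has to match.
            AllowedOrigins: ["https://www.example.org", "http://www.example.org"],
            AllowedMethods: ["GET", "HEAD"],
            AllowedHeaders: ["*"],
            ExposeHeaders: ["ETag"],
            MaxAgeSeconds: 3000,
          },
        ],
      },
    })
  );
}

applyCors().catch(console.error);
```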
It looks like Amazon does not currently support SSL and CORS together on CloudFront or S3, which is the crux of the problem. Other CDNs like Limelight or Akamai allow you to add your SSL cert to a CNAME, which circumvents the problem, but Amazon does not allow that either, and other CDNs are cost-prohibitive. The best alternative seems to be serving the HTML from your own server on your domain. Here is a solution for Angular and Rails: https://stackoverflow.com/a/12180837/256066