I have an ALB that currently routes traffic to multiple URLs. I'd like to be able to route traffic to a static S3 site in the event that we need to perform maintenance. We would then display a static "Maintenance" page instead of our login page.
I have created a CloudFront distribution that allows an S3 site to be loaded with an SSL cert, but I am not sure how to connect that distribution so that all of the traffic is sent to the S3 maintenance site.
This is the Terraform ALB listener I'm using. Can I specify my CloudFront distribution ARN in the target_group and have it route all traffic to the static site?
Or could I simply link my S3 bucket ARN here, with an S3 policy allowing the ALB access to get the bucket objects?
resource "aws_alb_listener" "ssl_alb_httpslistener" {
load_balancer_arn = "${aws_alb.alb_lis.arn}"
port = "443"
protocol = "HTTPS"
ssl_policy = "Sec-TLS"
certificate_arn = "${var.ssl_cert_arn}"
default_action {
target_group_arn = "${data.terraform_remote_state.php.target_arn}"
type = "forward"
}
}
I would expect that I could route traffic that passes through an ALB to a static S3 site from the target_group. Curious if this is the best way to go about this.
The simple answer is to use the redirect option on the ALB to forward traffic to a new URL. My Route 53 URL is connected to a CloudFront distribution linked to the S3 bucket. Here I was able to specify a single redirect URL and keep my HTTPS traffic options with minimal infrastructure modifications.
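In Terraform, that can be expressed as a redirect default action on the existing listener. A minimal sketch, assuming a hypothetical maintenance.example.com record aliased to the CloudFront distribution:

resource "aws_alb_listener" "ssl_alb_httpslistener" {
  load_balancer_arn = "${aws_alb.alb_lis.arn}"
  port              = "443"
  protocol          = "HTTPS"
  ssl_policy        = "Sec-TLS"
  certificate_arn   = "${var.ssl_cert_arn}"

  default_action {
    type = "redirect"

    redirect {
      # hypothetical Route 53 record pointing at the CloudFront distribution
      host        = "maintenance.example.com"
      port        = "443"
      protocol    = "HTTPS"
      status_code = "HTTP_302"
    }
  }
}

Using HTTP_302 keeps the redirect temporary, so browsers won't cache it once maintenance is over.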
You can now have a Lambda function as a target group target, and from Lambda you can fetch objects from S3, make a CloudFront (HTTP) GET request, etc.
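A rough Terraform sketch of that wiring (the aws_lambda_function.maintenance resource is hypothetical):

# allow the ALB to invoke the function
resource "aws_lambda_permission" "alb_invoke" {
  statement_id  = "AllowExecutionFromALB"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.maintenance.function_name}"
  principal     = "elasticloadbalancing.amazonaws.com"
  source_arn    = "${aws_lb_target_group.maintenance.arn}"
}

# a target group whose target type is "lambda" (no port/protocol needed)
resource "aws_lb_target_group" "maintenance" {
  name        = "maintenance-page"
  target_type = "lambda"
}

# register the function with the target group; the permission must exist first
resource "aws_lb_target_group_attachment" "maintenance" {
  target_group_arn = "${aws_lb_target_group.maintenance.arn}"
  target_id        = "${aws_lambda_function.maintenance.arn}"
  depends_on       = ["aws_lambda_permission.alb_invoke"]
}

The target group's ARN can then be used in the listener's default_action in place of the forward target.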
I faced the same issue of the 1 MB response limit with ALB and Lambda, and used compression as a workaround.
Set a Content-Encoding header in the response (e.g. gzip) and compress your file(s) accordingly.
We have a (django) web application that is running at example.com, and we would like to serve some static assets on S3 via CloudFront from this same domain. So if we had a file with key assets/img.jpg, we would be able to access it via example.com/assets/img.jpg.
We have been attempting to use this guide but have only been able to get it working with a subdomain to access CloudFront, e.g. static.example.com/assets/img.jpg.
The issue we are running into is the DNS setup for this: there is already a CNAME for example.com (the web app) that routes traffic to the server, and we are unable to create a second entry with the same name example.com pointing to the CloudFront distribution.
In order to do this, go to the configuration for your CloudFront distribution.
From there, you need to create another "origin" that points to that S3 bucket, and then a "behavior" for the "/assets/*" path to send the traffic to that S3 bucket.
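If the distribution is managed with Terraform rather than the console, the extra origin and behavior might look roughly like this (the app hostname and the aws_s3_bucket.assets resource are hypothetical):

resource "aws_cloudfront_distribution" "web" {
  enabled = true

  # primary origin: the Django application
  origin {
    domain_name = "app.example.com" # hypothetical app origin hostname
    origin_id   = "django-app"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  # second origin: the S3 bucket holding the static assets
  origin {
    domain_name = "${aws_s3_bucket.assets.bucket_regional_domain_name}"
    origin_id   = "s3-assets"
  }

  # everything not matched by a more specific behavior goes to the app
  default_cache_behavior {
    target_origin_id       = "django-app"
    allowed_methods        = ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"]
    cached_methods         = ["GET", "HEAD"]
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = true
      headers      = ["*"]

      cookies {
        forward = "all"
      }
    }
  }

  # send /assets/* to the S3 origin instead of the app
  ordered_cache_behavior {
    path_pattern           = "/assets/*"
    target_origin_id       = "s3-assets"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}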
I've got a pretty specific problem here: we have an existing system that we maintain, and it uses subdomains to route people to specific apps.
On a traditional server that works as follows: we have a wildcard subdomain, *.domain.com, that routes to nginx, which serves up a folder.
So myapp.domain.com > nginx > serves up the myapp app folder > the myapp folder contains a static site.
I'm trying to migrate this in some way to AWS. I basically need to do a similar thing in AWS. I toyed with the idea of putting each static app into an S3 bucket and then the wildcard domain in Route 53, but I'm unsure how S3 would know which folder to serve up, as that functionality isn't part of Route 53.
Anyone have any suggestions?
CloudFront + Lambda@Edge + S3 can do this "serverless."
Lambda@Edge is a CloudFront enhancement that allows attributes of requests and responses to be represented and manipulated as simple JavaScript objects. Triggers can be provisioned to fire during request processing, either before the cache is checked ("viewer request" trigger) or, following a cache miss, before the request proceeds to the back-end (the "origin server", an S3 website hosting endpoint in this case) ("origin request" trigger)... or during response processing, after the response is received from the origin but before it is considered for storing in the CloudFront cache ("origin response" trigger), or when finalizing the response to the browser ("viewer response" trigger). Response triggers can also examine the original request object.
The following snippet is something I originally posted at the AWS Forums. It is an Origin Request trigger which compares the original hostname to your pattern (e.g. the domain must match *.example.com) and, if it does, the request for subdomain-here.example.com is served from a folder named for the subdomain.
lol.example.com/cat.jpg -> my-bucket/lol/cat.jpg
funny-pics.example.com/cat.jpg -> my-bucket/funny-pics/cat.jpg
In this way, static content from as many subdomains as you like can all be served from a single bucket.
In order to access the original incoming Host header, CloudFront needs to be configured to whitelist the Host header for forwarding to the origin, even though the net result of the Lambda function's execution will be to modify that value before the origin actually sees it.
The code is actually very simple -- most of the following is explanatory comments.
'use strict';

// if the end of the incoming Host header matches this string,
// strip this part and prepend the remaining characters onto the request path,
// along with a new leading slash (otherwise, the request will be handled
// with an unmodified path, at the root of the bucket)
const remove_suffix = '.example.com';

// provide the correct origin hostname here so that we send the correct
// Host header to the S3 website endpoint
const origin_hostname = 'example-bucket.s3-website.us-east-2.amazonaws.com'; // see comments, below

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;
    const host_header = headers.host[0].value;

    if (host_header.endsWith(remove_suffix)) {
        // prepend '/' + the subdomain onto the existing request path ("uri")
        request.uri = '/' + host_header.substring(0, host_header.length - remove_suffix.length) + request.uri;
    }

    // fix the host header so that S3 understands the request
    headers.host[0].value = origin_hostname;

    // return control to CloudFront with the modified request
    return callback(null, request);
};
Note that index documents and redirects from S3 may also require an Origin Response trigger to normalize the Location header against the original request. This will depend on exactly which S3 website features you use. But the above is a working example that illustrates the general idea.
Note that const origin_hostname needs to be set to the bucket's endpoint hostname as configured in the CloudFront origin settings. In this example, the bucket is in us-east-2 with the web site hosting feature active.
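If you manage the distribution with Terraform, the Host whitelisting and the origin request trigger described above might be wired roughly like this (the aws_lambda_function.subdomain_router resource and the wildcard certificate variable are hypothetical):

resource "aws_cloudfront_distribution" "subdomains" {
  enabled = true
  aliases = ["*.example.com"]

  origin {
    domain_name = "example-bucket.s3-website.us-east-2.amazonaws.com"
    origin_id   = "s3-website"

    # S3 website hosting endpoints only speak plain HTTP
    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    target_origin_id       = "s3-website"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false
      headers      = ["Host"] # whitelist Host so the trigger sees the original value

      cookies {
        forward = "none"
      }
    }

    # Lambda@Edge requires a versioned (qualified) function ARN
    lambda_function_association {
      event_type = "origin-request"
      lambda_arn = "${aws_lambda_function.subdomain_router.qualified_arn}" # hypothetical
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn = "${var.wildcard_cert_arn}" # hypothetical *.example.com ACM cert
    ssl_support_method  = "sni-only"
  }
}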
Create a CloudFront distribution.
Add all the alternate domain names (CNAMEs) to the distribution.
Add a custom origin pointing at the EC2 server.
Set behaviours as per your requirements.
Configure nginx virtual hosts on the server to route to specific folders (a Terraform sketch of the distribution side follows below).
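A rough Terraform sketch of such a distribution (the hostnames, aliases, and certificate variable are hypothetical); the important detail is forwarding the Host header so nginx can select the right virtual host:

resource "aws_cloudfront_distribution" "apps" {
  enabled = true
  aliases = ["app1.example.com", "app2.example.com"]

  origin {
    domain_name = "origin.example.com" # hypothetical hostname of the EC2 server
    origin_id   = "ec2-nginx"

    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "https-only"
      origin_ssl_protocols   = ["TLSv1.2"]
    }
  }

  default_cache_behavior {
    target_origin_id       = "ec2-nginx"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = true
      headers      = ["Host"] # nginx needs the original Host to pick the virtual host

      cookies {
        forward = "none"
      }
    }
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    acm_certificate_arn = "${var.cert_arn}" # hypothetical ACM cert covering the subdomains
    ssl_support_method  = "sni-only"
  }
}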
I am using AWS S3 static website hosting for my Vue.js SPA. I have set up routing rules in S3, and it works perfectly fine when I access the site using the S3 static hosting URL. But I have also configured CloudFront to use it with my custom domain. Since single-page apps need to be routed via index.html, I have set up a custom error page in CloudFront to redirect 404 errors to index.html. So now the routing rules I set up in S3 no longer work.
What is the best way to get the S3 routing rules to work along with the CloudFront custom error page setup for an SPA?
I think I am a bit late, but here goes anyway.
Apparently you can't do that if you are using the S3 REST API endpoint (example-bucket.s3.amazonaws.com) as the origin for your CloudFront distribution; you have to use the S3 website URL provided by S3 as the origin (example-bucket.s3-website-[region].amazonaws.com). Also, objects must be public; you can't lock your bucket down to the distribution with an origin access policy.
So:
Objects must be public.
The S3 bucket website option must be turned on.
The distribution origin has to come from the S3 website URL, not the REST API endpoint.
EDIT:
I was mistaken; actually, you can do it with the REST API endpoint too. You only have to create a Custom Error Response in your CloudFront distribution, probably just for the 404 and 403 error codes: set the "Customize Error Response" option to "yes", Response Page Path to "/index.html", and HTTP Response Code to "200". You can find that option in the Error Pages tab of your distribution if you are using the console.
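In Terraform, the same Custom Error Response setup might look roughly like this (the aws_s3_bucket.spa resource is hypothetical):

resource "aws_cloudfront_distribution" "spa" {
  enabled             = true
  default_root_object = "index.html"

  # REST API endpoint of the bucket, not the website endpoint
  origin {
    domain_name = "${aws_s3_bucket.spa.bucket_regional_domain_name}"
    origin_id   = "s3-spa"
  }

  default_cache_behavior {
    target_origin_id       = "s3-spa"
    allowed_methods        = ["GET", "HEAD"]
    cached_methods         = ["GET", "HEAD"]
    viewer_protocol_policy = "redirect-to-https"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }
  }

  # return the SPA shell for paths S3 doesn't know about,
  # so client-side routing can take over
  custom_error_response {
    error_code         = 403
    response_code      = 200
    response_page_path = "/index.html"
  }

  custom_error_response {
    error_code         = 404
    response_code      = 200
    response_page_path = "/index.html"
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}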
I have a static website that is currently hosted on Apache servers. I have an Akamai server which routes requests for my site to those servers. I want to move my static websites to Amazon S3, to get away from having to host those static files on my own servers.
I created an S3 bucket in Amazon and gave it the appropriate policies. I also set up the bucket for static website hosting. It told me that I can access the site at
http://my-site.s3-website-us-east-1.amazonaws.com
I modified my Akamai properties to point to this URL as my origin server. When I go to my website, I get HTTP 504 errors.
What am I missing here?
S3 buckets don't support HTTPS?
Buckets support HTTPS, but not directly in conjunction with the static web site hosting feature.
See Website Endpoints in the S3 Developer Guide for discussion of the feature set differences between the REST endpoints and the web site hosting endpoints.
Note that if you try to connect directly to your web site hosting endpoint with your browser over HTTPS, you will get a timeout error.
The REST endpoint https://your-bucket.s3.amazonaws.com will work for providing HTTPS between the bucket and the CDN, as long as there are no dots in the name of your bucket.
Or, if you need the web site hosting features (index documents and redirects), you can place CloudFront between Akamai and S3, encrypting the traffic inside CloudFront as it leaves the AWS network on its way to Akamai (it would still be in the clear from S3 to CloudFront, but that is internal traffic on the AWS network). CloudFront automatically provides HTTPS support on the dddexample.cloudfront.net hostname it assigns to each distribution.
I admit, it sounds a bit silly at first to put CloudFront behind another CDN, but it's really pretty sensible -- CloudFront was designed in part to augment the capabilities of S3. CloudFront also provides Lambda@Edge, which allows injection of logic at 4 trigger points in the request processing cycle (before and after the CloudFront cache, during the request and during the response), where you can modify request and response headers, generate dynamic responses, and make external network requests if needed to implement processing logic.
I faced this problem recently and, as mentioned by Michael - sqlbot, putting CloudFront between Akamai and the S3 bucket could be a workaround, but then you're using a CDN behind another CDN. I strongly recommend you configure the redirects, and also customize the response on origin error, directly in Akamai (using the REST API endpoint of your bucket). You'll need to create three rules. First, go to CDN > Properties and select your property, click Edit New Version based on the last one, and click Add Rule in the Property Configuration Settings section. The first rule is responsible for redirecting empty paths to index.html; create it just like the image below:
builtin.AK_PATH is an Akamai variable. The next rule is responsible for redirecting paths other than the static ones (html, ico, json, js, css, jpg, png, gif, etc.) to /index.html:
The last rule is responsible for customizing the error response when the origin throws an HTTP error code (just like the CloudFront Error Pages). When the origin returns a 404 or 403 HTTP status code, Akamai will call the Failover Hostname Edge Server (which is inside the Akamai network) with the /index.html path. This setup is triggered when refreshing pages in the browser and when the application follows redirection links (which open new tabs, for example). In the Property Hostnames section, add a new hostname that will work as the Failover Hostname Edge Server; the name should have fewer than 16 characters, then add the -a.akamaihd.net suffix to it (that's the Akamai pattern). For example: failover-a.akamaihd.net:
Finally, create a new empty rule just like the image below (type the hostname that you just created in the Alternate Hostname in This Property section):
Since you are already using Akamai as a CDN, you could simply use their NetStorage product line to achieve this in a simplified manner.
All you would need to do is move the content from S3 to Akamai, and it would take care of the rest (hosting, distribution, scaling, security, redundancy).
The origin settings in the Luna control panel could simply point to the NetStorage FTP location. This would also remove the network latency otherwise present when accessing the S3 bucket from the Akamai network.
I have an image fetched "dynamically" from an Amazon S3 bucket; here is an example of the source:
src = http://s3.amazonaws.com/kidslink_assets/logos/5082f5d279216d14d000001e/original.png?1350759890
I need this to be an https request.
If I understand S3 correctly, S3 can always serve responses over https. I tried to access your image using http, and it redirects to https by default (https://s3.amazonaws.com/kidslink_assets/logos/5082f5d279216d14d000001e/original.png?1350759890).
So you don't need to worry about https/SSL.
Since the client needs to see it as an https request, I added it manually when the page renders the image. I mean that I added the "s" to the link manually before rendering the page, as http and https serve the same content.