Use CloudFront URL for iframe - Express

I have a Node.js/Express middleware that rewrites incoming file requests to point to CloudFront:
// `request` is the npm request package; `config` holds the base URLs.
const request = require('request');

function cloudfrontController({ url }, res, next) {
  const ext = url.substring(url.lastIndexOf('.') + 1); // file extension (used in the variant below)
  const name = url.substring(url.lastIndexOf('/') + 1); // file name
  const location = url.substring(0, url.lastIndexOf('/')); // parent path
  const newurl = `${config.aws.cloudfront}${location}/${encodeURIComponent(name)}`;
  request(newurl).pipe(res);
}
I display the pdf in an iframe on the front end.
This works fine when I use the S3 URL; however, the PDF does not load when using the CloudFront URL. When I update the src of my iframe with a new CloudFront URL, the PDF downloads (but still doesn't display). In essence, the following works but defeats the purpose of using CloudFront in the first place:
const newurl = `${ext === 'pdf' ? config.aws.s3 : config.aws.cloudfront}${location}/${encodeURIComponent(name)}`;
Examples:
S3 URL (works in the iframe):
https://npg-cloud.s3-us-west-2.amazonaws.com/companies/ipadmin/users/57a8b211f77b5b3801255034/57a8b211f77b5b3801255034_1564598795092.pdf
CloudFront URL (doesn't work in the iframe):
https://d2eva5limbx0hv.cloudfront.net/companies/ipadmin/users/57a8b211f77b5b3801255034/57a8b211f77b5b3801255034_1564598795092.pdf
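
When a PDF downloads instead of rendering, the usual suspects are the Content-Type and Content-Disposition headers the browser receives. A hedged sketch (not from the original post) of how the last line of the middleware could stream the response manually so those headers can be inspected and overridden; `newurl` and the `request` package are as in the snippet above:

// Hypothetical variant: stream the CloudFront response manually so we
// control the headers that decide inline display vs. download.
request(newurl).on('response', (upstream) => {
  res.writeHead(upstream.statusCode, {
    // Assumption: the S3 object may lack a proper Content-Type,
    // which makes browsers download instead of display.
    'Content-Type': upstream.headers['content-type'] || 'application/pdf',
    'Content-Disposition': 'inline',
  });
  upstream.pipe(res);
});

Logging `upstream.headers` for the S3 URL versus the CloudFront URL should show whether the two origins actually return different headers.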

Download signed image from s3 does not work

I have a strange issue relating to S3 signed URLs.
I want to download a file from S3 in my browser. Every file type works as expected, except image files, and I do not know why.
Here is my JavaScript:
<html>
<script>
  fetch('<s3 signed url>', {
    method: 'GET',
    // For image files I always get a CORS error; for other file types it works as expected
    // mode: 'no-cors',
  })
    .then((res) => res.blob())
    .then((blob) => {
      // Build an object URL and trigger a download via a temporary <a> element
      const url = window.URL.createObjectURL(blob);
      const a = document.createElement('a');
      a.href = url;
      a.download = 'file.png';
      document.body.appendChild(a);
      a.click();
    });
</script>
</html>
If I generate a signed URL for a pdf or doc ... and then download it with the above code, it works.
But if I generate a signed URL for an image file and download it with the same code, it does not work.
I always get this error in the console:
Access to fetch at 'https://.......' from origin 'null' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Then I added
{mode: 'no-cors'}
and it works, but the content of the file is always empty (ZERO bytes).
Why? How can I download an image from S3?
Did you try configuring CORS on the S3 bucket?
https://docs.aws.amazon.com/AmazonS3/latest/userguide/ManageCorsUsing.html
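
For reference, a minimal sketch of the kind of CORS configuration that page describes, in the JSON format the S3 console accepts; the allowed origin is a placeholder you would replace with your own site:

[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["GET"],
    "AllowedOrigins": ["https://example.com"],
    "ExposeHeaders": []
  }
]

Note the error message says origin 'null', which happens when the page is opened from a file:// URL; the allowed origin has to match whatever origin the browser actually sends.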

AWS static website - how to connect subdomains with subfolders

I want to set up an S3 static website and connect it with my domain (for example: example.com).
In this S3 bucket I want to create one particular folder (named content) with many different subfolders within it, and then connect these subfolders with the appropriate subdomains, so for example:
folder content/foo should be available from subdomain foo.example.com,
folder content/bar should be available from subdomain bar.example.com.
Any content subfolder should automatically be available from the subdomain with the same prefix as the folder name.
I will be grateful for any possible solutions to this problem. Should I use the redirection option, or is there a better solution? Thanks in advance for your help.
My solution is based on this video:
https://www.youtube.com/watch?v=mls8tiiI3uc
Because the video doesn't cover the subdomain problem, here are a few additional things to do:
in the AWS Route 53 hosted zone, add an A record with "*.domainname" as the record name and the CloudFront edge address as the value
to the certificate domains, also add "*.domainname", so the certificate covers the wildcard domain
when setting up the CloudFront distribution, add both "www.domainname" and "*.domainname" to the "Alternate domain name (CNAME)" section
redirection/forwarding from subdomain to subfolder is realized via a Lambda@Edge function (the function could be improved a bit):
'use strict';

const path = require('path');

exports.handler = (event, context, callback) => {
  const remove_suffix = '.domain.com';
  const host_with_www = 'www.domain.com';
  const origin_hostname = 'www.domain.com.s3-website.eu-west-1.amazonaws.com';
  const request = event.Records[0].cf.request;
  const headers = request.headers;
  const host_header = headers.host[0].value;

  // www.domain.com is served from the bucket root; pass the request through unchanged
  if (host_header === host_with_www) {
    return callback(null, request);
  }

  // Strip a leading "www" so www.foo.domain.com and foo.domain.com map to the same folder
  let new_host_header = host_header;
  if (host_header.startsWith('www')) {
    new_host_header = host_header.substring(3);
  }

  if (new_host_header.endsWith(remove_suffix)) {
    // to support SPA | redirect all (non-file) requests to index.html
    const parsedPath = path.parse(request.uri);
    if (parsedPath.ext === '') {
      request.uri = '/index.html';
    }
    // prefix the URI with the subdomain name, i.e. the subfolder in the bucket
    request.uri =
      '/' +
      new_host_header.substring(0, new_host_header.length - remove_suffix.length) +
      request.uri;
  }

  // point the request at the S3 website origin
  headers.host[0].value = origin_hostname;
  return callback(null, request);
};
Lambda@Edge is just a Lambda function connected to a particular CloudFront distribution
we also need an additional CloudFront setting for the Lambda to work per subdomain: the distribution has to forward the Host header and include it in the cache key. Without it, all subdomains end up pointing at the main directory, or more likely at whichever directory was requested and cached first on our CloudFront domain.
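A sketch of that setting in CloudFormation terms, using the legacy cache settings; the origin ID and protocol policy are illustrative, not from the original post:

"DefaultCacheBehavior": {
  "TargetOriginId": "s3-website-origin",
  "ViewerProtocolPolicy": "redirect-to-https",
  "ForwardedValues": {
    "QueryString": false,
    "Cookies": { "Forward": "none" },
    "Headers": ["Host"]
  }
}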

How to upload a CSV file larger than 10 MB to S3 using Lambda / API Gateway

Hello, I am new here on AWS. I was trying to upload a CSV file to my S3 bucket, but when the file is larger than 10 MB it returns {"message":"Request Entity Too Large"}. I am using Postman to do this. Below is the current code; in the future I will add some validation to rename the uploaded file to my own format. Is there any way to do this with this kind of code, or do you have any suggestion that can help with the issue I have encountered?
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const bucket = process.env.UploadBucket;
const prefix = 'csv-files/';
const filename = 'file.csv'; // currently unused

exports.handler = (event, context, callback) => {
  // API Gateway delivers the multipart body base64-encoded
  const data = event.body;
  const buff = Buffer.from(data, 'base64'); // new Buffer() is deprecated
  const text = buff.toString('ascii');
  console.log(text);

  // crude multipart parsing: split the headers from the payload
  const textFileSplit = text.split('?');
  // get the filename from the Content-Disposition header
  const getfilename = textFileSplit[0].split('"');
  console.log(textFileSplit[0]);
  console.log(textFileSplit[1]);
  // remove the trailing multipart boundary from the csv payload
  const csvFileSplit = textFileSplit[1].split('--');

  const params = {
    Bucket: bucket,
    Key: prefix + getfilename[3],
    Body: csvFileSplit[0],
  };
  s3.upload(params, function (err, data) {
    if (err) {
      console.log('error uploading');
      return callback(err); // return so we don't also report success
    }
    console.log('Uploaded');
    callback(null, 'Success');
  });
};
For scenarios like this one, we normally use a different approach.
Instead of sending the file to Lambda through API Gateway, you send the file directly to S3. This makes your solution more robust and costs less, because you don't need to transfer the data through API Gateway and you don't need to process the entire file inside the Lambda.
The question is: how do you do this in a secure way, without opening your S3 bucket to everyone on the internet to upload anything to it? You use S3 signed URLs. Signed URLs are a feature of S3 that lets you bake the permission to upload an object to a secured bucket into the URL itself.
In summary, the process is:
the frontend sends a request to API Gateway;
API Gateway forwards the request to a Lambda function;
the Lambda function generates a signed URL with permission to upload the object to a specific S3 bucket;
API Gateway sends the Lambda function's response back to the frontend, and the frontend uploads the file to the signed URL.
To generate the signed URL you use the normal aws-sdk in your Lambda function and call the method getSignedUrl (the signature depends on your language; see the sketch below). You can find more information about signed urls here.
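A minimal sketch of such a Lambda in Node.js with aws-sdk v2; the bucket env var, key prefix, and query parameter are illustrative assumptions:

const AWS = require('aws-sdk');
const s3 = new AWS.S3({ signatureVersion: 'v4' });

exports.handler = async (event) => {
  // Assumption: the client sends the desired file name as a query parameter
  const key = 'csv-files/' + event.queryStringParameters.filename;
  const url = s3.getSignedUrl('putObject', {
    Bucket: process.env.UploadBucket,
    Key: key,
    Expires: 300, // URL is valid for 5 minutes
    ContentType: 'text/csv',
  });
  return {
    statusCode: 200,
    body: JSON.stringify({ uploadUrl: url }),
  };
};

The frontend then PUTs the file straight to uploadUrl with a matching Content-Type header, bypassing the 10 MB API Gateway payload limit entirely.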

Lambda@Edge redirect gets into a redirect loop

I have a static website on AWS S3. I have set up Route 53 and CloudFront and everything works smoothly. The S3 bucket is set up to serve index.html as the index document.
Now I have added another file called index-en.html that should be served when the request comes from any country other than my home country.
For this I have added a Lambda@Edge function with the following code:
'use strict';

/* This is an origin request function */
exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;
  const headers = request.headers;
  /*
   * Based on the value of the CloudFront-Viewer-Country header, generate an
   * HTTP status code 302 (Redirect) response, and return a country-specific
   * URL in the Location header.
   * NOTE: 1. You must configure your distribution to cache based on the
   *          CloudFront-Viewer-Country header. For more information, see
   *          http://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
   *       2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
   *          request event. To use this example, you must create a trigger for the
   *          origin request event.
   */
  let url = 'prochoice.com.tr';
  if (headers['cloudfront-viewer-country']) {
    const countryCode = headers['cloudfront-viewer-country'][0].value;
    if (countryCode === 'TR') {
      url = 'prochoice.com.tr';
    } else {
      url = 'prochoice.com.tr/index-en.html';
    }
  }
  const response = {
    status: '302',
    statusDescription: 'Found',
    headers: {
      location: [{
        key: 'Location',
        value: url,
      }],
    },
  };
  callback(null, response);
};
I have also edited the CloudFront behavior to whitelist the Origin and Viewer-Country headers, and set up the association between the CloudFront viewer-request event and the Lambda function ARN.
However I get a "too many redirects" error.
I have 2 questions:
How do I correct the "too many redirects" error?
For viewers outside "TR" the default landing page should be index-en.html, from which 2 more pages in English are accessible via the navigation menu. When users request a specific page from the navigation they should get that page; when no page is requested, the default landing page should be served.
Help appreciated, thanks.
You are creating a redirect loop because you send the viewer back to the same site, same page, no matter what the result of your test is.
if (countryCode === 'TR') {
  return callback(null, request);
} else {
...
callback(null, request) tells CloudFront to continue processing the request -- not to generate a response. Using return before the callback stops the rest of the trigger code from running.
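A hedged sketch of the whole handler with that fix applied, rewriting the URI for non-TR viewers instead of redirecting; this also covers the second question, since explicit page requests pass through untouched. The domain logic is illustrative, not the definitive fix:

'use strict';

/* Origin request trigger */
exports.handler = (event, context, callback) => {
  const request = event.Records[0].cf.request;
  const headers = request.headers;

  const countryHeader = headers['cloudfront-viewer-country'];
  const countryCode = countryHeader ? countryHeader[0].value : 'TR';

  // Home-country viewers, and any request for a specific page,
  // continue to the origin unchanged -- no redirect, so no loop.
  if (countryCode === 'TR' || request.uri !== '/') {
    return callback(null, request);
  }

  // Everyone else landing on the root gets the English landing page.
  request.uri = '/index-en.html';
  return callback(null, request);
};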

Redirecting AWS API Gateway to S3 Binary

I'm trying to download large binaries from S3 via an API Gateway URL. Because the maximum response size in API Gateway is limited, I thought I could just provide the base Amazon S3 URL (in the swagger file) and append the folder/item of the binary I want to download.
But all I find is redirecting API Gateway via a Lambda function, and I don't want that.
I want a swagger file where the redirect is already configured.
So if I call <api_url>/folder/item I want to be redirected to s3-url/folder/item.
Is this possible? And if so, how?
Example:
S3: https://s3.eu-central-1.amazonaws.com/folder/item (item = large binary file)
API Gateway: https://<id>.execute-api.eu-central-1.amazonaws.com/stage/folder/item -> redirect to s3 url
I am not sure you can redirect the request to a presigned S3 URL via API Gateway without a backend to calculate the presigned S3 URL. The presigned URL feature is provided by the SDK rather than by an API, so you need a Lambda function to calculate the presigned S3 URL and return it:
var AWS = require('aws-sdk');
AWS.config.region = 'us-east-1';

var s3 = new AWS.S3({ signatureVersion: 'v4' });
var BUCKET_NAME = 'my-bucket-name';

exports.handler = (event, context, callback) => {
  // Sign a download (getObject) URL for the requested path.
  // Note: event.path starts with '/', which you may need to strip for the S3 key.
  var params = { Bucket: BUCKET_NAME, Key: event.path };
  s3.getSignedUrl('getObject', params, function (err, url) {
    if (err) {
      return callback(err);
    }
    console.log('The URL is', url);
    callback(null, url);
  });
};
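
To actually redirect the caller rather than hand back a bare URL, a Lambda proxy integration can return a 302 response; a sketch, reusing the names from the snippet above:

exports.handler = (event, context, callback) => {
  var params = { Bucket: BUCKET_NAME, Key: event.path };
  s3.getSignedUrl('getObject', params, function (err, url) {
    if (err) {
      return callback(err);
    }
    // 302 response: API Gateway relays it, and the browser follows
    // the Location header straight to S3, so the binary never passes
    // through API Gateway's payload limit.
    callback(null, {
      statusCode: 302,
      headers: { Location: url },
      body: '',
    });
  });
};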