I'm trying to upload files to S3 via CloudFront. I've created a bucket named my-files. Bucket CORS settings:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
I've created a CloudFront distribution. Here is the configuration that may be relevant:
General:
    Delivery Method: Web
    Alternate Domain Names (CNAMEs): files.example.com
Origins:
    Origin Domain Name: my-files.s3.amazonaws.com
    Restrict Bucket Access: Yes
    Grant Read Permissions on Bucket: Yes, Update Bucket Policy
Behaviour:
    Path Pattern: Default (*)
    Origin: S3-my-files
    Viewer Protocol Policy: HTTP and HTTPS
    Allowed HTTP Methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
    Whitelist Headers: Access-Control-Request-Headers, Access-Control-Request-Method, Origin
    Object Caching: Use Origin Cache Headers
    Restrict Viewer Access (Use Signed URLs or Signed Cookies): Yes
    Trusted Signers: Self
I can create a signed URL for file download and it works correctly. I can also create a CNAME for the S3 bucket and upload a file to S3 using a pre-signed URL, and that works correctly too. But when I try to upload a file via CloudFront, I get a 403 response to the preflight (OPTIONS) request:
XMLHttpRequest cannot load http://files.example.com/. Response to
preflight request doesn't pass access control check: No
'Access-Control-Allow-Origin' header is present on the requested
resource. Origin 'http://0.0.0.0:5000' is therefore not allowed
access. The response had HTTP status code 403.
Is it possible to use CloudFront with signed URLs for uploading files? How can I set allowed origins to permit file uploads from localhost?
I suspect the error is related to the CORS options of your request.
With jQuery, this is my current sample code that successfully uploads a file to S3 via CloudFront (notice the crossDomain option). If you don't use jQuery, you can base your own code on this:
$.ajax({
    type: 'POST',
    url: 'YourGetSignatureMethod', // returns your signed URL
    data: {
        fileName: yourFileName,
        expiration: yourPolicyExpirationDate
    },
    success: function (signedUrl) {
        // signedUrl = 'http://sampleId.cloudfront.net/video.mp4?Policy=examplePolicy&Signature=exampleSignature&Key-Pair-Id=exampleKey'
        let fileObject = yourGetFileFunction(); // returns a File API object
        let reader = new FileReader(); // using the FileReader API to read the file
        reader.onload = function () {
            $.ajax({
                url: signedUrl,
                type: 'PUT',
                contentType: fileObject.type,
                data: reader.result,
                processData: false,
                crossDomain: true,
                success: function () {
                    // upload success
                }
            });
        };
        reader.readAsArrayBuffer(fileObject);
    }
});
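If you are not using jQuery, a rough fetch-based equivalent of the inner PUT might look like the sketch below (this is my illustration, not part of the original answer; signedUrl and fileObject are assumed to come from the same places as in the jQuery sample):
// Minimal sketch: the same PUT done with fetch.
// fetch accepts a File/Blob directly as the body, so no FileReader is needed.
fetch(signedUrl, {
    method: 'PUT',
    headers: { 'Content-Type': fileObject.type },
    body: fileObject
}).then(function (response) {
    if (response.ok) {
        // upload success
    }
});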
I'm trying to upload images to a Digital Ocean space from the browser. These images should be public. I'm able to upload the images successfully.
However, though the ACL is set to public-read, the uploaded files are always private.
I know they're private because (a) the dashboard says the permissions are "private", (b) the public URLs don't work, and (c) manually changing the permissions to "public" in the dashboard fixes everything.
Here's the overall process I'm using.
1. Create a pre-signed URL on the backend
2. Send that URL to the browser
3. Upload the image to that pre-signed URL
Any ideas why the images aren't public?
Code
The following examples are written in TypeScript and use AWS's v3 SDK.
Backend
This generates the pre-signed url to upload a file.
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const client = new S3Client({
  region: 'nyc3',
  endpoint: 'https://nyc3.digitaloceanspaces.com',
  credentials: {
    accessKeyId: process.env.DIGITAL_OCEAN_SPACES_KEY,
    secretAccessKey: process.env.DIGITAL_OCEAN_SPACES_SECRET,
  },
})

const command = new PutObjectCommand({
  ACL: 'public-read',
  Bucket: 'bucket-name',
  Key: fileName,
  ContentType: mime,
})

const url = await getSignedUrl(client, command)
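Note: getSignedUrl from @aws-sdk/s3-request-presigner defaults to a 15-minute expiry; you can pass an options object as a third argument to change it, e.g. await getSignedUrl(client, command, { expiresIn: 3600 }).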
The pre-signed url is then sent to the browser.
Frontend
This is the code on the client to actually upload the file to Digital Ocean. file is a File object.
const uploadResponse = await fetch(url, {
  headers: {
    'Content-Type': file.type,
    'Cache-Control': 'public,max-age=31536000,immutable',
  },
  body: file,
  method: 'PUT',
})
Metadata
AWS SDK: 3.8.0
It turns out that for Digital Ocean, you also need to send the public-read ACL as a header in the PUT request.
// front-end
const uploadResponse = await fetch(url, {
  headers: {
    'Content-Type': file.type,
    'Cache-Control': 'public,max-age=31536000,immutable',
    'x-amz-acl': 'public-read', // add this line
  },
  body: file,
  method: 'PUT',
})
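In other words, setting ACL: 'public-read' when signing the URL is not enough on its own; the same ACL also has to travel with the actual PUT, and the header value should match the ACL that was used when signing.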
I don't have the reputation to comment, hence adding a response. Thank you @Nick ... this is one of the few working code examples I have seen for DigitalOcean pre-signed URLs. While the official DigitalOcean description here mentions that Content-Type is needed for uploading with pre-signed URLs, there is no example code.
Another mistake that prevented me from uploading a file using pre-signed URLs in DigitalOcean was using 'Content-Type': 'multipart/form-data' and FormData().
After seeing this post, I followed @Nick's suggestion of using a File() object and 'Content-Type': '<relevant_mime>'. Then the file upload worked like a charm. This is also not covered in the official docs.
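To make the difference concrete, here is a minimal sketch of my own (url and file are placeholders for the pre-signed URL and the File object):
// What failed (per the above): wrapping the file in FormData and
// sending it as multipart/form-data.
const form = new FormData();
form.append('file', file);
await fetch(url, {
    method: 'PUT',
    headers: { 'Content-Type': 'multipart/form-data' },
    body: form,
});

// What worked: sending the File object directly, with its real mime type
// matching the ContentType the URL was signed with.
await fetch(url, {
    method: 'PUT',
    headers: { 'Content-Type': file.type },
    body: file,
});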
Try this to force ACL to Public in Digital Ocean Spaces:
s3cmd --access_key=YOUR_ACCESS_KEY \
    --secret_key=YOUR_SECRET_KEY \
    --host=YOUR_BUCKET_REGION.digitaloceanspaces.com \
    --host-bucket=YOUR_BUCKET_NAME.YOUR_BUCKET_REGION.digitaloceanspaces.com \
    --region=YOUR_BUCKET_REGION \
    setacl s3://YOUR_BUCKET_NAME --acl-public
This is a Shopify shop pulling images from a public S3 bucket. A JavaScript function checks via AJAX whether the images exist before pushing them onto an array that is used when rendering the product:
function set_gallery(sku) {
    var bucket = 'https://[xbucket].s3.amazonaws.com/img/sku/';
    var folder = sku.slice(0, 4);
    var nombre = sku.replace(' SG OPT ', '');
    nombre = nombre.replace(' ', '');
    var idx = '';
    var ciclo = variant_gallery.attempts;
    var fallos = variant_gallery.failed;
    if (ciclo > 0) {
        idx = '-' + ciclo;
    }
    var picURL = bucket + folder + '/' + nombre + idx + '.jpg';
    $.ajax({
        url: picURL,
        type: 'GET',
        error: function () {
            fallos++;
            ciclo++;
            variant_gallery.failed = fallos;
            variant_gallery.attempts = ciclo;
            if (fallos < 2) {
                set_gallery(sku);
            } else {
                variant_gallery.isReady = true;
                build_gallery();
            }
        },
        success: function () {
            ciclo++;
            variant_gallery.attempts = ciclo;
            variant_gallery.gallery_urls.push(picURL);
            if (ciclo < 15) {
                set_gallery(sku);
            } else {
                variant_gallery.isReady = true;
                build_gallery();
            }
        }
    });
}
This is what the Bucket Policy looks like...
{
    "Version": "2012-10-17",
    "Id": "Policy1600291283718",
    "Statement": [
        {
            "Sid": "Stmt1600291XXXXXX",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::[xbucket]/img/sku/*"
        }
    ]
}
...and CORS Configuration...
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>https://shopifystore.myshopify.com</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>https://shopifystore.com</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
The problem is that, on Chrome, it renders as expected around 98% of the time (an error every 50 attempts), but in Safari I'm getting a CORS error about once every two or three attempts:
Origin https://shopifystore.com is not allowed by Access-Control-Allow-Origin.
XMLHttpRequest cannot load https://[bucket].s3.amazonaws.com/img/sku/image-to-load.jpg due to access control checks.
What can I do to make this as consistent in Safari as it is in Chrome? Hopefully even more reliable than that.
I have already checked these other SO questions:
AWS S3 bucket: CORS Configuration
AWS S3 CORS Error: Not allowed access
Fix CORS "Response to preflight..." header not present with AWS API gateway and amplify
Chrome is ignoring Access-Control-Allow-Origin header and fails CORS with preflight error when calling AWS Lambda
Intermittent 403 CORS Errors (Access-Control-Allow-Origin) With Cloudfront Using Signed URLs To GET S3 Objects
Cross-origin requests AJAX requests to AWS S3 sometimes result in CORS error
Cached non CORS response conflicts with new CORS request
Some of those don't apply to this scenario; others I tried without success.
After reading several possible solutions, I finally solved it with a mix of them. It turns out this was a cache problem, as illustrated here:
Cross-origin requests AJAX requests to AWS S3 sometimes result in CORS error
Cached non CORS response conflicts with new CORS request
I tried that solution first, but I didn't implement it correctly with jQuery, so I took another route.
A few hours later I tried this solution to avoid caching in jQuery AJAX requests:
How to prevent a jQuery Ajax request from caching in Internet Explorer?
In the end I only added one line of code and it was solved:
$.ajax({
    url: picURL,
    type: 'GET',
    cache: false, // <- do this to avoid CORS on AWS S3
    error: function () {
        ...
    }
});
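Under the hood, jQuery's cache: false just appends a _={timestamp} query parameter so the browser can't reuse a previously cached response that was served without CORS headers. If you're not using jQuery, a rough equivalent (my sketch, not part of the original answer) is to vary the URL yourself:
// Manual cache-busting: make each request URL unique so a cached
// non-CORS response is never reused for a CORS request.
var bustedURL = picURL + (picURL.indexOf('?') === -1 ? '?' : '&') + '_=' + Date.now();
$.ajax({
    url: bustedURL,
    type: 'GET'
    // ... same error/success handlers as above
});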
I'm setting up a Vue.js / DropzoneJS app loosely based on kfei's vue-s3-dropzone app. It's designed to upload files (using a PUT method) to AWS S3 serverlessly, via an AWS Lambda function and an AWS S3 bucket. When I try to upload an image to the S3 bucket, I get a 403 error code and this message:
XMLHttpRequest at 'https://xxxxxxxxxxxxxxxxxxx' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: It does not have HTTP ok status
Is there anything I can do to fix this?
This is what I did:
1. Created an S3 bucket.
2. Set up a bucket policy and a CORS configuration in the S3 bucket settings (screenshots omitted).
3. Created a Lambda function that is supposed to sign a URL allowing a PUT upload for each file to S3, with the Role executing the Lambda having PutObject and PutObjectAcl permissions on the S3 bucket (screenshot omitted).
4. Set up an API Gateway API with an OPTIONS method (to pass the preflight check) and a PUT method with these CORS settings:
   b. The OPTIONS method has a Mock backend integration, with the Integration Response returning the following:
      Access-Control-Allow-Headers: 'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token,x-requested-with'
      Access-Control-Allow-Methods: 'PUT,OPTIONS'
      Access-Control-Allow-Origin: '*'
   c. The PUT method has:
      "Access-Control-Allow-Origin": "*"
5. In AWS API Gateway: set up an API key and a usage plan.
The Lambda code:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var bucketName = process.env.AWS_BUCKET_NAME;

exports.handler = (event, context) => {
    if (!event.hasOwnProperty('contentType')) {
        return context.fail({ err: 'Missing contentType' });
    }
    if (!event.hasOwnProperty('filePath')) {
        return context.fail({ err: 'Missing filePath' });
    }
    var params = {
        Bucket: bucketName,
        Key: event.filePath,
        Expires: 3600,
        ContentType: event.contentType
    };
    s3.getSignedUrl('putObject', params, (err, url) => {
        if (err) {
            context.fail({ err });
        } else {
            context.succeed({ url });
        }
    });
};
Expected: Successful upload of files
Actual: Possible CORS issues.
getSignedUrl would work fine if you were uploading the file from an API client like Postman or a Node.js server, but since you state you are seeing a preflight check fail, I'm assuming you are using some kind of HTML form and frontend JavaScript.
From the AWS JavaScript SDK Docs regarding getSignedUrl:
Note: Not all operation parameters are supported when using pre-signed
URLs. Certain parameters, such as SSECustomerKey, ACL, Expires,
ContentLength, or Tagging must be provided as headers when sending a
request. If you are using pre-signed URLs to upload from a browser and
need to use these fields, see createPresignedPost().
As you are setting the 'Expires' param when calling getSignedUrl and are sending from the browser, you need to use createPresignedPost instead of getSignedUrl in your Lambda code.
You will then need to POST instead of PUT from the browser to S3.
NB: Remember to update your S3 CORS rules to allow POST.
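For reference, a rough sketch of the Lambda rewritten around createPresignedPost (this is an illustration assuming the same event shape as the original code, not a drop-in replacement):
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var bucketName = process.env.AWS_BUCKET_NAME;

exports.handler = (event, context) => {
    var params = {
        Bucket: bucketName,
        Expires: 3600, // seconds, as in the original getSignedUrl params
        Fields: {
            key: event.filePath,
            'Content-Type': event.contentType
        }
    };
    s3.createPresignedPost(params, (err, data) => {
        if (err) {
            context.fail({ err });
        } else {
            // data.url is the form action; data.fields are the policy fields
            // the browser must include in the multipart/form-data POST.
            context.succeed(data);
        }
    });
};
On the browser side, every returned field goes into a FormData object, with the file appended last (S3 requires the file to be the final field):
var form = new FormData();
Object.keys(data.fields).forEach(function (name) {
    form.append(name, data.fields[name]);
});
form.append('file', file); // the file must be the last field
fetch(data.url, { method: 'POST', body: form });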
I am trying to upload an image to my S3 bucket through a pre-signed url. Everything works well except that when I hit the public URL for that image, the browser downloads it instead of showing it. When I upload the same image from the AWS Console, everything works well and the image gets displayed in the browser.
Here's how I do it:
Generation of the pre-signed URL:
s3.getSignedUrl('putObject', {
    Bucket: myBucket,
    Key: myKey,
    Expires: signedUrlExpireSeconds
})
Upload of the file with axios:
const response = await axios.put(url, formElement.files[0])
Should I configure headers somewhere in the process to tell S3 the mime type of the content I'm uploading or something like this?
Thank you for your help
There are two places you can do this.
If you know the type of image ahead of time, you can explicitly set the ContentType in the s3.getSignedUrl params, because those params are encoded into the signed PUT request (see the getSignedUrl docs / putObject docs). For example:
s3.getSignedUrl('putObject', {
    Bucket: myBucket,
    Key: myKey,
    Expires: signedUrlExpireSeconds,
    ContentType: 'image/png'
});
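Note that if you sign with ContentType like this, the Content-Type header the browser sends on the PUT must match it exactly; a mismatch will make S3 reject the request with a signature error.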
Alternatively, you can set the Content-Type header on the Axios request (see the REST PUT docs), for example:
const response = await axios.put(
    url,
    formElement.files[0],
    { headers: { 'Content-Type': formElement.files[0].type } });
I'm using AWS S3 and I've configured my Bucket to use CORS:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration>
I'm requesting SVG images from the Bucket, in a client-side React application. I'm rendering them inline so the response needs to have CORS headers enabled. Sometimes this works, and sometimes it doesn't. I can't isolate exactly what is causing the issue. I was retrieving one image fine; then I uploaded a new image to the bucket, and that image, once downloaded, was giving me the error:
XMLHttpRequest cannot load https://s3.amazonaws.com/.../example.svg. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access.
I've tried adding <AllowedHeader>*</AllowedHeader> and <ExposeHeader>ETAG</ExposeHeader>, and clearing my cache with every change, to no effect. I'm confused. Why aren't the headers coming through?
S3 doesn't always return CORS headers: it seems you need to provide an Origin request header, and your requests don't always do so.
To make it consistent and always return the CORS headers, you need to add one Lambda function:
'use strict';

// If the response lacks a Vary: header, fix it in a CloudFront Origin Response trigger.
exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    if (!headers['vary']) {
        headers['vary'] = [
            { key: 'Vary', value: 'Access-Control-Request-Headers' },
            { key: 'Vary', value: 'Access-Control-Request-Method' },
            { key: 'Vary', value: 'Origin' },
        ];
    }

    callback(null, response);
};
See the full answer and more details here: https://serverfault.com/a/856948/46223
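Note that this runs as a Lambda@Edge function attached to the distribution's origin-response trigger, and Lambda@Edge functions must be created in the us-east-1 region.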