Amazon S3 direct upload: CORS error

I'm unable to do a direct upload (JavaScript XHR) to my S3 bucket because of CORS blocking.
I'm using PHP to generate a direct upload link, with an upload policy and S3 signature:
{"key": "501/source/${filename}", "AWSAccessKeyId": "AKIAIIG**********", "acl": "private","policy": "ey JleHBpcmF0aW***************", "signature": "j2UnJRfj+uC+FazEF+wPnuJpdcs=", "success_action_status": "201"}
But when I try to upload a file to the generated link, I get the following error from Firefox:
Request Blocked: The Same Origin Policy disallows reading the remote
resource at https://my.bucket.s3.amazonaws.com. This can be fixed by
moving the resource to the same domain or enabling CORS.
My bucket is correctly configured with a CORS policy to allow POST from everywhere:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
<AllowedMethod>HEAD</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
What more should I do?
Here is the PHP code I use to generate the policy & S3 signature:
$key = '42/source/';
$policy = json_encode(array(
    'expiration' => date('Y-m-d\TG:i:s\Z', strtotime('+6 hours')),
    'conditions' => array(
        array('bucket' => 'my.bucket'),
        array('acl' => 'private'),
        array('starts-with', '$key', $key),
        array('success_action_status' => '201')
    )
));
$policy = base64_encode($policy);
$signature = base64_encode(hash_hmac('sha1', $policy, 'G3wzaTNwnQC2mQB3****************', true));
return array(
    'key' => $key.'${filename}',
    'AWSAccessKeyId' => 'AKIAIIG**********',
    'acl' => 'private',
    'policy' => $policy,
    'signature' => $signature,
    'success_action_status' => '201'
);
I then use this array of params in my JavaScript fileupload() script to upload directly to Amazon S3 (XHR request).
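Simplified, the XHR upload looks roughly like this (a sketch, not my exact code; the endpoint URL and handlers are illustrative):
function uploadToS3(file, params) {
    var form = new FormData();
    Object.keys(params).forEach(function (name) {
        form.append(name, params[name]); // key, AWSAccessKeyId, acl, policy, signature, success_action_status
    });
    form.append('file', file); // the file field must be the last field in the POST

    var xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://my.bucket.s3.amazonaws.com/');
    xhr.onload = function () {
        if (xhr.status === 201) {
            console.log('Uploaded:', xhr.responseXML);
        }
    };
    xhr.onerror = function () {
        console.error('Upload blocked or failed (CORS?)');
    };
    xhr.send(form);
}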
Thanks for your help,
Philippe S.

If anyone is stuck... NEVER use a dot "." in your bucket name.
It causes SSL certificate trouble, because the dot adds an extra subdomain level: a bucket named "my.bucket" becomes "my.bucket.s3.amazonaws.com", which reads as a "my" subdomain of "bucket" and is not covered by the *.s3.amazonaws.com wildcard certificate.
Just use "-" instead of the dot (underscores are not allowed in DNS-compliant bucket names either).

Related

Slingshot fails to upload to S3

The Slingshot package is used with Meteor to upload images to S3 directly from the client. The same code that I've used in other projects has proved to work. Even in my local setup I can upload images to the cloud, but not with the deployed version, which is identical. The error is as follows:
Failed to upload file to cloud storage [Bad Request - 400]
the region 'us-east-1' is wrong; expecting 'eu-central-1'
(but it doesn't tell where...)
Any ideas?
This is the initialisation of the Meteor Slingshot directive:
const s3Settings = Meteor.settings.private.S3settings;

Slingshot.createDirective("userProfileImages", Slingshot.S3Storage, {
    AWSAccessKeyId: s3Settings.AWSAccessKeyId,
    AWSSecretAccessKey: s3Settings.AWSSecretAccessKey,
    bucket: s3Settings.AWSBucket,
    region: s3Settings.AWSRegion,
    acl: "public-read",
    authorize: function () {
        if (!this.userId) {
            const message = "Please login before posting images";
            throw new Meteor.Error("Login Required", message);
        }
        return true;
    },
    key: function (file) {
        const user = Meteor.users.findOne(this.userId);
        return user.username + "/" + file.name;
    }
});
This is my Amazon S3 CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>HEAD</AllowedMethod>
<MaxAgeSeconds>10000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
I have no bucket policy.
Access control is all public.
Help appreciated.
The problem was me. I defined the region in my settings as AWSregion (lowercase r), whereas I read it as AWSRegion (capital R) in my setup code, so the value was undefined and the upload didn't work.
The solution is to make sure the casing matches everywhere.
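A minimal sketch of what I mean (the settings layout below is an assumption based on the directive code above); the key spelled in settings.json has to match the key the code reads:
// settings.json (assumed layout):
// {
//   "private": {
//     "S3settings": {
//       "AWSAccessKeyId": "...",
//       "AWSSecretAccessKey": "...",
//       "AWSBucket": "my-bucket",
//       "AWSRegion": "eu-central-1"
//     }
//   }
// }
const s3Settings = Meteor.settings.private.S3settings;

// A key spelled "AWSregion" in settings.json leaves this undefined, and the
// directive then ends up with a default region (hence the us-east-1 error above).
if (!s3Settings.AWSRegion) {
    throw new Meteor.Error("config-error", "AWSRegion missing from settings; check the casing");
}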

Ember-uploader S3 upload: invalid preflight response?

I am using Ember-uploader to upload files directly from the browser to S3. The CORS policy that I have set is as follows:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>http://localhost:4200</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
The API response from the Node.js server is as follows:
{
    "acl": "public-read",
    "awsaccesskeyid": "AKIAJJC5RMTQE7RNOUAA",
    "bucket": "georoot",
    "Cache-Control": "max-age=630720000, public",
    "Content-Type": "image/png",
    "expires": "2018-06-06T13:00:19.000Z",
    "key": "uploads/2018-05-30-212251_1366x768_scrot.png",
    "policy": "eyJleHBpcmF0aW9uIjoiMjAxO9",
    "signature": "li7WlpwEYqX+jWqkjw72QE2DWug=",
    "success_action_status": "201"
}
The issue is that the upload fails with:
Failed to load http://georoot.s3.amazonaws.com/: Response for preflight is invalid (redirect)
On inspecting the network logs I see a 307 redirect, and the upload fails. The configuration I have for the uploader plugin is as follows:
import Component from '@ember/component';
import Ember from 'ember';
import EmberUploader from 'ember-uploader';
import config from '../config/environment';

export default EmberUploader.FileField.extend({
    filesDidChange(files) {
        const uploader = EmberUploader.S3Uploader.create({
            url: config.APP.UPLOAD_ENDPOINT
        });

        uploader.on('didUpload', response => {
            let uploadedUrl = $(response).find('Location')[0].textContent;
            uploadedUrl = decodeURIComponent(uploadedUrl);
        });

        if (!Ember.isEmpty(files)) {
            uploader.upload(files[0], { });
        }
    }
});
Can someone explain where I am going wrong?

CORS. Presigned URL. S3

I've generated a presigned S3 POST URL. I then pass the returned parameters into my code, but I keep getting this error: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Whereas in Postman, I'm able to submit the form-data with one attached file.
In Postman, I manually entered the parameters.
The same parameters are then entered into my code.
You must edit the CORS configuration to be public, something like:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>POST</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Unable to comment, so adding this here. This contains Harvey's answer, but as text to make it easy to copy:
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "PUT", "POST"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": []
    }
]
I encountered this issue as well. My CORS configuration on my bucket seemed correct, yet my presigned URLs were hitting CORS problems. It turns out the AWS_REGION for my presigner was not set to the AWS region of the bucket. After setting AWS_REGION to the correct region, it worked fine. I'm annoyed that the CORS issue was such a red herring for a simple problem and wasted several hours of my time.
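For reference, a minimal sketch of pinning the presigner to the bucket's region, here with the AWS SDK for JavaScript v3 (bucket, key and region are placeholders; your SDK may differ):
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// The client's region must match the region the bucket actually lives in;
// a mismatch can surface in the browser as a CORS/preflight failure.
const client = new S3Client({ region: "eu-central-1" });

const command = new PutObjectCommand({ Bucket: "my-bucket", Key: "uploads/file.jpg" });
const url = await getSignedUrl(client, command, { expiresIn: 3600 }); // presigned PUT URL, valid for 1 hour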
In my case I fixed it by setting AllowedMethods and AllowedOrigins in S3. The menu is under the bucket's Permissions tab:
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "PUT", "POST"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": []
    }
]
In my case I specifically needed to allow the PUT method in the S3 Bucket's CORS Configuration to use the presigned URL, not the GET method as in the accepted answer:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>PUT</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
My issue was that, for some reason, getSignedUrl returned a URL like this:
https://my-bucket.s3.us-west-2.amazonaws.com/bucket-folder/file.jpg
I removed the region part (us-west-2) and that fixed it 🤷🏼‍♂️
So instead it is now:
https://my-bucket.s3.amazonaws.com/bucket-folder/file.jpg
My issue was that I had a trailing slash (/) at the end of the domain in "AllowedOrigins". Once I removed the slash, requests worked.
I used boto3 to add the CORS policy, and this is what worked for me. I used the logic by @Pranav Joglekar:
cors_configuration = {
    'CORSRules': [{
        'AllowedHeaders': ['*'],
        'AllowedMethods': ['GET', 'PUT', 'POST'],
        'AllowedOrigins': ['*'],
        'ExposeHeaders': [],
        'MaxAgeSeconds': 3000
    }]
}

s3_client = get_s3_client()
s3_client.put_bucket_cors(Bucket='my_bucket_name',
                          CORSConfiguration=cors_configuration)
For me, it was because my bucket name had a hyphen in it (e.g. my-bucket). The signed URL would replace the hyphen in the bucket name with an underscore and then sign it. So this meant two things:
CORS wouldn't work because the URL technically wasn't correct
I couldn't just change the underscore back to the hyphen because then the signature would be wrong when AWS validated the signed URL
I eventually had to rename my bucket to something without a hyphen (e.g. mybucket) and then it worked fine with the following configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
We have to specify only the required HTTP method. We were using the POST method for the presigned URL, so we removed the "GET" and "PUT" methods from "AllowedMethods":
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["POST"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": []
    }
]
I was getting similar CORS errors even with things properly configured.
Thanks to this answer, I discovered that my Lambda@Edge that presigns requests was using a region that wasn't the right one for this bucket (it was on us-east-1 for some default stack reason).
So I had to be explicit about the region when generating the presignedPost, as sketched below.
reference:
https://stackoverflow.com/a/13703595/11832970
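A minimal sketch of being explicit about the region when generating the POST policy, here with the AWS SDK for JavaScript v3 (bucket, key and region are placeholders; the original Lambda may use a different SDK):
import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

// Pin the client to the bucket's region instead of relying on the Lambda's
// default (which was us-east-1 here).
const client = new S3Client({ region: "eu-west-1" });

const { url, fields } = await createPresignedPost(client, {
    Bucket: "my-bucket",
    Key: "uploads/file.jpg",
    Expires: 600 // seconds the POST policy stays valid
});
// `url` is the endpoint to POST to; `fields` are the form fields the browser must include.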
Check the URL encoding. I had a URL-encoded version of the presigned URL and it failed until I decoded it.
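For example (the encoded URL below is illustrative):
// If the presigned URL arrives percent-encoded (e.g. pulled out of another
// URL's query string), decode it once before using it for the request.
const encoded = "https%3A%2F%2Fmy-bucket.s3.amazonaws.com%2Ffile.jpg%3FX-Amz-Signature%3D...";
const url = decodeURIComponent(encoded);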

AWS S3 Inconsistently Provides CORS Headers

I'm using AWS S3 and I've configured my Bucket to use CORS:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration>
I'm requesting SVG images from the Bucket, in a client-side React application. I'm rendering them inline so the response needs to have CORS headers enabled. Sometimes this works, and sometimes it doesn't. I can't isolate exactly what is causing the issue. I was retrieving one image fine; then I uploaded a new image to the bucket, and that image, once downloaded, was giving me the error:
XMLHttpRequest cannot load https://s3.amazonaws.com/.../example.svg. No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://localhost:3000' is therefore not allowed access.
I've tried adding <AllowedHeader>*</AllowedHeader> and <ExposeHeader>ETAG</ExposeHeader>, and clearing my cache with every change, to no effect. I'm confused. Why aren't the headers coming through?
It doesn't always return CORS headers: it seems you need to provide an Origin header on the request, and you don't always do so.
To make it consistent and always return the CORS headers, you need to add a Lambda@Edge function:
'use strict';

// If the response lacks a Vary: header, fix it in a CloudFront Origin Response trigger.
exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    if (!headers['vary']) {
        headers['vary'] = [
            { key: 'Vary', value: 'Access-Control-Request-Headers' },
            { key: 'Vary', value: 'Access-Control-Request-Method' },
            { key: 'Vary', value: 'Origin' },
        ];
    }

    callback(null, response);
};
See the full answer and more details here: https://serverfault.com/a/856948/46223

Amazon S3 CORS request fails for uploaded files

I am using Amazon S3 as a backend. I have the bucket correctly configured to allow CORS for anything from my domain. I have tested that it works for regular files (i.e. uploaded via the Amazon AWS console or with the S3 command line tools).
My app also uploads JSON files itself to the S3 bucket. Interestingly, it needs CORS correctly configured for the upload to succeed. It does, and my JSON file is placed into the bucket.
The problem is, when I make a CORS GET request (jquery $.ajax) for these files I previously uploaded, the request fails with the typical message
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Please mind that with any other file in the same bucket, same path, that was not uploaded by the application but from the console or command line tools, the request succeeds.
Why is this happening?
My CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
<CORSRule>
<AllowedOrigin>http://example.com</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
<CORSRule>
<AllowedOrigin>https://example.com</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Somewhere in the jQuery documentation you'll find an option for $.ajax:
jQuery.support.cors = true;
...
$.ajax(
    url,
    {
        crossDomain: true,
        data: {
            sampleData
        },
        success: function() {
            alert('Yeaaahhh')
        },
        error: function(a, b, c) {
            alert('failed');
        },
        type: 'post'
    }
);
But it's better to use an XMLHttpRequest for that. Like:
var xhr = new XMLHttpRequest();
xhr.open('POST', url);
xhr.onreadystatechange = function() {
    // do something
};
xhr.onload = function(event) {
    // do something
};
xhr.onerror = function(event) {
    // do something
};
xhr.send(data);
Greets.