Cannot Access S3 resources with CORS configuration - amazon-s3

Good day,
I am using the following tutorial to create an S3 bucket to store a .csv file that is updated hourly from Google Drive via a Lambda routine:
https://labs.mapbox.com/education/impact-tools/sheetmapper-advanced/#cors-configuration
When I try to access the .csv from its S3 object URL by inserting it into the browser
https://mapbox-sheet-mapper-advanced-bucket.s3.amazonaws.com/SF+Food+Banks.csv
I get the following error
[error screenshot]
The CORS permission given in the tutorial is in XML format:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
I have tried to convert it into JSON format, as it seems the S3 console no longer supports CORS permissions in XML format:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 3000
  }
]
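For reference, the same rule can also be applied outside the console; here is a minimal boto3 sketch (the bucket name is taken from the object URL above, so adjust it if yours differs):
import boto3

s3 = boto3.client("s3")
# Apply the single GET rule from the JSON above to the bucket.
s3.put_bucket_cors(
    Bucket="mapbox-sheet-mapper-advanced-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedHeaders": ["*"],
                "AllowedMethods": ["GET"],
                "AllowedOrigins": ["*"],
                "ExposeHeaders": [],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)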
Any advice/help would be greatly appreciated!

Please make sure that your account permissions allow public access to S3. There are four things that I ran into today while trying to make a public S3 resource (a boto3 sketch of the bucket-level steps follows this list):
1. The account-level Block Public Access setting has to be disabled. (MAKE SURE TO RE-ENABLE IT FOR ANY PRIVATE BUCKETS OR OBJECTS.)
2. The bucket's individual Block Public Access setting has to be disabled (as shown in your tutorial).
3. The bucket ACL must allow read access. You can find this under S3 - Buckets - your_bucket - Permissions - Access control list. Edit this to grant read access.
4. Go to the individual object and ensure that it also has permission to be read by the public.
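Under those assumptions, a rough boto3 sketch of the bucket-level steps 2-4 (step 1, the account-wide setting, lives in the account's S3 settings or the s3control API; bucket and object names are taken from the question and may need adjusting):
import boto3

s3 = boto3.client("s3")
bucket = "mapbox-sheet-mapper-advanced-bucket"  # assumed from the question
key = "SF Food Banks.csv"                       # assumed object key

# 2. Disable the bucket-level Block Public Access settings.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)

# 3. Grant public read on the bucket ACL.
s3.put_bucket_acl(Bucket=bucket, ACL="public-read")

# 4. Grant public read on the individual object.
s3.put_object_acl(Bucket=bucket, Key=key, ACL="public-read")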

Related

Amazon S3 Bucket Setup For Recording Sinch Calls

I have contacted Sinch support and they have informed me that my access key, secret key, and bucket name are ready to go from their point of view.
My next step (I believe) is to configure the Amazon S3 bucket itself.
I'm not sure if this is through
1. the bucket Public Access (I have "Block all Public Access" turned off for this bucket)
2. the CORS configuration
3. the ACL, or
4. the Bucket Control Policy
Any guidance would be greatly appreciated from the Amazon S3 bucket side of things.
I have set my bucket up with the following CORS configuration and would like some insight into whether this is correct for working with Sinch. I don't know if the AllowedOrigin value is correct for Sinch.
I have also pasted my ICE callback response in PHP, in case there are errors there that affect recording.
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>https://www.mywebsite.com</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Below is my ICE response. It does not connect calls when configured this way, but I'm not sure whether that's due to an error in the ICE response structure or in the Amazon bucket setup.
I have been using this documentation link but find it somewhat confusing:
https://developers.sinch.com/docs/voice-rest-api-recording
$ice_response = array(
    "instructions" => array(
        array("name" => "Say", "text" => "Hello listeners", "locale" => "en-US"),
        array(
            "name" => "StartRecording",
            "options" => array(
                "destinationUrl" => "s3://mybucketname/test-file.mp3",
                "credentials" => "accesskey:secretkey:region_code",
                "format" => "mp3",
                "notificationEvents" => true
            )
        ),
        array("name" => "Say", "text" => "Recording started", "locale" => "en-US")
    ),
    "action" => array(
        "name" => "connectConf",
        "conferenceId" => $post["to"]["endpoint"],
        "record" => true
    )
);
echo json_encode($ice_response);

Font requests to CloudFront are "(cancelled)"

I have a CloudFront distribution at static.example.com which has an S3 bucket as its origin. I am using this distribution to store all the artifacts for client code (JavaScript files, stylesheets, images and fonts).
All the requests for JavaScript files, stylesheets and images succeed without any problem; however, requests for font files show the status "(cancelled)" in Google Chrome.
Here is how I request those fonts:
@font-face {
    font-family: "Material Design Icons";
    font-style: normal;
    font-weight: 400;
    src: url(https://static.example.com/5022976817.eot);
    src: url(https://static.example.com/5022976817.eot) format("embedded-opentype"),
        url(https://static.example.com/611c53b432.woff2) format("woff2"),
        url(https://static.example.com/ee55b98c3c.woff) format("woff"),
        url(https://static.example.com/cd8784a162.ttf) format("truetype"),
        url(https://static.example.com/73424aa64e.svg) format("svg");
}
The request for the SVG font file is OK, but the other ones are not.
What have I done wrong? Every file in the S3 bucket has a public-read ACL.
It seems like this is a CORS issue preventing the fonts from being loaded from a different domain.
You need to:
1. Enable CORS on your S3 bucket:
Go to the S3 bucket and, on the Permissions tab, select CORS configuration; add a rule with your AllowedOrigin:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>https://static.example.com</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration>
You can add multiple AllowedOrigin entries:
<AllowedOrigin>https://static.example.com</AllowedOrigin>
<AllowedOrigin>http://static.example.com</AllowedOrigin>
<AllowedOrigin>https://example.com</AllowedOrigin>
<AllowedOrigin>http://otherexample.com</AllowedOrigin>
Or use a wildcard (at most one * is allowed per AllowedOrigin):
<AllowedOrigin>https://*.example.com</AllowedOrigin>
2. Whitelist the appropriate headers on CloudFront so they are passed to S3:
Go to the Behaviors tab on your CloudFront distribution, select Create Behavior, and add the pattern you want:
Path Pattern: 5022976817.eot
Cache Based on Selected Request Headers: Whitelist
Add the following headers to the whitelisted headers:
Access-Control-Request-Headers
Access-Control-Request-Method
Origin
You can test that CORS is working properly with curl:
curl -X GET -H "Origin: https://static.example.com" -I https://static.example.com/5022976817.eot
Everything is ok if you get a response header like:
access-control-allow-origin: https://static.example.com
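The same check can be scripted if curl isn't handy; a small Python sketch using requests (URL and Origin are the example values from above):
import requests

# Send the Origin header a browser would send and inspect the CORS response.
resp = requests.get(
    "https://static.example.com/5022976817.eot",
    headers={"Origin": "https://static.example.com"},
)
print(resp.status_code)
print(resp.headers.get("Access-Control-Allow-Origin"))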

Transfer From S3 to Google Storage - Incorrect Key

I've been trying for the past couple of hours to set up a transfer from S3 to my Google Storage bucket.
The error that I keep getting when creating the transfer is: "Invalid access key. Make sure the access key for your S3 bucket is correct, or set the bucket permissions to Grant Everyone."
Both the access key and the secret are correct, given that they are currently in use in production for S3 full access.
A couple of things to note:
CORS-enabled on S3 bucket
Bucket policy only allows authenticated AWS users to list/view its contents
S3 requires signed URLs for access
Bucket Policy:
{
  "Version": "2008-10-17",
  "Id": "Policy234234234",
  "Statement": [
    {
      "Sid": "Stmt234234",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "s3:AbortMultipartUpload",
        "s3:GetObjectAcl",
        "s3:RestoreObject",
        "s3:GetObjectVersion",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:PutObjectVersionAcl",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:PutObject",
        "s3:GetObjectVersionAcl"
      ],
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Sid": "2",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity xyzmatey"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*"
    },
    {
      "Sid": "3",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::mybucket"
    }
  ]
}
CORS Policy
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>http://www.mywebsite.com</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedHeader>AUTHORIZATION</AllowedHeader>
</CORSRule>
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>HEAD</AllowedMethod>
<AllowedHeader>AUTHORIZATION</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Any idea where I have gone wrong?
EDIT: I've set up the gsutil tool on a Google Compute Engine instance and did a copy with the same AWS keys on the exact same bucket. Worked like a charm.
I'm one of the devs on Transfer Service.
You'll need to add "s3:GetBucketLocation" to your permissions.
It would be preferable if the error you received were more specifically about your ACLs rather than an invalid key, however. I'll look into that.
EDIT: Adding more info to this post. There is documentation which lists this requirement: https://cloud.google.com/storage/transfer/
Here's a quote from the section on "Configuring Access":
"If your source data is an Amazon S3 bucket, then set up an AWS Identity and Access Management (IAM) user so that you give the user the ability to list the Amazon S3 bucket, get the location of the bucket, and read the objects in the bucket." [Emphasis mine.]
EDIT2: Much of the information provided in this answer could be useful for others, so it will remain here, but John's answer actually got to the bottom of OP's issue.
I am an engineer on the Transfer service.
The reason you encountered this problem is that AWS S3 region ap-southeast-1 (Singapore) is not yet supported by the Transfer service, because GCP does not have a networking arrangement with AWS S3 in that region. We can consider supporting that region now, but your transfer will be much slower than in other regions.
On our end, we are making a fix to display a clearer error message.
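If you want to confirm which region a bucket is in before creating a transfer, a quick boto3 sketch (bucket name assumed):
import boto3

s3 = boto3.client("s3")
# Returns e.g. {'LocationConstraint': 'ap-southeast-1'}; None means us-east-1.
print(s3.get_bucket_location(Bucket="mybucket")["LocationConstraint"])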
You can also get the 'Invalid access key' error if you try to transfer a subdirectory rather than the root of an S3 bucket. For example, I tried to transfer s3://my-bucket/my-subdirectory and it kept failing with the invalid access key error, despite my having given Google read permissions to the entire S3 bucket. It turns out the Google transfer service doesn't support transferring subdirectories of an S3 bucket; you must specify the root as the source for the transfer: s3://my-bucket.
Maybe this can help:
First, specify s3_host in your boto config file, i.e., the endpoint URL containing the region (there is no need to specify s3_host if the region is us-east-1, which is the default). E.g.:
vi ~/.boto
s3_host = s3-us-west-1.amazonaws.com
That is it.
Now you can proceed with any one of these commands:
gsutil -m cp -r s3://bucket-name/folder-name gs://Bucket/
gsutil -m cp -r s3://bucket-name/folder-name/specific-file-name gs://Bucket/
gsutil -m cp -r s3://bucket-name/folder-name/ gs://Bucket/*
gsutil -m cp -r s3://bucket-name/folder-name/file-name-Prefix gs://Bucket/**
You can also try rsync.
https://cloud.google.com/storage/docs/gsutil/commands/rsync
I encountered the same problem a couple of minutes ago, and I was easily able to solve it by using an admin access key and secret key.
It worked for me. Just FYI, my S3 bucket was in North Virginia (us-east-1).

Fine Uploader S3: Refused to get unsafe header "ETag"

I'm trying to upload to S3 with jQuery Fine Uploader (v3.9.1) and have enabled debugging. All of the parts of the upload succeed, but then I get the error "Problem asking Amazon to combine the parts!"
I've enabled debug on the console and get the error [Refused to get unsafe header "ETag"] as well as this from Amazon:
Received response status 400 with body:
InvalidPart - One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.
eTvPFvkXEm07T17tvZvFacR4vn95EUTqXyoPvlLh1a6AADlc94v7H9.a2jcmow1pjfN1xcdw_xMx60APpXn6rGwhHYtzE0NT90Bs0IVqrkaFHW75yRl5E4nfO3Od6rWZnull0CD2DC02D0870E61R4Kpfe66IDvL44Jx9Aoicxgh9Frqd4qr8ILWHbu5YhlqGomxIBOZvfkgy4R4VsYS1
It seems your Amazon S3 CORS XML configuration is incorrect. Make sure you add <ExposeHeader>ETag</ExposeHeader> to the <CORSRule> section as detailed below:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<ExposeHeader>ETag</ExposeHeader>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
More information can be found in the documentation on uploading to Amazon S3 and in the official blog post on the same topic.
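For context on why the ETag matters: completing a multipart upload means sending back the ETag of every part, so a browser-based uploader has to be able to read that response header. A rough boto3 sketch of the equivalent server-side flow (bucket and key names are made up):
import boto3

s3 = boto3.client("s3")
bucket, key = "my-bucket", "big-file.bin"  # assumed names

# Start the multipart upload and push one part; S3 returns an ETag per part.
upload = s3.create_multipart_upload(Bucket=bucket, Key=key)
part = s3.upload_part(
    Bucket=bucket, Key=key, PartNumber=1,
    UploadId=upload["UploadId"], Body=b"x" * (5 * 1024 * 1024),
)

# The "combine the parts" step fails with InvalidPart if any ETag is missing
# or wrong -- the same error Fine Uploader hits when CORS hides the header.
s3.complete_multipart_upload(
    Bucket=bucket, Key=key, UploadId=upload["UploadId"],
    MultipartUpload={"Parts": [{"ETag": part["ETag"], "PartNumber": 1}]},
)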

Uploadifive and Amazon s3 - Origin is not allowed by Access-Control-Allow-Origin

I am trying to get Uploadifive (the HTML5 version of Uploadify) to work with Amazon S3. We already have Uploadify working, but a lot of our visitors use Safari without Flash, so we need Uploadifive as well.
I am looking to make a POST but the problem is that the pre-flight OPTIONS request that Uploadifive sends gets a "403 Origin is not allowed by Access-Control-Allow-Origin".
The CORS rules on Amazon are set to allow * origin, so I see no reason for Amazon to refuse the request (and note that it accepts the requests coming from Flash, although I don't know if Flash sends an OPTIONS request before the POST). If I haven't made some big mistake in my settings on Amazon, I assume this has something to do with Uploadifive not being set up for cross-origin requests, but I can find no info on how to check or change this, or even how to change the headers sent on the request.
Has anyone tried using Uploadifive with Amazon S3, and how have you gotten around this problem?
My S3 CORS setting:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>POST</AllowedMethod>
</CORSRule>
Edit:
After testing Chrome with the --disable-web-security flag I stopped getting 403s, so it does seem like it is Uploadifive that is not setting cross-domain headers properly. The question has now become: how do you modify the cross-domain settings of Uploadifive?
Victory!
After banging my head against the wall for a few hours I found two errors.
1) Any headers (beyond the most basic ones) that you want to send to Amazon must be specified in the CORS settings via the AllowedHeader tag. So I changed the POST part of my settings on Amazon to this:
<CORSRule>
  <AllowedOrigin>*</AllowedOrigin>
  <AllowedMethod>POST</AllowedMethod>
  <AllowedHeader>*</AllowedHeader>
</CORSRule>
2) Uploadifive was adding the "file" field first in the formData, but Amazon requires that it be the last field. So I modified the Uploadifive JS to add the file field last. In 1.1.1 this was around line 393, and this is the change:
Before:
// Add the form data
formData.append(settings.fileObjName, file);
// Add the rest of the formData
for (var i in settings.formData) {
    formData.append(i, settings.formData[i]);
}
After:
// Add the rest of the formData
for (var i in settings.formData) {
    formData.append(i, settings.formData[i]);
}
// Add the form data
formData.append(settings.fileObjName, file);
This solved the issue for me. There might still be some work to do in order for Uploadify to understand the response, but the upload itself now works and returns a 201 Created as it should.