I have contacted Sinch support and they have informed me that my access key, secret key, and bucket name are ready to go from their point of view.
My next step (I believe) is to configure the Amazon S3 bucket itself.
I'm not sure if this is done through:
1. the bucket's Public Access settings (I have "Block all public access" turned off for this bucket),
2. the CORS configuration,
3. the ACL, or
4. the bucket policy.
Any guidance on the Amazon S3 side of things would be greatly appreciated.
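For what it's worth, my understanding is that because Sinch uploads the recording server-side using the access key and secret key (rather than from a browser), the control that matters most is probably the IAM policy attached to that access key. A sketch of what I would expect it to need (the bucket name is from above; the statement ID is arbitrary):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowSinchRecordingUpload",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::mybucketname/*"
    }
  ]
}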
I have set my bucket up with the following CORS configuration and would like some insight into whether it is correct for working with Sinch; in particular, I don't know whether the AllowedOrigin value is right for Sinch. I have also pasted my ICE callback response (in PHP) below, in case the recording errors originate there.
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>https://www.mywebsite.com</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Below is my ICE response. Calls do not connect when I have it configured this way, but I'm not sure whether that's due to an error in the ICE response structure or in the Amazon bucket setup.
I have been using this documentation link but find it somewhat confusing:
https://developers.sinch.com/docs/voice-rest-api-recording
$ice_response = array(
    "instructions" => array(
        // Greet the caller
        array("name" => "Say", "text" => "Hello listeners", "locale" => "en-US"),
        // Start recording the call to the S3 bucket
        array(
            "name" => "StartRecording",
            "options" => array(
                "destinationUrl" => "s3://mybucketname/test-file.mp3",
                "credentials" => "accesskey:secretkey:region_code",
                "format" => "mp3",
                "notificationEvents" => true
            )
        ),
        // Announce that recording has begun
        array("name" => "Say", "text" => "Recording started", "locale" => "en-US")
    ),
    // Connect the caller into a conference
    "action" => array(
        "name" => "connectConf",
        "conferenceId" => $post["to"]["endpoint"],
        "record" => true
    )
);
echo json_encode($ice_response);
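For reference, once the array syntax is valid, json_encode should produce JSON shaped like this (the conferenceId value comes from $post["to"]["endpoint"] at runtime):

{
  "instructions": [
    {"name": "Say", "text": "Hello listeners", "locale": "en-US"},
    {
      "name": "StartRecording",
      "options": {
        "destinationUrl": "s3://mybucketname/test-file.mp3",
        "credentials": "accesskey:secretkey:region_code",
        "format": "mp3",
        "notificationEvents": true
      }
    },
    {"name": "Say", "text": "Recording started", "locale": "en-US"}
  ],
  "action": {
    "name": "connectConf",
    "conferenceId": "...",
    "record": true
  }
}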
Good day,
I am using the following tutorial to create an S3 bucket to store a .csv file that is updated hourly from Google Drive via a Lambda routine:
https://labs.mapbox.com/education/impact-tools/sheetmapper-advanced/#cors-configuration
When I try to access the .csv from its S3 object URL by entering it in the browser:
https://mapbox-sheet-mapper-advanced-bucket.s3.amazonaws.com/SF+Food+Banks.csv
I get the following error:
[error screenshot]
The CORS permission given in the tutorial is in XML format:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
I have tried to convert it into JSON format, as it seems the S3 console no longer supports CORS permissions in XML format:
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"GET"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": [],
"MaxAgeSeconds": 3000
}
]
Any advice/help would be greatly appreciated!
Please make sure that your account permissions allow public access to S3. There are four things I ran into today while trying to make an S3 resource public (a rough SDK sketch of the scriptable steps follows the list):
1. The account-level "Block public access" setting has to be disabled. (MAKE SURE TO ENABLE IT FOR ANY PRIVATE BUCKETS OR OBJECTS.)
2. The bucket's own "Block public access" setting has to be disabled (as shown in your tutorial).
3. The ACL must allow read access. You can find this under S3 > Buckets > your_bucket > Permissions > Access control list. Edit this for read access.
4. Go to the individual object and ensure that it also has permission to be read by the public.
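If you prefer to script steps 2 through 4, something like the following with the AWS SDK for PHP (v3) should work. This is only a sketch: the bucket and object names are taken from your question, the region is a guess, and step 1 (the account-level setting) still has to be changed in the console or via the S3 Control API.

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Region is an assumption; use the region your bucket actually lives in.
$s3 = new S3Client(['region' => 'us-east-1', 'version' => '2006-03-01']);
$bucket = 'mapbox-sheet-mapper-advanced-bucket';

// Step 2: remove the bucket-level public access block
$s3->deletePublicAccessBlock(['Bucket' => $bucket]);

// Step 3: grant public read on the bucket ACL
$s3->putBucketAcl(['Bucket' => $bucket, 'ACL' => 'public-read']);

// Step 4: grant public read on the individual object
// (the "+" in the object URL is an encoded space)
$s3->putObjectAcl([
    'Bucket' => $bucket,
    'Key'    => 'SF Food Banks.csv',
    'ACL'    => 'public-read',
]);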
I have seen the answers to similar questions, but I am wondering what the best way is, in 2017, to configure CORS for S3/CloudFront if I would like to restrict legitimate access to *.domain.tld. The JavaScript is loaded from CloudFront and renders a web app that makes Ajax requests to api.domain.tld.
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*.domain.tld</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedMethod>OPTIONS</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
Is there anything else I could add to improve on the CORS settings?
The following are the general rules for making a CORS configuration:
1) A valid CORS configuration consists of 0 to 100 CORS rules.
2) Each rule must include at least one origin.
3) An origin may contain at most one wildcard (*).
4) Each rule must include at least one method.
5) The supported methods are GET, HEAD, PUT, POST, and DELETE.
6) Each rule may contain an identifying string of up to 255 characters.
7) Each rule may specify zero or more allowed request headers (which the client may include in the request).
8) Each rule may specify zero or more exposed response headers (which are sent back from the server to the client).
9) Each rule may specify a cache validity time of zero or more seconds. If not included, the client supplies its own default.
I recently worked on a JS/CloudFront project, and here is my CORS configuration.
<CORSConfiguration>
  <CORSRule>
    <ID>example.com: Allow PUT &amp; POST with AWS S3 JS SDK</ID>
    <AllowedOrigin>https://www.example.com</AllowedOrigin>
    <AllowedOrigin>http://www.example.com</AllowedOrigin>
    <AllowedOrigin>https://example.com</AllowedOrigin>
    <AllowedOrigin>http://example.com</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedHeader>Origin</AllowedHeader>
    <AllowedHeader>Content-Length</AllowedHeader>
    <AllowedHeader>Content-Type</AllowedHeader>
    <AllowedHeader>Content-MD5</AllowedHeader>
    <AllowedHeader>X-Amz-User-Agent</AllowedHeader>
    <AllowedHeader>X-Amz-Date</AllowedHeader>
    <AllowedHeader>Authorization</AllowedHeader>
    <ExposeHeader>ETag</ExposeHeader>
    <MaxAgeSeconds>1800</MaxAgeSeconds>
  </CORSRule>
  <CORSRule>
    <ID>example.com: Allow GET with AWS S3 JS SDK</ID>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <ExposeHeader>ETag</ExposeHeader>
    <MaxAgeSeconds>1800</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
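If you manage buckets from code rather than the console, the same kind of rule can also be applied with the AWS SDK for PHP (v3). A rough sketch with placeholder names; note that PutBucketCors replaces the entire existing CORS configuration, so every rule you need must go into the one call:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client(['region' => 'us-east-1', 'version' => '2006-03-01']);

// Mirrors the first <CORSRule> above: PUT/POST uploads from the site itself
$s3->putBucketCors([
    'Bucket' => 'example-bucket',
    'CORSConfiguration' => [
        'CORSRules' => [
            [
                'ID'             => 'example.com: Allow PUT & POST with AWS S3 JS SDK',
                'AllowedOrigins' => ['https://www.example.com', 'https://example.com'],
                'AllowedMethods' => ['PUT', 'POST'],
                'AllowedHeaders' => ['Origin', 'Content-Type', 'Authorization'],
                'ExposeHeaders'  => ['ETag'],
                'MaxAgeSeconds'  => 1800,
            ],
        ],
    ],
]);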
You can find more details here.
Thanks
I recently implemented AWS Signature Version 4 using the REST API, verified by an extensive regression test.
The problem I'm experiencing is that the regression test succeeds when run against a bucket residing in the eu-central-1 region, but consistently fails with an Access Denied error for buckets residing in us-east-1 or us-west-2.
Here are snippets from successful and failed attempts.
eu-central-1 : successful
Canonical request:
GET
/

host:s3.eu-central-1.amazonaws.com
x-amz-content-sha256:e3b0...b855
x-amz-date:Wed, 25 May 2016 03:13:21 +0000

host;x-amz-content-sha256;x-amz-date
e3b0...b855
Signed string:
AWS4-HMAC-SHA256
Credential=AKIAJZN7UY6XHIZPWIKQ/20160525/eu-central-1/s3/aws4_request,
SignedHeaders=host;x-amz-content-sha256;x-amz-date,
Signature=cf5f...4dc8
Server response:
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult
xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Owner>
<ID>100a...a575</ID>
</Owner>
<Buckets>
<Bucket>
. . .
</Bucket>
</Buckets>
</ListAllMyBucketsResult>
us-east-1 : failed
Canonical request:
GET
/

host:s3.us-east-1.amazonaws.com
x-amz-content-sha256:e3b0...b855
x-amz-date:Wed, 25 May 2016 03:02:27 +0000

host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Signed string:
AWS4-HMAC-SHA256
Credential=AKIAJZN7UY6XHIZPWIKQ/20160525/us-east-1/s3/aws4_request,
SignedHeaders=host;x-amz-content-sha256;x-amz-date,
Signature=01e97...4d00
Server response:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>92EEF2A86ECA88EF</RequestId>
<HostId>i3wTU6OzBrlX89xR4KnnezBx1Tb2IGN2wtgPJMRtKLjHxF/B6VdCQqPz1279J7e5</HostId>
</Error>
us-west-2 : failed
Canonical request:
GET
/

host:s3.us-west-2.amazonaws.com
x-amz-content-sha256:e3b0...b855
x-amz-date:Wed, 25 May 2016 07:04:47 +0000

host;x-amz-content-sha256;x-amz-date
e3b0...b855
Signed string:
AWS4-HMAC-SHA256
Credential=AKIAJZN7UY6XHIZPWIKQ/20160525/us-west-2/s3/aws4_request,
SignedHeaders=host;x-amz-content-sha256;x-amz-date,
Signature=cf70...36b9
Server response:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>DB143DBF0F316EB8</RequestId>
<HostId>5hWJ0AHM466QcT+BK4UaEFpqXFNaJFEuAPlN/ZZPBhL+NDYBoGaySRkXQ3BRdyfy9PBDuSb0oHA=</HostId>
</Error>
Attempts made to date include:
I found references (like here) saying that when using US Standard (i.e., us-east-1), the REST endpoint should not include "us-east-1". I have not yet found this documented officially. I therefore created a us-west-2 bucket, on the theory that its REST endpoint does need to contain "us-west-2", but that also fails.
I searched on Google and Stack Overflow for possible reasons for "Access Denied", which led me to add a bucket policy granting permissions to everyone, to no avail.
The permissions of the EU and US accounts in the AWS console look the same, so no hint there, yet.
I added logging to the buckets in the hope of seeing a failure entry, but nothing is logged until authentication is completed.
Does anyone have an idea why AWS Signature Version 4 authentication consistently succeeds for an eu-central-1 bucket, but consistently fails for us-east-1 and us-west-2 buckets?
Here's your issue.
For unknown reasons,¹ eu-central-1 is an oddball in S3. The REST endpoint works with two variations in hostname: bucket.s3.eu-central-1.amazonaws.com or bucket.s3-eu-central-1.amazonaws.com.
The difference is the dot or dash after s3.
All other regions (as of now) except us-east-1 and ap-northeast-2 (which is just like eu-central-1) work only with the dash after s3, e.g. bucket.s3-us-west-2.amazonaws.com... not with a dot.
And us-east-1 expects either bucket.s3.amazonaws.com or bucket.s3-external-1.amazonaws.com.
And finally, any region will work with just bucket.s3.amazonaws.com within a few minutes after the original creation of a bucket, because the DNS is integrated with the bucket location database and automatically routes requests to the right place, for each bucket.
But note that when you sign the requests, you always use the actual region name in the signing algorithm itself -- not the endpoint -- as you appear to already be doing.
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
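To make the hostname conventions concrete, here is a rough PHP sketch of the rules described above. This reflects the behavior at the time of writing, not an official AWS API, and the list of dot-style regions will likely grow:

<?php
function s3_rest_endpoint($bucket, $region) {
    // us-east-1 (US Standard) uses the legacy global endpoint
    if ($region === 'us-east-1') {
        return "{$bucket}.s3.amazonaws.com"; // or "{$bucket}.s3-external-1.amazonaws.com"
    }
    // eu-central-1 and ap-northeast-2 accept both the dot and the dash form
    if (in_array($region, array('eu-central-1', 'ap-northeast-2'))) {
        return "{$bucket}.s3.{$region}.amazonaws.com";
    }
    // Every other region (as of this writing) works only with the dash form
    return "{$bucket}.s3-{$region}.amazonaws.com";
}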
¹I'll speculate that this convention is actually the "new normal" for new regions -- it's more consistent with other AWS services. S3 is one of the oldest, so it makes sense that legacy design decisions are more likely to exist, as seems to be the case, here.
I'm trying to upload to S3 with the jQuery fineuploader (v 3.9.1) and have enabled debugging. All of the parts of the upload succeed but then I get an error "Problem asking Amazon to combine the parts!"
I've enabled debug on the console and get the errors [Refused to get unsafe header "ETag"] as well as this from Amazon:
Received response status 400 with body:
<Error>
  <Code>InvalidPart</Code>
  <Message>One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag.</Message>
  eTvPFvkXEm07T17tvZvFacR4vn95EUTqXyoPvlLh1a6AADlc94v7H9.a2jcmow1pjfN1xcdw_xMx60APpXn6rGwhHYtzE0NT90Bs0IVqrkaFHW75yRl5E4nfO3Od6rWZnull0CD2DC02D0870E61R4Kpfe66IDvL44Jx9Aoicxgh9Frqd4qr8ILWHbu5YhlqGomxIBOZvfkgy4R4VsYS1
</Error>
It seems your Amazon S3 CORS XML configuration is incorrect. Make sure you add <ExposeHeader>ETag</ExposeHeader> to the <CORSRule> section, as detailed below. Without it, the browser refuses to expose the ETag response header to JavaScript (hence the "Refused to get unsafe header" message), and the uploader needs each part's ETag in order to ask S3 to combine the parts:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>DELETE</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <ExposeHeader>ETag</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
More information is available in the Amazon S3 documentation and in the official blog post on the same topic.
I am trying to get Uploadifive (the HTML5 version of Uploadify) to work with Amazon S3. We already have Uploadify working, but a lot of our visitors use Safari without Flash, so we need Uploadifive as well.
I am looking to make a POST but the problem is that the pre-flight OPTIONS request that Uploadifive sends gets a "403 Origin is not allowed by Access-Control-Allow-Origin".
The CORS rules on Amazon are set to allow * origin, so I see no reason for Amazon to refuse the request (and note that it accepts the requests coming from Flash, although I don't know whether Flash sends an OPTIONS request before the POST). Unless I have made some big mistake in my settings on Amazon, I assume this has something to do with Uploadifive not being set up for cross-origin requests, but I can find no info on how to check or change this, or even how to change the headers sent on the request.
Has anyone tried using Uploadifive with Amazon S3, and how have you gotten around this problem?
My S3 CORS setting:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>Authorization</AllowedHeader>
  </CORSRule>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>POST</AllowedMethod>
  </CORSRule>
</CORSConfiguration>
Edit:
After testing Chrome with the --disable-web-security flag I stopped getting 403s, so it does seem like it is Uploadifive that is not setting the cross-domain headers properly. The question has now become: how do you modify the cross-domain settings of Uploadifive?
Victory!
After banging my head against the wall for a few hours I found two errors.
1) Any headers (beyond the most basic ones) that you want to send to Amazon must be specified in the CORS settings via the AllowedHeader tag. So I changed the POST part of my settings on Amazon to this:
<CORSRule>
  <AllowedOrigin>*</AllowedOrigin>
  <AllowedMethod>POST</AllowedMethod>
  <AllowedHeader>*</AllowedHeader>
</CORSRule>
2) Uploadifive was appending the "file" field first in the formData, but Amazon requires that it be the last field (S3 ignores any fields that come after the file field in a POST upload, so the file must come last). So I modified the Uploadifive JS to append the file field last. In 1.1.1 this was around line 393, and this is the change:
Before:
// Add the form data
formData.append(settings.fileObjName, file);
// Add the rest of the formData
for (var i in settings.formData) {
    formData.append(i, settings.formData[i]);
}
After:
// Add the rest of the formData
for (var i in settings.formData) {
    formData.append(i, settings.formData[i]);
}
// Add the form data
formData.append(settings.fileObjName, file);
This solved the issue for me. There might still be some work to do in order for Uploadifive to understand the response, but the upload itself now works and returns a 201 Created as it should.