Unable to get the location of the bucket in S3

I am sending the following request
GET http://bucketname.s3.amazonaws.com/?location
Host: bucketname.s3.amazonaws.com
x-amz-date: 20170531T082529Z
Authorization: AWS
I am getting the following error
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>?location</Key>
<RequestId>7E63980AAE</RequestId>
<HostId>HostID</HostId>
</Error>
It seems like it is trying to get an object named ?location. But as per the Amazon API page http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGETlocation.html
it should return the location of the bucket.
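For comparison, here is a minimal boto3 sketch (my own illustration, not part of the question): the SDK appends the ?location subresource and signs the request for you, so the query string cannot be mistaken for an object key. The bucket name is a placeholder.

import boto3

# GetBucketLocation via the SDK; boto3 builds the ?location subresource and the
# SigV4 signature, so S3 does not interpret "?location" as a key.
s3 = boto3.client("s3")
location = s3.get_bucket_location(Bucket="bucketname")["LocationConstraint"]
print(location or "us-east-1")  # us-east-1 is reported as an empty LocationConstraint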

Related

Postman call to get S3 Bucket Location Fails for regions other than "us-east-1"

In Postman, I am using the below GET request to get the location of my S3 bucket.
Request Type : GET
API : https://mybucketname.s3.amazonaws.com/?location
Authorization : I am choosing AWS Signature, passing the access and secret keys, and setting the Service Name to s3.
The problem is that, by default, the AWS Signature authorization takes "us-east-1" as the region and creates the signature from it.
So for buckets in the us-east-1 region, this call works well.
But when I use this request to get the location of buckets in regions other than "us-east-1", the call fails as below.
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AuthorizationHeaderMalformed</Code>
<Message>The authorization header is malformed; the region 'us-east-1'
is wrong; expecting 'us-west-2'</Message>
<Region>us-west-2</Region>
....
....
</Error>
Can anyone suggest a solution, if there is one?
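One workaround, sketched here under my own assumptions rather than taken from the question, is to discover the bucket's region first and then sign against that region. A HEAD on the bucket returns an x-amz-bucket-region header even when the request is signed for the wrong region:

import boto3
from botocore.exceptions import ClientError

def bucket_region(bucket: str) -> str:
    """Best-effort lookup of the region S3 reports for a bucket."""
    s3 = boto3.client("s3", region_name="us-east-1")
    try:
        resp = s3.head_bucket(Bucket=bucket)
    except ClientError as err:
        # 301/403 error responses usually still carry the region header
        resp = err.response
    return resp["ResponseMetadata"]["HTTPHeaders"]["x-amz-bucket-region"]

print(bucket_region("mybucketname"))  # e.g. "us-west-2"; sign with this region

In Postman itself, the equivalent fix is to set the region field of the AWS Signature settings to the bucket's actual region instead of relying on the default.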

AWS signature v4 authentication succeeds for EU bucket but fails for US bucket?

I recently implemented AWS Signature Version 4 using the REST API, and verified it with an extensive regression test that works perfectly.
The problem I'm experiencing is that the regression test succeeds when run against a bucket residing in the eu-central-1 region, but consistently fails with an Access Denied error message for buckets residing in us-east-1 or us-west-2.
Here are snippets from successful and failed attempts.
eu-central-1 : successful
HTTP request:
GET
/
host:s3.eu-central-1.amazonaws.com
x-amz-content-sha256:e3b0...b855
x-amz-date:Wed, 25 May 2016 03:13:21 +0000
host;x-amz-content-sha256;x-amz-date
e3b0...b855
Signed string:
AWS4-HMAC-SHA256
Credential=AKIAJZN7UY6XHIZPWIKQ/20160525/eu-central-1/s3/aws4_request,
SignedHeaders=host;x-amz-content-sha256;x-amz-date,
Signature=cf5f...4dc8
Server response:
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult
xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Owner>
<ID>100a...a575</ID>
</Owner>
<Buckets>
<Bucket>
. . .
</Bucket>
</Buckets>
</ListAllMyBucketsResult>
us-east-1 : failed
HTTP request:
GET
/
host:s3.us-east-1.amazonaws.com
x-amz-content-sha256:e3b0...b855
x-amz-date:Wed, 25 May 2016 03:02:27 +0000
host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Signed string:
AWS4-HMAC-SHA256
Credential=AKIAJZN7UY6XHIZPWIKQ/20160525/us-east-1/s3/aws4_request,
SignedHeaders=host;x-amz-content-sha256;x-amz-date,
Signature=01e97...4d00
Server response:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>92EEF2A86ECA88EF</RequestId>
<HostId>i3wTU6OzBrlX89xR4KnnezBx1Tb2IGN2wtgPJMRtKLjHxF/B6VdCQqPz1279J7e5</HostId>
</Error>
us-west-2 : failed
HTTP request:
GET
/
host:s3.us-west-2.amazonaws.com
x-amz-content-sha256:e3b0...b855
x-amz-date:Wed, 25 May 2016 07:04:47 +0000
host;x-amz-content-sha256;x-amz-date
e3b0...b855
Signed string:
AWS4-HMAC-SHA256
Credential=AKIAJZN7UY6XHIZPWIKQ/20160525/us-west-2/s3/aws4_request,
SignedHeaders=host;x-amz-content-sha256;x-amz-date,
Signature=cf70...36b9
Server response:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>DB143DBF0F316EB8</RequestId>
<HostId>5hWJ0AHM466QcT+BK4UaEFpqXFNaJFEuAPlN/ZZPBhL+NDYBoGaySRkXQ3BRdyfy9PBDuSb0oHA=</HostId>
</Error>
Attempts made to date include:
I found references (like here) saying that when using US Standard (i.e., us-east-1), the REST endpoint should not include "us-east-1". I have not yet found this stated officially. I therefore created a us-west-2 bucket, in the hope that the REST endpoint does need to contain "us-west-2", but that also fails.
I searched on Google and StackOverflow for possible reasons for "Access Denied", which led me to adding a bucket policy that gives permissions to all -- to no avail.
The permissions of the EU and US accounts in the AWS console look the same, so no hint there, yet.
I added logging to the buckets in the hope of seeing a failure entry, but nothing is logged until authentication is completed.
Does anyone have an idea why AWS v4 authentication consistently succeeds for an eu-central-1 bucket, but just as consistently fails for us-east-1 and us-west-2 buckets?
Here's your issue.
For unknown reasons,¹ eu-central-1 is an oddball in S3. The REST endpoint works with two variations in hostname: bucket.s3.eu-central-1.amazonaws.com or bucket.s3-eu-central-1.amazonaws.com.
The difference is the dot or dash after s3.
All other regions (as of now) except us-east-1 and ap-northeast-2 (which is just like eu-central-1) work only with the dash after s3, e.g. bucket.s3-us-west-2.amazonaws.com... not with a dot.
And us-east-1 expects either bucket.s3.amazonaws.com or bucket.s3-external-1.amazonaws.com.
And finally, any region will work with just bucket.s3.amazonaws.com within a few minutes after the original creation of a bucket, because the DNS is integrated with the bucket location database and automatically routes requests to the right place, for each bucket.
But note that when you sign the requests, you always use the actual region name in the signing algorithm itself -- not the endpoint -- as you appear to already be doing.
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
¹I'll speculate that this convention is actually the "new normal" for new regions -- it's more consistent with other AWS services. S3 is one of the oldest, so it makes sense that legacy design decisions are more likely to exist, as seems to be the case, here.
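To make the signing point concrete, here is a small boto3 sketch (my own illustration, not from the answer above): the endpoint hostname and the signing region are configured independently, and the signature must always carry the bucket's real region name, whichever hostname style you pick. The bucket name is a placeholder.

import boto3

# Hypothetical bucket in us-west-2: the dash-style endpoint is used for the
# connection, while region_name drives the SigV4 credential scope.
s3 = boto3.client(
    "s3",
    region_name="us-west-2",                            # used in the signature
    endpoint_url="https://s3-us-west-2.amazonaws.com",  # dash after "s3"
)
print(s3.list_objects_v2(Bucket="my-bucket", MaxKeys=1))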

403 error when using fineUploader to upload directly to S3 with no server side code

I followed this tutorial http://blog.fineuploader.com/2014/01/15/uploads-without-any-server-code/ and am making good progress, but I'm stumped again.
Here is the CORS policy for my bucket (I'm assuming this is where the error is):
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>https://www.xxxdomainxxx.fr</AllowedOrigin>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<ExposeHeader>ETag</ExposeHeader>
<AllowedHeader>content-type</AllowedHeader>
<AllowedHeader>origin</AllowedHeader>
<AllowedHeader>x-amz-acl</AllowedHeader>
<AllowedHeader>x-amz-meta-qqfilename</AllowedHeader>
<AllowedHeader>x-amz-date</AllowedHeader>
<AllowedHeader>authorization</AllowedHeader>
<AllowedHeader>x-amz-security-token</AllowedHeader>
</CORSRule>
</CORSConfiguration>
The upload goes fine (I see the progress bar) and it goes through the entire file, but at the end of the upload, the UI switches to "processing" and then I get an "access denied" message and no file is in the bucket. The console printed:
Failed to load resource: the server responded with a status of 403 (Forbidden) (xxxxxxx.s3.amazonaws.com, line 0)
I'm getting a 403 error back from AWS and I'm not sure how to debug this. See anything missing?
Thanks for any pointers.
Update 1
I figured I'd try with a dumbed-down version of the CORS file to start with:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
But I'm getting the same error.
Here is the failing POST request:
key test test/c8491b98-284a-4d5d-90d4-f6ec7151bc1d.diff
AWSAccessKeyId XXXXXXXXXXX
success_action_status 200
x-amz-security-token XXXXXXXX
acl public-read
x-amz-meta-qqfilename opentok.diff
policy XXXXXX
signature XXXXXXX
file opentok.diff
And the response
HTTP/1.1 403 Forbidden
Access-Control-Allow-Origin *
Access-Control-Allow-Methods GET, POST, PUT
Access-Control-Max-Age 3000
Vary Origin, Access-Control-Request-Headers, Access-Control-Request-Method
x-amz-request-id 8B619A5A96A954F6
x-amz-id-2 ZUPdtFRIdSKDK0ealKUKUCtHDW3GkNU5ZVZPDxlXPi/9J2oZiNcV3TltougJuhXnzY/BlbZrc1c=
Content-Type application/xml
Transfer-Encoding chunked
Date Wed, 07 Oct 2015 08:11:24 GMT
Server AmazonS3
The issue was not with the CORS configuration, but with the policy used for the bucket. The resource string was
arn:aws:s3:::bucketName
and needed to be
arn:aws:s3:::bucketName/*
The devil is in the details as usual...
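For reference, here is a sketch of the corrected policy applied with boto3. The bucket name, statement id, and action list are placeholders of mine; the important detail from the answer is the /* suffix on the Resource ARN, which makes the statement cover the objects rather than just the bucket itself.

import json
import boto3

bucket = "bucketName"  # placeholder
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowUploads",                   # hypothetical statement id
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:PutObject"],
        "Resource": f"arn:aws:s3:::{bucket}/*",  # note the trailing /*
    }],
}
boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))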

How to export file names and urls from s3 to csv?

Is it possible to export all the files and their URLs from an S3 bucket to csv file?
I tried using this tool but it doesn't export URLs.
It is certainly possible using the Get Bucket command from the REST API, but you'll need to parse the response programmatically and format the object names and URLs into your CSV however you like. Since there is an API available, most tools out there (like the one you've found) are not super rich with features.
GET / HTTP/1.1
Host: BucketName.s3.amazonaws.com
Date: Wed, 12 Oct 2009 17:50:00 GMT
Authorization: AWS AKIAIOSFODNN7EXAMPLE:xQE0diMbLRepdf3YB+FIEXAMPLE=
Content-Type: text/plain
Response example:
<?xml version="1.0" encoding="UTF-8"?>
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Name>bucket</Name>
<Prefix/>
<Marker/>
<MaxKeys>1000</MaxKeys>
<IsTruncated>false</IsTruncated>
<Contents>
<Key>my-image.jpg</Key>
<LastModified>2009-10-12T17:50:30.000Z</LastModified>
<ETag>"fba9dede5f27731c9771645a39863328"</ETag>
<Size>434234</Size>
<StorageClass>STANDARD</StorageClass>
<Owner>
<ID>75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
<DisplayName>mtd@amazon.com</DisplayName>
</Owner>
</Contents>
<Contents>
<Key>my-third-image.jpg</Key>
<LastModified>2009-10-12T17:50:30.000Z</LastModified>
<ETag>"1b2cf535f27731c974343645a3985328"</ETag>
<Size>64994</Size>
<StorageClass>STANDARD</StorageClass>
<Owner>
<ID>75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a</ID>
<DisplayName>mtd@amazon.com</DisplayName>
</Owner>
</Contents> </ListBucketResult>
Thought this might help: I wrote a quick JS script to export the objects in an S3 bucket to a JSON file instead of XML. Hope it helps!
https://github.com/springerkc/s3-bucket-exporter

What exactly will Amazon return if asked to list all the buckets?

Since I have to parse it, I need to know how the returned data will be structured.
The GET operation on the Service endpoint (s3.amazonaws.com) returns a list of all of the buckets owned by the authenticated sender of the request.
Sample Request:
GET / HTTP/1.1
Host: s3.amazonaws.com
Date: Wed, 01 Mar 2009 12:00:00 GMT
Authorization: AWS 15B4D3461F177624206A:xQE0diMbLRepdf3YB+FIEXAMPLE=
Sample Response:
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://doc.s3.amazonaws.com/2006-03-01">
<Owner>
<ID>bcaf1ffd86f461ca5fb16fd081034f</ID>
<DisplayName>webfile</DisplayName>
</Owner>
<Buckets>
<Bucket>
<Name>quotes</Name>
<CreationDate>2006-02-03T16:45:09.000Z</CreationDate>
</Bucket>
<Bucket>
<Name>samples</Name>
<CreationDate>2006-02-03T16:41:58.000Z</CreationDate>
</Bucket>
</Buckets>
</ListAllMyBucketsResult>
Source: S3 REST API » Operations on the Service » GET Service
The S3 API is described here.
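If you would rather not parse the XML yourself, a minimal boto3 sketch (my addition, not part of the answer) returns the same ListAllMyBucketsResult data already parsed into a dict:

import boto3

# GET Service via the SDK; Owner and Buckets mirror the XML structure above.
resp = boto3.client("s3").list_buckets()
print(resp["Owner"]["DisplayName"])
for b in resp["Buckets"]:
    print(b["Name"], b["CreationDate"])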