AWS signature v4 authentication succeeds for EU bucket but fails for US bucket?

I recently implemented AWS Signature Version 4 signing for the REST API, and an extensive regression test verifies that it works.
The problem I'm experiencing is that the regression test succeeds when run against a bucket residing in the eu-central-1 region, but consistently fails with an Access Denied error message for buckets residing in us-east-1 or us-west-2.
Here are snippets from successful and failed attempts.
eu-central-1 : successful
HTTP request:
GET
/
host:s3.eu-central-1.amazonaws.com
x-amz-content-sha256:e3b0...b855
x-amz-date:Wed, 25 May 2016 03:13:21 +0000
host;x-amz-content-sha256;x-amz-date
e3b0...b855
Signed string:
AWS4-HMAC-SHA256
Credential=AKIAJZN7UY6XHIZPWIKQ/20160525/eu-central-1/s3/aws4_request,
SignedHeaders=host;x-amz-content-sha256;x-amz-date,
Signature=cf5f...4dc8
Server response:
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult
xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Owner>
<ID>100a...a575</ID>
</Owner>
<Buckets>
<Bucket>
. . .
</Bucket>
</Buckets>
</ListAllMyBucketsResult>
us-east-1 : failed
HTTP request:
GET
/
host:s3.us-east-1.amazonaws.com
x-amz-content-sha256:e3b0...b855
x-amz-date:Wed, 25 May 2016 03:02:27 +0000
host;x-amz-content-sha256;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
Signed string:
AWS4-HMAC-SHA256
Credential=AKIAJZN7UY6XHIZPWIKQ/20160525/us-east-1/s3/aws4_request,
SignedHeaders=host;x-amz-content-sha256;x-amz-date,
Signature=01e97...4d00
Server response:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>92EEF2A86ECA88EF</RequestId>
<HostId>i3wTU6OzBrlX89xR4KnnezBx1Tb2IGN2wtgPJMRtKLjHxF/B6VdCQqPz1279J7e5</HostId>
</Error>
us-west-2 : failed
HTTP request:
GET
/
host:s3.us-west-2.amazonaws.com
x-amz-content-sha256:e3b0...b855
x-amz-date:Wed, 25 May 2016 07:04:47 +0000
host;x-amz-content-sha256;x-amz-date
e3b0...b855
Signed string:
AWS4-HMAC-SHA256
Credential=AKIAJZN7UY6XHIZPWIKQ/20160525/us-west-2/s3/aws4_request,
SignedHeaders=host;x-amz-content-sha256;x-amz-date,
Signature=cf70...36b9
Server response:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>DB143DBF0F316EB8</RequestId>
<HostId>5hWJ0AHM466QcT+BK4UaEFpqXFNaJFEuAPlN/ZZPBhL+NDYBoGaySRkXQ3BRdyfy9PBDuSb0oHA=</HostId>
</Error>
Attempts made to date include:
I found references (like here) suggesting that when using US Standard (i.e., us-east-1) the REST endpoint should not include "us-east-1", though I have not yet found this stated officially. I therefore created a us-west-2 bucket, in the hope that the REST endpoint needs to contain "us-west-2", but that also fails.
I searched on Google and StackOverflow for possible reasons for "Access Denied", which led me to adding a bucket policy that gives permissions to all -- to no avail.
The permissions of the EU and US accounts in the AWS console look the same, so no hint there, yet.
I added logging to the buckets in the hope of seeing a failure entry, but nothing is logged until authentication is completed.
Does anyone have an idea why AWS Signature Version 4 authentication consistently succeeds for an eu-central-1 bucket, but just as consistently fails for us-east-1 and us-west-2 buckets?

Here's your issue.
For unknown reasons,¹ eu-central-1 is an oddball in S3. The REST endpoint works with two variations in hostname: bucket.s3.eu-central-1.amazonaws.com or bucket.s3-eu-central-1.amazonaws.com.
The difference is the dot or dash after s3.
All other regions (as of now) except us-east-1 and ap-northeast-2 (which is just like eu-central-1) work only with the dash after s3, e.g. bucket.s3-us-west-2.amazonaws.com... not with a dot.
And us-east-1 expects either bucket.s3.amazonaws.com or bucket.s3-external-1.amazonaws.com.
And finally, any region will work with just bucket.s3.amazonaws.com beginning a few minutes after a bucket is first created, because the DNS is integrated with the bucket location database and automatically routes requests to the right place for each bucket.
But note that when you sign the requests, you always use the actual region name in the signing algorithm itself -- not the endpoint -- as you appear to already be doing.
http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
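To make that distinction concrete, here's a rough Python sketch (the helper names, example key, and bucket are mine, not from your code) of deriving the SigV4 signing key from the real region name while picking the endpoint hostname according to the dash/dot rules above:

import hashlib, hmac

def signing_key(secret_key, date_stamp, region, service="s3"):
    # SigV4 key derivation: the actual region name goes into the key (and the
    # credential scope), no matter how the endpoint hostname is spelled.
    k_date = hmac.new(("AWS4" + secret_key).encode(), date_stamp.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

def endpoint_host(bucket, region):
    # Endpoint spelling per the notes above (S3's legacy naming, as of 2016).
    if region == "us-east-1":
        return bucket + ".s3.amazonaws.com"
    if region in ("eu-central-1", "ap-northeast-2"):
        return bucket + ".s3." + region + ".amazonaws.com"
    return bucket + ".s3-" + region + ".amazonaws.com"

# Example: a us-west-2 request is signed with "us-west-2" in the scope,
# but sent to bucket.s3-us-west-2.amazonaws.com.
key = signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "20160525", "us-west-2")
host = endpoint_host("my-bucket", "us-west-2")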
¹I'll speculate that this convention is actually the "new normal" for new regions -- it's more consistent with other AWS services. S3 is one of the oldest, so it makes sense that legacy design decisions are more likely to exist, as seems to be the case, here.

Related

rclone failing with "AccessControlListNotSupported" on cross-account copy -- AWS CLI Works

Quick Summary now that I think I see the problem:
rclone seems to always send an ACL with a copy request, with a default value of "private". This will fail against a (2022) default AWS bucket, which (correctly) assumes "No ACL". I need a way to suppress sending the ACL in rclone.
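To make that failure mode concrete, here is a minimal boto3 sketch (bucket name and key are placeholders, not my real setup) showing that a bucket whose Object Ownership is the 2022 default of BucketOwnerEnforced rejects a PUT that carries an ACL header, while the same PUT without an ACL succeeds:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
bucket = "SOMEBUCKET"  # placeholder: Object Ownership = BucketOwnerEnforced (ACLs disabled)

# Succeeds: no ACL header is sent; the bucket policy / role permissions decide access.
s3.put_object(Bucket=bucket, Key="bigmovie", Body=b"example bytes")

# Fails with AccessControlListNotSupported: sending ACL="private" (which rclone
# appears to do by default) is rejected because the bucket does not allow ACLs.
try:
    s3.put_object(Bucket=bucket, Key="bigmovie", Body=b"example bytes", ACL="private")
except ClientError as err:
    print(err.response["Error"]["Code"])  # AccessControlListNotSupported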
Detail
I assume an IAM role and attempt an rclone copy from a data-center Linux box to a default-options, private, no-ACL bucket in the same account as the role I assume. It succeeds.
I then configure a default-options, private, no-ACL bucket in a different account from the role I assume. I attach a bucket policy to the cross-account bucket that trusts the role I assume. The role I assume has global permissions to write to S3 buckets anywhere.
I test the cross-account bucket policy by using the AWS CLI to copy the same linux box source file to the cross-account bucket. Copy works fine with AWS CLI, suggesting that the connection and access permissions to the cross account bucket are fine. DataSync (another AWS service) works fine too.
Problem: an rclone copy fails with the AccessControlListNotSupported error below.
status code: 400, request id: XXXX, host id: ZZZZ
2022/08/26 16:47:29 ERROR : bigmovie: Failed to copy: AccessControlListNotSupported: The bucket does not allow ACLs
status code: 400, request id: XXXX, host id: YYYY
And of course it is true that the bucket does not support ACL ... which is the desired best practice and AWS default for new buckets. However the bucket does support a bucket policy that trusts my assumed role, and that role and bucket policy pair works just fine with the AWS CLI copy across account, but not with the rclone copy.
Given that AWS CLI copies just fine cross account to this bucket, am I missing one of rclone's numerous flags to get the same behaviour? Anyone think of another possible cause?
I tested older, current, and beta rclone versions; all behave the same.
Version Info
os/version: centos 7.9.2009 (64 bit)
os/kernel: 3.10.0-1160.71.1.el7.x86_64 (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.18.5
go/linking: static
go/tags: none
Failing Command
$ rclone copy bigmovie s3-standard:SOMEBUCKET/bigmovie -vv
Failing RClone Config
type = s3
provider = AWS
env_auth = true
region = us-east-1
endpoint = https://bucket.vpce-REDACTED.s3.us-east-1.vpce.amazonaws.com
#server_side_encryption = AES256
storage_class = STANDARD
#bucket_acl = private
#acl = private
Note that I've tested all permutations of the commented-out lines with similar results.
Note that I have tested with and without the private endpoint listed, with the same results for both the AWS CLI and rclone, i.e. the CLI works and rclone fails.
A log from the command with the -vv flag
2022/08/25 17:25:55 DEBUG : Using config file from "PERSONALSTUFF/rclone.conf"
2022/08/25 17:25:55 DEBUG : rclone: Version "v1.55.1" starting with parameters ["/usr/local/rclone/1.55/bin/rclone" "copy" "bigmovie" "s3-standard:SOMEBUCKET" "-vv"]
2022/08/25 17:25:55 DEBUG : Creating backend with remote "bigmovie"
2022/08/25 17:25:55 DEBUG : fs cache: adding new entry for parent of "bigmovie", "MYDIRECTORY/testbed"
2022/08/25 17:25:55 DEBUG : Creating backend with remote "s3-standard:SOMEBUCKET/bigmovie"
2022/08/25 17:25:55 DEBUG : bigmovie: Need to transfer - File not found at Destination
2022/08/25 17:25:55 ERROR : bigmovie: Failed to copy: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessControlListNotSupported</Code><Message>The bucket does not allow ACLs</Message><RequestId>8DW1MQSHEN6A0CFA</RequestId><HostId>d3Rlnx/XezTB7OC79qr4QQuwjgR+h2VYj4LCZWLGTny9YAy985be5HsFgHcqX4azSDhDXefLE+U=</HostId></Error>
2022/08/25 17:25:55 ERROR : Attempt 1/3 failed with 1 errors and: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8"?>
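For what it's worth, here is a small boto3 sketch (bucket name is a placeholder) that reads the bucket's Object Ownership setting; when it reports BucketOwnerEnforced, any ACL sent with the copy will be rejected exactly as in the log above:

import boto3

s3 = boto3.client("s3")

# Requires s3:GetBucketOwnershipControls on the target bucket.
resp = s3.get_bucket_ownership_controls(Bucket="SOMEBUCKET")  # placeholder
print(resp["OwnershipControls"]["Rules"][0]["ObjectOwnership"])
# "BucketOwnerEnforced" means the bucket does not allow ACLs at all.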

When listing objects using the IBM Cloud Object Storage CLI (and also in my Java program), I get a TLS handshake error

I can list my buckets in the COS CLI:
ibmcloud cos buckets
OK
2 buckets found in your account:
Name Date Created
cloud-object-storage-kc-cos-standard-8e7 May 20, 2020 at 14:40:37
cloud-object-storage-kc-cos-standard-nw6 Dec 14, 2020 at 16:35:48
But if I try to list the objects in the second bucket I get the following:
ibmcloud cos objects -bucket cloud-object-storage-kc-cos-standard-nw6 -region us-east
FAILED
RequestError: send request failed
caused by: Get https://cloud-object-storage-kc-cos-standard-nw6.s3.us-east.cloud-object-storage.appdomain.cloud/: tls: first record does not look like a TLS handshake
I do not know why I would get a TLS handshake error on such a call. If I try any other region, I get a "The specified bucket was not found in your IBM Cloud account. This may be because you provided the wrong region. Provide the bucket's correct region and try again."
My Cloud Object Storage configuration is (X's are redacted data):
Last Updated Tuesday, December 15 2020 at 11:16:46
Default Region us-geo
Download Location /Users/xxxxxx#us.ibm.com/Downloads
CRN b6cc5f87-5867-4736-XXXX-cf70c34a1fb7
AccessKeyID
SecretAccessKey
Authentication Method IAM
URL Style VHost
Service Endpoint
To find the exact location of your COS bucket, you can try running the command below:
ibmcloud cos buckets-extended
buckets-extended: List all the extended buckets with pagination support.
Then pass the bucket's Location Constraint as the region in the command below:
ibmcloud cos objects --bucket vmac-code-engine-bucket --region us-standard
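The same check can be done from code; here is a minimal boto3 sketch against COS's S3-compatible API, assuming HMAC credentials are enabled on the service credential (the keys are placeholders, and the endpoint must match the bucket's location constraint):

import boto3

# COS exposes an S3-compatible API, so a plain S3 client works with HMAC keys.
# The endpoint must match the bucket's location constraint (us-east here);
# pointing at the wrong region is a common cause of failed requests.
cos = boto3.client(
    "s3",
    aws_access_key_id="HMAC_ACCESS_KEY_ID",          # placeholder
    aws_secret_access_key="HMAC_SECRET_ACCESS_KEY",  # placeholder
    endpoint_url="https://s3.us-east.cloud-object-storage.appdomain.cloud",
)

for obj in cos.list_objects_v2(Bucket="cloud-object-storage-kc-cos-standard-nw6").get("Contents", []):
    print(obj["Key"], obj["Size"])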

403 Error when using fetch to call Cloudfront S3 endpoint with custom domain and signed cookies

I'm trying to create a private endpoint for an S3 bucket via Cloudfront using signed cookies. I've been able to successfully create a signed cookie function in Lambda that adds a cookie for my root domain.
However, when I call the Cloudfront endpoint for the S3 file I'm trying to access, I am getting a 403 error. To make things weirder, I'm able to copy & paste the URL into the browser and can access the file.
We'll call my root domain example.com. My cookie domain is .example.com, my development app URL is test.app.example.com and my Cloudfront endpoint URL is tilesets.example.com
Upon inspection of the call, it seems that the cookies aren't being sent. This is strange because my fetch call has credentials: "include" and I'm calling a subdomain of the cookie domain.
Configuration below:
S3:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>https://*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>HEAD</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Cloudfront: (distribution configuration not shown)
Not sure what I could be doing wrong here. It's especially weird that it works when I go directly to the link in the browser but not when I fetch, so I'm guessing that's a CORS issue.
I've been logging the calls to Cloudfront, and as you can see, the cookies aren't being sent when using fetch in my main app:
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields
2019-09-13 22:38:40 IAD79-C3 369 <IP> GET <CLOUDFRONT ID>.cloudfront.net <PATH URL>/metadata.json 403 https://test.app.<ROOT DOMAIN>/ Mozilla/5.0%2520(Macintosh;%2520Intel%2520Mac%2520OS%2520X%252010_14_6)%2520AppleWebKit/537.36%2520(KHTML,%2520like%2520Gecko)%2520Chrome/76.0.3809.132%2520Safari/537.36 - - Error 5kPxZkH8n8dVO57quWHurLscLDyrOQ0L-M2e0q6X5MOe6K9Hr3wCwQ== tilesets.<ROOT DOMAIN> https 281 0.000 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Error HTTP/2.0 - -
Whereas when I go to the URL directly in the browser:
#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields
2019-09-13 22:32:38 IAD79-C1 250294 <IP> GET <CLOUDFRONT ID>.cloudfront.net <PATH URL>/metadata.json 200 - Mozilla/5.0%2520(Macintosh;%2520Intel%2520Mac%2520OS%2520X%252010_14_6)%2520AppleWebKit/537.36%2520(KHTML,%2520like%2520Gecko)%2520Chrome/76.0.3809.132%2520Safari/537.36 - CloudFront-Signature=<SIGNATURE>;%2520CloudFront-Key-Pair-Id=<KEY PAIR>;%2520CloudFront-Policy=<POLICY> Miss gRkIRkKtVs3WIR-hI1fDSb_kTfwH_S2LsJhv9bmywxm_MhB7E7I8bw== tilesets.<ROOT DOMAIN> https 813 0.060 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Miss HTTP/2.0 - -
Any thoughts?
You have correctly diagnosed that the issue is that your cookies aren't being sent.
A cross-origin request won't include cookies with credentials: "include" unless the origin server also includes permission in its response headers:
Access-Control-Allow-Credentials: true
And the way to get S3 to allow that is not obvious, but I stumbled on the solution following a lead found in this answer.
Modify your bucket's CORS configuration to remove this:
<AllowedOrigin>*</AllowedOrigin>
...and add this, instead, specifically listing the origin you want to allow to access your bucket (from your description, this will be the parent domain):
<AllowedOrigin>https://example.com</AllowedOrigin>
(If you need http, that needs to be listed separately, and each domain you need to allow to access the bucket using CORS needs to be listed.)
This changes S3's behavior to include Access-Control-Allow-Credentials: true. It doesn't appear to be explicitly documented.
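If you'd rather apply that change from code than from the console, a boto3 sketch along these lines should produce the equivalent CORS configuration (the bucket name is a placeholder; list each origin you actually need, per the note above):

import boto3

s3 = boto3.client("s3")

# Explicitly list the origin(s) that need credentialed CORS access instead of a
# wildcard; S3 then answers matching requests with Access-Control-Allow-Credentials: true.
s3.put_bucket_cors(
    Bucket="my-tileset-bucket",  # placeholder
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://example.com"],
                "AllowedMethods": ["GET", "HEAD", "PUT"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)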
Do not use the following alternative without understanding the implications, even though it would also work.
<AllowedOrigin>https://*</AllowedOrigin>
This also results in Access-Control-Allow-Credentials: true, so it "works" -- but it allows cross-origin requests from anywhere, which you likely do not want. With that said, bear in mind that CORS is nothing more than a permissions mechanism that applies only to well-behaved, non-malicious web browsers -- so restricting the allowed origin to the correct domain is important, but it does not magically secure your content against unauthorized access from elsewhere. I suspect you are aware of this, but it is important to keep in mind.
After these changes, you'll need to clear the browser cache and invalidate the CloudFront cache, and re-test. Once the CORS headers are being set correctly, your browser should send cookies, and the issue should be resolved.
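To verify the result without a browser, a quick sketch like this (URL and origin are placeholders, and it assumes your CloudFront distribution forwards the Origin header to S3) should show the CORS response headers coming back:

import requests

# Send a GET with an Origin header, as the browser would for a CORS request.
resp = requests.get(
    "https://tilesets.example.com/metadata.json",  # placeholder path
    headers={"Origin": "https://example.com"},     # whichever origin you listed in AllowedOrigin
)
print(resp.status_code)
print(resp.headers.get("Access-Control-Allow-Origin"))
print(resp.headers.get("Access-Control-Allow-Credentials"))  # expect "true" after the change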

Postman call to get S3 Bucket Location Fails for regions other than "us-east-1"

In Postman, I am using the GET request below to get the location of my S3 bucket.
Request Type : GET
API : https://mybucketname.s3.amazonaws.com/?location
Authorization: I am choosing AWS Signature, passing the access and secret keys, and setting the Service Name to s3.
The problem is that, by default, Authorization takes "us-east-1" as the region and creates the signature from it.
So this call works well for us-east-1 buckets.
But when I use this request to get the location of buckets that reside in regions other than "us-east-1", the call fails as below.
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AuthorizationHeaderMalformed</Code>
<Message>The authorization header is malformed; the region 'us-east-1'
is wrong; expecting 'us-west-2'</Message>
<Region>us-west-2</Region>
....
....
</Error>
Can anyone suggest a solution, if there is one?
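One way around this, sketched below with boto3 (the bucket name is a placeholder), is to ask S3 which region the bucket lives in first and then sign the ?location request for that region; HEAD Bucket reports the region in the x-amz-bucket-region header even when the response itself is a 301 or 403:

import boto3
from botocore.exceptions import ClientError

client = boto3.client("s3", region_name="us-east-1")

def bucket_region(bucket):
    # The x-amz-bucket-region header is present even on 301/403 responses
    # returned when the request was signed for the wrong region.
    try:
        resp = client.head_bucket(Bucket=bucket)
    except ClientError as err:
        resp = err.response
    return resp["ResponseMetadata"]["HTTPHeaders"].get("x-amz-bucket-region")

print(bucket_region("mybucketname"))  # e.g. "us-west-2"; use this region when signing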

Cloudfront - cannot invalidate objects that used to return 403

The setting
I have an Amazon CloudFront distribution that was originally set up as secured. Objects in this distribution required URL signing. For example, a valid URL used to be of the following format:
https://d1stsppuecoabc.cloudfront.net/images/TheImage.jpg?Expires=1413119282&Signature=NLLRTVVmzyTEzhm-ugpRymi~nM2v97vxoZV5K9sCd4d7~PhgWINoTUVBElkWehIWqLMIAq0S2HWU9ak5XIwNN9B57mwWlsuOleB~XBN1A-5kzwLr7pSM5UzGn4zn6GRiH-qb2zEoE2Fz9MnD9Zc5nMoh2XXwawMvWG7EYInK1m~X9LXfDvNaOO5iY7xY4HyIS-Q~xYHWUnt0TgcHJ8cE9xrSiwP1qX3B8lEUtMkvVbyLw__&Key-Pair-Id=APKAI7F5R77FFNFWGABC
The distribution points to an S3 bucket that also used to be secured (it only allowed access through CloudFront).
What happened
At some point, the URL signing expired and requests would return a 403.
Since we no longer need to keep the same security level, I recently changed the settings of the CloudFront distribution and of the S3 bucket it points to, making both public.
I then tried to invalidate objects in this distribution. The invalidation did not throw any errors; however, it did not seem to succeed. Requests to the same CloudFront URL (with or without the query string) still return 403.
The response header looks like:
HTTP/1.1 403 Forbidden
Server: CloudFront
Date: Mon, 18 Aug 2014 15:16:08 GMT
Content-Type: text/xml
Content-Length: 110
Connection: keep-alive
X-Cache: Error from cloudfront
Via: 1.1 3abf650c7bf73e47515000bddf3f04a0.cloudfront.net (CloudFront)
X-Amz-Cf-Id: j1CszSXz0DO-IxFvHWyqkDSdO462LwkfLY0muRDrULU7zT_W4HuZ2B==
Things I tried
I tried setting up another CloudFront distribution that points to the same S3 bucket as the origin server. Requests to the same object in the new distribution were successful.
The question
Did anyone encounter the same situation, where a CloudFront URL that returns 403 cannot be invalidated? Is there any reason why the object wouldn't get invalidated?
Thanks for your help!
First, check that the invalidation is not still in progress. If it is, wait until it completes.
If you are accessing the S3 object through CloudFront using a public URL, then you need public read permission on that S3 object.
If you are accessing the S3 object through CloudFront using a signed URL, then make sure the expiry time specified when generating the signed URL is later than the current time.
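As an illustration of the first point, this is roughly how one could create an invalidation and wait for it to leave the InProgress state before re-testing (the distribution ID and path are placeholders):

import time
import boto3

cf = boto3.client("cloudfront")
distribution_id = "E1ABCDEFGHIJKL"  # placeholder

resp = cf.create_invalidation(
    DistributionId=distribution_id,
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/images/*"]},
        "CallerReference": str(time.time()),  # must be unique per invalidation request
    },
)

# Block until CloudFront reports the invalidation as Completed.
cf.get_waiter("invalidation_completed").wait(
    DistributionId=distribution_id, Id=resp["Invalidation"]["Id"]
)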