File uploaded to S3 through CloudFront can't be accessed - amazon-s3

It may look a little strange that I want to upload a file to an S3 bucket through CloudFront and then access it through CloudFront as well.
AWS states that CloudFront supports the PutObject action:
https://aws.amazon.com/blogs/aws/amazon-cloudfront-content-uploads-post-put-other-methods/
We have configured the CloudFront settings (Origin/Behavior) and the S3 bucket policy accordingly.
Only one blocking issue remains:
A file uploaded via CloudFront can't be accessed by any account or role. Its owner is named "cf-host-credentials-global".
I have tried several ways to fix this, all based on a quite simple setup:
CloudFront accesses the S3 bucket (which is not publicly accessible) through an OAC that has PutObject and GetObject permissions on it.
We use a CloudFront URL mapped to the S3 bucket origin to upload a file.
Note: no signed CloudFront or signed S3 URLs, although I have tested those cases as well.
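For context, the bucket policy granting the OAC these permissions looks roughly like this. It is only a sketch: the account ID, bucket name, and distribution ID are placeholders.

import json
import boto3

# Sketch of a bucket policy that lets an OAC-enabled distribution
# call GetObject and PutObject; all IDs and names are placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "cloudfront.amazonaws.com"},
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::my-upload-bucket/*",
        "Condition": {"StringEquals": {
            "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
        }},
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="my-upload-bucket", Policy=json.dumps(policy)
)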
We still always get this AccessDenied issue: most of the time the file is uploaded with the expected size and name,
but it can't be downloaded or accessed.
I have tried to fix this within this simple setup, but every attempt failed, as below:
Add the x-amz-acl header, per this answer on Stack Overflow:
The file upload by CloudFront Origin Access Identity signed url can't be access by boto3 or IAM role?
I added the x-amz-acl header, but the upload failed with this error:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>There were headers present in the request which were not signed</Message>
<HeadersNotSigned>x-amz-acl</HeadersNotSigned>
<RequestId>NDOLGKOGSF883</RequestId>
<HostId>CD9cDmGbSuk34Gy3mK2Znfdd9klmfew0s2dsflks3</HostId>
</Error>
Even with a pre-signed S3 URL (passing the x-amz-acl header to boto3's generate_presigned_url), I get the same error.
Someone suggested that x-amz-acl can be passed as a query parameter, so I tried that in the URL (both signed and unsigned); it doesn't work either. See:
Pre-signed URLs and x-amz-acl
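For reference, that pre-signed-URL attempt looked roughly like this (a sketch; the bucket name and key are placeholders). Passing ACL in Params makes boto3 sign the x-amz-acl header, which the client then has to send with the PUT:

import boto3

s3 = boto3.client("s3")

# x-amz-acl becomes a signed header, so the PUT request must
# include "x-amz-acl: bucket-owner-full-control" to match.
url = s3.generate_presigned_url(
    "put_object",
    Params={
        "Bucket": "my-upload-bucket",
        "Key": "uploads/report.xlsx",
        "ACL": "bucket-owner-full-control",
    },
    ExpiresIn=3600,
)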
Someone said we need to add the x-amz-content-sha256 header to the client request, according to
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-origin-access-identity-signature-version-4
With the x-amz-content-sha256 header added, the upload succeeds, but the uploaded S3 object still fails with AccessDenied.
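A minimal sketch of that upload, assuming a plain unsigned PUT through the distribution (the CloudFront domain and key are placeholders):

import hashlib
import requests

url = "https://d1234example.cloudfront.net/uploads/report.xlsx"

with open("report.xlsx", "rb") as f:
    body = f.read()

# OAC re-signs the request to S3 with SigV4, which needs the payload
# hash, so the client must send x-amz-content-sha256 on PUTs.
headers = {"x-amz-content-sha256": hashlib.sha256(body).hexdigest()}

resp = requests.put(url, data=body, headers=headers)
print(resp.status_code, resp.text)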
Adding a Content-MD5 header hit the same "headers not signed" issue as above, and the upload failed.
Does anyone have an idea how to fix this AccessDenied issue?
Thanks in advance.

It looks like the x-amz-acl header is not signed by OAC when CloudFront forwards the request to the S3 bucket.
So if you insist on using OAC, there's only one way: change "Object Ownership" to "ACLs disabled" in the S3 bucket permissions.
That works for me.
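If you'd rather script it than click through the console, "ACLs disabled" corresponds to the BucketOwnerEnforced object-ownership rule; a minimal boto3 sketch (the bucket name is a placeholder):

import boto3

s3 = boto3.client("s3")

# BucketOwnerEnforced disables ACLs entirely: every object is owned
# by the bucket owner, no matter which principal uploaded it, so the
# "cf-host-credentials-global" ownership problem goes away.
s3.put_bucket_ownership_controls(
    Bucket="my-upload-bucket",
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]},
)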

Related

Cloudfront gives "Access denied" when accessing index document

I have an S3 bucket configured as a static website, and a CloudFront distribution with the S3 bucket as the origin.
My index files are called index, and I have specified this as the index document in both the bucket configuration and the CloudFront distribution configuration.
However, if I go to https://example.com/directory/ I get an error document as follows, where the RequestId and HostId change on each request:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>7F09A18821FE70FD</RequestId><HostId>H5OR+AZpNSGlhrUX1ECegTbWkio728A1MIdGkO4bkCZIJa/XQ6Uv7Hu0GgNgyxL+snerPPDnzr8=</HostId></Error>
If I go to https://example.com/directory/index then the page shows correctly.
If I access the website from either the CloudFront URL or the S3 website endpoint, rather than my custom domain, the problem does not happen.
How can I get the directory index pages to be served correctly when accessing the S3 bucket?
Changing the origin for the distribution from <bucket-name>.s3.amazonaws.com to <bucket-name>.s3-website.eu-west-2.amazonaws.com fixed the issue.
Unfortunately, when I started typing in the dropdown in the Origin settings, Amazon suggested S3 buckets using the first format.

Presigned URL doesn't work for PUT but works for GET S3

I'm having trouble with Amazon S3 presigned URLs. In my bucket policy I grant access only to a specific IAM user, i.e. the bucket is not public. So if I navigate in the browser to a file URL in my S3 bucket, I get an access denied message.
So I used the aws-cli tool to generate a presigned URL for that file. With that URL I can get the file correctly, but when I try to put a file to the bucket using the same URL, I get this error message:
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
What am I missing?
You'll need a different presigned URL for PUT methods and GET methods. This is because the HTTP verb (PUT, GET, etc.) is part of the "CanonicalResource" used to construct the signature. See "Authenticating Requests" in the Amazon S3 reference docs for details.
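A quick boto3 sketch (bucket and key are placeholders). Note that the aws-cli presign command only produces GET URLs, so you need an SDK call for the PUT variant:

import boto3

s3 = boto3.client("s3")

# The HTTP verb is part of the string that gets signed, so a URL
# signed for get_object can never validate for a PUT, and vice versa.
get_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "file.txt"},
    ExpiresIn=3600,
)
put_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-bucket", "Key": "file.txt"},
    ExpiresIn=3600,
)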

How to generate an S3 custom error page when a URL expires

We have generated an S3 pre-signed download URL using the Java SDK. When it expires, an XML error page appears:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<X-Amz-Expires>43198</X-Amz-Expires>
<Expires>2016-07-11T20:32:43Z</Expires>
<ServerTime>2016-07-12T05:53:18Z</ServerTime>
<RequestId>76FF61E84B37E053</RequestId>
<HostId>
S3YVhGnr+7C9fFbjaKGhGyBHIaq/Y8j8jHmfC7P31zgydJrQAYqROb8U1+Eq5CyV7u+OLItkd+0=
</HostId>
</Error>
Instead of this, we want a custom page to appear. We are not hosting any website; we simply want to download some Excel files from the bucket.
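For context, the X-Amz-Expires value in the error above is just the lifetime chosen when the URL was generated; a rough boto3 equivalent of the Java SDK call (bucket and key are placeholders):

import boto3

s3 = boto3.client("s3")

# X-Amz-Expires in the error XML mirrors this ExpiresIn value
# (43198 seconds, roughly 12 hours, in the example above).
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "report.xlsx"},
    ExpiresIn=43198,
)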
This is not possible with S3 pre-signed URLs at the moment.
It only seems to be supported for the S3 static website hosting solution: http://docs.aws.amazon.com/AmazonS3/latest/dev/CustomErrorDocSupport.html
To see whether it is supported yet, search the S3 docs.

serverSideEncryption with chunking (multi-part)

Using Fine-Uploader 4.3.1 at the moment, and I ran into an Access Denied response from Amazon S3 when using serverSideEncryption together with chunking. Each of them seems to work fine individually.
I read this issue thinking I had the same problem, however I do not have any bucket policy requiring encryption: https://github.com/Widen/fine-uploader/issues/1147
Could someone run a sanity check that chunking and serverSideEncryption both work together?
Thanks!
If you're using Amazon S3 serverSideEncryption, you'll also need to make sure the CORS configuration on the bucket allows the proper header.
If your CORS configuration contains a wildcard to allow all headers, you won't need to change this part. But if you're being more secure and specifically defining your allowed headers, this step is necessary to avoid the "access denied" response.
Log in to your Amazon AWS S3 Management Console and navigate to your S3 bucket's properties. Under the bucket permissions, edit the CORS configuration.
Insert this line among the other allowed headers.
<AllowedHeader>x-amz-server-side-encryption</AllowedHeader>
Save your configuration and that should do it.
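Equivalently, if you manage the bucket from code, a sketch with boto3 (the origin and methods shown are placeholders; keep whichever rules you already have):

import boto3

s3 = boto3.client("s3")

# Allow the header that gets sent when serverSideEncryption is on,
# alongside whatever headers the bucket already permits.
s3.put_bucket_cors(
    Bucket="my-upload-bucket",
    CORSConfiguration={"CORSRules": [{
        "AllowedOrigins": ["https://example.com"],
        "AllowedMethods": ["GET", "POST", "PUT", "DELETE"],
        "AllowedHeaders": ["x-amz-server-side-encryption"],
    }]},
)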

Using S3 error document with https

We have an S3 bucket with website hosting enabled, and an error document set. We want to use it to serve images over https.
Over HTTP, the 404 works fine: example. But for HTTPS, we need to use a different URL scheme, and the 404 no longer works: example. (That URL scheme also fails over HTTP: example.)
Is there some way to do this? Have I misconfigured the S3 bucket, or something along those lines? (I've given 'list' permission to everyone, which turned the failure from a 403 to a 404, but not the 404 I want.)
We solved this by setting up a CloudFront distribution as an interface to the S3 bucket.
One tricky bit: the origin for the CloudFront distribution needs its origin protocol policy set to HTTP only. That means it can't be the S3 bucket directly, which always has the 'match viewer' policy. Instead, you can set the origin to the bucket's website URL: instead of BUCKET.s3.amazonaws.com, use the endpoint given by S3's static website hosting, BUCKET.s3-website-REGION.amazonaws.com.
This might have unintended side effects, and there might be better ways to do it, but it works.
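For reference, a sketch of the relevant origin entry as passed to boto3's create_distribution or update_distribution (bucket name and region are placeholders):

# The S3 website endpoint is a plain-HTTP custom origin, so the
# origin protocol policy has to be http-only.
origin = {
    "Id": "s3-website-origin",
    "DomainName": "BUCKET.s3-website-REGION.amazonaws.com",
    "CustomOriginConfig": {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "http-only",
    },
}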