How to generate an S3 custom error page when a URL expires - amazon-s3

We have generated an S3 pre-signed download URL using the Java SDK. When it expires, an XML error page like this is returned:
<Error>
<Code>AccessDenied</Code>
<Message>Request has expired</Message>
<X-Amz-Expires>43198</X-Amz-Expires>
<Expires>2016-07-11T20:32:43Z</Expires>
<ServerTime>2016-07-12T05:53:18Z</ServerTime>
<RequestId>76FF61E84B37E053</RequestId>
<HostId>
S3YVhGnr+7C9fFbjaKGhGyBHIaq/Y8j8jHmfC7P31zgydJrQAYqROb8U1+Eq5CyV7u+OLItkd+0=
</HostId>
</Error>
Instead of this we would like a custom page to be shown. We are not hosting a website; we simply want users to download some Excel files from the bucket.

This is not possible with S3 pre-signed URLs at the moment.
It only appears to be supported for the S3 static website hosting feature: http://docs.aws.amazon.com/AmazonS3/latest/dev/CustomErrorDocSupport.html
To see whether support has been added since, search the S3 documentation.
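The expiry itself is fixed at the moment the URL is signed, so the closest workaround (an illustration, not part of the original answer) is to have your own endpoint issue a freshly signed URL on every download request, so users rarely hold an expired link. A minimal boto3 sketch with placeholder bucket and key names; in the Java SDK, AmazonS3#generatePresignedUrl plays the same role:
import boto3

s3 = boto3.client("s3")

def fresh_download_url(bucket: str, key: str, expires_in: int = 43200) -> str:
    # X-Amz-Expires in the resulting URL comes from ExpiresIn (seconds).
    return s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_in,
    )

# The application redirects the browser here instead of handing out a
# long-lived link, so users should rarely see the expired-URL XML error.
print(fresh_download_url("my-bucket", "reports/report.xlsx"))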

Related

Uploaded S3 file from CloudFront can't be accessed

It may look a little strange, but I want to upload a file to an S3 bucket through CloudFront and then access it through CloudFront as well.
AWS states that CloudFront supports the PutObject action:
https://aws.amazon.com/blogs/aws/amazon-cloudfront-content-uploads-post-put-other-methods/
We have configured the CloudFront settings (origin/behavior) and the S3 bucket policy accordingly.
Only one blocking issue remains:
A file uploaded via CloudFront can't be accessed by any account or role. Its owner is shown as "cf-host-credentials-global".
I have tried several ways to fix this, based on a fairly simple setup:
CloudFront accesses the S3 bucket (which is not publicly accessible) through an OAC that has PutObject and GetObject permission on it.
We use a CloudFront URL mapped to the S3 bucket origin to upload a file.
Note: no signed CloudFront or signed S3 URL is involved, although I also tested those cases.
We still always get an AccessDenied error: most of the time the file is uploaded with the expected size and name,
but it can't be downloaded or accessed afterwards.
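For reference, a bucket policy for this kind of OAC setup typically grants the CloudFront service principal GetObject and PutObject, scoped to one distribution. A minimal boto3 sketch; the account ID, bucket name and distribution ID are placeholders, not taken from the question:
import json
import boto3

bucket = "my-private-bucket"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCloudFrontOAC",
            "Effect": "Allow",
            "Principal": {"Service": "cloudfront.amazonaws.com"},
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # Restrict access to one specific distribution (placeholder IDs).
            "Condition": {
                "StringEquals": {
                    "AWS:SourceArn": "arn:aws:cloudfront::111122223333:distribution/EDFDVBD6EXAMPLE"
                }
            },
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))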
I have tried to fix this within this simple setup, but all of my attempts failed, as described below:
Adding the x-amz-acl header, following this answer on Stack Overflow:
The file upload by CloudFront Origin Access Identity signed url can't be access by boto3 or IAM role?
I added the x-amz-acl header, but the upload failed with this error:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>There were headers present in the request which were not signed</Message>
<HeadersNotSigned>x-amz-acl</HeadersNotSigned>
<RequestId>NDOLGKOGSF883</RequestId>
<HostId>CD9cDmGbSuk34Gy3mK2Znfdd9klmfew0s2dsflks3</HostId>
</Error>
Even when using a pre-signed S3 URL (passing the x-amz-acl header via boto3 generate_presigned_url, roughly as in the sketch after this list), I still got the same error.
Someone suggested that x-amz-acl can be passed as a query parameter instead, so I tried adding it to the URL (both signed and unsigned), but that doesn't work either.
Pre-signed URLs and x-amz-acl
Adding an x-amz-content-sha256 header to the client request, as someone suggested, according to:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-origin-access-identity-signature-version-4
With the x-amz-content-sha256 header the upload succeeds, but accessing the uploaded S3 object still fails with AccessDenied.
Adding a Content-MD5 header produced the same unsigned-header issue as above, and the upload failed.
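For illustration, the presigned-PUT-with-ACL attempt described above would look roughly like this in boto3 (bucket, key and payload are placeholders):
import boto3
import requests

s3 = boto3.client("s3")

# The ACL parameter is signed into the URL as an x-amz-acl header, so the
# client must send exactly the same header with the PUT.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={
        "Bucket": "my-private-bucket",
        "Key": "uploads/example.bin",
        "ACL": "bucket-owner-full-control",
    },
    ExpiresIn=900,
)

resp = requests.put(
    url,
    data=b"example payload",
    headers={"x-amz-acl": "bucket-owner-full-control"},
)
print(resp.status_code, resp.text)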
Does anyone have an idea about this? How can this AccessDenied issue be fixed?
Thanks in advance.
It looks like the x-amz-acl header is not signed by OAC when CloudFront forwards the request to the S3 bucket.
So if you insist on using OAC, there's only one way: set "Object Ownership" to "ACLs disabled" in the S3 bucket permissions.
That works for me.
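For completeness, the console setting described in that answer can also be applied programmatically; a minimal boto3 sketch with a placeholder bucket name:
import boto3

# "ACLs disabled" in the console corresponds to ObjectOwnership=BucketOwnerEnforced.
boto3.client("s3").put_bucket_ownership_controls(
    Bucket="my-private-bucket",  # placeholder
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]},
)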

Accessing documents stored in Amazon's S3

We are building a system that will allow users to store documents (images or PDFs) in S3. These files can later be accessed via their URL, like https://my-bucket.s3.amazonaws.com/person/2/provider/test1.png. We have no problem uploading and deleting documents in S3 using proper IAM keys and the AWS SDK. However, when we try to access documents using their URL, we get the following error:
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>V0DTW6T3F6J3ZFDG</RequestId>
<HostId>ZOiybrTAfx8t+NZQW2cpS4nw8vNhmQaemFfinQSBP41K2mZhDItF29156LTUwZh+SqZacfssLIE=</HostId>
</Error>
I understand the reason for the error but don't know how to resolve it.
Basically, we are building a health-related portal. Patients can upload their documents (health records). Later on, we want them to be able to view their documents while they are logged in. The idea was that the documents could be displayed via their URLs.
The only solution I can think of is to first download the document locally (whether that is to the browser's local filesystem or to the mobile device if accessed through a mobile app) and then display it from there. That does not sound ideal. The other alternative is to make the bucket completely public, which is not acceptable because these are health care records.
Any help/advice will be greatly appreciated.
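For context, the presigned-URL approach discussed elsewhere on this page is the usual middle ground: the backend verifies the patient is logged in and then issues a short-lived URL for just that object, so the bucket stays private and nothing has to be downloaded first. A rough boto3 sketch, with the bucket name and expiry as assumptions and the key pattern taken from the example URL above:
import boto3

s3 = boto3.client("s3")

def document_url_for_patient(patient_id: int, filename: str) -> str:
    # Issue a short-lived link only after the application has verified
    # that the logged-in user is allowed to see this document.
    key = f"person/{patient_id}/provider/{filename}"
    return s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": "my-bucket", "Key": key},
        ExpiresIn=300,  # the link stops working after 5 minutes
    )

# Example: embed the returned URL in an <img> tag or open it in a viewer.
print(document_url_for_patient(2, "test1.png"))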

POST method error while uploading content to an S3 bucket using a CDN in AWS

I am facing some issues while using a CDN (CloudFront) to upload content to my S3 bucket.
Uploading content with the POST method is not working,
while PUT and GET work with the same bucket policy.
The S3 bucket region is us-east-1.
I followed the AWS documentation but haven't found a solution.
Demo CloudFront URL:
https://dqjesdd0w1dnq.cloudfront.net
I am trying to upload a file with the POST method,
using AWS auth in Postman and attaching the file to upload.
Documentation URLs:
URL1 URL2 URL3 URL4
Error
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>InvalidArgument</Code>
<Message>x-amz-content-sha256 must be UNSIGNED-PAYLOAD, STREAMING-AWS4-HMAC-SHA256-PAYLOAD, STREAMING-AWS4-ECDSA-P256-SHA256-PAYLOAD or a valid sha256 value.</Message>
<ArgumentName>x-amz-content-sha256</ArgumentName>
<ArgumentValue>null</ArgumentValue>
<RequestId>1CHMT0X9A80DDJ1D</RequestId>
<HostId>r0xIE3tcRGPjrdmKMXvqJHPKtFPOxQ8UWpYA5OxYva+BkgZEH+m/YfsRshls+u/X4xB1ugIxZU4=</HostId>
</Error>
Any help will be appreciated.
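The error says S3 expects the x-amz-content-sha256 header to be either UNSIGNED-PAYLOAD, one of the streaming values, or the hex SHA-256 of the request body, and here it arrived as null. A minimal Python sketch (the file name is a placeholder) of computing the value to put in that header in Postman:
import hashlib

# The hashed bytes must be exactly the body that is sent in the request.
with open("example.txt", "rb") as f:
    body = f.read()

payload_hash = hashlib.sha256(body).hexdigest()
print(payload_hash)  # set this as the x-amz-content-sha256 header in Postman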

Nuxeo : Upload using presigned URL

I want to generate a presigned URL for an S3 bucket and upload files using that URL, rather than going through the Nuxeo server or the direct upload option.
The documentation says that I need to set CloudFrontBinaryManager as the binary manager to be used. Despite setting this configuration in nuxeo.conf, I am not able to upload directly to the bucket. I still see the request go to /upload, which routes the upload through the Nuxeo server to the S3 bucket.
Downloads happen through the presigned URL, but uploads don't. How can I make the upload work?

Presigned URL doesn't work for PUT but works for GET (S3)

I'm having trouble with Amazon S3 presigned URLs. In my bucket policy I give access only to a specific IAM user; in other words, the bucket is not public. So if I navigate in the browser to a file URL in my S3 bucket, I receive an access denied message.
So I used the aws-cli tool to generate a presigned URL for that file. With that URL I'm able to get the file correctly, but the issue is when I try to put a file into the bucket. Using that URL I cannot put a file, because I get this error message:
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
What am I missing?
You'll need different presigned URLs for PUT and GET. This is because the HTTP verb (PUT, GET, etc.) is part of the canonical request used to construct the signature. See "Authenticating Requests" in the Amazon S3 reference docs for details.
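A quick boto3 illustration of that point (bucket and key are placeholders): each URL below is generated and signed for exactly one HTTP verb, so the GET URL cannot be reused for a PUT.
import boto3

s3 = boto3.client("s3")
params = {"Bucket": "my-bucket", "Key": "path/to/file.txt"}

# One URL is signed for GET, the other for PUT; they are not interchangeable.
get_url = s3.generate_presigned_url("get_object", Params=params, ExpiresIn=3600)
put_url = s3.generate_presigned_url("put_object", Params=params, ExpiresIn=3600)

# e.g. with curl:
#   curl "<get_url>"                                   # download works
#   curl -X PUT --upload-file file.txt "<put_url>"     # upload works
#   curl -X PUT --upload-file file.txt "<get_url>"     # SignatureDoesNotMatch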