I'm using Fine-Uploader 4.3.1 at the moment and ran into an Access Denied response from Amazon S3 when using serverSideEncryption and chunking together. Each option works fine individually.
I read this issue thinking I had the same problem; however, I do not have any bucket policy requiring encryption: https://github.com/Widen/fine-uploader/issues/1147
Could someone run a sanity check that chunking and serverSideEncryption both work together?
Thanks!
If you're using Amazon S3 serverSideEncryption, then you'll also need to make sure the CORS configuration on your bucket allows the proper header.
If your CORS configuration contains a wildcard that allows all headers, you won't need to change this part. But if you're trying to be more secure and are specifically defining your allowed headers, then this step is necessary to avoid the "access denied" response.
Log in to your Amazon AWS S3 Management Console and navigate to your S3 bucket's properties. Under the bucket permissions, edit the CORS configuration.
Insert this line among the other allowed headers.
<AllowedHeader>x-amz-server-side-encryption</AllowedHeader>
Save your configuration and that should do it.
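For reference, here is a minimal sketch of a full CORS configuration with that header included (the origins, methods, and other headers listed are assumptions; adjust them to your own setup):
<CORSConfiguration>
    <CORSRule>
        <!-- Tighten this to your site's origin in production -->
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <!-- Headers typically sent during chunked uploads -->
        <AllowedHeader>Authorization</AllowedHeader>
        <AllowedHeader>Content-Type</AllowedHeader>
        <AllowedHeader>x-amz-date</AllowedHeader>
        <AllowedHeader>x-amz-acl</AllowedHeader>
        <!-- The header that fixes the serverSideEncryption + chunking case -->
        <AllowedHeader>x-amz-server-side-encryption</AllowedHeader>
        <ExposeHeader>ETag</ExposeHeader>
    </CORSRule>
</CORSConfiguration>
ETag is exposed because chunked (multipart) uploads need to read it from each part's response.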
It may look a little strange that I want to upload a file to an S3 bucket through CloudFront and then access it via CloudFront as well.
AWS has stated that CloudFront supports the PutObject action:
https://aws.amazon.com/blogs/aws/amazon-cloudfront-content-uploads-post-put-other-methods/
We have configured the CloudFront settings (origin/behavior) and the S3 policy accordingly.
Only one blocking issue remains:
A file uploaded via CloudFront can't be accessed by any account or role. Its owner is named "cf-host-credentials-global".
I tried several ways to fix this, all based on a fairly simple setup:
CloudFront can access the S3 bucket (the bucket is not publicly accessible) through an OAC that has PutObject and GetObject permissions on it.
We use a CloudFront URL mapping to the S3 bucket origin for uploading a file.
Note: no signed CloudFront or signed S3 URLs, although I did actually test those cases too.
We still always get the AccessDenied issue: most of the time the file uploads with the expected size and file name,
but it can't be downloaded or accessed.
Everything I tried against this simple setup failed, as listed below:
Add an x-amz-acl header, according to this answer on Stack Overflow:
The file upload by CloudFront Origin Access Identity signed url can't be access by boto3 or IAM role?
I added the x-amz-acl header, but the upload failed with this error:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
    <Code>AccessDenied</Code>
    <Message>There were headers present in the request which were not signed</Message>
    <HeadersNotSigned>x-amz-acl</HeadersNotSigned>
    <RequestId>NDOLGKOGSF883</RequestId>
    <HostId>CD9cDmGbSuk34Gy3mK2Znfdd9klmfew0s2dsflks3</HostId>
</Error>
Even using a pre-signed S3 URL (putting the x-amz-acl header into boto3's generate_presigned_url), I get the same error.
Someone said x-amz-acl can be put into a query parameter, so I tried that in the URL too (with both signed and unsigned URLs); it doesn't work either. See:
Pre-signed URLs and x-amz-acl
Someone said we need to add an x-amz-content-sha256 header to the client request, according to:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-origin-access-identity-signature-version-4
With the x-amz-content-sha256 header added, the file can be uploaded successfully, but the uploaded S3 object still fails with AccessDenied (see the sketch after this list).
Adding a Content-MD5 header hit the same "headers present but not signed" error as above, and the upload failed.
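For reference, the successful upload attempt looked roughly like this (a sketch using Python's requests; the CloudFront domain and object key are placeholders):
import hashlib

import requests

# Placeholder CloudFront distribution that maps to the S3 bucket origin
url = "https://dxxxxxxxxxxxx.cloudfront.net/uploads/test.txt"
body = b"hello world"

# With OAC, CloudFront signs the origin request with SigV4, and that
# signature requires the payload hash from the client; omitting this
# header makes the PUT fail.
resp = requests.put(
    url,
    data=body,
    headers={"x-amz-content-sha256": hashlib.sha256(body).hexdigest()},
)
print(resp.status_code, resp.text)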
Does anyone have an idea about this? How do I fix this AccessDenied issue?
Thanks in advance.
It looks like the x-amz-acl header is not signed by OAC when the request is sent from CloudFront to the S3 bucket.
So if you insist on using OAC, there's only one way: change "Object Ownership" to "ACLs disabled" in the S3 bucket permissions.
And that works for me.
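If you prefer to apply that setting from code, here is a minimal sketch with boto3 (the bucket name is a placeholder); BucketOwnerEnforced is the API value behind the console's "ACLs disabled" option:
import boto3

s3 = boto3.client("s3")

# BucketOwnerEnforced disables ACLs entirely: the bucket owner owns every
# object regardless of who uploaded it, which removes the
# "cf-host-credentials-global" ownership problem.
s3.put_bucket_ownership_controls(
    Bucket="my-upload-bucket",  # placeholder
    OwnershipControls={
        "Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]
    },
)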
I'm currently looking to host an app with an Angular frontend in an AWS S3 bucket, connecting to a PHP backend running on AWS Elastic Beanstalk. I've got it set up and it's working nicely.
However, with S3 static website hosting, anyone can view your code, including the various Angular JS files. This is mostly fine, but I want to create either a file or a folder to keep sensitive information in that cannot be viewed by anyone, yet can be included/required by all the other files. Essentially I want a key that I can attach to all calls to the backend to make sure only authorised requests get through.
I've experimented with various permissions, but everyone always seems to be able to view all files, presumably because the static website hosting bucket policy makes everything public.
Any suggestions appreciated!
Cheers.
The whole idea of static website hosting on S3 is that the content is public. For example, when your app/site is under maintenance, you redirect users to the S3 static page notifying them that maintenance is ongoing.
I am not sure what all you have tried when you refer to "experimented with various permissions"; however, have you tried setting up a bucket policy, or making the bucket a CloudFront origin and using signed URLs? This might be a bit tricky considering you want these sensitive files to be called by other files. But in my opinion, the way to hide those sensitive files will be either some sort of bucket policy or some sort of signed URL, as sketched below.
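As a rough illustration of the signed-URL route, here is a minimal sketch using botocore's CloudFrontSigner (the key-pair ID, private-key path, distribution domain, and object path are all placeholders):
from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding


def rsa_signer(message):
    # Private key registered with CloudFront (placeholder path)
    with open("cloudfront_private_key.pem", "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    # CloudFront signed URLs use RSA with SHA-1
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())


signer = CloudFrontSigner("KXXXXXXXXXXXXX", rsa_signer)  # placeholder key-pair ID

# Short-lived URL for a sensitive file behind the distribution
url = signer.generate_presigned_url(
    "https://dxxxxxxxxxxxx.cloudfront.net/private/config.json",
    date_less_than=datetime.utcnow() + timedelta(minutes=15),
)
print(url)
The frontend would then fetch the sensitive file through URLs like this one, while direct bucket access stays blocked.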
I am using HLS streaming with Amazon S3 and CloudFront, playing through JW Player (with Rails).
I used signed URLs to protect the content and created an Origin Access Identity, as described in the Amazon CloudFront documentation.
The signed URLs are generated fine.
I also have a crossdomain.xml file in my bucket which allows all origins (I have given '*').
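For context, the wide-open crossdomain.xml I mean looks roughly like this:
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
    <allow-access-from domain="*" />
</cross-domain-policy>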
Now, when I try to play my HLS video files from the bucket, I get a cross-domain access denied error.
I think JW Player is trying to access the crossdomain.xml file without the signed hash, so it's getting that error.
I have tested my file in the demo JW Player stream tester, and this is the error I get in the console:
Fetch API cannot load http://xxxxxxxx.cloudfront.net/xxx/1/1m_test.ts.
No 'Access-Control-Allow-Origin' header is present on the requested resource.
Origin 'http://demo.jwplayer.com' is therefore not allowed access.
The response had HTTP status code 403.
If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
Please help me out. Thank You.
This is the link I followed to configure my CloudFront distribution.
I just had the same problem (but with Flowplayer). I am not sure yet about the security risks (or whether all the steps are needed), but I got it running with:
adding permissions on the crossdomain.xml for everyone to open/download
adding a behavior in the CloudFront distribution just for crossdomain.xml, without restricted access (ordered above the behavior for * with restricted access)
and then I noticed that in the bucket, the link to the crossdomain.xml was something like "https://some-server.amazonaws.com/bucket.name/%1Fcrossdomain.xml" (notice the weird %1F), and that when I went to rename the crossdomain.xml, I could delete one invisible character in the first position of the name (I didn't create the crossdomain.xml, so I am not sure how this happened). A way to fix such a broken key from code is sketched below.
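Since S3 has no rename operation, fixing such a key from code means copying and deleting; a minimal boto3 sketch (the bucket name is a placeholder):
import boto3

s3 = boto3.client("s3")
bucket = "bucket.name"  # placeholder
bad_key = "\x1fcrossdomain.xml"  # %1F is an invisible control character

# Copy the object to the clean key, then remove the broken one
s3.copy_object(
    Bucket=bucket,
    CopySource={"Bucket": bucket, "Key": bad_key},
    Key="crossdomain.xml",
)
s3.delete_object(Bucket=bucket, Key=bad_key)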
Edit:
I also had hls.js running with this setup, and making the crossdomain.xml accessible somehow disabled the CORS request. I am still looking into this.
I'm having a doozy of a time trying to serve static HTML templates from Amazon CloudFront.
I can perform a jQuery.get on Firefox for my HTML hosted on S3 just fine. The same request against CloudFront returns an OPTIONS 403 Forbidden. And I can't perform an AJAX get for either the S3 or the CloudFront files on Chrome. I assume Angular is having the same problem:
I don't know how it fetches remote templates, but it returns the same error as the jQuery.get. My CORS config is fine according to Amazon tech support, and as I said, I can get the files directly from S3 on Firefox, so it works in one case.
My question is, how do I get it working in all browsers and with CloudFront and with an Angular templateUrl?
For people coming from Google, a bit more detail.
It turns out Amazon actually does support CORS via SSL when the CORS settings are on an S3 bucket. The bad part comes in when CloudFront caches the headers of the CORS response. If you're fetching from an origin that could be mixed http and https, you'll run into the case where the allowed origin cached by CloudFront says http but you want https. That, of course, causes the browser to blow up. To make matters worse, CloudFront caches slightly differing versions if you accept compressed content. Thus, if you try to debug this with curl, you'll think all is well and then find it isn't in the browser (try passing --compressed to curl, as in the sketch below).
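A rough way to see both cached variants (the distribution domain, path, and origin are placeholders):
# Uncompressed variant: dump the response headers, discard the body
curl -s -D - -o /dev/null -H "Origin: https://example.com" https://dxxxxxxxxxxxx.cloudfront.net/templates/page.html

# Compressed variant: may show a different cached Access-Control-Allow-Origin
curl -s -D - -o /dev/null --compressed -H "Origin: https://example.com" https://dxxxxxxxxxxxx.cloudfront.net/templates/page.html
Compare the Access-Control-Allow-Origin values between the two responses; if they differ, you're looking at the stale-variant problem described above.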
One, admittedly frustrating, solution is just to ditch the entire CloudFront thing and serve directly from the S3 bucket.
It looks like Amazon does not currently support SSL and CORS together on CloudFront or S3, which is the crux of the problem. Other CDNs, like Limelight or Akamai, let you add your SSL cert to a CNAME, which circumvents the problem, but Amazon does not allow that either, and other CDNs are cost-prohibitive. The best alternative seems to be serving the HTML from your own server on your own domain. Here is a solution for Angular and Rails: https://stackoverflow.com/a/12180837/256066
I'm having a heck of a time setting up a referer policy so that my URL has access to an Amazon S3 bucket.
I can get it to work with my .myurl.com/ addresses, but any time the request comes from a secure https page, my access is denied, even with the wildcard.
Thanks
Edit: I rushed my initial post; here's more detail.
I have a bucket policy on Amazon S3 that only allows access if the request comes from my URL(s):
"aws:Referer": [
"*.myurl.com/*",
"*.app.dev:3000/*" ]
This referer policy correctly allows connections only from my dev environment, and from my staging URL when accessed via http. However, if the user is at https://www.myurl.com/*, Amazon denies them access.
Is there a way to allow https connections to Amazon S3? Is it my bucket policy? I've tried hard-coding the https URL into the bucket policy, but that did not do the trick.
Sorry about being overly brief.
From the HTTP/1.1 RFC:
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.
In other words, when your page is served over https but the S3 request goes out over plain http, the browser omits the Referer header entirely, so the aws:Referer condition never matches. Request the S3 objects over https and the header will be sent again.
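Once the header is actually being sent, a sketch of a policy statement that accepts both schemes might look like this (the bucket name is a placeholder; the hostnames are taken from the question):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": [
                        "http://*.myurl.com/*",
                        "https://*.myurl.com/*",
                        "http://*.app.dev:3000/*"
                    ]
                }
            }
        }
    ]
}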