Having a heck of a time setting up a Referer policy from my URL to allow access to an Amazon S3 bucket.
I can get it to work with my .myurl.com/ pattern, but any time the request comes from a secure HTTPS page, my access is denied, even with the wildcard.
Thanks
Edit: I rushed my initial post; here's more detail.
I have a bucket policy on Amazon S3 that only allows access if the request comes from my URL(s).
"aws:Referer": [
  "*.myurl.com/*",
  "*.app.dev:3000/*"
]
This Referer policy correctly allows connections only from my dev environment, and also from my staging URL if accessed via HTTP. However, if the user is at https://www.myurl.com/, they are denied access by Amazon.
Is there a way to allow HTTPS connections to Amazon S3? Is it my bucket policy? I've tried hard-coding the HTTPS URL into the bucket policy, but this did not do the trick.
Sorry about being overly brief.
From the HTTP/1.1 RFC:
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.

In other words, when your page is served over HTTPS but the S3 objects are requested over plain HTTP, the browser omits the Referer header entirely, so the bucket policy has nothing to match against. Request the objects over HTTPS and the browser will send the Referer.
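If the objects are requested over HTTPS, a condition covering both schemes will match. A sketch, assuming a bucket named examplebucket (substitute your own bucket and domain):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetFromMyUrl",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://*.myurl.com/*",
            "https://*.myurl.com/*"
          ]
        }
      }
    }
  ]
}
```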
I have run into a situation that I can't easily understand, and I'm not able to find any documentation on it either.
I have done the following:
Created an S3 bucket
Given public access to it
Enabled it for static website hosting
Created a CloudFront distribution to it
Enabled HTTPS at CloudFront
Now I am trying to restrict the access of S3 bucket only to CloudFront.
I tried the steps presented at
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
Unfortunately, when I try to edit the origin, I don't see all the options in the UI; in particular, Restrict Bucket Access is missing.
I only see options to edit Origin Domain Name, Origin Path, Origin ID (grayed out), and Origin Custom Headers -- no option to enter an OAI or to set Restrict Bucket Access, etc.
Is it because of enabling HTTPS?
S3 masters, please help!
Origin access identities are only applicable when using the S3 REST endpoint (e.g. example-bucket.s3.amazonaws.com) for the bucket -- not when you are using the website hosting endpoint (e.g. example-bucket.s3-website.us-east-2.amazonaws.com), because website hosting endpoints do not support authenticated requests -- they are only for public content... but OAI is an authentication mechanism.
When using the website endpoint, CloudFront does not treat the origin as an S3 Origin -- it is treated as a Custom Origin, and these options are not available, because if they were available, they wouldn't work anyway (for the reason mentioned above).
For those who have since changed some S3 settings to not be public and intend content to be retrievable only via CloudFront: the option is now there, but hidden. You just have to cut/copy the value from Origin Domain Name in the Origins tab and then paste it back in (if it's the same bucket), and the UI will then render the Restrict Bucket Access input options.
I want to allow access to S3 files only from my site. Users should be able to download files from my site only, not from any other site. Please tell me how I can do that.
I tried a bucket policy and a CORS configuration, but it doesn't work.
You could restrict the origin using CORS headers (https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html), but this would only block cross-site resources that are directly displayed or processed, not files to be downloaded.
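A minimal CORS configuration along those lines might look like the following (the origin is a placeholder for your own site; and again, this does not stop direct downloads):

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://www.example.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```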
As a solution, make the S3 bucket private (not public) and have your site generate signed, expiring (temporary) URLs for S3 resources:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html
There are multiple blogs and examples; just search for them.
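The idea behind a signed, expiring URL can be sketched with plain HMAC signing. This is only an illustration of the concept, not AWS's actual scheme; real S3 pre-signed URLs use AWS Signature Version 4, which an SDK (e.g. boto3's generate_presigned_url) produces for you:

```python
import hashlib
import hmac
import time

# Hypothetical server-side secret; never expose it to clients.
SECRET_KEY = b"replace-with-a-server-side-secret"

def sign_url(path: str, expires_in: int = 300) -> str:
    """Append an expiry timestamp and an HMAC signature to a URL path."""
    expires = int(time.time()) + expires_in
    message = f"{path}?expires={expires}".encode()
    signature = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&signature={signature}"

def verify_url(signed: str) -> bool:
    """Check the signature and reject expired URLs."""
    path, _, query = signed.partition("?")
    params = dict(p.split("=", 1) for p in query.split("&"))
    expires = int(params["expires"])
    if time.time() > expires:
        return False  # link has expired
    message = f"{path}?expires={expires}".encode()
    expected = hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["signature"])

url = sign_url("/files/report.pdf", expires_in=300)
print(verify_url(url))        # a valid, unexpired link verifies
print(verify_url(url + "0"))  # a tampered signature fails
```

The server keeps the secret, hands out short-lived links, and rejects anything expired or tampered with; with real S3 the verification step is done by AWS itself.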
You could code your Bucket Policy to Restrict Access to a Specific HTTP Referrer:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originating from www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": ["http://www.example.com/*", "http://example.com/*"]
        }
      }
    }
  ]
}
However, this can be faked so it is not a guaranteed method of controlling access.
If you really wish to control who can access your content, your application should verify the identity of the user (e.g. username/password) and then generate time-limited Amazon S3 Pre-Signed URLs. This is the way you would normally serve private content, such as personal photos and PDF files.
I have a private bucket in S3 and would like to allow access only to requests that include a particular (secret) header, sent from a CDN (not CloudFront, as that would of course be simple to allow using its own ID).
So that means writing a bucket policy to allow just those secret-header requests.
I've been doing some research (http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html) and I can see that you can test other attributes of the request like aws:Referer to do a comparison on the referer, and aws:SourceIp to do a comparison on the source IP - but how would I go about doing a comparison on a custom header e.g. "X-my-secret-header"?
Do bucket policies support testing header values? If so, how?
AWS recommends using the "Referer" header to store a token and then use an AWS Bucket Policy that restricts requests to aws:Referer headers that match the token. Note that the token is not the actual referer URL, it's just whatever secret you want to use.
You will need to make sure that the requests from your CDN to S3 are over HTTPS so that secret token is encrypted.
This is described in the AWS article How do I use CloudFront to serve a static website hosted on Amazon S3?, under the section "Using a website endpoint as the origin, with access restricted by a Referer header". Assuming you can set the "Referer" header from your CDN to the origin, the same concepts should apply.
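A sketch of such a policy, assuming the CDN is configured to send Referer: MY-SECRET-TOKEN on every origin request (the token and bucket name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetWithSecretReferer",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringEquals": {
          "aws:Referer": "MY-SECRET-TOKEN"
        }
      }
    }
  ]
}
```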
I'm using Fine Uploader 4.3.1 at the moment, and ran into an Access Denied response from Amazon S3 when using serverSideEncryption and chunking. Each of them seems to work fine individually.
I read this issue thinking I had the same problem, however I do not have any bucket policy requiring encryption: https://github.com/Widen/fine-uploader/issues/1147
Could someone run a sanity check that chunking and serverSideEncryption both work together?
Thanks!
If you're using the Amazon S3 serverSideEncryption then you'll also need to make sure your CORS configuration on the bucket allows for the proper header.
If your CORS configuration contains a wildcard to allow all headers, then you won't need to change this part. But if you're trying to be more secure and you're specifically defining your allowed headers, then this is necessary to avoid the "access denied" response.
Log in to your Amazon AWS S3 Management Console and navigate to your S3 bucket's properties. Under the bucket permissions, edit the CORS configuration.
Insert this line among the other allowed headers.
<AllowedHeader>x-amz-server-side-encryption</AllowedHeader>
Save your configuration and that should do it.
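For context, after the edit the relevant CORS rule might end up looking something like this (the origin and methods shown are placeholders; keep whatever your uploader configuration already requires):

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://www.example.com</AllowedOrigin>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>x-amz-server-side-encryption</AllowedHeader>
    <AllowedHeader>content-type</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```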
We have an S3 bucket with website hosting enabled, and an error document set. We want to use it to serve images over https.
Over http, the 404 works fine: example. But for https, we need to use a different URL scheme, and the 404 no longer works: example. (That URL scheme also fails with http: example.)
Is there some way to do this? Have I misconfigured the S3 bucket, or something along those lines? (I've given 'list' permission to everyone, which turned the failure from a 403 to a 404, but not the 404 I want.)
We solved this by setting up a CloudFront distribution as an interface to the S3 bucket.
One tricky bit: the origin for the CloudFront distribution needs to have its origin protocol policy set to HTTP Only. That means it can't directly be an S3 bucket origin, which always has the 'Match Viewer' policy. Instead, you can set the origin to the bucket's website hosting URL: instead of BUCKET.s3.amazonaws.com, use the endpoint given by S3's static website hosting, BUCKET.s3-website-REGION.amazonaws.com.
This might have unintended side effects, and there might be better ways to do it, but it works.