Only allow S3 files on my site - amazon-s3

I want to allow S3 files only on my site. Users should be able to download files from my site only, not from any other site. Please tell me how I can do that.
I tried a bucket policy and a CORS configuration, but it doesn't work.

You could restrict the origin using CORS headers (https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html), but this only blocks cross-site resources that are directly displayed or processed by the browser, not files to be downloaded.
As a solution, make the S3 bucket private (not public) and have your site generate signed, expiring (temporary) URLs for the S3 resources:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html
There are multiple blogs and examples; just search for them.
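For instance, here is a minimal sketch of generating an expiring CloudFront signed URL with botocore's CloudFrontSigner; the key pair ID, private key file, distribution domain, and object path below are placeholders, not values from the question:

import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "KXXXXXXXXXXXXX"        # placeholder: CloudFront public key / key pair ID
PRIVATE_KEY_FILE = "private_key.pem"  # placeholder: matching private key

def rsa_signer(message: bytes) -> bytes:
    # CloudFront signed URLs are signed with RSA/SHA-1.
    with open(PRIVATE_KEY_FILE, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# Placeholder distribution domain and object path.
url = "https://dxxxxxxxxxxxx.cloudfront.net/files/report.pdf"
expires = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=15)

signed_url = signer.generate_presigned_url(url, date_less_than=expires)
print(signed_url)  # valid only until `expires`

The same idea applies in whatever backend language your site uses; the CloudFront documentation linked above describes the key pair setup required before signing.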

You could code your Bucket Policy to Restrict Access to a Specific HTTP Referrer:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originating from www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    }
  ]
}
However, this can be faked, so it is not a guaranteed method of controlling access.
If you really wish to control who can access your content, your application should verify the identity of the user (e.g. username/password) and then generate time-limited Amazon S3 Pre-Signed URLs. This is the way you would normally serve private content, such as personal photos and PDF files.
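For example, a minimal boto3 sketch of what the application could do after verifying the user (the bucket and key names are placeholders):

import boto3

s3 = boto3.client("s3")

# Placeholder bucket and key; generate the link only after the user has logged in.
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "examplebucket", "Key": "private/photo.jpg"},
    ExpiresIn=900,  # the link stops working after 15 minutes
)
# Hand `url` to the browser; without it (or after expiry) the object stays Access Denied.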

Related

CloudFront: prohibit a user from using a read pre-signed URL to make an upload

Using CloudFront to read and write to the same S3 bucket, how can I prohibit a user from using a read pre-signed URL to make an injection (upload)?
Thanks.
Deny PUT access for CloudFront and it won't be able to put data into S3. In the S3 permissions you can allow read-only calls from CloudFront only. Turn off public access on S3 to make sure it allows GETs only from CloudFront. In CloudFront, create an origin access identity for the S3 origin and let CloudFront update the bucket policy (disable any public access on S3 before this); CloudFront will update S3's policy and create the CloudFront origin access identity.
To put data into S3, use a pre-signed URL. It all comes down to where you create the pre-signed URL for the upload. You might have a tiny RESTful API behind API Gateway that authenticates the user and returns the upload URL, which the user then uses to upload data to S3.
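As a rough sketch of that last step, assuming a Python backend and placeholder bucket/key names (the real layout depends on your API), the authenticated endpoint could return a pre-signed PUT URL like this:

import boto3

s3 = boto3.client("s3")

def create_upload_url(user_id: str, filename: str) -> str:
    # Called only after the API (e.g. behind API Gateway) has authenticated the caller.
    # The returned URL allows a single PUT of this one key and nothing else.
    key = f"uploads/{user_id}/{filename}"   # placeholder key layout
    return s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": "examplebucket", "Key": key},  # placeholder bucket
        ExpiresIn=300,  # the upload URL expires after 5 minutes
    )

# The client then uploads with a plain HTTP PUT of the file body to the returned URL.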

CloudFront or S3 ACL: This XML file does not appear to have... AccessDenied / Failed to contact the origin

I'm using a custom domain and CloudFront in front of an S3 static hosting site to serve HTTPS.
It works fine when I open pages through the app's internal buttons or links,
but if I enter a direct URL in the address bar, or click the browser refresh button, I get
"This XML file does not appear to have any style information associated with it. The document tree is shown below..." with an Access Denied error screen.
I searched related answers and tried setting /index.html as the Default Root Object in the CloudFront general settings, but it didn't work. (Before this attempt, it was index.html.)
When I changed it to /index.html, even the domain itself stopped working.
I have another S3 static hosting site without CloudFront or a certificate, just for testing.
That site works fine even when I enter a direct URL or click the refresh button.
Both S3 buckets have the same settings (the root object is index.html and the error document is also index.html).
After this, I changed the CloudFront Origin Domain Name from the REST endpoint to the website endpoint, following this doc (https://aws.amazon.com/premiumsupport/knowledge-center/s3-website-cloudfront-error-403/),
but now I get this error when I refresh the screen.
All the objects in S3 are owned by the bucket owner and have public access.
The app is built with React and uses react-router-dom.
Could you give me any hint or advice?
Thanks.
Solved...
My S3 bucket's region requires . instead of - when I use the website endpoint for CloudFront.
And FYI...
In my case, there were some small differences from the documentation and some tutorials: my CloudFront distribution doesn't need a Default Root Object, and the individual objects in S3 have no public access, but the bucket does.
There are specific endpoints to be used for website-hosting buckets, listed in the Amazon Simple Storage Service endpoints and quotas document. For example, when hosting in eu-west-1, CloudFront will prepopulate the dropdown with example.s3.eu-west-1.amazonaws.com, but if you look in the bucket settings, Static website hosting section, it will show you the correct URL: example.s3-website-eu-west-1.amazonaws.com
Carefully read the table! The URL scheme is not fully consistent, e.g. s3-website.us-east-2.amazonaws.com but s3-website-us-east-1.amazonaws.com - just to make your day a bit more joyful.
I had the exact same issue and was able to resolve it by taking the S3 bucket website endpoint from the bucket's Properties and pasting it into the Origin Domain field in the CloudFront Origins section. I removed the beginning of the endpoint; for example, from "http://website.com.s3-website.us-east-2.amazonaws.com" you would just remove the "http://", paste the rest into the CloudFront origin domain, and click Save. That should solve the problem!
I tried all kinds of different options, such as making sure every object in the S3 bucket was public. Make sure your S3 bucket is also publicly available.
Certain regions do have different endpoints for S3 buckets. Here is a link that shows more of that: https://aws.amazon.com/premiumsupport/knowledge-center/s3-rest-api-cloudfront-error-403/

How to securely configure s3 for website access

I want to set up an S3 bucket securely but provide public access to website assets such as images, PDFs, documents, etc. There doesn't seem to be an easy way to do this.
I have tried setting up a new bucket with Block Public Access enabled. I assume this is the best way to secure the bucket, but then I can't enable viewing/downloading of files in the bucket.
I expect to be able to view/download website files from a browser, but I always get an Access Denied error.
All content in Amazon S3 buckets is private by default.
If you wish to provide public access to content, this can be done in several ways:
At the Bucket level by providing a Bucket Policy: This is ideal for providing access to a whole bucket, or a portion of a bucket.
At the Object level by using an Access Control List (ACL): This allows fine-grained control on an object-by-object basis.
Selectively, by creating a pre-signed URL: This allows your application to determine whether a particular application user should be permitted access.
All three methods allow an object in Amazon S3 to be accessed via a URL. This is totally separate to making API calls to Amazon S3 using AWS credentials, which would allow control at the user-level.
Based on your description, it would appear that a Bucket Policy would best meet your needs, such as:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicPermission",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::my-bucket/*"]
    }
  ]
}
This is saying: Allow anyone to get an object from my-bucket
Note that the policy specifies which calls are permitted, so it can allow upload, download, delete, etc. In the above example, it is only allowing GetObject, which means objects can be accessed/downloaded but not uploaded, deleted, etc.
The /* in the Resource allows further control by specifying a path within the bucket, so it would be possible to grant access only to a portion of the bucket.
When using a Bucket Policy, it is also necessary to deactivate Block Public Access settings to allow the Bucket Policy to be used. This is an extra layer of protection that ensures buckets are not accidentally made publicly accessible.
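As an illustration (not the only way to do it), the policy above could be applied with boto3 while relaxing only the two Block Public Access settings that would otherwise reject a public bucket policy; the bucket name is a placeholder:

import json
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicPermission",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        }
    ],
}

# Allow a public bucket policy; the ACL-related protections can stay switched on.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))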
If, on the other hand, your actual goal is to keep content private but selectively make it available to application users, then you could use a pre-signed URL. An example is a photo website where people are permitted to view their private photos, but the photos are not publicly accessible.
This would be handled by having users authenticate to the application. Then, when they wish to access a photo, the application would determine whether they are permitted to see the photo. If so, the application would generate a pre-signed URL that grants temporary access to an object. Once the expiry time has passed, the link will no longer work.

Amazon S3 permissions by the domain

The scenario:
An Amazon S3 folder with private files.
Goal: display videos in an HTML5 player on a specific domain, without making these files public (similar to Vimeo allowing a video to be embedded only on one domain).
I already tried changing the bucket policy, but without success!
You'll want to set these files to private.
You'll want to use pre-signed URLs to get the correct authentication token that is required for you to view the files after they're set to private.
If your player is web-based, you'll probably want to enable a CORS configuration.
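For example, a hedged boto3 sketch of a CORS configuration limited to one origin (the bucket name and domain are placeholders); note that, as mentioned in the first answer above, CORS only restricts browser cross-origin requests, not direct downloads:

import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="my-video-bucket",  # placeholder bucket
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedMethods": ["GET", "HEAD"],
                "AllowedOrigins": ["https://www.example.com"],  # placeholder: the only allowed site
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)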

Amazon S3 - Bucket Policy

I'm having a heck of a time setting up a referer policy for my URL to have access to an Amazon S3 bucket.
I can get it to work with *.myurl.com/*, but any time the request comes from a secure HTTPS page, my access is denied, even with the wildcard.
Thanks
Edit: I rushed my initial post; here's more detail.
I have a bucket policy on Amazon S3 that only allows access if the request comes from my URL(s).
"aws:Referer": [
"*.myurl.com/*",
"*.app.dev:3000/*" ]
This referer policy correctly allows connections only from my dev environment, and also from my staging URL when accessed via HTTP. However, if the user is at https://www.myurl.com/*, they are denied access by Amazon.
Is there a way to allow HTTPS connections to Amazon S3? Is it my bucket policy? I've tried hard-coding the https URL into the bucket policy, but this did not do the trick.
Sorry about being overly brief.
From the HTTP/1.1 RFC:
Clients SHOULD NOT include a Referer header field in a (non-secure) HTTP request if the referring page was transferred with a secure protocol.