I researched this and other topics. I created a new bucket "ilya_test_1" on Amazon S3, added a permission for Everyone with Upload/Delete enabled, left my default private permissions untouched, and uploaded an image to the bucket root.
When I try to browse to my image in a browser, I get AccessDenied. Shouldn't everything in this bucket be publicly accessible?
What I do not understand is why I need to set the bucket policy below if I have already granted access to Everyone.
NOTE: access works if I grant permissions to Everyone AND add this bucket policy.
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::ilya_test_1/*"
}
]
}
You are granting permission to upload, but the upload request itself carries headers (the object ACL) that determine whether the file is public or private, and objects default to private. Perhaps whatever is doing the upload is leaving the object private?
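If it helps, here is a minimal boto3 sketch that sets a public-read ACL at upload time, which is what the uploading client in question is probably not doing (the bucket name is taken from the question; the file name is a placeholder):

# Sketch only: upload an object with an explicit public-read ACL.
# Without ExtraArgs={"ACL": "public-read"}, the uploaded object keeps the
# default private ACL even though "Everyone" was granted Upload/Delete on the bucket.
import boto3

s3 = boto3.client("s3")
s3.upload_file(
    Filename="myimage.jpg",
    Bucket="ilya_test_1",
    Key="myimage.jpg",
    ExtraArgs={"ACL": "public-read"},
)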
(Also, it's generally a bad idea to allow public write on S3. You should have a server that hands out signed S3 URLs that your client can PUT/POST to. Why? Because you can more easily prevent abuse: limit the size of uploads, limit the number of uploads per IP, and so on.)
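For that signed-URL approach, a minimal server-side sketch with boto3 might look like the following (bucket, key, size limit and expiry are illustrative only):

# Sketch only: have your server hand out a presigned POST so clients can upload
# directly to S3 without the bucket being publicly writable.
import boto3

s3 = boto3.client("s3")
post = s3.generate_presigned_post(
    Bucket="ilya_test_1",
    Key="uploads/myimage.jpg",
    Conditions=[
        ["content-length-range", 0, 5 * 1024 * 1024],  # cap uploads at 5 MB
    ],
    ExpiresIn=300,  # the POST is only valid for 5 minutes
)
# post["url"] and post["fields"] are what the client includes in its upload form.
print(post["url"], post["fields"])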
I have a group policy that allows full access to several S3 buckets.
This policy allows read and write to the bucket.
The team that uses these buckets wants one of the folders to be read-only for their group, without the ability to write to or delete its contents.
How do I provide that while still allowing them to have full access to all the other folders?
To allow access to "every folder except one", you will need to use an Allow policy and a Deny policy. This is because a Deny always overrides an Allow.
You should put the desired IAM Users into an IAM Group, then add a policy like this to the IAM Group:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my-bucket/*",
},
{
"Effect": "Deny",
"Action": [
"s3:DeleteObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::my-bucket/read-only-path/*",
}
]
}
Be a little careful with the Allow: it is not good practice to grant s3:* on the bucket, since that probably gives them more permission than they need. Trim it down to the actions they actually require.
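As an illustration of that trimming, here is a rough boto3 sketch that attaches a narrower version of the policy to the group (group, bucket and folder names are placeholders):

# Sketch only: attach a trimmed-down allow/deny policy to the IAM group.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::my-bucket/*",
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-bucket",
        },
        {
            # Deny still overrides the Allow for the read-only folder
            "Effect": "Deny",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::my-bucket/read-only-path/*",
        },
    ],
}

iam.put_group_policy(
    GroupName="my-team",
    PolicyName="bucket-access-with-read-only-folder",
    PolicyDocument=json.dumps(policy),
)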
We created a CloudFront distribution with 2 origins (1 S3 origin and 1 custom origin). We want the errors (5xx/4xx) from the custom origin to reach the client/user without modification, but the error pages from S3 to be served by CloudFront's error pages configuration. Is this possible? Currently CloudFront does not support different custom error pages for different origins: if either of the origins returns an error, the same error page is served by CloudFront.
You can customize the error responses for your origins using Lambda@Edge.
You will need to associate an origin-response trigger to the behavior associated with your origin.
The origin-response trigger fires after CloudFront receives the response from the origin.
This way you can add headers, issue redirects, dynamically generate a response or change the HTTP status code.
Depending on your use case, you may have to customize for both origins.
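A rough Python sketch of such an origin-response handler might look like the following, attached only to the cache behavior that routes to the S3 origin so the custom origin's errors pass through untouched (the status mapping and error body are just placeholders):

# Sketch only: Lambda@Edge origin-response handler (Python runtime).
def handler(event, context):
    response = event["Records"][0]["cf"]["response"]
    status = int(response["status"])

    if status >= 400:
        # Replace the S3 error with a friendlier page; the other behavior
        # (custom origin) has no trigger, so its errors are left unmodified.
        response["status"] = "404"
        response["statusDescription"] = "Not Found"
        response["headers"]["content-type"] = [
            {"key": "Content-Type", "value": "text/html"}
        ]
        response["body"] = "<html><body><h1>Sorry, that page was not found.</h1></body></html>"
        response["bodyEncoding"] = "text"

    return response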
See also Lambda@Edge now Allows you to Customize Error responses From Your Origin.
I solved (and... ultimately abandoned my solution to) this problem by switching my S3 bucket to act as a static website that I use as a CloudFront custom origin (i.e. no Origin Access Identity). I used an aws:Referer bucket policy to restrict access to only requests that were coming through CloudFront.
NOTE: A Referer header usually contains the URL of the referring page. In this scenario, you are just overriding it with a unique, secret token that you share between CloudFront and S3.
This is described on this AWS Knowledge center page under "Using a website endpoint as the origin, with access restricted by a Referer header".
I eventually used a random UUID as my token and set that in my CloudFront Origin configuration.
With that same UUID, I ended up with a Bucket Policy like:
{
"Id": "Policy1603604021476",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1603604014855",
"Principal": "*",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": "arn:aws:s3:::example/*",
"Condition": {
"StringEquals": {
"aws:Referer": "b4355bde-9c68-4410-83cf-058540d83491"
}
}
},
{
"Sid": "Stmt1603604014855",
"Principal": "*",
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": "arn:aws:s3:::example",
"Condition": {
"StringEquals": {
"aws:Referer": "b4355bde-9c68-4410-83cf-058540d83491"
}
}
}
]
}
The s3:ListBucket statement is needed for 404s to work properly. Without it, you'll get the standard S3 AccessDenied error pages instead.
Now each of my S3 Origins can have different error page behaviors that are configured on the S3 side of things (rather than in CloudFront).
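For completeness, here is a hedged boto3 sketch of generating the token and applying a policy of the shape shown above (bucket name is a placeholder; the ListBucket statement is omitted but has the same Condition). The same token then has to be set as a "Referer" origin custom header on the CloudFront origin:

# Sketch only: generate the secret Referer token and apply the bucket policy.
import json
import uuid
import boto3

token = str(uuid.uuid4())

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGetWithRefererToken",
            "Principal": "*",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example/*",
            "Condition": {"StringEquals": {"aws:Referer": token}},
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket="example", Policy=json.dumps(policy))
print("Referer token to configure on the CloudFront origin:", token)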
Follow-up: I don't recommend this approach, for the following reasons:
S3 static website hosting does not support HTTPS, so your Referer token will be sent in the clear. Granted, this will likely stay on the AWS network, but it's still not what I was hoping for.
You only get one error document per S3 Bucket anyway so it's not that big of an improvement over the CloudFront behavior.
I am working on a demo website using AWS S3 and have restricted access to a certain set of IPs using a bucket policy, e.g.:
{
"Id": "S3PolicyId1",
"Statement": [
{
"Sid": "IPDeny",
"Effect": "Deny",
"Principal": {
"AWS": "*"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::my-wicked-awesome-bucket/*",
"Condition": {
"NotIpAddress": {
"aws:SourceIp": "XX.XX.XX.XX/XX"
}
}
}
]
}
This works nicely. Now I want to use CloudFront to serve the website over HTTPS on a custom domain. I have created the distribution and the bucket policy has been modified (to allow CloudFront access) but I keep getting an access denied error when I try to access the CloudFront URL.
Is it possible to still use the bucket policy IP access list using CloudFront? If so, how do I do it?
You can remove the IP whitelisting/blacklisting from the S3 bucket policy and attach AWS WAF, with the required IP-based access rules, to the CloudFront distribution.
Note: when you make the S3 bucket private, make sure to set up the Origin Access Identity user properly in both CloudFront and S3. Also, if the bucket is in a region other than North Virginia (us-east-1), DNS propagation can take some time.
Another option is a Lambda function that changes the bucket policy based on changes to the published list of AWS IP ranges (a rough sketch follows below).
The function can be invoked by an SNS topic that monitors the list of IPs. Here is the documentation on that.
Here is the SNS topic for it.
arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged
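A sketch of what such a function could look like, assuming the bucket name and the "IPDeny" statement from the question above, and keeping the Deny's exception list in sync with CloudFront's published ranges:

# Sketch only: Lambda handler subscribed to the AmazonIpSpaceChanged SNS topic.
import json
import urllib.request
import boto3

BUCKET = "my-wicked-awesome-bucket"  # placeholder from the question
s3 = boto3.client("s3")

def handler(event, context):
    # The SNS message carries the URL of the updated ip-ranges.json file
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    with urllib.request.urlopen(message["url"]) as resp:
        ip_ranges = json.loads(resp.read())

    cloudfront_cidrs = sorted(
        p["ip_prefix"] for p in ip_ranges["prefixes"] if p["service"] == "CLOUDFRONT"
    )

    # Rewrite the "IPDeny" statement so only CloudFront's ranges are exempt from the Deny
    policy = json.loads(s3.get_bucket_policy(Bucket=BUCKET)["Policy"])
    for statement in policy["Statement"]:
        if statement.get("Sid") == "IPDeny":
            statement["Condition"]["NotIpAddress"]["aws:SourceIp"] = cloudfront_cidrs

    s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))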
I'm using an Amazon S3 bucket named images.example.com which successfully serves content through Cloudflare CDN using URLs like:
https://images.example.com/myfile.jpg
I would like to prevent hotlinking to images and other content by limiting access to only the referring domain, example.com, and possibly another domain which I use as a development server.
I've tried a bucket policy which both allows requests referred by the specific domains and denies requests from any domains NOT in that list:
{
"Version": "2012-10-17",
"Id": "http referer policy example",
"Statement": [
{
"Sid": "Allow get requests referred by www.example.com",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::images.example.com/*",
"Condition": {
"StringLike": {
"aws:Referer": [
"http://www.example.com/*",
"http://example.com/*"
]
}
}
},
{
"Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::images.example.com/*",
"Condition": {
"StringNotLike": {
"aws:Referer": [
"http://www.example.com/*",
"http://example.com/*"
]
}
}
}
]
}
To test this, I uploaded a small webpage to a different server, www.notExample.com, where I attempted to hotlink the image using:
<img src="https://images.example.com/myfile.jpg">
but the hotlinked image appears regardless.
I've also attempted the following CORS rule:
<CORSConfiguration>
<CORSRule>
<AllowedOrigin>http://www.example.com</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
Neither of these has worked to prevent hotlinking. I've tried purging the cached files in Cloudflare and using combinations of bucket policy and CORS (one or the other, plus both), and nothing works.
This seems to be a relatively simple thing to want to do. What am I doing wrong?
Cloudflare is a Content Distribution Network that caches information closer to end users.
When a user accesses content via Cloudflare, the content will be served out of Cloudflare's cache. If the content is not in the cache, Cloudflare will retrieve the content, store it in the cache and serve it back to the original request.
Your Amazon S3 bucket policy will therefore not work with Cloudflare, since the request is either coming from Cloudflare (not from the user's browser, which is what sets the Referer header), or being served directly from Cloudflare's cache (so the request never reaches S3).
You would need to configure Cloudflare with your referrer rules, rather than S3.
See: What does enabling CloudFlare Hotlink Protection do?
Some alternatives:
Use Amazon CloudFront, which can serve private content by using signed URLs; your application can grant access by generating a special URL when creating the HTML page.
Serve the content directly from Amazon S3, which can use the referer rules. Amazon S3 also supports pre-signed URLs that your application can generate.
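For the pre-signed URL option, a minimal boto3 sketch (bucket and key are placeholders) could look like this:

# Sketch only: generate a time-limited pre-signed URL for an object, which your
# application embeds in its HTML instead of a plain public URL.
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "images.example.com", "Key": "myfile.jpg"},
    ExpiresIn=3600,  # link stops working after an hour, which defeats simple hotlinking
)
print(url)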
I am looking into serving my static site with Amazon S3. I have created a bucket and uploaded my files; under the “Website” tab in the AWS Management Console I have checked “Enabled” and entered index.html in the “Index Document” field. I have the following bucket policy:
{
"Version": "2008-10-17",
"Id": "924a2348-de0e-43aa-bb06-83adbcd1db22",
"Statement": [
{
"Sid": "PublicReadForGetBucketObjects",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-bucket/*"
}
]
}
where I have my bucket’s name instead of my-bucket. Under the “Permissions” tab I have also granted “Everyone” the list ability.
If I try to access my-bucket.s3.amazonaws.com/index.html my page (and its images, CSS, etc.) shows up as expected. However, just going to my-bucket.s3.amazonaws.com or my-bucket.s3.amazonaws.com/ gives a directory-listing XML file instead of showing the page. If I try to go to my-bucket.s3.amazonaws.com/subdirectory I get an error (in XML) saying “The specified key does not exist.” Most bizarrely, if I try to go to my-bucket.s3.amazonaws.com/subdirectory/ (with a trailing slash), no page loads but my browser downloads an empty file named download. (Once again, going to my-bucket.s3.amazonaws.com/subdirectory/index.html shows the page as expected.)
Am I doing something wrong here? How do I get S3 to show the index.html file when a directory name is requested?
Looks like you need to configure a root (index) document and, importantly, browse to the bucket's website endpoint (my-bucket.s3-website-<region>.amazonaws.com) rather than the REST endpoint (my-bucket.s3.amazonaws.com); index documents are only served from the website endpoint:
http://docs.amazonwebservices.com/AmazonS3/latest/dev/IndexDocumentSupport.html
http://aws.typepad.com/aws/2011/02/host-your-static-website-on-amazon-s3.html
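For reference, a minimal boto3 sketch of that website configuration (bucket name and error document are placeholders, equivalent to the "Website" tab settings described above):

# Sketch only: enable static website hosting with an index and error document.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_website(
    Bucket="my-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "404.html"},
    },
)
# Requests for / and /subdirectory/ then resolve to index.html, but only on the
# website endpoint, e.g. http://my-bucket.s3-website-us-east-1.amazonaws.com/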