I have a problem where files are perfectly accessible from my bucket, but folders throw an access denied error. Does anyone know how I can solve this? I want my folders to be accessible too.
Here is my bucket policy:
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "DenyS3PublicObjectACL",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:PutObjectAcl",
"Resource": [
"arn:aws:s3:::xxxxx/*”,
"arn:aws:s3:::xxxxx”
],
"Condition": {
"StringEqualsIgnoreCaseIfExists": {
"s3:x-amz-acl": [
"authenticated-read",
"public-read",
"public-read-write"
]
}
}
},
{
"Sid": "2",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXX”
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::xxxxx/*”
}
]
}
You won't get a directory listing page out of Amazon S3 when you access a folder URL; S3 does not render directory listings.
Reference: S3 allow public directory listing of parent folder?
If you want to implement a directory-structure explorer, see one of these projects:
https://github.com/awslabs/aws-js-s3-explorer - Recommended
https://github.com/rufuspollock/s3-bucket-listing
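Note that any such explorer has to list the bucket's keys, which requires s3:ListBucket on the bucket ARN itself (not on /*). A minimal statement you could append to the policy above is sketched here; this is only an illustration, assuming you are comfortable exposing the object listing (xxxxx is the placeholder bucket name from the question):
{
  "Sid": "AllowPublicListing",
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:ListBucket",
  "Resource": "arn:aws:s3:::xxxxx"
}
The explorer projects above also generally need a CORS configuration on the bucket when they are served from a different origin.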
I am trying to write an S3 bucket policy that only allows GetObject through CloudFront but still allows PutObject directly to the bucket.
I have tried several combinations, but none of them worked.
Here is my latest attempt, with Block All Public Access set to ALL OFF.
Bucket Policy:
{
"Version": "2012-10-17",
"Id": "Policy1604429581591",
"Statement": [
{
"Sid": "Stmt1605554261786",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::MYBUCKET/*"
},
{
"Sid": "Stmt1605557746418",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::MYBUCKET/*"
},
{
"Sid": "Stmt1605557857544",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:MYCLOUDFRONT"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::MYBUCKET/*"
}
]
}
This allows me to PutObject to the bucket, but GetObject through the CloudFront URL gets access denied. If I remove
{
"Sid": "Stmt1605557746418",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::MYBUCKET/*"
}
I can GetObject from CloudFront as well as directly from the bucket.
Please help!
Found the solution.
First, follow the instructions here to set up CloudFront: https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-access-to-amazon-s3/. The key point is:
5. For Restrict Bucket Access, select Yes.
I was using Multer-S3 to upload my image files. The ACL needs to be set to:
acl: 'authenticated-read',
I am also using server-side encryption. In the S3 bucket, under Properties => Default encryption:
Default encryption: Enabled
Server-side encryption: Amazon S3 master key (SSE-S3)
Multer-S3 config:
serverSideEncryption: 'AES256',
In the bucket permissions, Block all public access is turned off and the ACLs grant permissions only to the bucket owner.
The final bucket policy I have is:
{
"Version": "2008-10-17",
"Id": "PolicyForCloudFrontPrivateContent",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E36AHAEXL422P3"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::MYBUCKET/*"
},
{
"Sid": "Stmt1605745908405",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::MYBUCKET/*",
"Condition": {
"StringEqualsIfExists": {
"s3:x-amz-server-side-encryption": "AES256"
}
}
}
]
}
With all of the above configuration, this allows PutObject from anyone as long as the request uses server-side encryption with 'AES256'. GetObject requests sent directly to the bucket are blocked; all GetObject requests have to go through CloudFront.
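Note that StringEqualsIfExists also evaluates to true when the x-amz-server-side-encryption header is absent, so an upload that omits the header is still allowed and simply falls back to the bucket's default encryption configured above. If you want the header to be mandatory on every PutObject, one stricter variant of the same statement (sketched with the same placeholder bucket, and an illustrative Sid) would use a plain StringEquals condition:
{
  "Sid": "RequireSSEHeaderOnPut",
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::MYBUCKET/*",
  "Condition": {
    "StringEquals": {
      "s3:x-amz-server-side-encryption": "AES256"
    }
  }
}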
I want to make an S3 bucket that receives internal documentation from Travis-CI whenever we build master, and that part works fine. However, I want the documentation to be visible only from our company IPs, which are static. The policy I am trying to use is the following.
{
"Version": "2012-10-17",
"Id": "InternalDocs",
"Statement": [
{
"Sid": "Docs User",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::000000000000:user/docs"
},
"Action": [
"s3:AbortMultipartUpload",
"s3:DeleteObject",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::docs",
"arn:aws:s3:::docs/*"
]
},
{
"Sid": "Whitelisted Read",
"Effect": "Deny",
"Principal": "*",
"Action": "*",
"Resource": "arn:aws:s3:::docs/*",
"Condition": {
"StringNotLike": {
"aws:userid": "000000000000"
},
"NotIpAddress": {
"aws:SourceIp": [
"0.0.0.0/32",
"0.0.0.0/32"
]
}
}
}
]
}
Please note that 0.0.0.0, 000000000000, arn:aws:s3:::docs and arn:aws:iam::000000000000:user/docs are placeholders because I don't want to show our actual names, ids or IPs.
The problem I end up with is that "Principal": "*" matches every user, public or not, so the Deny ends up blocking everything else. The result is that anyone on the company's static IP can access the bucket, but Travis-CI is blocked from uploading new versions of the docs to the bucket.
How can I allow Travis-CI to upload to my S3 bucket while still allowing only specific IPs to view the documentation via s3:GetObject, and blocking everyone else?
I use the deploy function on Travis-CI with the AWS access key and access secret for the user arn:aws:iam::000000000000:user/docs.
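This is not an authoritative answer, but one common pattern, sketched below with the question's placeholders, is to narrow the Deny to s3:GetObject and carve the uploading user out of it with the aws:PrincipalArn condition key instead of aws:userid. Because condition blocks are ANDed, the Deny then only applies to requests that are neither from a whitelisted IP nor made by the docs user, so Travis-CI can still upload while reads stay IP-restricted. A reworked "Whitelisted Read" statement might look like this:
{
  "Sid": "Whitelisted Read",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::docs/*",
  "Condition": {
    "NotIpAddress": {
      "aws:SourceIp": [
        "0.0.0.0/32",
        "0.0.0.0/32"
      ]
    },
    "ArnNotEquals": {
      "aws:PrincipalArn": "arn:aws:iam::000000000000:user/docs"
    }
  }
}
Whether this fits depends on how the viewers are granted access in the first place; the sketch only changes which requests the Deny catches.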
I know this question has been asked a few times, and I have gone through some documents and examples, but I am still not able to get it working.
I want to block access to my S3 bucket from one particular IP address and allow all others. I do not want to block instances belonging to an IAM role, so I am using the NotIpAddress condition for this. Below is the policy I applied on my bucket:
{
"Version": "2012-10-17",
"Id": "Policy1486984747194",
"Statement": [
{
"Sid": "AllowAllExceptOneIP",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my-test-bucket",
"Condition": {
"NotIpAddress": {
"aws:SourceIp": "52.38.90.46"
}
}
}
]
}
But this policy isn't working: I am still able to upload files to my bucket from that machine (I am using s3-curl.pl to upload the files).
Can someone please help me find what is wrong here? Thanks.
To block all actions on an S3 bucket from a particular IP, the policy needs a separate Deny statement for that IP. An Allow that merely excludes the IP does not block requests that are already permitted elsewhere (for example by an IAM policy attached to the caller). Sample:
{
"Version": "2012-10-17",
"Id": "Policy1487062767078",
"Statement": [
{
"Sid": "AllowAll",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-test-bucket",
"arn:aws:s3:::my-test-bucket/*"
]
},
{
"Sid": "DenyIP",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-test-bucket",
"arn:aws:s3:::my-test-bucket/*"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": "52.38.90.46"
}
}
}
]
}
The Action and Resource can be narrowed depending on what you need to block; see the sketch after this answer.
Thanks a lot #SergeyKovalev for helping me with this solution.
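As an illustration of narrowing the Action and Resource (just a sketch reusing the same bucket and IP placeholders, with an illustrative Sid), a Deny that blocks only uploads from that IP while leaving reads alone could look like this:
{
  "Sid": "DenyUploadsFromIP",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::my-test-bucket/*",
  "Condition": {
    "IpAddress": {
      "aws:SourceIp": "52.38.90.46"
    }
  }
}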
I have an Amazon S3 bucket mybucket and only want to enable access to content in a specific nested folder (or, in S3 terms, under a specific "prefix").
I tried the following S3 bucket policy but it doesn't work. After adding the condition I started getting access denied errors in the browser.
{
"Version": "2012-10-17",
"Id": "Policy for mybucket",
"Statement": [
{
"Sid": "Allow access to public content only from my.domain.com",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mybucket/public/content/*",
"Condition": {
"StringLike": {
"aws:Referer": [
"http://my.domain.com/*"
]
}
}
}
]
}
What should the policy look like to achieve this?
You need to split the policy into two statements: one to allow access to the folder (prefix), and one to deny access when the referer is not one of the whitelisted domains:
{
"Version": "2012-10-17",
"Id": "Policy for mybucket",
"Statement": [
{
"Sid": "Allow access to public content",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mybucket/public/content/*"
},
{
"Sid": "Deny access to public content when not on my.domain.com",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::mybucket/public/content/*",
"Condition": {
"StringNotLike": {
"aws:Referer": [
"http://my.domain.com/*"
]
}
}
}
]
}
I'm trying to make my entire S3 bucket public, but when I try to add the policy:
{
"Id": "Policy1454540872039",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1454540868094",
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::sneakysnap/*",
"Principal": {
"AWS": [
"985506495298"
]
}
}
]
}
It tells me that my "Resource is invalid", but that is definitely the right arn and that is definitely the right bucket name. Anyone know what's going on?
I had this "problem" when I was trying to set a policy on the wrong bucket. That is, my arn in the policy was reading arn:aws:s3:::my-bucket-A/* but I was attempting to set it on my-bucket-B
I solved the problem by using this resource format:
arn:aws:s3:::your-bucket-name-here/*
If you are creating a policy for an access point, it appears that AWS will only accept the following format:
i) the account ID and region must be specified; and
ii) the literal string object must be included (object is not my bucket name):
arn:aws:s3:region:accountid:accesspoint/myaccesspointname/object/*
I found this answer here -> https://forums.aws.amazon.com/thread.jspa?threadID=315596
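For context, a complete access point policy in that format might look like the sketch below; the region, account ID, access point name, and principal are all illustrative placeholders rather than values from the answer above:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::accountid:root"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:region:accountid:accesspoint/myaccesspointname/object/*"
    }
  ]
}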
I faced the same issue and the following could fix your error. I hope this helps anyone facing the same. You need to specify the account ID that corresponds to the region for your load balancer and bucket.
"Principal": {
"AWS": [
"*********"
Please refer to this & update accordingly. This would solve this issue.
See also the Bucket Permissions section of Access Logs for Your Application Load Balancer.
I also had the same problem!
I was using the wrong bucket name, so I corrected it.
It worked for me!
Best of luck!
I was getting this error as well. The following change fixed it... No idea why.
This bucket threw the error: bleeblahblo-stuff
This worked: bleeblahblostuff
Maybe it was the dash.... Maybe the bucket length... Or maybe a combination of the two?? Both buckets had the same settings. Hmmm.
I was facing the same problem: I was not using the correct resource name.
I changed the resource name to exactly match the bucket for which I was creating the bucket policy, e.g.
"Resource": "arn:aws:s3:::abc/*"
to
"Resource": "arn:aws:s3:::abc12/*"
My problem was that when I created my S3 bucket, by default the following were true:
Manage public access control lists (ACLs):
Block new public ACLs and uploading public objects (Recommended): True
Remove public access granted through public ACLs (Recommended): True
Manage public bucket policies:
Block new public bucket policies (Recommended): True
Block public and cross-account access if bucket has public policies (Recommended): True
I had to set all of these to false in order to change my bucket policy.
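For reference, these four console switches map to the bucket's public access block configuration; turning them all off corresponds to a configuration like the sketch below (field names are from the S3 PublicAccessBlockConfiguration API; whether you really want all four disabled depends on how public the bucket should be):
{
  "BlockPublicAcls": false,
  "IgnorePublicAcls": false,
  "BlockPublicPolicy": false,
  "RestrictPublicBuckets": false
}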
If you're doing the AWS startup workshop, try closing the website-bucket-policy.json file and re-opening it. That worked for me; I guess the update to the JSON file is not saved automatically until you close it.
See if the bucket name you are specifying in Resource exists or not. The above answer from Vitaly solved my issue.
The problem I realized I had was that my bucket name had a ".com" suffix, which needs to be included in your ARN.
To add to iamsohel's answer: I had this same issue when trying to set an S3 policy for enabling Elastic Load Balancer access logs using Terraform.
Here's the policy I was trying to set:
Access logs for your Application Load Balancer
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::elb-account-id:root"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::bucket-name/prefix/AWSLogs/your-aws-account-id/*"
},
{
"Effect": "Allow",
"Principal": {
"Service": "delivery.logs.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::bucket-name/prefix/AWSLogs/your-aws-account-id/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
},
{
"Effect": "Allow",
"Principal": {
"Service": "delivery.logs.amazonaws.com"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::bucket-name"
}
]
}
But I wanted to add some variables to the policy. My initial policy looked like this:
bucket_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::${var.bucket_name.2}"
},
{
"Effect": "Allow",
"Principal": {
"Service": "delivery.logs.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::${var.bucket_name.2}",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
},
{
"Effect": "Allow",
"Principal": {
"Service": "delivery.logs.amazonaws.com"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::${var.bucket_name.2}"
}
]
}
EOF
But this was throwing an error:
Error: Error putting S3 policy: MalformedPolicy: Policy has invalid resource
│ status code: 400, request id: 3HHH9QK9SKB1V9Z0, host id: 8mOrnGi/nsHIcz59kryeriVExU7v+XgGpTw64GHfhjgkwhA3WKSfG7eNbgkMgBMA8qYlyUTLYP8=
│
│ with module.s3_bucket_policy_1.aws_s3_bucket_policy.main,
│ on ../../../../modules/aws/s3-bucket-policy/main.tf line 1, in resource "aws_s3_bucket_policy" "main":
│ 1: resource "aws_s3_bucket_policy" "main" {
All I had to do was add /* to the end of the ARN for the bucket resource:
bucket_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::${var.bucket_name.2}/*"
},
{
"Effect": "Allow",
"Principal": {
"Service": "delivery.logs.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::${var.bucket_name.2}/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
},
{
"Effect": "Allow",
"Principal": {
"Service": "delivery.logs.amazonaws.com"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::${var.bucket_name.2}"
}
]
}
EOF
In my case it was the ARN partition for GovCloud that was missing, so the resource had to be
"arn:aws-us-gov:s3:::grcsimpletest"
rather than
"arn:aws:s3:::grcsimpletest"
Strangely, the policy that failed was from an AWS doc. That said, it kind of clicked when I edited the policy in the S3 console and it showed the bucket ARN on the edit screen.