IAM bucket policy to allow cross-account Lambda function to write to S3

I'm having a tough time figuring out how to make this work. Our client runs a Lambda function to generate data to write to our bucket. Lambda assumes a role and because of that (I think) all our attempts to allow the client's entire account access to the bucket still result in an AccessDenied error.
In looking at our logs I see the AccessDenied is returned for the STS assumed role. However, the S3 console won't allow me to add a policy for a wildcard Principal, and the assumed role's session ID changes each session.
My guess from the sparse documentation is that we need to provide a trust relationship to the lambda.amazonaws.com service. But I can't find any documentation anywhere on how to limit that to just access from a specific Lambda function or account.
I would like to have something like this but with further constraints on the Principal so that it's not accessible by any account or Lambda function.
{
"Version": "2012-10-17",
"Id": "Policy11111111111111",
"Statement": [
{
"Sid": "Stmt11111111111111",
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::bucket-name-here/*",
"arn:aws:s3:::bucket-name-here"
]
}
]
}
UPDATE
This policy doesn't even work. It still returns an AccessDenied. The user listed in the logs is in the form of arn:aws:sts::111111222222:assumed-role/role-name/awslambda_333_201512111822444444.
So at this point I'm at a loss as to how to even allow a Lambda function to write to an S3 bucket.

We resolved this eventually with help from the IAM team.
IAM roles do not inherit any permissions from the account, so the permissions need to be assigned explicitly to the role that the Lambda function assumes.
In our case the Lambda script was also trying to grant the destination bucket owner full control of the copied file, and the role assumed by the Lambda function was missing the s3:PutObjectAcl permission.
After we added that permission the Lambda function began working correctly.
The destination bucket policy that we have working now is something like this:
{
"Version": "2012-10-17",
"Id": "Policy11111111111111",
"Statement": [
{
"Sid": "Stmt11111111111111",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:ListBucket*",
"Resource": "arn:aws:s3:::bucket-name",
"Condition": {
"StringLike": {
"aws:userid": "ACCOUNT-ID:awslambda_*"
}
}
},
{
"Sid": "Stmt11111111111111",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucket-name/*",
"Condition": {
"StringLike": {
"aws:userid": "ACCOUNT-ID:awslambda_*"
}
}
},
{
"Sid": "Stmt11111111111111",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::0000000000000:root"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucket-name"
},
{
"Sid": "Stmt11111111111111",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::0000000000000:root"
},
"Action": "s3:*",
"Resource": "arn:aws:s3:::bucket-name/*"
}
]
}
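For completeness, the Lambda function's execution role (in the client's account) also needs its own identity policy granting the S3 actions it performs; the bucket policy above is only half of the cross-account grant. A minimal sketch, assuming the role only needs to write objects and set their ACLs, and reusing the placeholder bucket name from above:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowLambdaWriteToDestinationBucket",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::bucket-name/*"
    }
  ]
}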

To allow a cross-account Lambda function to get access to an S3 bucket, we need to add the following statement to the S3 bucket policy:
{
"Sid": "AWSLambda",
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com",
"AWS": "arn:aws:iam::<AccountID>:root"
},
"Action": "s3:GetObject",
"Resource": "<AWS_S3_Bucket_ARN>/*"
}
The following CloudFormation template will help you allow a cross-account Lambda function to access the S3 bucket:
Parameters:
  LamdaAccountId:
    Description: AccountId to which allow access
    Type: String
Resources:
  myBucket:
    Type: 'AWS::S3::Bucket'
    Properties: {}
    Metadata:
      'AWS::CloudFormation::Designer':
        id: e5eb9fcf-5fe2-468c-ad54-b9b41ba1926a
  myPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: !Ref myBucket
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: Stmt1580304800238
            Action: 's3:*'
            Effect: Allow
            Resource:
              - !Sub 'arn:aws:s3:::${myBucket}/*'
            Principal:
              Service: lambda.amazonaws.com
              AWS:
                - !Sub '${LamdaAccountId}'
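If you need to scope the grant more tightly than a whole account (the concern in the original question), the bucket policy statement shown earlier can name the specific execution role as the principal instead. A hedged sketch, where lambda-execution-role is a hypothetical role name and <AccountID> and <AWS_S3_Bucket_ARN> are the same placeholders used above:
{
  "Sid": "AWSLambdaSpecificRole",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::<AccountID>:role/lambda-execution-role"
  },
  "Action": "s3:GetObject",
  "Resource": "<AWS_S3_Bucket_ARN>/*"
}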

Related

Why is a bucket policy used?

I have a question about S3 bucket access; it's quite complicated for me.
My configuration for the S3 bucket is "public access is denied".
When I don't specify any bucket policy and give a user the predefined S3 read-only permission, I can list the buckets and the objects in them.
But when I only create a bucket policy for a user who has no IAM policy attached, that user is not able to list buckets and objects.
If that is the case, why is a bucket policy necessary?
The bucket policy I made is like this:
{
"Version": "2012-10-17",
"Id": "Policy1672363253371",
"Statement": [
{
"Sid": "Stmt16",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::2040:user/viewer"
},
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::test-bucket"
},
{
"Sid": "Stmt17",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::2040:user/viewer"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::test-bucket/*"
}
]
}

S3 Bucket Policy to work with CloudFront GetObject and PutObject directly to the bucket using Multer-S3

I am trying to make an S3 bucket policy that only allows GetObject through CloudFront but still allows PutObject directly to the bucket.
I tried several combinations but none of them worked.
Here is the latest attempt that I tried.
Block All Public Access is ALL OFF.
Bucket Policy:
{
"Version": "2012-10-17",
"Id": "Policy1604429581591",
"Statement": [
{
"Sid": "Stmt1605554261786",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::MYBUCKET/*"
},
{
"Sid": "Stmt1605557746418",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::MYBUCKET/*"
},
{
"Sid": "Stmt1605557857544",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:MYCLOUDFRONT"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::MYBUCKET/*"
}
]
}
This allows me to PutObject to the bucket, but GetObject using the CloudFront URL gets access denied. If I remove
{
"Sid": "Stmt1605557746418",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::MYBUCKET/*"
}
I can GetObject from CloudFront as well as from the bucket directly.
Please help!
Found the solution to it.
First follow the instructions here to set up CloudFront: https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-access-to-amazon-s3/. The key point is:
5. For Restrict Bucket Access, select Yes.
I was using Multer-S3 to upload my image files. The ACL needs to be set to
acl: 'authenticated-read',
I am also using server-side encryption. In the S3 bucket Properties => Default encryption:
Default encryption: Enabled
Server-side encryption: Amazon S3 master key (SSE-S3)
And in the Multer-S3 config:
serverSideEncryption: 'AES256',
Under the S3 bucket permissions, Block all public access is off and the ACL only enables the bucket owner's permissions.
The final bucket policy I have is:
{
"Version": "2008-10-17",
"Id": "PolicyForCloudFrontPrivateContent",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E36AHAEXL422P3"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::MYBUCKET/*"
},
{
"Sid": "Stmt1605745908405",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::MYBUCKET/*",
"Condition": {
"StringEqualsIfExists": {
"s3:x-amz-server-side-encryption": "AES256"
}
}
}
]
}
With all of the above configuration, this allows PutObject from anyone as long as any server-side encryption header on the request is 'AES256' (note that StringEqualsIfExists also passes when the header is absent; with default encryption enabled, those objects are still encrypted by S3). GetObject requests directly to the bucket will be blocked. All GetObject requests need to go through CloudFront.
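If you want the bucket policy itself to reject unencrypted uploads instead of relying on default encryption, a common pattern is a pair of explicit Deny statements. A sketch reusing the MYBUCKET placeholder (this is an illustration, not part of the original answer):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyIncorrectEncryptionHeader",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::MYBUCKET/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    },
    {
      "Sid": "DenyUnencryptedObjectUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::MYBUCKET/*",
      "Condition": {
        "Null": {
          "s3:x-amz-server-side-encryption": "true"
        }
      }
    }
  ]
}
Note that this stricter variant denies uploads that omit the encryption header entirely, so clients would have to send it explicitly.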

AWS S3 Bucket as an internal website for documentation with automatic updates from CI

I want to make an S3 bucket that receives internal documentation from Travis-CI whenever we build master, which already works fine on Travis-CI. However, I want to limit the internal documentation to only be visible from our company IPs, which are static. The policy I am trying to use is the following.
{
"Version": "2012-10-17",
"Id": "InternalDocs",
"Statement": [
{
"Sid": "Docs User",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::000000000000:user/docs"
},
"Action": [
"s3:AbortMultipartUpload",
"s3:DeleteObject",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:PutObject",
"s3:PutObjectAcl"
],
"Resource": [
"arn:aws:s3:::docs",
"arn:aws:s3:::docs/*"
]
},
{
"Sid": "Whitelisted Read",
"Effect": "Deny",
"Principal": "*",
"Action": "*",
"Resource": "arn:aws:s3:::docs/*",
"Condition": {
"StringNotLike": {
"aws:userid": "000000000000"
},
"NotIpAddress": {
"aws:SourceIp": [
"0.0.0.0/32",
"0.0.0.0/32"
]
}
}
}
]
}
Please note that 0.0.0.0, 000000000000, arn:aws:s3:::docs and arn:aws:iam::000000000000:user/docs are placeholders because I don't want to show our actual names, ids or IPs.
The problem I end up with is that since "Principal": "*" matches any user, public or not, the Deny ends up blocking everything. The result is that anyone from the company's static IP can access the bucket, but Travis-CI is blocked from uploading new versions of the docs to the bucket.
How can I allow Travis-CI to upload to my S3 bucket while still only allowing specific IPs to view the documentation using s3:GetObject, and blocking everyone else?
I use the deploy function on Travis-CI with the AWS access key and access secret for the user arn:aws:iam::000000000000:user/docs.
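One way to keep a blanket Deny from catching the CI user is to scope the Deny to reads only and carve the uploader out in the condition, similar to the aws:userid approach in the first policy at the top of this page. A sketch under the assumption that the placeholder ARN, account ID, and IPs are replaced with real values (this is an illustration, not a tested answer from the thread):
{
  "Sid": "Whitelisted Read",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": "arn:aws:s3:::docs/*",
  "Condition": {
    "StringNotEquals": {
      "aws:PrincipalArn": "arn:aws:iam::000000000000:user/docs"
    },
    "NotIpAddress": {
      "aws:SourceIp": [
        "0.0.0.0/32",
        "0.0.0.0/32"
      ]
    }
  }
}
Because both condition blocks must match for the Deny to apply, requests made by the docs user (from any IP) and requests coming from the whitelisted IPs escape the Deny; any other s3:GetObject request is denied.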

AWS S3 bucket policy to block source IP address not working

I know this question has been asked a few times and I have gone through some documents and examples on this, but I am still not able to get it working.
I want to block access to my S3 bucket from one particular IP address and allow all others. I do not want to block instances belonging to an IAM role, and hence I am using the NotIpAddress condition for this. Below is the policy I applied on my bucket:
{
"Version": "2012-10-17",
"Id": "Policy1486984747194",
"Statement": [
{
"Sid": "AllowAllExceptOneIP",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my-test-bucket",
"Condition": {
"NotIpAddress": {
"aws:SourceIp": "52.38.90.46"
}
}
}
]
}
But this policy isn't working. I am still able to upload files to my bucket from this machine; I am using s3-curl.pl to temporarily upload my files.
Can someone please help me find what is wrong here? Thanks.
To block all actions on an S3 bucket from a particular IP, the policy needs a separate Deny statement for that IP. Simply leaving an IP out of an Allow statement does not block it, because access granted elsewhere (for example by the caller's own IAM permissions) still applies; only an explicit Deny overrides it. Sample:
{
"Version": "2012-10-17",
"Id": "Policy1487062767078",
"Statement": [
{
"Sid": "AllowAll",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-test-bucket",
"arn:aws:s3:::my-test-bucket/*"
]
},
{
"Sid": "DenyIP",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::my-test-bucket",
"arn:aws:s3:::my-test-bucket/*"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": "52.38.90.46"
}
}
}
]
}
Action and Resource can be changed based on what one needs to block.
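For example, to block only uploads from that address while still allowing reads, the Deny could be narrowed to s3:PutObject (a sketch using the same sample bucket and IP):
{
  "Sid": "DenyUploadsFromIP",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::my-test-bucket/*",
  "Condition": {
    "IpAddress": {
      "aws:SourceIp": "52.38.90.46"
    }
  }
}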
Thanks a lot @SergeyKovalev for helping me with this solution.

S3 Invalid Resource in bucket policy

I'm trying to make my entire S3 bucket public, but when I try to add the policy:
{
"Id": "Policy1454540872039",
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Stmt1454540868094",
"Action": [
"s3:GetObject"
],
"Effect": "Allow",
"Resource": "arn:aws:s3:::sneakysnap/*",
"Principal": {
"AWS": [
"985506495298"
]
}
}
]
}
It tells me that my "Resource is invalid", but that is definitely the right arn and that is definitely the right bucket name. Anyone know what's going on?
I had this "problem" when I was trying to set a policy on the wrong bucket. That is, my arn in the policy was reading arn:aws:s3:::my-bucket-A/* but I was attempting to set it on my-bucket-B
I solved the problem with this:
arn:aws:s3:::your-bucket-name-here/*
If you are creating a policy for an access point, it appears that AWS will only accept the following format:
i) account id and region must be specified; and
ii) the literal string object must be included (object is not my bucket name)
arn:aws:s3:region:accountid:accesspoint/myaccesspointname/object/*
I found this answer here -> https://forums.aws.amazon.com/thread.jspa?threadID=315596
I faced the same issue and the following could fix your error; I hope this helps anyone facing the same problem. You need to specify the account ID that corresponds to the region of your load balancer and bucket:
"Principal": {
  "AWS": [
    "*********"
  ]
},
Please refer to this and update accordingly; it should solve the issue.
See also the Bucket Permissions section of Access Logs for Your Application Load Balancer.
I also had the same problem!
I was using the wrong bucket name, so I corrected it.
It worked for me!
Best of luck!
I was getting this error as well. The following change fixed it... No idea why.
This bucket threw the error: bleeblahblo-stuff
This worked: bleeblahblostuff
Maybe it was the dash.... Maybe the bucket length... Or maybe a combination of the two?? Both buckets had the same settings. Hmmm.
I was facing the same problem; I was not using the correct resource name.
I changed the resource name to exactly match the bucket for which I was creating the bucket policy, e.g.
"Resource": "arn:aws:s3:::abc/*"
to
"Resource": "arn:aws:s3:::abc12/*"
My problem was that when I created my S3 bucket, by default the following were true:
Manage public access control lists (ACLs)
Block new public ACLs and uploading public objects (Recommended): True
Remove public access granted through public ACLs (Recommended): True
Manage public bucket policies
Block new public bucket policies (Recommended): True
Block public and cross-account access if bucket has public policies (Recommended): True
I had to set these all to false in order for me to change my bucket policy.
If you're trying the AWS startup workshop, try closing the website-bucket-policy.json file and re-opening it. It worked for me; I guess the update to the JSON file is not saved automatically unless you close it.
See if the bucket name you are specifying in Resource exists or not. The above answer from Vitaly solved my issue.
The problem I realized I had was that my bucket name had a ".com" extension, which needs to be included in the ARN.
To add to iamsohel's answer: I had this same issue when trying to set an S3 policy for enabling Elastic Load Balancer access logs using Terraform.
Here's the policy I was trying to set (from Access logs for your Application Load Balancer):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::elb-account-id:root"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::bucket-name/prefix/AWSLogs/your-aws-account-id/*"
},
{
"Effect": "Allow",
"Principal": {
"Service": "delivery.logs.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::bucket-name/prefix/AWSLogs/your-aws-account-id/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
},
{
"Effect": "Allow",
"Principal": {
"Service": "delivery.logs.amazonaws.com"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::bucket-name"
}
]
}
But I wanted to add some variables to the policy. My initial policy looked like this:
bucket_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::${var.bucket_name.2}"
},
{
"Effect": "Allow",
"Principal": {
"Service": "delivery.logs.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::${var.bucket_name.2}",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
},
{
"Effect": "Allow",
"Principal": {
"Service": "delivery.logs.amazonaws.com"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::${var.bucket_name.2}"
}
]
}
EOF
But this was throwing an error:
Error: Error putting S3 policy: MalformedPolicy: Policy has invalid resource
│ status code: 400, request id: 3HHH9QK9SKB1V9Z0, host id: 8mOrnGi/nsHIcz59kryeriVExU7v+XgGpTw64GHfhjgkwhA3WKSfG7eNbgkMgBMA8qYlyUTLYP8=
│
│ with module.s3_bucket_policy_1.aws_s3_bucket_policy.main,
│ on ../../../../modules/aws/s3-bucket-policy/main.tf line 1, in resource "aws_s3_bucket_policy" "main":
│ 1: resource "aws_s3_bucket_policy" "main" {
All I had to do was to add /* to the end of the arn for the bucket resource:
bucket_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::${var.bucket_name.2}/*"
},
{
"Effect": "Allow",
"Principal": {
"Service": "delivery.logs.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::${var.bucket_name.2}/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
},
{
"Effect": "Allow",
"Principal": {
"Service": "delivery.logs.amazonaws.com"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::${var.bucket_name.2}"
}
]
}
EOF
In my case it was the wrong partition in the ARN for GovCloud - so the resource had to be
"arn:aws-us-gov:s3:::grcsimpletest"
rather than
"arn:aws:s3:::grcsimpletest"
Strangely, the policy that failed was from an AWS doc. That said, it kind of clicked when I edited the policy in the S3 console and it showed the bucket ARN on the edit screen.