Cross-account permission for s3:PutObject is not working as expected - amazon-s3

I am trying to test S3 cross-account permissions following the instructions in the S3 documentation. In the documentation example, Account A creates a bucket policy granting read access to Account B's root. Account B then creates a user Dave and grants him read access to Account A's bucket.
I tried the above example and it worked perfectly for me. But when I try to use the same example for write access, it doesn't work. For example, in Account A I created the bucket policy below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Example permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AccountB-ID:root"
            },
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::examplebucket"
            ]
        }
    ]
}
In Account B, I created user Dave with the permission below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Example",
            "Effect": "Allow",
            "Action": [
                "s3: PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::examplebucket"
            ]
        }
    ]
}
But when I try to put an object using user Dave's credentials from Account B, I get Access Denied.
Is this expected behaviour, or am I missing something?

In your user policy, s3: PutObject shouldn't have a space in it.
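To verify the fix, a minimal upload sketch with Dave's credentials (assuming his access keys live in a local profile named dave, and using the bucket name from the question):

import boto3
from botocore.exceptions import ClientError

# Assumes Dave's access keys are stored in a local profile named "dave" and
# uses the bucket name from the question; both are placeholders.
s3 = boto3.Session(profile_name="dave").client("s3")
try:
    s3.put_object(
        Bucket="examplebucket",
        Key="cross-account-test.txt",
        Body=b"written by Dave from Account B",
    )
    print("PutObject succeeded")
except ClientError as err:
    # AccessDenied here means the bucket policy or the user policy is still blocking the call.
    print("PutObject failed:", err.response["Error"]["Code"])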

Related

Error while creating database using AWS Glue

I created a service role following the AWS documentation, with the following trust relationship:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "glue.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
In terms of the policies attached to the role, I have attached the AWSGlueService policy and the AmazonS3FullAccess policy. Additionally, I have attached the KMS policy below, just in case it tries to decrypt something:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:*"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
As a matter of fact, I tried this in one AWS account and I can create a database under AWS Glue > Databases > Add database; it works fine.
When I try to configure the same policies in another AWS account, it throws the following error:
{"service":"AWSGlue","statusCode":400,"errorCode":"GlueEncryptionException","requestId":"5c852699-d6c8-48df-8793-bbaab85cf783","errorMessage":"Invalid keyId aws/glue (Service: AWSKMS; Status Code: 400; Error Code: NotFoundException; Request ID: 314f7791-8b25-4bcc-bf56-b0b55e76d300; Proxy: null)","type":"AwsServiceError"}
I understand that I am probably missing something fundamental, but I could not get this resolved. Could you please help me understand what I am missing in the permissions?
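The NotFoundException in that error refers to the KMS key (aws/glue) that the Glue Data Catalog encryption settings appear to reference. As a purely diagnostic step, not a confirmed fix, a minimal boto3 sketch for comparing what those settings point at in the working and failing accounts (profile names are placeholders):

import boto3

# Purely diagnostic: compare the Data Catalog encryption settings between the
# working and failing accounts. Profile names are placeholders, not from the question.
for profile in ("glue-working-account", "glue-failing-account"):
    glue = boto3.Session(profile_name=profile).client("glue")
    enc = glue.get_data_catalog_encryption_settings()[
        "DataCatalogEncryptionSettings"]["EncryptionAtRest"]
    # If CatalogEncryptionMode is SSE-KMS, the key referenced here must exist
    # and be resolvable in this account and region.
    print(profile, enc.get("CatalogEncryptionMode"), enc.get("SseAwsKmsKeyId"))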

Cannot upload to s3 bucket with public access blocked

I'm trying to upload to an S3 bucket, which works fine if I set Block all public access: Off. However, with it on and the following bucket policy, I get an access denied message:
{
    "Version": "2012-10-17",
    "Id": "Policy1594969692377",
    "Statement": [
        {
            "Sid": "Stmt1594969687722",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::066788420637:user/transloadit2"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucketname"
        }
    ]
}
I thought that enabling/disabling Block Public Access just controlled whether rules could be created to make the bucket public? I don't understand why my upload is blocked when public access is disabled.
Many thanks,
Matt
It appears that you wish to grant permission for a specific IAM User to upload to an Amazon S3 bucket.
For this, instead of creating a bucket policy you should add a policy to the IAM User themselves, such as:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my-bucket",
                "arn:aws:s3:::my-bucket/*"
            ]
        }
    ]
}
Note: This policy grants too many permissions, including permission to delete the bucket and all its contents. I recommend that you limit it to the specific API calls desired (e.g. PutObject, GetObject).
Basically:
If you wish to grant access to one user, put the policy on the IAM User
If you wish to grant access to 'everybody', use a bucket policy
Permissions granted to the IAM User are not impacted by S3 Block Public Access.
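As a quick check that the user policy (rather than the bucket policy) is what authorizes the upload, a minimal sketch, assuming the transloadit2 user's keys are in a local profile of the same name and "my-bucket" as the placeholder bucket name:

import boto3

# Assumes the transloadit2 user's keys live in a local profile of the same name;
# "my-bucket" is the placeholder bucket name used in the policy above.
s3 = boto3.Session(profile_name="transloadit2").client("s3")

# With the user policy attached, this succeeds even while Block Public Access is on,
# because the permission comes from the IAM user policy, not a public bucket policy.
s3.put_object(Bucket="my-bucket", Key="upload-test.txt", Body=b"hello")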
You are using only the bucket itself as the resource. If you want to perform operations on objects (put, get, etc.), you also need to add the object path to the resource. So, in your case, you should update your policy like this:
{
    "Version": "2012-10-17",
    "Id": "Policy1594969692377",
    "Statement": [
        {
            "Sid": "Stmt1594969687722",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::066788420637:user/transloadit2"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::bucketname",
                "arn:aws:s3:::bucketname/*"
            ]
        }
    ]
}
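To make the distinction concrete: S3 evaluates bucket-level actions such as s3:ListBucket against the bucket ARN, and object-level actions such as s3:GetObject and s3:PutObject against the object ARN. A minimal sketch, assuming the transloadit2 user's keys are in a local profile of the same name:

import boto3

# Assumes the transloadit2 user's keys live in a local profile of the same name.
s3 = boto3.Session(profile_name="transloadit2").client("s3")

# Authorized by "arn:aws:s3:::bucketname" (s3:ListBucket is a bucket-level action):
s3.list_objects_v2(Bucket="bucketname")

# Needs "arn:aws:s3:::bucketname/*" in the Resource list (s3:PutObject is object-level):
s3.put_object(Bucket="bucketname", Key="test.txt", Body=b"hello")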

AWS S3 sync inconsistent failure when attempting to sync to another bucket in a different account with kms in the mix

Executive summary of the problem: I have a bucket, let's call it bucket A, that is set up with a default customer KMS key (we'll call its ID 1111111) in one account, which we will call 123. In that bucket there are two objects, both under the same path. They have the same KMS key ID and the same owner. When I attempt to sync them to a new bucket B in a different account, let's call it 456, one is successfully synced over but the other is not; instead I get:
An error occurred (AccessDenied) when calling the CopyObject operation: Access Denied
Has anyone seen inconsistent behavior like this before? I say inconsistent because there is absolutely no difference in access rights between these objects, yet one succeeds and the other doesn't. Note: my summary says two objects for simplicity, but in one of my real cases there are 30 objects where 2 copy over and the rest fail, and some other paths show different mixed results.
The following describes the conditions (some data is obfuscated for security, but in a consistent manner):
Bucket A (com.mycompany.datalake.us-east-1) Bucket Policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::123:root",
                    "arn:aws:iam::456:root"
                ]
            },
            "Action": [
                "s3:PutObjectTagging",
                "s3:PutObjectAcl",
                "s3:PutObject",
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*",
                "arn:aws:s3:::com.mycompany.datalake.us-east-1"
            ]
        },
        {
            "Sid": "DenyIfNotGrantingFullAccess",
            "Effect": "Deny",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::123:root",
                    "arn:aws:iam::456:root"
                ]
            },
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*",
                "arn:aws:s3:::com.mycompany.datalake.us-east-1"
            ],
            "Condition": {
                "StringNotLike": {
                    "s3:x-amz-acl": "bucket-owner-full-control"
                }
            }
        },
        {
            "Sid": "DenyIfNotUsingExpectedKmsKey",
            "Effect": "Deny",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::123:root",
                    "arn:aws:iam::456:root"
                ]
            },
            "Action": "s3:PutObject",
            "Resource": [
                "arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*",
                "arn:aws:s3:::com.mycompany.datalake.us-east-1"
            ],
            "Condition": {
                "StringNotLike": {
                    "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:123:key/1111111"
                }
            }
        }
    ]
}
Also in the source account, I have created an assumed role, which I call datalake_full_access_role:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*",
                "arn:aws:s3:::com.mycompany.datalake.us-east-1"
            ]
        }
    ]
}
This role has a trust relationship with account 456. Also worth mentioning is that the policy for KMS key 1111111 is currently wide open:
{
    "Version": "2012-10-17",
    "Id": "key-default-1",
    "Statement": [
        {
            "Sid": "Enable IAM User Permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "kms:Encrypt*",
                "kms:Decrypt*",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:Describe*"
            ],
            "Resource": "*"
        }
    ]
}
Now for the target bucket B (mycompany-us-west-2-datalake) in account 456, the Bucket Policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AccountBasedAccess",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::456:root"
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::mycompany-us-west-2-datalake",
                "arn:aws:s3:::mycompany-us-west-2-datalake/*"
            ]
        }
    ]
}
To do the migration (the sync), I provision an EC2 instance in account 456 and attach an instance profile with the following policies:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::123:role/datalake_full_access_role"
        }
    ]
}
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "kms:DescribeKey",
                "kms:ReEncrypt*",
                "kms:CreateGrant",
                "kms:Decrypt"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:123:key/1111111"
            ]
        }
    ]
}
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::com.mycompany.datalake.us-east-1",
                "arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*"
            ]
        }
    ]
}
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::mycompany-us-west-2-datalake",
                "arn:aws:s3:::mycompany-us-west-2-datalake/*"
            ]
        }
    ]
}
Now, on the EC2 instance, I install the latest AWS CLI version:
$ aws --version
aws-cli/1.16.297 Python/3.5.2 Linux/4.4.0-1098-aws botocore/1.13.33
and then run my sync command:
aws s3 sync s3://com.mycompany.datalake.us-east-1 s3://mycompany-us-west-2-datalake --source-region us-east-1 --region us-west-2 --acl bucket-owner-full-control --exclude '*' --include '*/zone=raw/Event/*' --no-progress
I believe I've done my homework and this should all work, and for several objects it does, but not for all of them, and I have nothing else up my sleeve to try at this point. Note that I have been 100% successful in syncing to a local directory on the EC2 instance, and then from the local directory to the new bucket, with the following two calls:
aws s3 sync s3://com.mycompany.datalake.us-east-1 datalake --source-region us-east-1 --exclude '*' --include '*/zone=raw/Event/*' --no-progress
aws s3 sync datalake s3://mycompany-us-west-2-datalake --region us-west-2 --acl bucket-owner-full-control --exclude '*' --include '*/zone=raw/Event/*' --no-progress
This makes absolutely no sense, as from an access point of view there is no difference. Here is a look at the attributes of two objects in the source bucket, one that succeeds and one that fails:
Successful object:
Owner: Dev.Awsmaster
Last modified: Jan 12, 2019 10:11:48 AM GMT-0800
Etag: 12ab34
Storage class: Standard
Server-side encryption: AWS-KMS
KMS key ID: arn:aws:kms:us-east-1:123:key/1111111
Size: 9.2 MB
Key: security=0/zone=raw/Event/11_96152d009794494efeeae49ed10da653.avro

Failed object:
Owner: Dev.Awsmaster
Last modified: Jan 12, 2019 10:05:26 AM GMT-0800
Etag: 45cd67
Storage class: Standard
Server-side encryption: AWS-KMS
KMS key ID: arn:aws:kms:us-east-1:123:key/1111111
Size: 3.2 KB
Key: security=0/zone=raw/Event/05_6913583e47f457e9e25e9ea05cc9c7bb.avro
ADDENDUM: After looking through several cases, I am starting to see a pattern. I think there may be an issue when the object is too small. In 10 out of 10 directories analyzed where some but not all objects synced successfully, every object that succeeded was 8 MB or larger and every object that failed was under 8 MB. Could this be a bug in aws s3 sync when KMS is in the mix? I am wondering whether I can tweak ~/.aws/config to address this.
I found a solution, although I still think this is a bug in aws s3 sync. By setting the following in ~/.aws/config, all objects synced successfully:
[default]
output = json
s3 =
  signature_version = s3v4
  multipart_threshold = 1
I already had the signature_version entry, but I'm including it for completeness in case someone has a similar need. The new entry is multipart_threshold = 1, which means an object of any size will trigger a multipart upload. I didn't specify multipart_chunksize, which according to the documentation defaults to 5MB.
Honestly, this requirement doesn't make sense: it shouldn't matter whether the object was originally uploaded to S3 using multipart or not, and I know it doesn't matter when KMS isn't involved, but apparently it does when it is.
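For anyone doing the same copy from boto3 instead of the CLI, the equivalent setting appears to be TransferConfig's multipart_threshold. A minimal sketch under the same assumption that forcing multipart uploads works around the issue (the object key is the failing one from the question, and credentials with the same rights as the EC2 instance profile are assumed):

import boto3
from boto3.s3.transfer import TransferConfig

# multipart_threshold=1 mirrors the ~/.aws/config setting above: every object,
# however small, is copied as a multipart upload.
config = TransferConfig(multipart_threshold=1)

# The copy runs against the destination bucket, so use its region.
s3 = boto3.client("s3", region_name="us-west-2")
key = "security=0/zone=raw/Event/05_6913583e47f457e9e25e9ea05cc9c7bb.avro"
s3.copy(
    CopySource={"Bucket": "com.mycompany.datalake.us-east-1", "Key": key},
    Bucket="mycompany-us-west-2-datalake",
    Key=key,
    ExtraArgs={"ACL": "bucket-owner-full-control"},
    Config=config,
)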

IAM bucket policy to allow cross-account Lambda function to write to S3

I'm having a tough time figuring out how to make this work. Our client runs a Lambda function to generate data to write to our bucket. Lambda assumes a role, and because of that (I think) all our attempts to allow the client's entire account access to the bucket still result in an AccessDenied error.
Looking at our logs, I see the AccessDenied is returned for the STS assumed role. However, the S3 console won't allow me to add a policy for a wildcard Principal, and the assumed role's session ID changes each session.
My guess from the sparse documentation is that we need to provide a trust relationship to the lambda.amazonaws.com service. But I can't find any documentation on how to limit that to just access from a specific Lambda function or account.
I would like to have something like this, but with further constraints on the Principal so that it's not accessible by just any account or Lambda function:
{
    "Version": "2012-10-17",
    "Id": "Policy11111111111111",
    "Statement": [
        {
            "Sid": "Stmt11111111111111",
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "lambda.amazonaws.com"
                ]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::bucket-name-here/*",
                "arn:aws:s3:::bucket-name-here"
            ]
        }
    ]
}
UPDATE
This policy doesn't even work. It still returns an AccessDenied. The user listed in the logs is in the form of arn:aws:sts::111111222222:assumed-role/role-name/awslambda_333_201512111822444444.
So at this point I'm at a loss as to how to even allow a Lambda function to write to an S3 bucket.
We resolved this eventually with help from the IAM team.
IAM roles do not inherit any permissions from the account, so permissions need to be assigned explicitly to the role assumed for the Lambda script.
In our case, the Lambda script was also trying to grant the destination bucket owner full control of the copied file. The role assumed by the Lambda function was missing permissions for s3:PutObjectAcl.
After we added that permission, the Lambda function began working correctly.
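To make the missing permission concrete, the client's function was making roughly this kind of call (a sketch, not their actual code; the key name is made up). Requesting bucket-owner-full-control is what requires s3:PutObjectAcl on the role in addition to s3:PutObject:

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Write the generated data into the cross-account bucket and hand the bucket
    # owner full control. Requesting this ACL is what needs s3:PutObjectAcl on
    # bucket-name-here/* in addition to s3:PutObject on the Lambda's role.
    s3.put_object(
        Bucket="bucket-name-here",      # destination bucket from the question
        Key="client-data/output.json",  # hypothetical key
        Body=b"{}",
        ACL="bucket-owner-full-control",
    )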
The destination policy that we have working now is something like this:
{
    "Version": "2012-10-17",
    "Id": "Policy11111111111111",
    "Statement": [
        {
            "Sid": "Stmt11111111111111",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:ListBucket*",
            "Resource": "arn:aws:s3:::bucket-name",
            "Condition": {
                "StringLike": {
                    "aws:userid": "ACCOUNT-ID:awslambda_*"
                }
            }
        },
        {
            "Sid": "Stmt11111111111111",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucket-name/*",
            "Condition": {
                "StringLike": {
                    "aws:userid": "ACCOUNT-ID:awslambda_*"
                }
            }
        },
        {
            "Sid": "Stmt11111111111111",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::0000000000000:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucket-name"
        },
        {
            "Sid": "Stmt11111111111111",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::0000000000000:root"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucket-name/*"
        }
    ]
}
To allow a cross-account Lambda function to access an S3 bucket, the following statement needs to be added to the S3 bucket policy:
{
    "Sid": "AWSLambda",
    "Effect": "Allow",
    "Principal": {
        "Service": "lambda.amazonaws.com",
        "AWS": "arn:aws:iam::<AccountID>:root"
    },
    "Action": "s3:GetObject",
    "Resource": "<AWS_S3_Bucket_ARN>/*"
}
The following CloudFormation template will help you allow a cross-account Lambda function to access an S3 bucket:
Parameters:
  LamdaAccountId:
    Description: AccountId to which allow access
    Type: String
Resources:
  myBucket:
    Type: 'AWS::S3::Bucket'
    Properties: {}
    Metadata:
      'AWS::CloudFormation::Designer':
        id: e5eb9fcf-5fe2-468c-ad54-b9b41ba1926a
  myPolicy:
    Type: 'AWS::S3::BucketPolicy'
    Properties:
      Bucket: !Ref myBucket
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: Stmt1580304800238
            Action: 's3:*'
            Effect: Allow
            Resource:
              - !Sub 'arn:aws:s3:::${myBucket}/*'
            Principal:
              Service: lambda.amazonaws.com
              AWS:
                - !Sub '${LamdaAccountId}'
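If you create the stack with boto3 rather than through the console, a minimal sketch (the stack name, template file name, and account ID below are placeholders):

import boto3

# Stack name, template file name, and account ID are placeholders.
cfn = boto3.client("cloudformation")
with open("cross-account-bucket.yaml") as f:
    cfn.create_stack(
        StackName="cross-account-lambda-bucket",
        TemplateBody=f.read(),
        Parameters=[{"ParameterKey": "LamdaAccountId", "ParameterValue": "111122223333"}],
    )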

s3cmd reporting Access Denied for an account's user but not when using the main account

We have two AWS accounts. We are using s3cmd to back up data from one S3 bucket to another.
The issue we have run into is this: the source bucket is public and can be accessed by anybody without credentials. When we initiate the backup with s3cmd using one of the two master key pairs of the account that owns the bucket we want to put the backup files in, it works flawlessly.
However, when we try to perform the same operation using a user's key pair rather than the account's key pair (on the account we are backing the files up to), we are given an access denied error.
Here is the command we run:
s3cmd -c /root/.s3cfgBackup sync s3://oldbucket/news/ s3://newbucket/Videos/
Here is the policy on the user that gets access denied
{
    "Statement": [
        {
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::newbucket",
                "arn:aws:s3:::newbucket/*"
            ]
        }
    ],
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}
Can anyone help me resolve this access denied issue? It would be greatly appreciated.
I would try changing the policy on that user this way:
{
    "Statement": [
        {
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::newbucket",
                "arn:aws:s3:::newbucket/*"
            ]
        },
        {
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::oldbucket",
                "arn:aws:s3:::oldbucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}
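If you want to reproduce the check outside s3cmd, here is a minimal boto3 sketch of the same news/ to Videos/ copy using that user's credentials (the profile name and the single-page listing are simplifications, not from the question):

import boto3

# The profile name is a placeholder for the IAM user that was getting Access Denied;
# bucket names and prefixes are the ones from the question.
s3 = boto3.Session(profile_name="backup-user").client("s3")

# List one page of the public source prefix and copy each object across
# (s3cmd sync does the same thing, with pagination and change detection).
resp = s3.list_objects_v2(Bucket="oldbucket", Prefix="news/")
for obj in resp.get("Contents", []):
    s3.copy_object(
        Bucket="newbucket",
        Key="Videos/" + obj["Key"].split("/", 1)[1],
        CopySource={"Bucket": "oldbucket", "Key": obj["Key"]},
    )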