s3cmd reporting Access Denied on user of account but not when using main account - amazon-s3

We have two AWS accounts. We are using s3cmd to backup data from one s3 bucket to another.
The issue we have run into is this: the source bucket is public and can be accessed by anybody without credentials. When we initiate the backup with s3cmd using one of the two master key pairs from the account that owns the destination bucket, it works flawlessly.
However, when we try to perform this same operation using an IAM user's key pair (on the account we are backing the files up to) rather than the account's key pair, we get an access denied error.
Here is the command we run:
s3cmd -c /root/.s3cfgBackup sync s3://oldbucket/news/ s3://newbucket/Videos/
Here is the policy attached to the user that gets the access denied error:
{
"Statement": [
{
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::newbucket",
"arn:aws:s3:::newbucket/*"
]
}
],
"Statement": [
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "arn:aws:s3:::*"
}
]
}
Can anyone help me resolve this access denied issue? It would be greatly appreciated.

I would try changing the policy on that user this way. Note that a JSON object cannot contain the "Statement" key more than once: in a policy like yours, only the last "Statement" block survives and the earlier grants are silently dropped, so all statements must live in a single "Statement" array. The user also needs access to the source bucket spelled out in its own policy, since an IAM user is implicitly denied anything its policies do not allow:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::newbucket",
"arn:aws:s3:::newbucket/*"
]
},
{
"Action": "s3:*",
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::oldbucket",
"arn:aws:s3:::oldbucket/*"
]
},
{
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "arn:aws:s3:::*"
}
]
}
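Since a duplicated "Statement" key is easy to miss by eye, a quick local check can catch it before the policy is uploaded. Below is a minimal sketch using only the standard library; the policy text is abbreviated for the example:

```python
import json

def parse_rejecting_duplicates(text):
    """Parse JSON but raise if any object repeats a key.

    Plain json.loads silently keeps only the LAST value for a duplicated
    key, which is exactly how the extra "Statement" blocks get dropped.
    """
    def check(pairs):
        keys = [k for k, _ in pairs]
        dupes = {k for k in keys if keys.count(k) > 1}
        if dupes:
            raise ValueError(f"duplicate keys: {sorted(dupes)}")
        return dict(pairs)

    return json.loads(text, object_pairs_hook=check)

broken = '{"Statement": [{"Sid": "A"}], "Statement": [{"Sid": "B"}]}'

# Default parsing keeps only the last "Statement" block:
print(json.loads(broken))  # {'Statement': [{'Sid': 'B'}]}

try:
    parse_rejecting_duplicates(broken)
except ValueError as e:
    print(e)  # duplicate keys: ['Statement']
```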

Related

AWS SFTP Transfer Family - Session policies

I have set up an AWS SFTP server with a custom API Gateway identity provider. The user is created as SFTP/username in Secrets Manager with the following key/value pairs:
Password: <passwordvalue>
Role: <roleARN>
HomeDirectory: /<s3bucketname>/<username>
The roleARN's policy is as follows:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowUserToSeeBucketContents",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:ListAllMyBuckets",
"s3:ListBucketVersions",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::<s3bucketname>"
},
{
"Sid": "AllUserReadAccessInUserFolder",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::<s3bucketname>/<username>/*"
]
},
{
"Sid": "AllUserFullAccessForToFolders",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": [
"arn:aws:s3:::<s3bucketname>/<username>/To/*"
]
},
{
"Sid": "AllUserReadAccessForFromFolders",
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::<s3bucketname>/<username>/From/*"
]
},
{
"Sid": "DenyUserFromDeletingStandardFolders",
"Action": [
"s3:DeleteObject"
],
"Effect": "Deny",
"Resource": [
"arn:aws:s3:::<s3bucketname>/<username>/To/",
"arn:aws:s3:::<s3bucketname>/<username>/From/"
]
}
]
}
With the current policy the permissions and access work as expected for that specific user, but the problem is that the username is hardcoded in the policy.
I now have to create one more SFTP user in Secrets Manager, and was expecting to reuse the IAM role I used for the first user. I found that this can be achieved with session policies (https://docs.aws.amazon.com/transfer/latest/userguide/users-policies.html), which let the same role/policy serve multiple SFTP users defined in Secrets Manager.
But I am having a hard time getting it to work.
When I replace <s3bucketname> in the policy with ${transfer:HomeBucket} (and the related values as described in the session policies link above), I expected it to work, but I keep running into access denied errors when trying to list the bucket contents via an SFTP client.
Can someone help me understand what I am missing here? Any help is greatly appreciated.
Update: it turned out I needed to use HomeDirectoryDetails (a logical directory) instead of HomeDirectory - https://aws.amazon.com/blogs/storage/simplify-your-aws-sftp-structure-with-chroot-and-logical-directories/
Thanks.
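For reference, the session-policy approach works by having Transfer Family substitute per-session values into policy variables such as ${transfer:HomeBucket}, ${transfer:UserName}, and ${transfer:HomeDirectory}. The sketch below only illustrates that substitution locally; the bucket and user names are made up, and this is not the service's actual rendering engine:

```python
# Hypothetical values standing in for what Transfer Family resolves per session
session_vars = {
    "${transfer:HomeBucket}": "my-sftp-bucket",
    "${transfer:UserName}": "alice",
}

# A single-statement policy template using the session-policy variable
policy_template = (
    '{"Version": "2012-10-17", "Statement": [{'
    '"Sid": "AllowListingOfHomeBucket", '
    '"Effect": "Allow", '
    '"Action": ["s3:ListBucket"], '
    '"Resource": ["arn:aws:s3:::${transfer:HomeBucket}"]}]}'
)

def render(template, variables):
    """Mimic the per-session variable substitution (illustration only)."""
    for var, value in variables.items():
        template = template.replace(var, value)
    return template

rendered = render(policy_template, session_vars)
print(rendered)
```

The practical point: one role/policy can serve every SFTP user, because the variables resolve differently for each authenticated session.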

How to restrict the S3 buckets listing in the Visual Studio AWS explorer

I am setting up the AWS Toolkit in Visual Studio. I have created an IAM user which will be used for development.
But the IAM user I have configured cannot see the S3 buckets in the explorer; it gives "Access denied".
This is the custom policy assigned to the IAM user:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowListing",
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": "arn:aws:s3:::dev-buckets"
},
{
"Sid": "AllowReadWriteDel",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::dev-buckets/*"
}
]
}
The only way I can get it working is by adding the "AmazonS3FullAccess" policy to the IAM user. But then it exposes all the buckets in the account, not just the buckets meant for the developers.
Is it possible to do this with a custom policy? I am a beginner.
You cannot list only specific buckets: s3:ListAllMyBuckets is all-or-nothing, so the explorer needs permission to enumerate every bucket even if the user can only work with some of them.
I think the following policy should help you out:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:ListAllMyBuckets"
],
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::dev-buckets",
"arn:aws:s3:::dev-buckets/*"
]
}
]
}

IAM Policy is not giving access to the accesspoint

With this policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:ListStorageLensConfigurations",
"s3:ListAccessPointsForObjectLambda",
"s3:GetAccessPoint",
"s3:PutAccountPublicAccessBlock",
"s3:GetAccountPublicAccessBlock",
"s3:ListAllMyBuckets",
"s3:ListAccessPoints",
"s3:ListJobs",
"s3:PutStorageLensConfiguration",
"s3:CreateJob"
],
"Resource": "*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*"
}
]
}
I am allowed to access a specific S3 access point. However, when I try a more restrictive policy that grants s3:* only on that specific access point:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:ListStorageLensConfigurations",
"s3:ListAccessPointsForObjectLambda",
"s3:GetAccessPoint",
"s3:PutAccountPublicAccessBlock",
"s3:GetAccountPublicAccessBlock",
"s3:ListAllMyBuckets",
"s3:ListAccessPoints",
"s3:ListJobs",
"s3:PutStorageLensConfiguration",
"s3:CreateJob"
],
"Resource": "*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "s3:*",
"Resource": "arn:aws:s3:eu-west-1:598276570227:accesspoint/accesspointname"
}
]
}
This does not work, and the EC2 instance with this role can no longer access the S3 access point (just copying a file using the AWS CLI).
First, why is this happening? By my reckoning the role should still have access to all actions on that access point (so my reckoning must be wrong in some way!).
Secondly, I am trying to make an S3 bucket accessible only from a certain IAM role. I tried setting this in the access policy on the access point itself, but that had the opposite problem: it was too permissive and everything could still access the bucket. What is the correct way to do this - a policy on the access point restricting access to the IAM role, or an IAM role that has access to the access point?
I got this working by using this:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"s3:ListStorageLensConfigurations",
"s3:ListAccessPointsForObjectLambda",
"s3:GetAccessPoint",
"s3:PutAccountPublicAccessBlock",
"s3:GetAccountPublicAccessBlock",
"s3:ListAllMyBuckets",
"s3:ListAccessPoints",
"s3:ListJobs",
"s3:PutStorageLensConfiguration",
"s3:CreateJob"
],
"Resource": "*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "s3:*",
"Resource": "*",
"Condition": {
"StringLike": {
"s3:DataAccessPointArn": "arn:aws:s3:eu-west-1:598276570227:accesspoint/accesspointname"
}
}
}
]
}
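The working statement keys off the s3:DataAccessPointArn condition key rather than the Resource element, since object requests made through an access point are authorized with that context key. The StringLike operator behaves like case-sensitive shell-style globbing; the sketch below is a local illustration of that comparison, not IAM's actual evaluation engine:

```python
from fnmatch import fnmatchcase

def string_like(value, pattern):
    """Approximate IAM's StringLike operator: * and ? wildcards, case-sensitive."""
    return fnmatchcase(value, pattern)

request_arn = "arn:aws:s3:eu-west-1:598276570227:accesspoint/accesspointname"

# Exact pattern (no wildcards) matches only that access point
print(string_like(request_arn, request_arn))  # True

# A wildcard pattern would cover every access point in the account/region
print(string_like(request_arn, "arn:aws:s3:eu-west-1:598276570227:accesspoint/*"))  # True

# A different access point does not match the exact pattern
print(string_like("arn:aws:s3:eu-west-1:598276570227:accesspoint/other", request_arn))  # False
```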

AWS S3 sync inconsistent failure when attempting to sync to another bucket in a different account with kms in the mix

Executive summary of the problem: I have a bucket, let's call it bucket A, that is set up with a default customer-managed KMS key (I will call its id 1111111) in one account, which we will call 123. In that bucket there are two objects under the same path. They have the same KMS key ID and the same owner. When I attempt to sync these to a new bucket B in a different account, let's call it 456, one syncs over successfully but the other does not, and instead I get:
An error occurred (AccessDenied) when calling the CopyObject operation: Access Denied
Has anyone seen inconsistent behavior like this before? I say inconsistent because there is absolutely no difference in access rights between these objects, yet one succeeds and the other fails. Note: my summary says two objects for simplicity, but in one of my real cases there are 30 objects, of which 2 copy over and the rest fail, and other paths show different mixed results.
The following describes the conditions - some data obfuscated for security, but in a consistent manner:
Bucket A (com.mycompany.datalake.us-east-1) Bucket Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowAccess",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::123:root",
"arn:aws:iam::456:root"
]
},
"Action": [
"s3:PutObjectTagging",
"s3:PutObjectAcl",
"s3:PutObject",
"s3:ListBucket",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*",
"arn:aws:s3:::com.mycompany.datalake.us-east-1"
]
},
{
"Sid": "DenyIfNotGrantingFullAccess",
"Effect": "Deny",
"Principal": {
"AWS": [
"arn:aws:iam::123:root",
"arn:aws:iam::456:root"
]
},
"Action": "s3:PutObject",
"Resource": [
"arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*",
"arn:aws:s3:::com.mycompany.datalake.us-east-1"
],
"Condition": {
"StringNotLike": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
},
{
"Sid": "DenyIfNotUsingExpectedKmsKey",
"Effect": "Deny",
"Principal": {
"AWS": [
"arn:aws:iam::123:root",
"arn:aws:iam::456:root"
]
},
"Action": "s3:PutObject",
"Resource": [
"arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*",
"arn:aws:s3:::com.mycompany.datalake.us-east-1"
],
"Condition": {
"StringNotLike": {
"s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:us-east-1:123:key/1111111"
}
}
}
]
}
Also in the source account, I have created an assumed role, which I call datalake_full_access_role:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*",
"arn:aws:s3:::com.mycompany.datalake.us-east-1"
]
}
]
}
Which has a Trusted relationship with account 456. Also worth mentioning is that currently the policy for the KMS key 1111111 is wide open:
{
"Version": "2012-10-17",
"Id": "key-default-1",
"Statement": [
{
"Sid": "Enable IAM User Permissions",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "kms:*",
"Resource": "*"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"kms:Encrypt*",
"kms:Decrypt*",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:Describe*"
],
"Resource": "*"
}
]
}
Now for the target bucket B (mycompany-us-west-2-datalake) in account 456, the Bucket Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AccountBasedAccess",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::456:root"
},
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::mycompany-us-west-2-datalake",
"arn:aws:s3:::mycompany-us-west-2-datalake/*"
]
}
]
}
To do the migration (the sync) I provision an EC2 instance within the 456 account and attach to it an instance profile that has the following policies attached to it:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::123:role/datalake_full_access_role"
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kms:DescribeKey",
"kms:ReEncrypt*",
"kms:CreateGrant",
"kms:Decrypt"
],
"Resource": [
"arn:aws:kms:us-east-1:123:key/1111111"
]
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::com.mycompany.datalake.us-east-1",
"arn:aws:s3:::com.mycompany.datalake.us-east-1/security=0/*"
]
}
]
}
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::mycompany-us-west-2-datalake",
"arn:aws:s3:::mycompany-us-west-2-datalake/*"
]
}
]
}
Now on the EC2 instance I install the latest AWS CLI:
$ aws --version
aws-cli/1.16.297 Python/3.5.2 Linux/4.4.0-1098-aws botocore/1.13.33
and then run my sync command:
aws s3 sync s3://com.mycompany.datalake.us-east-1 s3://mycompany-us-west-2-datalake --source-region us-east-1 --region us-west-2 --acl bucket-owner-full-control --exclude '*' --include '*/zone=raw/Event/*' --no-progress
I believe I've done my homework and this should all work, and for several objects it does, but not all, and I have nothing else up my sleeve to try at this point. Note that I have been 100% successful syncing to a local directory on the EC2 instance and then from the local directory to the new bucket with the following two calls:
aws s3 sync s3://com.mycompany.datalake.us-east-1 datalake --source-region us-east-1 --exclude '*' --include '*/zone=raw/Event/*' --no-progress
aws s3 sync datalake s3://mycompany-us-west-2-datalake --region us-west-2 --acl bucket-owner-full-control --exclude '*' --include '*/zone=raw/Event/*' --no-progress
This makes absolutely no sense, as from an access point of view there is no difference. Here is a look at the attributes of two objects in the source bucket, one that succeeds and one that fails:
Successful object:
Owner
Dev.Awsmaster
Last modified
Jan 12, 2019 10:11:48 AM GMT-0800
Etag
12ab34
Storage class
Standard
Server-side encryption
AWS-KMS
KMS key ID
arn:aws:kms:us-east-1:123:key/1111111
Size
9.2 MB
Key
security=0/zone=raw/Event/11_96152d009794494efeeae49ed10da653.avro
Failed object:
Owner
Dev.Awsmaster
Last modified
Jan 12, 2019 10:05:26 AM GMT-0800
Etag
45cd67
Storage class
Standard
Server-side encryption
AWS-KMS
KMS key ID
arn:aws:kms:us-east-1:123:key/1111111
Size
3.2 KB
Key
security=0/zone=raw/Event/05_6913583e47f457e9e25e9ea05cc9c7bb.avro
ADDENDUM: After looking through several cases I am starting to see a pattern. I think there may be an issue when the object is too small. In 10 out of 10 directories analyzed where some but not all objects synced successfully, all that succeeded were 8 MB or larger and all that failed were under 8 MB. Could this be a bug in aws s3 sync when KMS is in the mix? I am wondering if I can tweak ~/.aws/config to address this?
I found a solution, although I still think this is a bug in aws s3 sync. By setting the following in ~/.aws/config, all objects synced successfully:
[default]
output = json
s3 =
signature_version = s3v4
multipart_threshold = 1
I already had the signature_version setting, but I am including it for completeness in case someone has a similar need. The new entry is multipart_threshold = 1, which means an object of any size will trigger a multipart upload. I didn't specify multipart_chunksize, which according to the documentation defaults to 5MB.
Honestly, this requirement doesn't make sense: it shouldn't matter whether the object was previously uploaded to S3 using multipart or not, and I know it doesn't matter when KMS isn't involved, but apparently it does when it is.
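The 8 MB boundary observed in the addendum lines up with the AWS CLI's default multipart_threshold of 8 MB: objects at or above it take the multipart copy path (UploadPartCopy) while smaller ones use a single CopyObject call, which plausibly explains the split, since the two paths exercise different request shapes against the bucket and KMS policies. A small sketch of that cutoff, with the 8 MB default taken from the CLI's S3 configuration documentation:

```python
DEFAULT_MULTIPART_THRESHOLD = 8 * 1024 * 1024  # AWS CLI default, in bytes

def uses_multipart(size_bytes, threshold=DEFAULT_MULTIPART_THRESHOLD):
    """The CLI switches from a single CopyObject to multipart copy at the threshold."""
    return size_bytes >= threshold

successful = int(9.2 * 1024 * 1024)  # the 9.2 MB object that synced
failed = int(3.2 * 1024)             # the 3.2 KB object that got AccessDenied

print(uses_multipart(successful))           # True: takes the multipart path
print(uses_multipart(failed))               # False: single CopyObject
print(uses_multipart(failed, threshold=1))  # True: the workaround's setting
```

Setting multipart_threshold = 1 simply forces every object, however small, down the path that was already succeeding.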

AWS restrict access to subfolder in s3

I am trying to restrict an IAM role to only be able to access a specific subfolder (key prefix) in an S3 bucket. Here's the policy JSON I'm using, but currently the user can still access other folders in the bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"arn:aws:s3:::somebucket",
"arn:aws:s3:::somebucket/*"
]
},
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": [
"s3:ListBucketVersions",
"s3:ListBucketByTags",
"s3:GetBucketAcl"
],
"Resource": [
"arn:aws:s3:::mybucket"
]
},
{
"Sid": "VisualEditor3",
"Effect": "Allow",
"Action": [
"s3:GetObjectAcl",
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": [
"arn:aws:s3:::mybucket/datasets/company1/*"
]
}
]
}
Currently, using this role I can still do, e.g.
aws s3 cp s3://mybucket/datasets/company2/dataset.csv .
and download the dataset. What am I doing wrong?
When I try to simulate the policy it seems correct (attempting GetObject on mybucket/datasets/company2/dataset.csv fails implicitly), but this does not happen in practice. There are no other policies attached to this user.