We have been trying to crack an issue with resource permissions related to S3 and Lambda.
We have a root account which in turn has -
Account A - Bucket owner
Account B - Used to upload (through CORS) and grant access to S3 images
Role L - A role with full S3 access that is assigned to our Lambda function
The buckets have an access policy like the one below -
{
  "Version": "2012-10-17",
  "Id": "Policyxxxxxxxxx",
  "Statement": [
    {
      "Sid": "Stmt44444444444",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::xxxxxxxxxxxx:user/account-A",
          "arn:aws:iam::xxxxxxxxxxxx:role/role-L"
        ]
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket",
        "arn:aws:s3:::bucket/*"
      ]
    }
  ]
}
The issue -
The Lambda function is able to access the S3 object only if the object ACL is set to public/read-only, but it fails when the object is set to 'private'.
The bucket policy only grants access to the bucket. Is there a way to give Role L read access to the objects?
Objects stored in Amazon S3 buckets are private by default. There is no need to use a Deny policy unless you wish to override another policy that grants access to the content.
I would recommend:
Remove your Deny policy
Create an IAM Role for your AWS Lambda function and grant permission to access the S3 bucket within that role (a sketch follows after this list).
Feel free to add a Bucket Policy for normal use as appropriate, but that should not impact your Lambda function's access that is granted via the Role.
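For illustration, here is a minimal sketch of the Lambda side, assuming boto3 and a hypothetical image key; once the execution role grants s3:GetObject on the bucket, a private object is readable without changing its ACL:
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # The execution role's permissions apply here, so a private object is
    # readable as long as the role grants s3:GetObject on it. The bucket
    # name comes from the question's ARNs; the key is a placeholder.
    obj = s3.get_object(Bucket='bucket', Key='images/example.jpg')
    return {'contentLength': obj['ContentLength']}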
Related
While trying to get an S3 object (in account1) from an EC2 instance (in account2), the STS session creation fails with the error:
"User arn:aws:sts::99*804963:assumed-role/i-9B6331541002f46-us-west is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::85*****15:role/MyS3DataReadRole
To provide access to fetch the S3 object:
I've created a permission in account2 with GetObject access to the S3 object (ARN).
I've provided a trust relationship for the role, where Principal.AWS = arn:aws:sts::99*804963:role/i-9B6331541002f46-us-west
The only suspicious point here is "assumed-role" instead of "role" in the instance user's ARN. AFAIK the user ARN is calculated automatically by the AWS SDK, but I can't understand why the "assumed-" prefix is added before "role". I.e. the error message mentions "arn:aws:sts::99804963:assumed-role/i-9B6331541002f46-us-west",
but in the trust relationship I've provided the correct ARN, i.e. "arn:aws:sts::99804963:role/i-9B6331541002f46-us-west"
You also have to create the assume-role policy and attach it to the EC2 instance role (account 99*804963) so that the EC2 instance role has permission to assume the role (arn:aws:iam::85*****15:role/MyS3DataReadRole) which has read permissions for the S3 object.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::85*****15:role/MyS3DataReadRole"
    }
  ]
}
Please update the account ID in the above policy with the actual one.
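Two general points worth noting. First, a trust policy's Principal should reference the IAM role ARN (arn:aws:iam::...:role/...), not an arn:aws:sts:: ARN. Second, the "assumed-role" form in the error message is expected: STS always reports the runtime identity of a role session that way. As a rough boto3 sketch (keeping the masked account ID from the question; bucket and key names are hypothetical), the instance code would then assume the role explicitly and read the object with the temporary credentials:
import boto3

# Assume the cross-account role first (the account ID is masked here,
# as in the question), then read the object with the temporary credentials.
sts = boto3.client('sts')
creds = sts.assume_role(
    RoleArn='arn:aws:iam::85*****15:role/MyS3DataReadRole',
    RoleSessionName='s3-read-session',
)['Credentials']

s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
obj = s3.get_object(Bucket='example-bucket', Key='example-key')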
I am adding an IAM user for read and write access to objects in an AWS S3 bucket from a React Native app. I plan to use signed URLs to access the objects in the S3 bucket, so the policy should be for programmatic access. The IAM user is created just for the purpose of read/write access to the S3 bucket. When I open the existing policies to choose from, there are only 4 S3-related policies:
I can use the read-only access, but I didn't find a write access permission. Full access to the bucket seems too much to give. Also, some of them have descriptions mentioning management console use, and I am not sure if those policies can be used programmatically.
It appears that you are asking how to assign read and write permissions on a specific bucket to a specific user.
This can be done by attaching an inline policy to the IAM User. It would be something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::examplebucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::examplebucket/*"
    }
  ]
}
Please note that some permissions are granted against the bucket (eg ListBucket) while some are granted within the bucket (eg GetObject).
See: User Policy Examples - Amazon Simple Storage Service
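Since you plan to use signed URLs: a presigned URL is signed with the IAM user's own credentials, so the inline policy above is exactly what governs it. A minimal boto3 sketch, with hypothetical key names:
import boto3

# Sign with the dedicated IAM user's credentials (placeholders here); the
# resulting URL can be fetched directly from the React Native app.
s3 = boto3.client(
    's3',
    aws_access_key_id='ACCESS_KEY_ID',
    aws_secret_access_key='SECRET_ACCESS_KEY',
)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'examplebucket', 'Key': 'photos/image.jpg'},
    ExpiresIn=3600,  # URL is valid for one hour
)
print(url)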
I have some highly confidential data that I want to store in an S3 bucket.
I want to make policies (bucket or IAM, whatever is required) in such a way that no one (not even an admin) can read the contents of files in that bucket from the AWS console.
But I will have a program running on my host that needs to put and get data from that S3 bucket.
Also, I will be using S3 server-side encryption, but I can't use S3 client-side encryption.
You are looking for something like this:
{
  "Id": "bucketPolicy",
  "Statement": [
    {
      "Action": "s3:*",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::111111111111:user/USERNAME",
          "arn:aws:iam::111111111111:role/ROLENAME"
        ]
      },
      "Resource": [
        "arn:aws:s3:::examplebucket",
        "arn:aws:s3:::examplebucket/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
For test purposes, make sure you replace arn:aws:iam::111111111111:user/USERNAME with your own user ARN, so that if you lock everybody else out you can still perform actions on the bucket.
arn:aws:iam::111111111111:role/ROLENAME should be replaced by the ARN of the role attached to your EC2 instance (I am assuming that is what you mean by host).
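For the host program itself, a minimal boto3 sketch run under the allow-listed user or role (bucket and key names are hypothetical), requesting SSE-S3 on upload:
import boto3

# Runs under the allow-listed user/role; everyone else is denied by the
# bucket policy above. SSE-S3 (AES256) is requested per upload.
s3 = boto3.client('s3')

s3.put_object(
    Bucket='examplebucket',
    Key='confidential/data.bin',
    Body=b'secret payload',
    ServerSideEncryption='AES256',
)

obj = s3.get_object(Bucket='examplebucket', Key='confidential/data.bin')
data = obj['Body'].read()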
I recently set up an IAM role for accessing a bucket with the following policy:
{
  "Statement": [
    {
      "Sid": "Stmt1359923112752",
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::<BUCKET_NAME>"
      ]
    }
  ]
}
While I can list the contents of the bucket fine, when I call get_contents_to_filename on a particular key, I receive a boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden exception.
Is there a role permission that I need to add to fetch keys from S3? I have checked the permissions on the individual key, and there appears to be nothing that explicitly forbids access to other users; there is only a single permission that grants the owner full permissions.
For completeness, I verified that removing the role policy above prevents access to the bucket completely, so the policy itself is definitely being applied.
Thanks!
You have to give permission to the objects in the bucket, not just to the bucket. So your resource would have to be arn:aws:s3:::<bucketname>/*. That matches every object.
Unfortunately, that doesn't match the bucket itself. So you either need to give bucket related permissions to arn:aws:s3:::<bucketname> and object permissions to arn:aws:s3:::<bucketname>/*, or just give permissions to arn:aws:s3:::<bucketname>*. Though in that latter case, giving permissions to a bucket named fred would also give the same permissions to one named freddy.
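As a sketch of the two-statement fix (the role name and policy name are hypothetical; <BUCKET_NAME> is the question's placeholder), attaching a corrected inline policy with boto3:
import json
import boto3

# Grant bucket-level actions on the bucket ARN and object-level actions
# on the /* ARN, as described above.
iam = boto3.client('iam')
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::<BUCKET_NAME>"]
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::<BUCKET_NAME>/*"]
        }
    ]
}
iam.put_role_policy(
    RoleName='my-bucket-role',
    PolicyName='bucket-and-object-access',
    PolicyDocument=json.dumps(policy),
)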
I am trying to access a bucket on S3 with boto. I have been given read access to the bucket and my keys are working when I explore it in S3 Browser. The following code is returning 403 Forbidden Access Denied.
conn = S3Connection('Access_Key_ID', 'Secret_Access_Key')
conn.get_all_buckets()
This also occurs when using the access key and secret access key via the boto config file. Is there something else I need to be doing, perhaps because the keys are from IAM? Could this indicate an error in the setup? I don't know much about IAM; I was just given the keys.
Some things to check...
If you are using boto, be sure you are using conn.get_bucket(bucket_name) to access only the bucket you have permission to access.
In your IAM (user) policy, if you are restricting access to a single bucket, be sure that the policy includes adequate permissions to the bucket, and do not include a trailing slash + asterisk (/*) in the ARN name (see the example below).
Be sure to set "Upload/Delete" permissions for "Authenticated Users" in S3 for the bucket.
IAM policy sample:
NOTE: The SID will be automatically generated when using the policy generator
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:*"
      ],
      "Sid": "Stmt0000000000001",
      "Resource": [
        "arn:aws:s3:::myBucketName"
      ],
      "Effect": "Allow"
    }
  ]
}
My guess is that it's because you're calling conn.get_all_buckets() instead of conn.get_bucket(bucket_name) for the individual bucket you have access to.
from boto.s3.connection import S3Connection

conn = S3Connection('access key', 'secret access key')

# get_all_buckets() requires the account-wide s3:ListAllMyBuckets
# permission, which a single-bucket policy does not grant.
allBuckets = conn.get_all_buckets()
for bucket in allBuckets:
    print(str(bucket.name))
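A minimal sketch of the single-bucket alternative (the bucket name is a placeholder, and the key path is hypothetical), which needs permissions only on that bucket:
from boto.s3.connection import S3Connection

conn = S3Connection('access key', 'secret access key')

# Fetch only the bucket you have access to; validate=False skips the
# initial listing request in case s3:ListBucket is not granted either.
bucket = conn.get_bucket('myBucketName', validate=False)
key = bucket.get_key('path/to/file.txt')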