I am trying to access a bucket on S3 with boto. I have been given read access to the bucket, and my keys work when I explore it in S3 Browser. The following code returns 403 Forbidden (Access Denied).
conn = S3Connection('Access_Key_ID', 'Secret_Access_Key')
conn.get_all_buckets()
This also occurs when supplying the access key and secret access key via the boto config file. Is there something else I need to do because the keys are from IAM? Could this indicate an error in the setup? I don't know much about IAM; I was just given the keys.
Some things to check...
If you are using boto, be sure you are using conn.get_bucket(bucket_name) to access only the bucket you have permission to access.
In your IAM (user) policy, if you are restricting access to a single bucket, be sure that the policy includes adequate permissions to the bucket and does not include a trailing slash+asterisk in the ARN (see example below).
Be sure to set "Upload/Delete" permissions for "Authenticated Users" in S3 for the bucket.
IAM policy sample:
NOTE: The SID will be automatically generated when using the policy generator
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:*"
      ],
      "Sid": "Stmt0000000000001",
      "Resource": [
        "arn:aws:s3:::myBucketName"
      ],
      "Effect": "Allow"
    }
  ]
}
My guess is that it's because you're calling conn.get_all_buckets() instead of conn.get_bucket(bucket_name) for the individual bucket you have access to.
from boto.s3.connection import S3Connection

conn = S3Connection('access key', 'secret access key')
# Listing every bucket in the account requires the s3:ListAllMyBuckets permission.
allBuckets = conn.get_all_buckets()
for bucket in allBuckets:
    print(str(bucket.name))
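For comparison, here is a minimal sketch of the single-bucket approach; the bucket name and key path below are placeholders, and validate=False is used so boto skips the initial listing of the bucket, which a restrictive policy may itself deny:

from boto.s3.connection import S3Connection

conn = S3Connection('access key', 'secret access key')
# Placeholder bucket/key names; validate=False skips the initial GET on the
# bucket, which a restrictive IAM policy may not allow.
bucket = conn.get_bucket('my-bucket-name', validate=False)
key = bucket.get_key('path/to/object.txt')
if key is not None:
    print(key.get_contents_as_string())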
Related
I am trying to upload an object to an S3 bucket using boto3 and a service account created by a user with readwrite permissions. The IAM policy for the user is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
However, I am getting the following error:
boto3.exceptions.S3UploadFailedError: Failed to upload /tmp/tmpfnkhwptw/model/requirements.txt to ml-artifacts/1/02e5b8a81a834b6e83a3412745f4ff6a/artifacts/sklearn-model/requirements.txt: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied.
I've verified that the user is trying to upload the object to the correct bucket and prefix, and that the user has the correct permissions to write to the bucket. This comes from an MLflow example for a simple model using a wine dataset (I can provide the code if needed, but there's nothing special about it).
I am using MinIO as the object storage, with its endpoint URL set as the S3 endpoint. The code works correctly if I use admin credentials, but not with the user's service account credentials.
What could be causing this error and how can I resolve it?
I'm afraid I didn't give all the information in my question. I thought service accounts were always necessary (I'm inexperienced) and didn't mention that I was using the service account credentials.
The answer is as simple as using the user credentials and not the service account (created by said user) credentials.
Another thing I have learned is that the service account didn't work because, for some reason, my MinIO version didn't give the account the same privileges as the user who created it. When I opted to give the service account specific access privileges and pasted the "readwrite" JSON config into it, it worked.
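For reference, a minimal boto3 sketch of the working setup; the endpoint URL, key pair, and object path are placeholders, and the point is that the credentials are the MinIO user's own keys rather than the service-account keys:

import boto3

# Placeholders: MinIO endpoint plus the *user* access/secret keys,
# not the service-account keys created by that user.
s3 = boto3.client(
    's3',
    endpoint_url='http://minio.example.com:9000',
    aws_access_key_id='MINIO_USER_ACCESS_KEY',
    aws_secret_access_key='MINIO_USER_SECRET_KEY',
)
s3.upload_file('/tmp/requirements.txt', 'ml-artifacts', 'artifacts/requirements.txt')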
I am trying to run a crawler across an S3 data store in my account which contains two CSV files. However, when I try to run the crawler, no tables are loaded, and I see the following errors in CloudWatch for each of the files:
Error Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied;
Tables created did not infer schemas from this file.
This is especially odd as the IAM role has the AdministratorAccess policy attached, so there should not be any access denied issue.
Any help would be appreciated.
Check to see if the files you are crawling are encrypted. If they are, then your Glue role probably doesn't have a policy that allows it to decrypt.
If so, it might need something like this:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": [
      "kms:Decrypt"
    ],
    "Resource": [
      "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
      "arn:aws:kms:us-west-2:111122223333:key/0987dcba-09fe-87dc-65ba-ab0987654321"
    ]
  }
}
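If you want to confirm whether the objects are SSE-KMS encrypted before touching the role policy, a quick boto3 check along these lines should tell you (bucket and key names are placeholders):

import boto3

s3 = boto3.client('s3')
# Placeholder bucket/key for one of the files the crawler reads.
resp = s3.head_object(Bucket='my-data-bucket', Key='data/file1.csv')
print(resp.get('ServerSideEncryption'))  # 'aws:kms' means SSE-KMS encryption
print(resp.get('SSEKMSKeyId'))           # the key the Glue role must be allowed to decrypt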
Make sure the policies attached to your IAM role include these:
AmazonS3FullAccess
AWSGlueConsoleFullAccess
AWSGlueServiceRole
We had a similar issue with an S3 crawler. According to AWS, S3 crawlers, unlike JDBC crawlers, do not create an ENI in your VPC. This means your bucket policy must allow access from outside the VPC.
Check whether your bucket policy has an explicit Deny somewhere on s3:*. If it does, add a condition to that statement that excludes the role id via the aws:userId condition key. Keep in mind that the role id and the role ARN are not the same thing.
To get the role id:
aws iam get-role --role-name Test-Role
Output:
{
  "Role": {
    "AssumeRolePolicyDocument": "<URL-encoded-JSON>",
    "RoleId": "AIDIODR4TAW7CSEXAMPLE",
    "CreateDate": "2013-04-18T05:01:58Z",
    "RoleName": "Test-Role",
    "Path": "/",
    "Arn": "arn:aws:iam::123456789012:role/Test-Role"
  }
}
You might also need to add a statement that allows s3:PutObject* and s3:GetObject* with the assumed role as the AWS principal. The assumed role will look something like:
arn:aws:sts::123456789012:assumed-role/Test-Role/AWS-Crawler
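As a rough boto3 sketch of the same lookup and condition (the role name and account id are placeholders; adapt it to your own Deny statement):

import boto3

# Fetch the role id (not the role ARN) for the Deny condition.
iam = boto3.client('iam')
role_id = iam.get_role(RoleName='Test-Role')['Role']['RoleId']

# For an assumed role, aws:userId looks like "<RoleId>:<role-session-name>",
# so the Deny statement would exclude it with something like:
#   "Condition": {"StringNotLike": {"aws:userId": ["<account-id>", role_id + ":*"]}}
print(role_id, role_id + ':*')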
Hope this helps.
In my case the issue was that the crawler was configured in a different region than the S3 bucket it was meant to crawl. After configuring a new crawler in the same region as my S3 bucket, the problem was resolved.
This is an S3 bucket policy issue. I made my tables public (bad policy, I know) and it worked.
IAM Roles
Here are the complete roles you need to give in order for the Glue Crawler to work properly ("IAM Roles" screenshot).
I made sure I wasn't missing anything covered by the other suggestions, but that wasn't it. It turns out there was another level of restrictions on reading the bucket imposed by my organization, though I'm not sure what it was.
We have been trying to crack an issue with resource permissions related to S3 and Lambda.
We have a root account which in turn has:
Account A - Bucket owner
Account B - Used to upload (through CORS) and give access to S3 images
Role L - Assigned to a Lambda function, with full S3 access
The buckets have an access policy like the one below:
{
  "Version": "2012-10-17",
  "Id": "Policyxxxxxxxxx",
  "Statement": [
    {
      "Sid": "Stmt44444444444",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": [
          "arn:aws:iam::xxxxxxxxxxxx:user/account-A",
          "arn:aws:iam::xxxxxxxxxxxx:role/role-L"
        ]
      },
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket",
        "arn:aws:s3:::bucket/*"
      ]
    }
  ]
}
The issue:
The Lambda is able to access the S3 resource only if the object ACL is set to public/read-only, but it fails when the object is set to 'private'.
The bucket policy just gives access to the bucket. Is there a way to give Role L read access to the resource?
Objects stored in Amazon S3 buckets are private by default. There is no need to use a Deny policy unless you wish to override another policy that grants access to the content.
I would recommend:
Remove your Deny policy
Create an IAM Role for your AWS Lambda function and grant permission to access the S3 bucket within that role (a minimal sketch follows below).
Feel free to add a Bucket Policy for normal use as appropriate, but that should not impact your Lambda function's access that is granted via the Role.
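For illustration only (the bucket and key names are placeholders), once the S3 permissions live on the Lambda execution role, the function can read private objects with no ACL changes and no embedded keys:

import boto3

s3 = boto3.client('s3')  # picks up the Lambda execution role's credentials

def lambda_handler(event, context):
    # Placeholder bucket/key; this works on private objects as long as the
    # execution role allows s3:GetObject on them.
    obj = s3.get_object(Bucket='bucket', Key='images/photo.jpg')
    return {'content_length': obj['ContentLength']}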
I recently setup an IAM role for accessing a bucket with the following policy:
{
  "Statement": [
    {
      "Sid": "Stmt1359923112752",
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::<BUCKET_NAME>"
      ]
    }
  ]
}
While I can list the contents of the bucket fine, when I call get_contents_to_filename on a particular key, I receive a boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden exception.
Is there a role permission that I need to add to fetch keys from S3? I have checked the permissions on the individual key, and there appears to be nothing that explicitly forbids access to other users; there is only a single permission that grants the owner full permissions.
For completeness, I verified that removing the role policy above prevents access to the bucket completely, so it's not an issue with the policy failing to be applied.
Thanks!
You have to give permission to the objects in the bucket, not just to the bucket. So your resource would have to be arn:aws:s3:::<bucketname>/*. That matches every object.
Unfortunately, that doesn't match the bucket itself. So you either need to give bucket related permissions to arn:aws:s3:::<bucketname> and object permissions to arn:aws:s3:::<bucketname>/*, or just give permissions to arn:aws:s3:::<bucketname>*. Though in that latter case, giving permissions to a bucket named fred would also give the same permissions to one named freddy.
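A rough boto3 sketch of attaching a policy that covers both resource forms (the role name, policy name, and bucket name are placeholders):

import json
import boto3

iam = boto3.client('iam')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:*"],
        "Resource": [
            "arn:aws:s3:::<BUCKET_NAME>",   # bucket-level actions, e.g. s3:ListBucket
            "arn:aws:s3:::<BUCKET_NAME>/*"  # object-level actions, e.g. s3:GetObject
        ]
    }]
}

# Placeholder role/policy names.
iam.put_role_policy(
    RoleName='my-bucket-role',
    PolicyName='bucket-and-objects',
    PolicyDocument=json.dumps(policy),
)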
I'm trying to create a new bucket policy in the Amazon S3 console and get the error
Invalid principal in policy - "AWS" : "my_username"
The username I'm using in principal is my default bucket grantee.
My policy
{
  "Id": "Policy14343243265",
  "Statement": [
    {
      "Sid": "SSdgfgf432432432435",
      "Action": [
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetObjectVersionAcl",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:PutObjectVersionAcl"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my_bucket/*",
      "Principal": {
        "AWS": [
          "my_username"
        ]
      }
    }
  ]
}
I don't understand why I'm getting the error. What am I doing wrong?
As the error message says, your principal is incorrect. Check the S3 documentation on specifying Principals for how to fix it. As seen in the example policies, it needs to be something like arn:aws:iam::111122223333:root.
I was also getting the same error in the S3 Bucket policy generator. It turned out that one of the existing policies had a principal that had been deleted. The problem was not with the policy that was being added.
In this instance, to spot the bad policy you can look for a principal that does not contain an account or a role in the ARN.
So, instead of looking like this:
"Principal": {
"AWS": "arn:aws:iam::123456789101:role/MyCoolRole"
}
It will look something like this:
"Principal": {
"AWS": "ABCDEFGHIJKLMNOP"
}
So instead of a proper ARN it will be an alphanumeric key like ABCDEFGHIJKLMNOP. In this case you will want to identify why the bad principal was there and most likely modify or delete it. Hopefully this will help someone as it was hard to track down for me and I didn't find any documentation to indicate this.
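If you want to hunt for such orphaned principals programmatically, a small boto3 sketch like this (the bucket name is a placeholder) flags any AWS principal that is not an ARN:

import json
import boto3

s3 = boto3.client('s3')
# Placeholder bucket name.
policy = json.loads(s3.get_bucket_policy(Bucket='my-bucket')['Policy'])

for stmt in policy.get('Statement', []):
    principal = stmt.get('Principal', {})
    aws = principal.get('AWS', []) if isinstance(principal, dict) else []
    for p in ([aws] if isinstance(aws, str) else aws):
        if not p.startswith('arn:'):
            # Deleted users/roles show up as bare ids like 'ABCDEFGHIJKLMNOP'.
            print('Suspect principal in statement', stmt.get('Sid', '(no Sid)'), ':', p)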
Better solution:
Create an IAM policy that gives access to the bucket
Assign it to a group
Put user into that group
Instead of saying "This bucket is allowed to be touched by this user", you can define "These are the people that can touch this".
It sounds silly right now, but wait till you add 42 more buckets and 60 users to the mix. Having a central spot to manage all resource access will save the day.
The value for Principal should be the user ARN, which you can find in the Summary section by clicking on your username in IAM.
This is so that the specific user can be bound to the S3 bucket policy.
In my case it is arn:aws:iam::332490955950:user/sample, where sample is the username.
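For example, the same ARN can be pulled with boto3 instead of the console (the user name 'sample' is just the one from this answer):

import boto3

iam = boto3.client('iam')
# 'sample' is the IAM user name used in the example above.
user_arn = iam.get_user(UserName='sample')['User']['Arn']
print(user_arn)  # e.g. arn:aws:iam::332490955950:user/sample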
I was getting the same error message when I tried creating the bucket, bucket policy and principal (IAM user) inside the same CloudFormation stack. Although I could see that CF completed the IAM user creation before even starting the bucket policy creation, the stack deployment failed. Adding a DependsOn: MyIamUser to the BucketPolicy resource fixed it for me.
Why am I getting the error "Invalid principal in policy" when I try to update my Amazon S3 bucket policy?
Issue
I'm trying to add or edit the bucket policy of my Amazon Simple Storage Service (Amazon S3) bucket using the web console, awscli or terraform (etc). However, I'm getting the error message "Error: Invalid principal in policy." How can I fix this?
Resolution
You receive "Error: Invalid principal in policy" when the value of a Principal in your bucket policy is invalid. To fix this error, review the Principal elements in your bucket policy. Check that they're using one of these supported values:
The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) user or role
Note: To find the ARN of an IAM user, run the aws iam get-user command. To find the ARN of an IAM role, run the aws iam get-role command, or check it in the IAM console.
An AWS account ID
The string "*" to represent all users
Additionally, review the Principal elements in the policy and check that they're formatted correctly. If the Principal is one user, the element must be in this format:
"Principal": {
"AWS": "arn:aws:iam::AWS-account-ID:user/user-name1"
}
If the Principal is more than one user but not all users, the element must be in this format:
"Principal": {
"AWS": [
"arn:aws:iam::AWS-account-ID:user/user-name1",
"arn:aws:iam::AWS-account-ID:user/user-name2"
]
}
If the Principal is all users, the element must be in this format:
{
"Principal": "*"
}
If you find invalid Principal values, you must correct them so that you can save changes to your bucket policy.
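As a hedged sketch of applying the corrected policy outside the console (the bucket name, account id, and user name are placeholders):

import json
import boto3

s3 = boto3.client('s3')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/user-name1"},
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-bucket/*"
    }]
}

# This call is rejected (invalid principal / malformed policy) if the
# principal ARN points at a deleted or mistyped identity.
s3.put_bucket_policy(Bucket='my-bucket', Policy=json.dumps(policy))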
Extra points!
AWS Policy Generator
Bucket Policy Examples
Ref-link: https://aws.amazon.com/premiumsupport/knowledge-center/s3-invalid-principal-in-policy-error/
I was facing the same issue when I created a bash script to initialize my Terraform S3 backend. After a few hours I decided just to put a sleep 5 after the user creation, and that did the trick; you can see it at line 27 of my script.
If you are getting the error Invalid principal in policy in S3 bucket policies, the following 3 steps are the way to resolve it.
1 Your bucket policy uses supported values for a Principal element
The Amazon Resource Name (ARN) of an IAM user or role
An AWS account ID
The string "*" to represent all users
2 The Principal value is formatted correctly
If the Principal is one user
"Principal": {
"AWS": "arn:aws:iam::111111111111:user/user-name1"
}
If the Principal is more than one user but not all users
"Principal": {
"AWS": [
"arn:aws:iam::111111111111:user/user-name1",
"arn:aws:iam::111111111111:user/user-name2"
]
}
If the Principal is all users
{
"Principal": "*"
}
3 The IAM user or role wasn't deleted
If your bucket policy uses IAM users or roles as Principals, then confirm that those IAM identities weren't deleted. When you edit and then try to save a bucket policy with a deleted IAM ARN, you get the "Invalid principal in policy" error.
Read more here.
FYI: if you are trying to give access to a bucket in a region that is not enabled, you will get the same error.
From AWS Docs: If your S3 bucket is in an AWS Region that isn't enabled by default, confirm that the IAM principal's account has the AWS Region enabled. For more information, see Managing AWS Regions.
If you are trying to give Account_X_ID access to my_bucket as shown below, you need to enable my_bucket's region on Account_X_ID.
"Principal": {
"AWS": [
"arn:aws:iam::<Account_X_ID>:root"
]
}
"Resource": "arn:aws:s3:::my_bucket/*",
Hope this helps someone.