How do I edit a bucket policy deployed by organizational-level CloudTrail - amazon-s3

We have a multi-account setup in which we deployed an organization-level CloudTrail through Control Tower in our root account.
The organization trail deploys CloudTrail in each of our member accounts and lets them send logs to CloudWatch in our root account and to an S3 logging bucket in our central logging account.
Now I have AWS Athena set up in our logging account to try and run queries on the logs generated through our organizational-level CloudTrail deployment. So far, I have managed to create the Athena Table that is built on the mentioned logging bucket and I also created a destination bucket for the query results.
When I try to run a simple "preview table" query, I get the following error:
Permission denied on S3 path: s3://BUCKET_NAME/PREFIX/AWSLogs/LOGGING_ACCOUNT_NUMBER/CloudTrail/LOGS_DESTINATION
This query ran against the "default" database, unless qualified by the query. Please post the error message on our forum or contact customer support with Query Id: f72e7dbf-929c-4096-bd29-b55c6c41f582
I figured that the error is caused by the logging bucket's policy lacking any statement allowing Athena access, but when I try to edit the bucket policy I get the following error:
Your bucket policy changes can’t be saved:
You either don’t have permissions to edit the bucket policy, or your bucket policy grants a level of public access that conflicts with your Block Public Access settings. To edit a bucket policy, you need s3:PutBucketPolicy permissions. To review which Block Public Access settings are turned on, view your account and bucket settings. Learn more about Identity and access management in Amazon S3
This is strange since the role I am using has full admin access to this account.
Please advise.
Thanks in advance!

I see this is a follow-up question to your previous one: S3 Permission denied when using Athena
Control Tower automatically deploys a guardrail (a Service Control Policy) that prohibits updating the aws-controltower bucket policy.
In your master account, go to AWS Organizations, open your Security OU, and then open the Policies tab. You should see two guardrail policies.
One of them will contain this policy:
{
  "Condition": {
    "ArnNotLike": {
      "aws:PrincipalARN": "arn:aws:iam::*:role/AWSControlTowerExecution"
    }
  },
  "Action": [
    "s3:PutBucketPolicy",
    "s3:DeleteBucketPolicy"
  ],
  "Resource": [
    "arn:aws:s3:::aws-controltower*"
  ],
  "Effect": "Deny",
  "Sid": "GRCTAUDITBUCKETPOLICYCHANGESPROHIBITED"
},
Add these principals below AWSControlTowerExecution:
arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AWSAdministratorAccess*
arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AdministratorAccess*
Your condition should look like this:
"Condition": {
  "ArnNotLike": {
    "aws:PrincipalArn": [
      "arn:aws:iam::*:role/AWSControlTowerExecution",
      "arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AWSAdministratorAccess*",
      "arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AdministratorAccess*"
    ]
  }
},
You should be able to update the bucket policy after this is applied.
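If you prefer to script the change rather than edit the SCP in the console, the condition update is plain JSON manipulation. The sketch below mirrors the guardrail statement shown earlier; the helper name is mine, and pushing the result back to Organizations (e.g. via `aws organizations update-policy`) is left out:

```python
import json

# Guardrail statement fragment, as shown in the answer above.
scp_statement = {
    "Sid": "GRCTAUDITBUCKETPOLICYCHANGESPROHIBITED",
    "Effect": "Deny",
    "Action": ["s3:PutBucketPolicy", "s3:DeleteBucketPolicy"],
    "Resource": ["arn:aws:s3:::aws-controltower*"],
    "Condition": {
        "ArnNotLike": {
            "aws:PrincipalArn": "arn:aws:iam::*:role/AWSControlTowerExecution"
        }
    },
}

def add_exempt_principals(statement, new_arns):
    """Append extra principal ARN patterns to the ArnNotLike exemption list."""
    cond = statement["Condition"]["ArnNotLike"]
    current = cond["aws:PrincipalArn"]
    if isinstance(current, str):  # single value -> promote to a list
        current = [current]
    cond["aws:PrincipalArn"] = current + [a for a in new_arns if a not in current]
    return statement

add_exempt_principals(scp_statement, [
    "arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AWSAdministratorAccess*",
    "arn:aws:iam::*:role/aws-reserved/sso.amazonaws.com/*/AWSReservedSSO_AdministratorAccess*",
])
print(json.dumps(scp_statement["Condition"], indent=2))
```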

Related

boto3.exceptions.S3UploadFailedError: Failed to upload object to S3 bucket: Access Denied

I am trying to upload an object to an S3 bucket using boto3 and a service account created by a user with readwrite permissions. The IAM policy for the user is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
However, I am getting the following error:
boto3.exceptions.S3UploadFailedError: Failed to upload /tmp/tmpfnkhwptw/model/requirements.txt to ml-artifacts/1/02e5b8a81a834b6e83a3412745f4ff6a/artifacts/sklearn-model/requirements.txt: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied.
I've verified that the user is trying to upload the object to the correct bucket and prefix, and that the user has the correct permissions to write to the bucket -- this comes from an MLflow example for a simple model using a wine dataset (I can provide the code if wanted, but there's nothing special about it).
I am using MinIO as the object storage and the endpoint URL. The code works correctly if I use admin credentials, but not with user service account credentials.
What could be causing this error and how can I resolve it?
I'm afraid I didn't give all the information in my question. I thought service accounts were always necessary (I'm inexperienced), so I didn't say that I was using the service account's credentials.
The answer is as simple as using the user's credentials instead of the credentials of the service account created by that user.
Another thing I learned is that the service account didn't work because, for some reason, my MinIO version didn't grant it the same privileges as the user who created it. Once I gave the service account specific access privileges by pasting the "readwrite" JSON config into it, it worked.

AWS Glue Access denied for crawler with administrator policy attached

I am trying to run a crawler across an s3 datastore in my account which contains two csv files. However, when I try to run the crawler, no tables are loaded, and I see the following errors in cloudwatch for the each of the files:
Error Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied)
Tables created did not infer schemas from this file.
This is especially odd as the IAM role has the AdministratorAccess policy attached, so there should not be any access denied issue.
Any help would be appreciated.
Check to see if the files you are crawling are encrypted. If they are, then your Glue role probably doesn't have a policy that allows it to decrypt.
If so, it might need something like this:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": [
      "kms:Decrypt"
    ],
    "Resource": [
      "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
      "arn:aws:kms:us-west-2:111122223333:key/0987dcba-09fe-87dc-65ba-ab0987654321"
    ]
  }
}
Make sure the policies attached to your IAM role include these:
AmazonS3FullAccess
AWSGlueConsoleFullAccess
AWSGlueServiceRole
We had a similar issue with an S3 crawler. According to AWS, S3 crawlers, unlike JDBC crawlers, do not create an ENI in your VPC. This means your bucket policy must allow access from outside the VPC.
Check that your bucket policy does not have an explicit deny somewhere on s3:*. If there is one, add a condition to that statement that exempts your crawler's role, using the aws:userId condition key with the role ID. Keep in mind that the role ID and the role ARN are not the same thing.
To get the role id:
aws iam get-role --role-name Test-Role
Output:
{
  "Role": {
    "AssumeRolePolicyDocument": "<URL-encoded-JSON>",
    "RoleId": "AIDIODR4TAW7CSEXAMPLE",
    "CreateDate": "2013-04-18T05:01:58Z",
    "RoleName": "Test-Role",
    "Path": "/",
    "Arn": "arn:aws:iam::123456789012:role/Test-Role"
  }
}
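For reference, a deny statement that exempts a role via its role ID might look like the sketch below. The bucket name, Sid, and account ID are placeholders; the `:*` suffix matches the session names of the assumed role, and including the bare account ID keeps account-level principals from locking themselves out:

```json
{
  "Sid": "DenyEveryoneExceptCrawlerRole",
  "Effect": "Deny",
  "Principal": "*",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::my_bucket",
    "arn:aws:s3:::my_bucket/*"
  ],
  "Condition": {
    "StringNotLike": {
      "aws:userId": [
        "AIDIODR4TAW7CSEXAMPLE:*",
        "123456789012"
      ]
    }
  }
}
```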
You might also need to add a statement that allows s3:PutObject* and s3:GetObject* with the AWS principal set to the assumed role. The assumed role will look something like:
arn:aws:sts::123456789012:assumed-role/Test-Role/AWS-Crawler
Hope this helps.
In my case the issue was that the crawler was configured in a different region than the S3 bucket it was meant to crawl. After configuring a new crawler in the same region as my S3 bucket, the problem was resolved.
This is an S3 bucket policy issue. I made my tables public (bad policy, I know) and it worked.
Here are the complete roles you need to give in order for Glue Crawler to work properly.
(screenshot of the IAM roles)
I made sure I wasn't missing anything offered in the other suggestions, but I wasn't. It turns out there was another level of restriction on reading the bucket imposed by my organization, though I'm not sure what it was.

AWS S3 Bucket Policy Source IP not working

I've been trying all possible options but with no results. My bucket policy works well with aws:Referer but it doesn't work at all with source IP as the condition.
My server is hosted on EC2 and I am using the public IP in the format xxx.xxx.xxx.xxx/32 (Public_Ip/32) as the aws:SourceIp parameter.
Can anyone tell me what I am doing wrong?
Currently my Policy is the following
{
  "Version": "2008-10-17",
  "Id": "S3PolicyId1",
  "Statement": [
    {
      "Sid": "IPDeny",
      "Effect": "Deny",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::my_bucket/*",
      "Condition": {
        "NotIpAddress": {
          "aws:SourceIp": "xx.xx.xxx.xxx/32"
        }
      }
    }
  ]
}
I read all examples and case studies but it doesn't seem to allow access based on Source IP...
Thanks a lot!!!
While I won't disagree that roles are better than IP addresses wherever possible, the accepted answer didn't actually achieve the original question's goal. I needed to do this (I needed access from a machine that wasn't on EC2, and thus couldn't use an instance role).
Here is a policy that only allows a certain (or multiple IPs) to access a bucket's object. This assumes that there is no other policy to allow access to the bucket (by default, buckets grant no public access).
This policy also does not allow listing; it only works if you know the full URL of the object you need. If you need more permissions, just add them to the Action list.
{
  "Id": "Policy123456789",
  "Statement": [
    {
      "Sid": "IPAllow",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "xx.xx.xx.xx/32"
          ]
        }
      }
    }
  ]
}
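The /32 suffix in these policies is CIDR notation for a single IPv4 address. If matching ever seems off, Python's ipaddress module can mimic the IpAddress condition check locally (the address below is a documentation placeholder, not a real host):

```python
import ipaddress

# aws:SourceIp is matched as a CIDR block; "/32" means exactly one IPv4 address.
allowed = ipaddress.ip_network("203.0.113.10/32")

def ip_allowed(source_ip, network=allowed):
    """Mimic the IpAddress condition check for a single CIDR block."""
    return ipaddress.ip_address(source_ip) in network

print(ip_allowed("203.0.113.10"))  # True
print(ip_allowed("203.0.113.11"))  # False
```

One thing to remember: aws:SourceIp sees the address the request reaches S3 from, so traffic routed through a NAT gateway or proxy must have that egress address allowed instead of the instance's own IP.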
From the discussion on the comments on the question, it looks like your situation can be rephrased as follows:
How can I give a specific EC2 instance full access to an S3 bucket, and deny access from every other source?
Usually, the best approach is to create an IAM Role and launch your EC2 instance associated with that IAM Role. As I'm going to explain, it is usually much better to use IAM Roles to define your access policies than it is to specify source IP addresses.
IAM Roles
IAM, or Identity and Access Management, is a service that can be used to create users, groups and roles, manage access policies associated with those three kinds of entities, manage credentials, and more.
Once you have your IAM role created, you are able to launch an EC2 instance "within" that role. In simple terms, it means that the EC2 instance will inherit the access policy you associated with that role. Note that you cannot change the IAM Role associated with an instance after you launched the instance. You can, however, modify the Access Policy associated with an IAM Role whenever you want.
The IAM service is free, and you don't pay anything extra when you associate an EC2 instance with an IAM Role.
In your situation
In your situation, what you should do is create an IAM Role to use within EC2 and attach a policy that will give the permissions you need, i.e., that will "Allow" all the "s3:xxx" operations it will need to execute on that specific resource "arn:aws:s3:::my_bucket/*".
Then you launch a new instance with this role (on the current AWS Management Console, on the EC2 Launch Instance wizard, you do this on the 3rd step, right after choosing the Instance Type).
Temporary Credentials
When you associate an IAM Role with an EC2 instance, the instance is able to obtain a set of temporary AWS credentials (let's focus on the results and benefits, and not exactly on how this process works). If you are using the AWS CLI or any of the AWS SDKs, then you simply don't specify any credential at all and the CLI or SDK will figure out it has to look for those temporary credentials somewhere inside the instance.
This way, you don't have to hard code credentials, or inject the credentials into the instance somehow. The instance and the CLI or SDK will manage this for you. As an added benefit, you get increased security: the credentials are temporary and rotated automatically.
In your situation
If you are using the AWS CLI, you would simply run the commands without specifying any credentials. You'll be allowed to run the APIs that you specified in the IAM Role Access Policy. For example, you would be able to upload a file to that bucket:
aws s3 cp my_file.txt s3://my_bucket/
If you are using an SDK, say the Java SDK, you would be able to interact with S3 by creating the client objects without specifying any credentials:
AmazonS3 s3 = new AmazonS3Client(); // no credentials on the constructor!
s3.putObject("my_bucket", ........);
I hope this helps you solve your problem. If you have any further related questions, leave a comment and I will try to address them on this answer.

amazon s3 invalid principal in bucket policy

I'm trying to create a new bucket policy in the Amazon S3 console and get the error
Invalid principal in policy - "AWS" : "my_username"
The username I'm using in principal is my default bucket grantee.
My policy
{
  "Id": "Policy14343243265",
  "Statement": [
    {
      "Sid": "SSdgfgf432432432435",
      "Action": [
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:GetObjectVersionAcl",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:PutObjectVersionAcl"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my_bucket/*",
      "Principal": {
        "AWS": [
          "my_username"
        ]
      }
    }
  ]
}
I don't understand why I'm getting the error. What am I doing wrong?
As the error message says, your principal is incorrect. Check the S3 documentation on specifying Principals for how to fix it. As seen in the example policies, it needs to be something like arn:aws:iam::111122223333:root.
I was also getting the same error in the S3 Bucket policy generator. It turned out that one of the existing policies had a principal that had been deleted. The problem was not with the policy that was being added.
In this instance, to spot the policy that is bad you can look for a principal that does not have an account or a role in the ARN.
So, instead of looking like this:
"Principal": {
  "AWS": "arn:aws:iam::123456789101:role/MyCoolRole"
}
It will look something like this:
"Principal": {
  "AWS": "ABCDEFGHIJKLMNOP"
}
So instead of a proper ARN it will be an alphanumeric key like ABCDEFGHIJKLMNOP. In this case you will want to identify why the bad principal was there and most likely modify or delete it. Hopefully this will help someone as it was hard to track down for me and I didn't find any documentation to indicate this.
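A quick way to hunt for such stale entries is to scan the policy for Principal values that are neither ARNs, bare account IDs, nor "*". This is only a sketch (the sample policy and Sids are made up), but it flags the opaque unique IDs that IAM leaves behind when a principal is deleted:

```python
def find_dangling_principals(policy):
    """Return (Sid, value) pairs for AWS principals that don't look valid."""
    suspicious = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        if not isinstance(principal, dict):
            continue  # "Principal": "*" or missing -- nothing to check
        values = principal.get("AWS", [])
        if isinstance(values, str):
            values = [values]
        for v in values:
            # Valid values: "*", an ARN, or a bare 12-digit account ID.
            if v != "*" and not v.startswith("arn:") and not v.isdigit():
                suspicious.append((stmt.get("Sid"), v))
    return suspicious

policy = {
    "Statement": [
        {"Sid": "Good", "Principal": {"AWS": "arn:aws:iam::123456789012:role/MyCoolRole"}},
        {"Sid": "Stale", "Principal": {"AWS": "ABCDEFGHIJKLMNOP"}},
    ]
}
print(find_dangling_principals(policy))  # [('Stale', 'ABCDEFGHIJKLMNOP')]
```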
Better solution:
Create an IAM policy that gives access to the bucket
Assign it to a group
Put user into that group
Instead of saying "This bucket is allowed to be touched by this user", you can define "These are the people that can touch this".
It sounds silly right now, but wait till you add 42 more buckets and 60 users to the mix. Having a central spot to manage all resource access will save the day.
The value for Principal should be the user's ARN, which you can find in the Summary section by clicking on your username in IAM.
This is how that specific user is bound to the S3 bucket policy.
In my case it is arn:aws:iam::332490955950:user/sample, where sample is the username.
I was getting the same error message when I tried creating the bucket, bucket policy and principal (IAM user) inside the same CloudFormation stack. Although I could see that CF completed the IAM user creation before even starting the bucket policy creation, the stack deployment failed. Adding a DependsOn: MyIamUser to the BucketPolicy resource fixed it for me.
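As a sketch, the DependsOn fix looks like this in a CloudFormation template (resource names and the policy statement here are placeholders, not from the original stack):

```yaml
Resources:
  MyBucket:
    Type: AWS::S3::Bucket

  MyIamUser:
    Type: AWS::IAM::User

  MyBucketPolicy:
    Type: AWS::S3::BucketPolicy
    # Wait for the user to exist before the bucket policy references its ARN;
    # otherwise IAM's eventual consistency can trigger
    # "Invalid principal in policy" during stack creation.
    DependsOn: MyIamUser
    Properties:
      Bucket: !Ref MyBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              AWS: !GetAtt MyIamUser.Arn
            Action: s3:GetObject
            Resource: !Sub "arn:aws:s3:::${MyBucket}/*"
```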
Why am I getting the error "Invalid principal in policy" when I try to update my Amazon S3 bucket policy?
Issue
I'm trying to add or edit the bucket policy of my Amazon Simple Storage Service (Amazon S3) bucket using the web console, the AWS CLI, or Terraform. However, I'm getting the error message "Error: Invalid principal in policy." How can I fix this?
Resolution
You receive "Error: Invalid principal in policy" when the value of a Principal in your bucket policy is invalid. To fix this error, review the Principal elements in your bucket policy. Check that they're using one of these supported values:
The Amazon Resource Name (ARN) of an AWS Identity and Access Management (IAM) user or role
Note: To find the ARN of an IAM user, run the aws iam get-user command. To find the ARN of an IAM role, run the aws iam get-role command, or check it in the IAM console.
An AWS account ID
The string "*" to represent all users
Additionally, review the Principal elements in the policy and check that they're formatted correctly. If the Principal is one user, the element must be in this format:
"Principal": {
  "AWS": "arn:aws:iam::AWS-account-ID:user/user-name1"
}
If the Principal is more than one user but not all users, the element must be in this format:
"Principal": {
  "AWS": [
    "arn:aws:iam::AWS-account-ID:user/user-name1",
    "arn:aws:iam::AWS-account-ID:user/user-name2"
  ]
}
If the Principal is all users, the element must be in this format:
{
  "Principal": "*"
}
If you find invalid Principal values, you must correct them so that you can save changes to your bucket policy.
Extra points!
AWS Policy Generator
Bucket Policy Examples
Ref-link: https://aws.amazon.com/premiumsupport/knowledge-center/s3-invalid-principal-in-policy-error/
I was facing the same issue when I created a bash script to initialize my Terraform S3 backend. After a few hours I decided to just put a sleep 5 after user creation, and that did the trick; you can see it at line 27 of my script.
If you are getting the error Invalid principal in policy in S3 bucket policies, the following three steps are the way to resolve it.
1. Check that your bucket policy uses supported values for the Principal element:
The Amazon Resource Name (ARN) of an IAM user or role
An AWS account ID
The string "*" to represent all users
2. Check that the Principal value is formatted correctly.
If the Principal is one user:
"Principal": {
  "AWS": "arn:aws:iam::111111111111:user/user-name1"
}
If the Principal is more than one user but not all users
"Principal": {
  "AWS": [
    "arn:aws:iam::111111111111:user/user-name1",
    "arn:aws:iam::111111111111:user/user-name2"
  ]
}
If the Principal is all users
{
  "Principal": "*"
}
3. Confirm that the IAM user or role wasn't deleted.
If your bucket policy uses IAM users or roles as Principals, confirm that those IAM identities weren't deleted. When you edit and then try to save a bucket policy with a deleted IAM ARN, you get the "Invalid principal in policy" error.
FYI: If you are trying to give access to a bucket for a region that is not enabled it will give the same error.
From AWS Docs: If your S3 bucket is in an AWS Region that isn't enabled by default, confirm that the IAM principal's account has the AWS Region enabled. For more information, see Managing AWS Regions.
If you are trying to give Account_X_ID access to my_bucket as shown below, you need to enable the region of my_bucket in Account_X_ID.
"Principal": {
  "AWS": [
    "arn:aws:iam::<Account_X_ID>:root"
  ]
},
"Resource": "arn:aws:s3:::my_bucket/*",
Hope this helps someone.

Multiple access_keys for different privileges with same S3 account?

I have a single S3/AWS account. I have several websites each which use their own bucket on S3 for reading/writing storage. I also host a lot of personal stuff (backups, etc) on other buckets on S3, which are not publicly accessible.
I would like these websites -- some of which may have other people accessing their source code and configuration properties, and therefore seeing the S3 keys -- not to have access to my private data!
It seems from reading Amazon's docs that I need to partition privileges, by Amazon USER per bucket, not by access key per bucket. But that's not going to work. It also seems like I only get 2 access keys. I need to have one access key which is the master key, and several others which have much more circumscribed permissions-- only for certain buckets.
Is there any way to do that, or to approximate that?
You can achieve your goal by facilitating AWS Identity and Access Management (IAM):
AWS Identity and Access Management (IAM) enables you to securely control access to AWS services and resources for your users. IAM enables you to create and manage users in AWS, and it also enables you to grant access to AWS resources for users managed outside of AWS in your corporate directory. IAM offers greater security, flexibility, and control when using AWS. [emphasis mine]
As emphasized, using IAM is strongly recommended for all things AWS anyway, i.e. ideally you should never use your main account credentials for anything but setting up IAM initially (as mentioned by Judge Mental already, you can generate as many access keys as you want like so).
You can use IAM just fine via the AWS Management Console (i.e. there is no need for 3rd party tools to use all available functionality in principle).
Generating the required policies can be a bit tricky in times, but the AWS Policy Generator is extremely helpful to get you started and explore what's available.
For the use case at hand you'll need a S3 Bucket Policy, see Using Bucket Policies in particular and Access Control for a general overview of the various available S3 access control mechanisms (which can interfere in subtle ways, see e.g. Using ACLs and Bucket Policies Together).
Good luck!
Yes, to access different login accounts with different permissions under the same AWS account, you can use AWS IAM. As a developer of Bucket Explorer, I suggest trying Bucket Explorer Team Edition if you are looking for a tool that provides a GUI with different logins and different access permissions. Read http://team20.bucketexplorer.com
Simply create a custom IAM group policy that limits access to a particular bucket
Such as...
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my.bucket",
        "arn:aws:s3:::my.bucket/*"
      ]
    }
  ]
}
The first action, s3:ListAllMyBuckets, allows the users to list all of the buckets. Without it, their S3 client will show nothing in the bucket listing when the users log on.
The second action grants full S3 privileges to the user for the bucket named 'my.bucket'. That means they're free to create/list/delete bucket resources and user ACLs within the bucket.
Granting s3:* access is pretty lax. If you want tighter controls on the bucket just look up the relevant actions you want to grant and add them as a list.
"Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"]
I suggest you create this policy as a group (e.g. my.bucket_User) so you can assign it to every user who needs access to this bucket without any unnecessary copy-paste.
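Since the bucket name appears in several ARNs, it can be convenient to generate the policy rather than edit it by hand per bucket. A minimal sketch (the helper and bucket name are mine, not anything AWS-defined):

```python
import json

def bucket_policy(bucket, actions="s3:*"):
    """Build the two-statement group policy above for a single bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListAllMyBuckets"],
                "Resource": "arn:aws:s3:::*",
            },
            {
                "Effect": "Allow",
                "Action": actions,
                # Bucket-level and object-level ARNs both need to be covered.
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            },
        ],
    }

# Tighter variant: list/read/write/delete only, no ACL or policy actions.
tight = bucket_policy(
    "my.bucket",
    ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
)
print(json.dumps(tight, indent=2))
```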