How to enable S3 copy bucket permissions in a Terraform statement

My goal is to copy the data from a set of S3 buckets into a main logging-account bucket. Every time I try to perform:
aws s3 cp s3://sub-account-cloudtrail s3://master-acccount-cloudtrail --profile=admin;
I get:
(AccessDenied) when calling the CopyObject operation: Access Denied
I've looked at this post:
How to fix AccessDenied calling CopyObject
I am trying to add the bucket permissions to a Terraform aws_iam_policy_document data source. The statement is written like so:
data "aws_iam_policy_document" "s3" {
  version = "2012-10-17"
  statement {
    sid    = "CopyObjectPermissions"
    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/ops-mgmt-admin"]
    }
    actions   = ["s3:GetObject", "s3:PutObject", "s3:PutObjectAcl"]
    resources = ["${aws_s3_bucket.nfcisbenchmark_cloudtrail.arn}/*"]
  }
  statement {
    sid     = "CopyBucketPermissions"
    actions = ["s3:ListBucket"]
    effect  = "Allow"
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/ops-mgmt-admin"]
    }
    resources = ["${aws_s3_bucket.nfcisbenchmark_cloudtrail.arn}/*"]
  }
}
My goal is to restrict these permissions to the role that the sub-account assumes in the master account. My specific question is: which permissions need to be added to enable the copy?
Expected:
Terraform plan runs successfully
Actual:
│ Error: Error putting S3 policy: MalformedPolicy: Action does not apply to any resource(s) in statement
How can I resolve this?

Two things to mention:
In your second statement the resource is wrong; this is why you get the MalformedPolicy error. s3:ListBucket is a bucket-level action, so its resource must be the bucket ARN itself, without the /* suffix:
resources = [aws_s3_bucket.nfcisbenchmark_cloudtrail.arn]
Be careful with the identifier. At this point I'm not really sure whether your buckets are in different accounts or not. If they are, the account_id in the identifier should reference the source account. data.aws_caller_identity.current.account_id returns the account ID to which Terraform is authenticated, which is usually the account where you are deploying resources (the destination account). If you are not doing cross-account copying, it is fine as it is.
Furthermore, in the case of cross-account access, the ops-mgmt-admin role should have a policy attached to it which gives it access to get/list/upload objects to the S3 bucket.
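For completeness, a sketch of the corrected policy document with both fixes applied (resource names are taken from the question; for cross-account copying, swap the data.aws_caller_identity.current.account_id reference for the source account's ID):

data "aws_iam_policy_document" "s3" {
  version = "2012-10-17"

  # Object-level actions operate on objects, hence the /* suffix on the ARN.
  statement {
    sid    = "CopyObjectPermissions"
    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/ops-mgmt-admin"]
    }
    actions   = ["s3:GetObject", "s3:PutObject", "s3:PutObjectAcl"]
    resources = ["${aws_s3_bucket.nfcisbenchmark_cloudtrail.arn}/*"]
  }

  # s3:ListBucket is a bucket-level action, so the resource is the bucket ARN itself.
  statement {
    sid    = "CopyBucketPermissions"
    effect = "Allow"
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/ops-mgmt-admin"]
    }
    actions   = ["s3:ListBucket"]
    resources = [aws_s3_bucket.nfcisbenchmark_cloudtrail.arn]
  }
}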

Related

Why bucket ARN ending with /* needs to be mentioned for resource in bucket Policy to allow user to upload the file

I created an S3 bucket and added one user in IAM. Suppose my bucket name is sample123. When I mention the resource in the bucket policy like the statement below, the user is not able to upload a document:
"Resource": "arn:aws:s3:::sample123"
But when the resource is mentioned in the policy as below, the user is able to upload a document:
"Resource": ["arn:aws:s3:::sample123", "arn:aws:s3:::sample123/*"]
What does adding /* to the ARN do in the policy? Note: I gave full bucket permissions to the user.
sample123/* means all the objects in the sample123 bucket.
The doc of S3 ARN examples says:
The ARN format for Amazon S3 resources reduces to the following:
arn:aws:s3:::bucket_name/key_name
...
The following ARN uses the wildcard * in the relative-ID part of the
ARN to identify all objects in the examplebucket bucket.
arn:aws:s3:::examplebucket/*
Also refer to the Example of S3 Actions with policy.
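To make the distinction concrete, here is a minimal Terraform sketch (hypothetical resource names; the bucket name sample123 comes from the question). Attach the rendered JSON to the user as an IAM policy, or add a principals block to use it as a bucket policy:

data "aws_iam_policy_document" "sample123" {
  # Upload/download act on objects, so they match the object ARNs: bucket/*.
  statement {
    actions   = ["s3:PutObject", "s3:GetObject"]
    resources = ["arn:aws:s3:::sample123/*"]
  }

  # Listing acts on the bucket, so it matches the bucket ARN itself.
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::sample123"]
  }
}

Neither ARN form matches the other's actions, which is why the policy needs both entries.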

Unable to create google group with Terraform resource google_cloud_identity_group

The following resource is used to create a Google group using the Terraform google-beta provider, version 3.36:
resource "google_cloud_identity_group" "cloud_identity_group_basic" {
provider = google-beta
display_name = "aaa bbb"
parent = "customers/XXX"
group_key {
id = "aaa_bbb#evilcorp.com"
}
labels = {
"cloudidentity.googleapis.com/groups.discussion_forum" = ""
}
}
terraform plan tells me that it will create the resource, but performing apply results in an error (Actor does not have permission to create group). The Terraform service account already has a lot of permissions, such as Organization Administrator, Google Cloud Managed Identities Admin, Google Cloud Managed Identities Domain Admin, ...
G Suite domain-wide delegation has also been tried, but I am unsure how this might help.
Terraform will perform the following actions:

  # google_cloud_identity_group.cloud_identity_group_basic will be created
  + resource "google_cloud_identity_group" "cloud_identity_group_basic" {
      + create_time  = (known after apply)
      + display_name = "aaa bbb"
      + id           = (known after apply)
      + labels       = {
          + "cloudidentity.googleapis.com/groups.discussion_forum" = ""
        }
      + name         = (known after apply)
      + parent       = "customers/XXX"
      + update_time  = (known after apply)

      + group_key {
          + id = "aaa_bbb@evilcorp.com"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

google_cloud_identity_group.cloud_identity_group_basic: Creating...

Error: Error creating Group: googleapi: Error 403: Error(2015): Actor does not have permission to create group 'aaa_bbb@evilcorp.com'.
Details:
[
  {
    "@type": "type.googleapis.com/google.rpc.ResourceInfo",
    "description": "Error(2015): Actor does not have permission to create group 'aaa_bbb@evilcorp.com'.",
    "owner": "domain:cloudidentity.googleapis.com",
    "resourceType": "cloudidentity.googleapis.com/Group"
  }
]

  on groups.tf line 1, in resource "google_cloud_identity_group" "cloud_identity_group_basic":
   1: resource "google_cloud_identity_group" "cloud_identity_group_basic" {
It is now possible to use service accounts with the Google Groups APIs without domain-wide delegation.
See: Setting up the Groups API / Assigning an admin role to the service account. This enabled the Terraform service account to create/manage groups.
Building a bit on top of @Dag's answer:
It is also possible to do it through the Admin Console. Actually, I didn't find any other way, as it seems impossible to obtain the uniqueID of the default Cloud Build service account.
1. Follow the previous link as a Workspace Super User.
2. Click on the Groups Admin role.
3. Click on the down arrow in the Admins section.
4. Finally, click on Assign service accounts; there you can paste the service account email (<YOUR-PROJECT-ID>@cloudbuild.gserviceaccount.com).
After doing this, it is actually possible to obtain the service account's uniqueID: just run the Try this API section of the Directory API documentation with the roleId (you can get the roleId from the URL you are at after step 2) and the customer ID that you can obtain from the Account settings.

Add custom S3 endpoint for Vertica backup

I am trying to back up a Vertica cluster to an S3-like data store (it supports the S3 protocol) internal to my enterprise network. We have similar credentials (ACCESS KEY and SECRET KEY).
Here's what my .ini file looks like:
[S3]
s3_backup_path = s3://vertica_backups
s3_backup_file_system_path = []:/vertica/backups
s3_concurrency_backup = 10
s3_concurrency_restore = 10
[Transmission]
hardLinkLocal = True
[Database]
dbName = production
dbUser = dbadmin
dbPromptForPassword = False
[Misc]
snapshotName = fullbak1
restorePointLimit = 3
objectRestoreMode = createOrReplace
passwordFile = pwdfile
enableFreeSpaceCheck = True
Where can I supply my specific endpoint? For instance, my S3 store is available at a.b.c.d:80. I have tried changing the setting to s3_backup_path = a.b.c.d:80://wms_vertica_backups, but I get the error Error: Error in VBR config: Invalid s3_backup_path. Also, I have the ACCESS KEY and SECRET KEY in ~/.aws/credentials.
After going through more resources, I have exported the following environment variables: VBR_BACKUP_STORAGE_ENDPOINT_URL, VBR_BACKUP_STORAGE_ACCESS_KEY_ID, VBR_BACKUP_STORAGE_SECRET_ACCESS_KEY. vbr init throws the error Error: Unable to locate credentials Init FAILED. I'm guessing it is still trying to connect to the AWS S3 servers. (I have now removed the credentials from ~/.aws/credentials.)
I think it's worth adding that I'm running Vertica Enterprise mode 8.1.1.
For anyone looking for something similar, the question was answered in the Vertica forum here.

Wildcard at end of principal for s3 bucket

I want to allow roles within an account that share a prefix to read from an S3 bucket. For example, we have a number of roles named RolePrefix1, RolePrefix2, etc., and may create more such roles in the future. We want all roles in the account that begin with RolePrefix to be able to access the S3 bucket, without having to change the policy document in the future.
My Terraform for the bucket policy document is below:
data "aws_iam_policy_document" "bucket_policy_document" {
statement {
effect = "Allow"
actions = ["s3:GetObject"]
principals = {
type = "AWS"
identifiers = ["arn:aws:iam::111122223333:role/RolePrefix*"]
}
resources = ["${aws_s3_bucket.bucket.arn}/*"]
}
}
This gives me the following error:
Error putting S3 policy: MalformedPolicy: Invalid principal in policy.
Is it possible to achieve this functionality in another way?
You cannot use a wildcard together with an ARN in the IAM principal field; the only wildcard allowed there is a plain "*".
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements_principal.html
When you specify users in a Principal element, you cannot use a wildcard (*) to mean "all users". Principals must always name a specific user or users.
Workaround:
Keep "Principal": {"AWS": "*"} and add a condition based on ArnLike etc., since conditions do accept ARNs with wildcards.
Example:
https://aws.amazon.com/blogs/security/how-to-restrict-amazon-s3-bucket-access-to-a-specific-iam-role/
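Applied to the Terraform from the question, a sketch of that workaround could look like the following. The ArnLike condition on the aws:PrincipalArn global condition key is one way to express the restriction; the linked blog post filters on aws:userId instead:

data "aws_iam_policy_document" "bucket_policy_document" {
  statement {
    effect  = "Allow"
    actions = ["s3:GetObject"]

    # The principal must be a plain "*"; the real restriction moves into the condition.
    principals {
      type        = "AWS"
      identifiers = ["*"]
    }

    # ArnLike accepts wildcards, so this matches RolePrefix1, RolePrefix2,
    # and any future RolePrefix* role in the account.
    condition {
      test     = "ArnLike"
      variable = "aws:PrincipalArn"
      values   = ["arn:aws:iam::111122223333:role/RolePrefix*"]
    }

    resources = ["${aws_s3_bucket.bucket.arn}/*"]
  }
}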

Amazon S3 error- A conflicting conditional operation is currently in progress against this resource.

Why did I get this error when I tried to create a bucket in Amazon S3?
This error means that the bucket was recently deleted and is queued for deletion in S3. You must wait until the bucket name is available again.
Kindly note, I received this error when my access privileges were blocked.
The error means your operation to create a new bucket in S3 was aborted. There can be multiple reasons for this; you can check the below points to rectify the error:
Is this bucket available, or is it queued for deletion?
Do you have adequate access privileges for this operation?
Is your bucket name unique?
P.S.: I edited this answer to add more details shared by Sanity below; his answer is more accurate, with updated information.
You can view the related errors for this operation here.
I am editing my answer so that the correct answer posted below can be selected as the correct answer to this question.
Creating an S3 bucket policy and the S3 public access block for the same bucket at the same time will cause the error.
Terraform example:
resource "aws_s3_bucket_policy" "allow_alb_access_bucket_elb_log" {
bucket = local.bucket_alb_log_id
policy = data.aws_iam_policy_document.allow_alb_access_bucket_elb_log.json
}
resource "aws_s3_bucket_public_access_block" "lb_log" {
bucket = local.bucket_alb_log_id
block_public_acls = true
block_public_policy = true
}
Solution:
resource "aws_s3_bucket_public_access_block" "lb_log" {
  bucket              = local.bucket_alb_log_id
  block_public_acls   = true
  block_public_policy = true

  #--------------------------------------------------------------------------------
  # To avoid OperationAborted: A conflicting conditional operation is currently in progress
  #--------------------------------------------------------------------------------
  depends_on = [
    aws_s3_bucket_policy.allow_alb_access_bucket_elb_log
  ]
}
We have also observed this error several times when trying to move a bucket from one account to another. To achieve this, you should do the following:
1. Back up the content of the S3 bucket you want to move.
2. Delete the S3 bucket in the first account.
3. Wait for 1-2 hours.
4. Create a bucket with the same name in the other account.
5. Restore the S3 bucket backup.
I received this error when running terraform apply:
Error: error creating public access block policy for S3 bucket
(bucket-name): OperationAborted: A conflicting
conditional operation is currently in progress against this resource.
Please try again.
status code: 409, request id: 30B386F1FAA8AB9C, host id: M8flEj6+ncWr0174ftzHd74CXBjhlY8Ys70vTyORaAGWA2rkKqY6pUECtAbouqycbAZs4Imny/c=
It said to "please try again", which I did, and it worked the second time. It seems there wasn't enough wait time when provisioning the initial resource with Terraform.
To fully resolve this error, I inserted a 5-second sleep between the multiple requests. There is nothing else that I had to do.
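If you would rather make that wait explicit in configuration than re-run apply, a sketch using the time_sleep resource from the hashicorp/time provider (resource names are reused from the example above) could look like this:

resource "time_sleep" "wait_after_bucket_policy" {
  # Give S3 time to finish applying the bucket policy before the
  # public access block request hits the same bucket.
  create_duration = "5s"

  depends_on = [aws_s3_bucket_policy.allow_alb_access_bucket_elb_log]
}

resource "aws_s3_bucket_public_access_block" "lb_log" {
  bucket              = local.bucket_alb_log_id
  block_public_acls   = true
  block_public_policy = true

  depends_on = [time_sleep.wait_after_bucket_policy]
}

The plain depends_on solution above is usually enough; the sleep only helps when ordering alone still races against S3's propagation delay.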