S3 Replication - s3:PutReplicationConfiguration

I have been attempting to introduce S3 bucket replication into my existing project's stack. I kept getting an 'API: s3:PutBucketReplication Access Denied' error in CloudFormation when updating the stack through my CodeBuild/CodePipeline project after adding the replication rule on the source bucket plus the S3 replication role. For testing, I added full S3 permissions (s3:*) on all resources ("*") to the CodeBuild role, as well as full S3 permissions on the S3 replication role -- and again I got the same result.
Additionally, I tried running a stand-alone, stripped-down version of the CloudFormation template (so not updating my existing application infrastructure stack) which creates the buckets (source and target) and the S3 replication role. I deployed it through CloudFormation while logged in with my Admin role via the console, and again I got the same error as when attempting the deployment with my CodeBuild role in CodePipeline.
As a last-ditch sanity check, again logged in with my admin role for the account, I attempted to perform the replication setup manually on buckets that I created using the S3 console, and I got the error below:
You don't have permission to update the replication configuration
You or your AWS admin must update your IAM permissions to allow s3:PutReplicationConfiguration, and then try again. Learn more about identity and access management in Amazon S3.
API response: Access Denied
I confirmed that my role has full S3 access across all resources. This message suggests to me that the s3:PutReplicationConfiguration permission may somehow be different from other S3 permissions - perhaps needing to be configured with root access to the account?
Also, it seems strange that CloudFormation reports the permission as s3:PutBucketReplication, whereas the S3 console error references s3:PutReplicationConfiguration. There doesn't seem to be an IAM action named s3:PutBucketReplication (ref: https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html), only s3:PutReplicationConfiguration.
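For what it's worth, the bare CLI equivalent reproduces the same denial and confirms the IAM action name. A sketch, with placeholder bucket names and role ARN; both buckets would need versioning enabled for the call to succeed at all:

# Minimal replication configuration (placeholders throughout)
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::123456789012:role/my-s3-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Disabled" },
      "Destination": { "Bucket": "arn:aws:s3:::my-target-bucket" }
    }
  ]
}
EOF

# This call exercises the s3:PutReplicationConfiguration IAM action
aws s3api put-bucket-replication \
  --bucket my-source-bucket \
  --replication-configuration file://replication.json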

Have you checked for a permissions boundary? Is this a stand-alone account, or one inside a corporate Control Tower setup?
Deny always wins, so if a permissions boundary excludes certain actions, you may run into issues like this even when the action is explicitly allowed.
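One way to test for this kind of hidden deny is the IAM policy simulator, which evaluates the role's effective permissions. A sketch, assuming a role named CodeBuildRole; whether organization-level SCPs show up in the result is itself worth verifying in your setup:

# Simulate the exact action against the role's effective permissions
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:role/CodeBuildRole \
  --action-names s3:PutReplicationConfiguration \
  --query 'EvaluationResults[].[EvalActionName,EvalDecision]' \
  --output text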

It turns out that the required permission (s3:PutReplicationConfiguration) was being blocked by a preventive Control Tower guardrail applied to the OU the AWS account lives in. Unfortunately, this DENY is not visible to a user from anywhere within the AWS account, as it exists outside of any permissions boundary or IAM policy. It took some investigation by our internal IT team to identify the guardrail control as the source of the DENY.
https://docs.aws.amazon.com/controltower/latest/userguide/elective-guardrails.html#disallow-s3-ccr
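For anyone with access to the Organizations management account, the guardrail shows up as a service control policy attached to the OU. A sketch of how to find it (the OU and policy IDs are placeholders):

# List SCPs attached to the OU
aws organizations list-policies-for-target \
  --target-id ou-xxxx-xxxxxxxx \
  --filter SERVICE_CONTROL_POLICY

# Inspect a policy body for a Deny on s3:PutReplicationConfiguration
aws organizations describe-policy --policy-id p-xxxxxxxx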

Related

ECS fargate, permissions to download file from S3

I am trying to deploy an ECR image to ECS Fargate. In the Dockerfile I run an AWS CLI command to download a file from S3.
However, I need the relevant permissions for ECS to access S3. There is a task role (under the ECS task definition, screenshot below) through which I presume I can grant ECS the rights to access S3. However, the dropdown only offers the default ecsTaskExecutionRole, not a custom role I created myself.
Is this a bug? Or am I required to add the role elsewhere?
[NOTE] I do not want to include the AWS keys as environment variables in Docker, for security reasons.
[UPDATES]
Added a new ECS role with a permissions boundary for S3. The task role still did not show up.
Did you grant ECS the right to assume your custom role? As per the documentation:
https://docs.aws.amazon.com/AmazonECS/latest/userguide/task-iam-roles.html#create_task_iam_policy_and_role
A trust relationship needs to be established so that the ECS service can assume the role on your behalf.
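A sketch of what that looks like via the CLI; the role and file names are placeholders, and the key part is the ecs-tasks.amazonaws.com principal:

# trust-policy.json: lets ECS tasks assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name my-ecs-s3-task-role \
  --assume-role-policy-document file://trust-policy.json

# Give the task role the S3 access it needs
aws iam attach-role-policy \
  --role-name my-ecs-s3-task-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

With that trust relationship in place, the role should appear in the Task Role dropdown of the task definition.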

Orphaned AWS s3 Bucket Cannot Be Deleted

After making some changes for an AWS-hosted static website, I deleted an S3 bucket through the AWS console. However, the bucket is now orphaned: although it is not listed in the console, I can still reach what is left of it through the CLI and through the URI.
When I try to recreate a www bucket with the same name, the AWS console returns the following error:
Bucket already exists
The bucket with issues has a www prefix, so now I have two different versions (www and non-www) of the same website.
The problem URIs are:
www.michaelrieder.com and www.michaelrieder.com.s3-eu-west-1.amazonaws.com
I made many failed attempts to delete the bucket using the aws s3 CLI utility. I tried aws s3 rb --force, aws s3 rm, and every other command I thought might remotely work.
I need to delete and recreate the bucket with exactly the same name so that www website redirection works correctly, since AWS enforces static website naming conventions strictly.
When I execute the aws s3 CLI command, for example:
aws s3 rb s3://www.michaelrieder.com --force --debug
A typical CLI error message is:
An error occurred (AccessDenied) when calling the DeleteObject operation: Access Denied
I thought it might be a cache-related issue and that the bucket would flush itself after a period of time, but the issue has persisted for over 48 hours.
It seems to be a permissions issue, but I cannot find a way to change the phantom bucket's permissions or any method of deleting the bucket or even its individual objects, since I do not have access to the bucket via the AWS console or the aws s3 CLI.
Appreciate any ideas. Please help.
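A few read-only s3api diagnostics that can narrow down where the denial comes from (a sketch, using the bucket name from the question):

# Confirm the bucket still exists and find its region
aws s3api get-bucket-location --bucket www.michaelrieder.com

# Check which canonical user owns the bucket
aws s3api get-bucket-acl --bucket www.michaelrieder.com

# List remaining objects with their owners; DeleteObject can be denied
# when the objects were written by a different account
aws s3api list-objects-v2 --bucket www.michaelrieder.com --fetch-owner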

AWS Deployment issue Error code: HEALTH_CONSTRAINTS (In-place deployment)

I want to host my website on AWS S3. When I create the CodeDeploy deployment (I followed this tutorial: https://aws.amazon.com/getting-started/tutorials/deploy-code-vm/), it shows this error: Deployment Failed
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
Error screenshot: http://i.prntscr.com/oqr4AxEiThuy823jmMck7A.png
Please help.
If you want to host your website on S3, you should upload your code into S3 bucket and enable Static Web Hosting for that bucket. If you use CodeDeploy, it will take application code either from S3 bucket or GitHub and host it on EC2 instances.
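If plain static hosting is the goal, the S3-only path is roughly this (a sketch; bucket and document names are placeholders, and the bucket also needs a policy allowing public reads):

# Upload the site and enable static website hosting
aws s3 sync ./my-site s3://my-website-bucket
aws s3 website s3://my-website-bucket \
  --index-document index.html --error-document error.html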
I will assume that you want to use CodeDeploy to host your website on a group of EC2 instances. The error you mention can occur if your EC2 instances do not have the correct permissions through an IAM role.
From the documentation:
The permissions you add to the service role specify the operations AWS CodeDeploy can perform when it accesses your Amazon EC2 instances and Auto Scaling groups. To add these permissions, attach an AWS-supplied policy, AWSCodeDeployRole, to the service role.
If you are following the sample deployment from the CodeDeploy wizard, make sure you picked Create A Service Role at the stage where you are required to Select a service role.
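If the service role already exists, attaching the AWS-supplied policy from the CLI looks roughly like this (the role name is a placeholder; the policy ARN is the managed AWSCodeDeployRole policy quoted above):

aws iam attach-role-policy \
  --role-name MyCodeDeployServiceRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole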

Best practice to make S3 file accessible for Redshift through COPY operation for anyone

I want to publish a tutorial where data from a sample TSV file in S3 is used by Redshift. Ideally I want it to be a simple copy-paste operation, so readers can follow the exercises step by step, similar to Load Sample Data from Amazon S3. The problem is with the first data import task using the COPY command, as it only supports S3- or EMR-based loads.
This seems like a simple requirement, but there is no hassle-free way to do it with Redshift COPY (I can make the file available for browser download without any problem, but COPY requires a CREDENTIALS parameter…).
The variety of options for Redshift COPY authorization parameters is quite rich:
Should I ask users to Create an IAM Role for Amazon Redshift themselves?
Should I create it myself and publish the IAM role ARN? That sounds most hassle-free (copy-paste), but is it sound security-wise? Do I need to restrict the S3 permissions to limit the role's access to only that particular file?
Should I try temporary access instead?
You are correct:
Data can be imported into Amazon Redshift from Amazon S3 via the COPY command
The COPY command requires permission to access the data stored in Amazon S3. This can be granted either via:
Credentials (Access Key + Secret Key) associated with an IAM User, or
An IAM Role
You cannot create a role for people and let them use it, because their Amazon Redshift cluster will be running in a different AWS account than your IAM role. You could possibly grant cross-account trust so that other accounts can use the role, but this is not necessarily a wise thing to do.
As for credentials, they could either use their own or ones that you supply. They can obtain their own Access Key + Secret Key in the IAM console.
If you wish to supply credentials for them to use, you could create an IAM user that has permission only to access the Amazon S3 files they need. It is normally unwise to publish your AWS credentials because they might expose a security hole, so think carefully before doing this.
At the end of the day, it's probably best to show them the correct process so they understand how to obtain their own credentials. Security is very important in the cloud, so you would also be teaching them good security practice, in addition to Amazon Redshift itself.
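For reference, the two authorization styles look like this inside the COPY statement itself, here run through psql against the cluster endpoint (a sketch; the table, bucket, endpoint, and ARN are placeholders):

psql -h my-cluster.abc123.eu-west-1.redshift.amazonaws.com -p 5439 -U master -d dev <<'SQL'
-- IAM-role form: the role is attached to the reader's own cluster
COPY tutorial_table
FROM 's3://my-tutorial-bucket/sample.tsv'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
DELIMITER '\t';

-- Key-pair form (the CREDENTIALS parameter mentioned above):
-- CREDENTIALS 'aws_access_key_id=...;aws_secret_access_key=...'
SQL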

How to grant a user account access for GCE Cloud Storage

I have a Compute Engine VM that has the storage permission set to read-write. When I try to run the following command:
gsutil rsync -R local-dir gs://folder1/prod/www.domain.tld
I get insufficient permission error:
Building synchronization state...
Skipping cloud sub-directory placeholder object (gs://folder1/prod/www.domain.tld/) because such objects aren't needed in (and would interfere with) directories in the local file system
Starting synchronization
Copying file://local-dir/.gitignore [Content-Type=application/octet-stream]...
Uploading gs://folder1/prod/www.domain.tld/.gitignore: 174 B/174 B
AccessDeniedException: 403 Insufficient Permission
I am not sure which account needs specific permissions or how I would even assign them. Can someone provide some direction as to what I need to look into? The only thing I can think of is a service account, but I am not sure how to create one or whether that is even what I need to do. Also, once I grant an account access/permissions, how would I use that account to authenticate?
Use the gcloud auth list command to look up the active account. The Compute Engine service account looks similar to the following:
123845678986-compute@developer.gserviceaccount.com (active)
By default, this service account is a member of your project with Edit permission. Check the ACLs of your GCS bucket and its folders and make sure the GCE service account, or a group it belongs to, has ownership or editing rights to them.
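A sketch of how to check and fix this from the VM (the bucket name is taken from the question; the exact permission to grant depends on your needs):

# Confirm which account the VM is actually using
gcloud auth list

# Inspect the bucket's current ACL
gsutil acl get gs://folder1

# Grant the Compute Engine service account write access on the bucket
gsutil acl ch -u 123845678986-compute@developer.gserviceaccount.com:W gs://folder1

If the bucket uses IAM policies rather than legacy ACLs, the equivalent would be gsutil iam ch with a role such as roles/storage.objectAdmin.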