I am trying to deploy an ECR image to ECS Fargate. In the Dockerfile I run an AWS CLI command to download a file from S3.
However, I need the relevant permissions to access S3 from ECS. There is a task role setting (under the ECS task definition, screenshot below) that I presume is where I grant ECS the right to access S3. However, the dropdown only offered me the default ecsTaskExecutionRole, not the custom role I created myself.
Is this a bug? Or am I required to add the role elsewhere?
[NOTE] I do not want to pass AWS keys as environment variables to Docker, for security reasons.
[UPDATES]
Added a new ECS role with an S3 permissions boundary. The task role still did not show up in the dropdown.
Did you grant ECS the right to assume your custom role? As per the documentation:
https://docs.aws.amazon.com/AmazonECS/latest/userguide/task-iam-roles.html#create_task_iam_policy_and_role
A trust relationship needs to be established so that the ECS service can assume the role on your behalf.
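For reference, a minimal sketch of creating such a role with that trust relationship, using boto3 (the role name and the attached managed policy are placeholders, not taken from your setup):

# Sketch: a task role that the ECS tasks service is trusted to assume.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ecs-tasks.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="my-s3-task-role",  # placeholder name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Give the task role S3 read access (broad managed policy used here for brevity).
iam.attach_role_policy(
    RoleName="my-s3-task-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)

A role created this way should then appear in the Task Role dropdown, since the console only lists roles trusted by ecs-tasks.amazonaws.com.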
Related
I would like to create an ECS task with Fargate, and have that upload a file to S3 using the AWS CLI (among other things). I know that it's possible to create task roles, which can provide the task with permissions on AWS services/resources. Similarly, in OpsWorks, the AWS SDK is able to query instance metadata to obtain temporary credentials for its instance profile. I also found these docs suggesting that something similar is possible with the AWS CLI on EC2 instances.
Is there an equivalent for Fargate, i.e., can the AWS CLI, running in a Fargate container, query the metadata service for temporary credentials? If not, what's a good way to authenticate so that I can upload a file to S3? Should I just create a user for this task and pass in AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables?
(I know it's possible to have an ECS task backed by EC2, but this task is short-lived and run maybe monthly; it seemed a good fit for Fargate.)
"I know that it's possible to create task roles, which can provide the
task with permissions on AWS services/resources."
"Is there an equivalent for Fargate"
You already know the answer. The ECS task role isn't specific to EC2 deployments; it works with Fargate deployments as well.
You can get the task metadata, including temporary IAM credentials, through the ECS task metadata endpoint. But you don't need to worry about that, because the AWS CLI, and every AWS SDK, will automatically pull those credentials when running inside an ECS task.
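As an illustration: with a task role attached to the task definition, code inside the container can talk to S3 with no keys configured anywhere. The bucket and key names below are placeholders.

# Sketch: inside a Fargate task with a task role, boto3 resolves temporary
# credentials automatically from the container credentials endpoint
# (advertised via the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable).
import boto3

s3 = boto3.client("s3")  # no access keys supplied
s3.download_file("my-example-bucket", "config/settings.json", "/tmp/settings.json")  # placeholder names

The AWS CLI behaves the same way, so an aws s3 cp run inside the task picks up the task role as well.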
I have been attempting to introduce S3 bucket replication into my existing project's stack. I kept getting an 'API: s3:PutBucketReplication Access Denied' error in CloudFormation when updating my stack through my CodeBuild/CodePipeline project after adding the replication rule on the source bucket plus the S3 replication role. For testing, I added full S3 permissions (s3:*) on all resources ("*") to the CodeBuild role, as well as full S3 permissions on the S3 replication role -- again I got the same result.
Additionally, I tried running a stand-alone, stripped-down version of the CloudFormation template (so not updating my existing application infrastructure stack) that only creates the buckets (source and target) and the S3 replication role. I deployed it through CloudFormation while logged in with my admin role via the console, and again I got the same error as when attempting the deployment with my CodeBuild role in CodePipeline.
As a last-ditch sanity check, again logged in with my admin role for the account, I attempted to perform the replication setup manually on buckets I created through the S3 console, and I got the error below:
You don't have permission to update the replication configuration
You or your AWS admin must update your IAM permissions to allow s3:PutReplicationConfiguration, and then try again. Learn more about identity and access management in Amazon S3.
API response: Access Denied
I confirmed that my role has full S3 access across all resources. This message seems to suggest to me that the permission s3:PutReplicationConfiguration may be different from other S3 permissions somehow - needing to be configured with root access to the account or something?
Also, it seems strange to me that CloudFormation reports the s3:PutBucketReplication permission, whereas the S3 console error references the permission s3:PutReplicationConfiguration. There doesn't seem to be an IAM action for s3:PutBucketReplication (ref: https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html), only s3:PutReplicationConfiguration.
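In case it helps anyone isolate the failing call outside of CloudFormation, this is roughly the same API call made directly with boto3 (bucket names and role ARN are placeholders; both buckets need versioning enabled for replication to be accepted at all):

# Sketch: direct PutBucketReplication call, which needs the
# s3:PutReplicationConfiguration permission on the source bucket.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_replication(
    Bucket="source-bucket-placeholder",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111111111111:role/s3-replication-role-placeholder",
        "Rules": [{
            "ID": "replicate-everything",
            "Priority": 1,
            "Filter": {},
            "Status": "Enabled",
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::target-bucket-placeholder"},
        }],
    },
)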
Have you checked for a permissions boundary? Is this in a corporate Control Tower environment or a stand-alone account?
Deny always wins, so if you have a permissions boundary that excludes some actions, you may run into issues like this even when you have explicitly allowed them.
It turns out that the required permission (s3:PutReplicationConfiguration) was being blocked by a preventive Control Tower guardrail applied to the OU the AWS account lives in. Unfortunately, this deny is not visible to a user from anywhere within the AWS account, as it exists outside any permissions boundary or IAM policy. It took some investigation by our internal IT team to identify the guardrail control as the source of the deny.
https://docs.aws.amazon.com/controltower/latest/userguide/elective-guardrails.html#disallow-s3-ccr
So I'm using an MLflow tracking server where I define an S3 bucket as the artifact store. Right now, MLflow by default gets the credentials to read/write the bucket from my default profile in ~/.aws/credentials, but I have staging and dev profiles as well. So my question is: is there a way to explicitly tell MLflow to use the staging or dev profile credentials instead of the default? I can't seem to find this info anywhere. Thanks!
To allow the server and clients to access the artifact location, you should configure your cloud provider credentials as normal. For example, for S3, you can set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, use an IAM role, or configure a default profile in ~/.aws/credentials. See Set up AWS Credentials and Region for Development for more info.
Apparently there is no option in MLflow to set another profile. I use aws-vault, so it is easy to switch profiles.
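One workaround, assuming MLflow's S3 artifact store goes through boto3's standard credential chain (which honours AWS_PROFILE): set that environment variable before the tracking server or client starts, and the named profile from ~/.aws/credentials gets used without any MLflow-specific setting. The tracking URI below is a placeholder.

# Sketch: pick the staging profile via the standard AWS_PROFILE variable
# before MLflow creates its S3 client.
import os
os.environ["AWS_PROFILE"] = "staging"  # or "dev"

import mlflow

mlflow.set_tracking_uri("http://localhost:5000")  # placeholder tracking server URI
with mlflow.start_run():
    mlflow.log_artifact("model.pkl")  # uploaded to the artifact bucket with the staging profile's credentials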
I have 2 AWS accounts, Account A and Account B.
Account A has an EKS cluster with a Flink cluster running on it. To manage the IAM roles, we use Kube2iam.
All the pods on the cluster have specific roles assigned to them. For simplicity, let's say the role for one of the pods is Pod-Role.
The K8s worker nodes have the role Worker-Node-role
Kube2iam is correctly configured to make proper EC2 metadata calls when required.
Account B has an S3 bucket, which the pod hosted on an Account A worker node needs to read.
Possible Solution:
Create a role in Account B, say AccountB_Bucket_access_role, with a policy that allows reading the bucket. Add Pod-Role as a trusted entity to it.
Add a policy to Pod-Role that allows assuming AccountB_Bucket_access_role, basically the sts:AssumeRole action.
Create an AWS profile in the pod, say custom_profile, with role_arn set to the ARN of AccountB_Bucket_access_role.
While deploying the Flink pod, set AWS_PROFILE=custom_profile.
QUESTION: Given the above, whenever the Flink app needs to talk to the S3 bucket, it first assumes the AccountB_Bucket_access_role role and is then able to read the bucket. But setting AWS_PROFILE switches the role for the whole Flink app, so all the Pod-Role permissions are lost, and those are required for the proper functioning of the Flink app.
Is there a way that this custom_profile could be used only when reading the S3 bucket, with everything switching back to Pod-Role after that?
import org.apache.flink.api.java.io.TextInputFormat
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.source.FileProcessingMode
import org.apache.flink.streaming.api.scala._

// Set up the Flink environment from the application config.
val flinkEnv: StreamExecutionEnvironment = AppUtils.setUpAndGetFlinkEnvRef(config.flink)
val textInputFormat = new TextInputFormat(new Path(config.path))

// Continuously monitor the S3 path and re-read it on the configured refresh interval.
flinkEnv
  .readFile(
    textInputFormat,
    config.path,
    FileProcessingMode.PROCESS_CONTINUOUSLY,
    config.refreshDurationMs
  )
This is what I use in the Flink job to read the S3 file.
Never mind - we can configure a role in one account to access a particular bucket in another account: Access Bucket from another account
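A rough sketch of that cross-account setup with boto3, run by an administrator in Account B (account ID, role name, and bucket name are placeholders): the bucket policy grants the Account A pod role read access directly, so the Flink app keeps running as Pod-Role and never has to switch profiles.

# Hypothetical bucket policy on the Account B bucket letting the Account A
# pod role read objects, with no role switching inside the pod.
import json
import boto3

bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:role/Pod-Role"},  # Account A role (placeholder account ID)
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::account-b-bucket-placeholder",
            "arn:aws:s3:::account-b-bucket-placeholder/*",
        ],
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="account-b-bucket-placeholder",
    Policy=json.dumps(bucket_policy),
)

Note that Pod-Role in Account A still needs its own IAM policy allowing the same S3 actions on that bucket ARN; cross-account access requires an allow on both sides.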
I want to host my website on AWS S3.
But when I create the code deployment (I followed this URL -> https://aws.amazon.com/getting-started/tutorials/deploy-code-vm/)
it shows this error -> Deployment Failed
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
Error screenshot -> http://i.prntscr.com/oqr4AxEiThuy823jmMck7A.png
So please help me.
If you want to host your website on S3, you should upload your code into an S3 bucket and enable static website hosting for that bucket. CodeDeploy, on the other hand, takes application code from an S3 bucket or GitHub and deploys it onto EC2 instances.
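For the static hosting route, enabling website hosting on the bucket is a single call; a minimal sketch with boto3 (bucket and document names are placeholders, and the content still needs to be made publicly readable, e.g. via a bucket policy):

# Sketch: turn on static website hosting for an existing bucket.
import boto3

boto3.client("s3").put_bucket_website(
    Bucket="my-website-bucket-placeholder",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)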
I will assume that you want to use CodeDeploy to host your website on a group of EC2 instances. The error you mentioned can occur if your EC2 instances do not have the correct permissions through an IAM role.
From the documentation:
The permissions you add to the service role specify the operations AWS CodeDeploy can perform when it accesses your Amazon EC2 instances and Auto Scaling groups. To add these permissions, attach an AWS-supplied policy, AWSCodeDeployRole, to the service role.
If you are following along with the sample deployment from the CodeDeploy wizard, make sure you picked Create A Service Role at the stage where you are asked to select a service role.
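If you created the service role yourself rather than letting the wizard do it, attaching the AWS-supplied policy mentioned above is one call (the role name here is a placeholder):

# Sketch: attach the AWS-managed AWSCodeDeployRole policy to an existing service role.
import boto3

boto3.client("iam").attach_role_policy(
    RoleName="CodeDeployServiceRole",  # placeholder role name
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole",
)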