Fargate - unable to load AWS credentials from any provider - amazon-s3

I have a Zeppelin Docker image that I am trying to run on ECS with Fargate. For the Zeppelin notebooks, I want to configure S3 as the notebook storage,
so I have created a bucket and granted read/write permissions on it to the IAM role (the same role is used as both the ECS task role and the execution role).
When I use the ECS + EC2 launch type my application comes up, but when I select Fargate as the launch type, my Zeppelin application throws the error below:
Unable to load AWS credentials from any provider in the chain
Is there any extra configuration required for fargate?
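For reference, the S3 notebook storage side is configured through Zeppelin environment variables along these lines (a sketch; the bucket name and user prefix are placeholders for my actual values):

export ZEPPELIN_NOTEBOOK_STORAGE=org.apache.zeppelin.notebook.repo.S3NotebookRepo
export ZEPPELIN_NOTEBOOK_S3_BUCKET=my-zeppelin-notebooks   # placeholder bucket name
export ZEPPELIN_NOTEBOOK_S3_USER=zeppelin                  # notebooks land under s3://<bucket>/<user>/notebook

The credentials themselves are expected to come from the task's IAM role via the default AWS credential provider chain, which is exactly the chain that fails with the error above.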

Related

Providing credentials to the AWS CLI in ECS/Fargate

I would like to create an ECS task with Fargate, and have that upload a file to S3 using the AWS CLI (among other things). I know that it's possible to create task roles, which can provide the task with permissions on AWS services/resources. Similarly, in OpsWorks, the AWS SDK is able to query instance metadata to obtain temporary credentials for its instance profile. I also found these docs suggesting that something similar is possible with the AWS CLI on EC2 instances.
Is there an equivalent for Fargate, i.e., can the AWS CLI, running in a Fargate container, query the metadata service for temporary credentials? If not, what's a good way to authenticate so that I can upload a file to S3? Should I just create a user for this task and pass in AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as environment variables?
(I know it's possible to have an ECS task backed by EC2, but this task is short-lived and run maybe monthly; it seemed a good fit for Fargate.)
"I know that it's possible to create task roles, which can provide the
task with permissions on AWS services/resources."
"Is there an equivalent for Fargate"
You already know the answer. The ECS task role isn't specific to EC2 deployments; it works with Fargate deployments as well.
You can get the task metadata, including the temporary IAM credentials, through the ECS task metadata service. But you don't need to worry about that, because the AWS CLI, and any AWS SDK, will automatically pull that information when running inside an ECS task.
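To make that concrete, here is a quick way to sanity-check the chain from a shell inside a running Fargate task (standard CLI commands; the credentials endpoint below is the documented ECS one and is only reachable from inside the task):

# Set by the ECS agent whenever a task role is attached to the task
echo $AWS_CONTAINER_CREDENTIALS_RELATIVE_URI

# The CLI/SDKs resolve temporary credentials from this endpoint automatically
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI

# Confirms which role the CLI actually picked up
aws sts get-caller-identity

If get-caller-identity returns the task role's ARN, an "Unable to load AWS credentials" error usually means the task role was never attached (or only the execution role was).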

ECS fargate, permissions to download file from S3

I am trying to deploy an ECR image to ECS Fargate. In the Dockerfile I run an AWS CLI command to download a file from S3.
However, I need the relevant permissions for ECS to access S3. There is a task role setting (under the ECS task definition, screenshot below) through which I presume I can grant ECS the rights to access S3. However, the dropdown only offered me the default ecsTaskExecutionRole, and not a custom role I created myself.
Is this a bug? Or am I required to add the role elsewhere?
[NOTE] I do not want to pass the AWS keys as environment variables to Docker, for security reasons.
[UPDATES]
Added a new ECS role with a permissions boundary for S3. The task role still did not show up.
Did you grant ECS the right to assume your custom role? As per the documentation:
https://docs.aws.amazon.com/AmazonECS/latest/userguide/task-iam-roles.html#create_task_iam_policy_and_role
A trust relationship needs to be established so that the ECS service can assume the role on your behalf.
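Concretely, the task role needs a trust policy like the following (this is the standard ECS tasks principal from that page):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}

Once the role trusts ecs-tasks.amazonaws.com, it should appear in the Task Role dropdown of the task definition.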

Stream S3 file from one AWS subaccount, Flink deployed on Kubernetes cluster in another AWS account

I have 2 AWS accounts, Account A and Account B.
Account A has an EKS cluster with a Flink cluster running on it. To manage the IAM roles, we use kube2iam.
All the pods on the cluster have specific roles assigned to them. For simplicity, let's say the role for one of the pods is Pod-Role.
The K8s worker nodes have the role Worker-Node-Role.
kube2iam is correctly configured to handle the EC2 metadata calls when required.
Account B has an S3 bucket which the pod hosted on Account A's worker nodes needs to read.
Possible solution:
Create a role in Account B, let's say AccountB_Bucket_access_role, with a policy that allows reading the bucket. Add Pod-Role as a trusted entity to it.
Add a policy to Pod-Role that allows switching to AccountB_Bucket_access_role, basically the STS AssumeRole action.
Create an AWS profile in the pod, let's say custom_profile, with role_arn set to AccountB_Bucket_access_role's ARN (a sketch of this profile is shown after this list).
While deploying the Flink pod, set AWS_PROFILE=custom_profile.
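The profile from the third step is just an assume-role profile in ~/.aws/config, roughly like this (the account ID is a placeholder; credential_source = Ec2InstanceMetadata works here because kube2iam answers the metadata calls with Pod-Role's credentials):

[profile custom_profile]
role_arn = arn:aws:iam::<ACCOUNT_B_ID>:role/AccountB_Bucket_access_role
credential_source = Ec2InstanceMetadata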
QUESTION: Given the above, whenever the Flink app needs to talk to the S3 bucket, it first assumes the AccountB_Bucket_access_role role and is able to read the bucket. But setting AWS_PROFILE switches the role for the whole Flink app, so all the Pod-Role permissions are lost, and those are required for the proper functioning of the Flink app.
Is there a way that this custom_profile could be used only when reading the S3 bucket, with the app switching back to Pod-Role after that?
import org.apache.flink.api.java.io.TextInputFormat
import org.apache.flink.core.fs.Path
import org.apache.flink.streaming.api.functions.source.FileProcessingMode
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment

val flinkEnv: StreamExecutionEnvironment = AppUtils.setUpAndGetFlinkEnvRef(config.flink)
val textInputFormat = new TextInputFormat(new Path(config.path))

// Continuously re-read the S3 path at the configured refresh interval
flinkEnv
  .readFile(
    textInputFormat,
    config.path,
    FileProcessingMode.PROCESS_CONTINUOUSLY,
    config.refreshDurationMs
  )
This is what I use in the Flink job to read the S3 file.
Never mind; it turns out we can configure a role in one account to access a particular bucket in another account: Access Bucket from another account
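For anyone else landing here: the shape of that setup is a bucket policy on the Account B bucket that trusts Pod-Role directly, so the pod never has to switch profiles (the ARNs and bucket name below are placeholders, and Pod-Role's own IAM policy in Account A still needs the matching S3 permissions on that bucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<ACCOUNT_A_ID>:role/Pod-Role" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::<account-b-bucket>",
        "arn:aws:s3:::<account-b-bucket>/*"
      ]
    }
  ]
}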

AWS Deployment issue Error code: HEALTH_CONSTRAINTS (In-place deployment)

I want to host my website on AWS S3,
but when I create the CodeDeploy deployment (I followed this URL -> https://aws.amazon.com/getting-started/tutorials/deploy-code-vm/)
it shows this error -> Deployment Failed
The overall deployment failed because too many individual instances failed deployment, too few healthy instances are available for deployment, or some instances in your deployment group are experiencing problems. (Error code: HEALTH_CONSTRAINTS)
Error screenshot -> http://i.prntscr.com/oqr4AxEiThuy823jmMck7A.png
So please help me.
If you want to host your website on S3, you should upload your code into an S3 bucket and enable Static Website Hosting for that bucket. If you use CodeDeploy, it will take the application code either from an S3 bucket or from GitHub and deploy it onto EC2 instances.
I will assume that you want to use CodeDeploy to host your website on a group of EC2 instances. The error that you have mentioned can occur if your EC2 instances do not have the correct permissions through an IAM role.
From Documentation
The permissions you add to the service role specify the operations AWS CodeDeploy can perform when it accesses your Amazon EC2 instances and Auto Scaling groups. To add these permissions, attach an AWS-supplied policy, AWSCodeDeployRole, to the service role.
If you are following the sample deployment from the CodeDeploy wizard, make sure you picked Create A Service Role at the stage where you are required to Select a service role.
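If you created the service role by hand instead of through the wizard, attaching the managed policy looks like this (the role name is whatever you called your CodeDeploy service role):

# Attach the AWS-supplied CodeDeploy service role policy
aws iam attach-role-policy \
  --role-name CodeDeployServiceRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole

The instances in the deployment group also need a running CodeDeploy agent and an instance profile that can read the revision from S3; either gap typically surfaces as HEALTH_CONSTRAINTS.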

AWSDatabaseMigrationService Error: no authentication credentials

I am trying to set up AWS DMS to copy data from S3 to Redshift.
While configuring the source endpoint (S3), after setting the parameters, when I run the connection test I get the following error:
AWSDatabaseMigrationService: Cannot change the engine for endpoint with no authentication credentials
What does this error mean?
Make sure you have followed the instructions to the letter:
http://docs.aws.amazon.com/dms/latest/userguide/CHAP_Source.S3.html
Specifically:
Prerequisites When Using S3 as a Source for AWS DMS
When you use S3 as a source for AWS DMS, the source S3 bucket that you use must be in the same AWS Region as the AWS DMS replication instance that you use to migrate your data. In addition, the AWS account you use for the migration must have read access to the source bucket.
The AWS Identity and Access Management (IAM) role assigned to the user account used to create the migration task must have the following set of permissions.
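In practice that boils down to read access on the source bucket, something like the policy below (the bucket name is a placeholder), attached to a role that DMS can use. The ARN of that role then goes into the S3 endpoint's service access role setting; leaving it out is one way to end up with the "no authentication credentials" message.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::<source-bucket>",
        "arn:aws:s3:::<source-bucket>/*"
      ]
    }
  ]
}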