Kubectl Forbidden error in EKS after modifying the configmap - amazon-eks

I have locked myself out after modifying the config map. Is there any way around this?
This happened after I modified the ConfigMap using
kubectl edit -n kube-system configmap/aws-auth
Now I am getting an error even when using the IAM role that was used to create the cluster:
Error from server (Forbidden): pods is forbidden: User "USERNAME" cannot list resource "pods" in API group "" in the namespace "default"

By default, the EKS cluster creator (IAM role/user) gets full access to the EKS cluster, irrespective of the aws-auth ConfigMap.
Run aws sts get-caller-identity and validate that the Arn in the response is the IAM role/user that created the EKS cluster.
If you are locked out with no access for the cluster creator, reach out to AWS Premium Support from the same account as the EKS cluster; they may be able to help restore access.
Worst case, you will have to create a new cluster.
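Once you can authenticate as the cluster creator again, the usual fix is to restore a valid aws-auth ConfigMap. A minimal sketch, assuming the standard worker-node mapping (the role ARN below is a placeholder for your own node instance role):

```yaml
# Sketch of a minimal aws-auth ConfigMap (kube-system namespace).
# <account-id> and <node-instance-role> are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::<account-id>:role/<node-instance-role>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

Apply it with kubectl apply -n kube-system -f aws-auth.yaml while authenticated as the creator, since that identity bypasses the ConfigMap.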

Related

How to see EKS Fargate FluentBit logs for debugging?

Is there any way I can see FluentBit logs for EKS Fargate? I'd like to see the errors that are raised by the plugins.
The EKS Fargate logging manual provides a way to see if the ConfigMap is valid. The ConfigMap entry I'm using is valid, but there seem to be some issues in the plugin because the logs aren't created in Cloudwatch and I don't know why.
Turns out AWS provides a way: put the flag flb_log_cw: "true" under data in the ConfigMap (ref), and the Fluent Bit logs will be sent to CloudWatch Logs.
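For reference, the flag sits alongside the output configuration under data. A sketch, assuming the standard aws-logging ConfigMap in the aws-observability namespace (the region and log group names are illustrative):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: aws-logging
  namespace: aws-observability
data:
  flb_log_cw: "true"   # send Fluent Bit's own process logs to CloudWatch
  output.conf: |
    [OUTPUT]
        Name cloudwatch_logs
        Match *
        region us-east-1
        log_group_name fluent-bit-cloudwatch
        log_stream_prefix from-fluent-bit-
        auto_create_group true
```

With the flag set, plugin errors show up in a CloudWatch log group of their own instead of being silently dropped.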

Change original creator of EKS cluster

Is it possible to change the original creator of an EKS cluster to another role? I still have access to the cluster, with both the original creator role and the new one I want to transfer the cluster to.
The new role is now encoded in the aws-auth ConfigMap, but we locked ourselves out by deleting the ConfigMap (in a Terraform update). We were able to restore it using the creator role, but we'd rather not use that one anymore.
Is it possible to update the creator user, or do I need to create a new cluster with the proper role and then transfer everything over?
From the Amazon Docs:
You don't need to add cluster_creator to the aws-auth ConfigMap to get admin access to the Amazon EKS cluster. By default, the cluster_creator has admin access to the Amazon EKS cluster that it created.
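The creator identity itself cannot be changed after cluster creation, but the replacement role can be given equivalent admin rights through aws-auth. A sketch of such a mapRoles entry (the ARN and username are placeholders):

```yaml
# Hypothetical mapRoles entry granting the replacement role
# cluster-admin rights via the built-in system:masters group.
mapRoles: |
  - rolearn: arn:aws:iam::<account-id>:role/<new-admin-role>
    username: new-admin
    groups:
      - system:masters
```

After that, the creator role only needs to be kept around as a break-glass fallback.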

ECS fargate, permissions to download file from S3

I am trying to deploy an ECR image to ECS Fargate. In the Dockerfile I run an AWS CLI command to download a file from S3.
However, I require the relevant permissions to access S3 from ECS. There is a task role (under the ECS task definition, screenshot below) through which I presume I can grant ECS the rights to access S3. However, the dropdown only offered me the default ecsTaskExecutionRole, and not a custom role I created myself.
Is this a bug? Or am I required to add the role elsewhere?
[NOTE] I do not want to include the AWS keys as an environment variable in Docker, for security reasons.
[UPDATES]
Added a new ECS role with a permissions boundary for S3. The task role still did not show up.
Did you grant ECS the right to assume your custom role? As per the documentation:
https://docs.aws.amazon.com/AmazonECS/latest/userguide/task-iam-roles.html#create_task_iam_policy_and_role
A trust relationship needs to be established so that the ECS service can assume the role on your behalf.
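Concretely, a role only appears in the task-role dropdown if its trust policy allows the ECS tasks service principal to assume it, roughly like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Attach this as the trust relationship of the custom role, and then attach your S3 read policy to the same role.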

Stream an S3 file from one AWS sub-account into Flink deployed on a Kubernetes cluster in another AWS account

I have 2 AWS accounts, Account A and Account B.
Account A has an EKS cluster with a Flink cluster running on it. To manage the IAM roles, we use Kube2iam.
All the pods on the cluster have specific roles assigned to them. For simplicity, let's say the role for one of the pods is Pod-Role.
The K8s worker nodes have the role Worker-Node-role.
Kube2iam is correctly configured to make the proper EC2 metadata calls when required.
Account B has an S3 bucket, which the pod hosted on Account A's worker nodes needs to read.
Possible Solution:
Create a role in Account B, let's say AccountB_Bucket_access_role, with a policy that allows reading the bucket. Add Pod-Role as a trusted entity to it.
Add a policy to Pod-Role which allows switching to AccountB_Bucket_access_role, basically the STS AssumeRole action.
Create an AWS profile in the pod, let's say custom_profile, with role_arn set to the AccountB_Bucket_access_role role's ARN.
While deploying the Flink pod, set AWS_PROFILE=custom_profile.
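Step 3 above might look like this in the container's ~/.aws/config (the account ID is a placeholder; credential_source tells the SDK to use the instance-metadata credentials, i.e. the ones Kube2iam serves, as the source identity for the AssumeRole call):

```ini
[profile custom_profile]
role_arn = arn:aws:iam::<account-b-id>:role/AccountB_Bucket_access_role
credential_source = Ec2InstanceMetadata
```

This is a sketch under those assumptions, not a verified setup.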
QUESTION: Given the above, whenever the Flink app needs to talk to the S3 bucket, it first assumes the AccountB_Bucket_access_role role and is then able to read the bucket. But setting AWS_PROFILE switches the role for the entire Flink app, so all the Pod-Role permissions are lost, and they are required for the proper functioning of the Flink app.
Is there a way that this AWS custom_profile could be used only when reading the S3 bucket, switching back to Pod-Role afterwards?
val flinkEnv: StreamExecutionEnvironment = AppUtils.setUpAndGetFlinkEnvRef(config.flink)
val textInputFormat = new TextInputFormat(new Path(config.path))
flinkEnv
  .readFile(
    textInputFormat,
    config.path,
    FileProcessingMode.PROCESS_CONTINUOUSLY,
    config.refreshDurationMs
  )
This is what I use in the Flink job to read the S3 file.
Never mind — we can configure a role in one account to access a particular bucket in another account: Access Bucket from another account
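The linked approach grants the pod's role direct access via a bucket policy in Account B, so no role switching (and no AWS_PROFILE override) is needed. A sketch with placeholder ARNs and bucket name:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<account-a-id>:role/Pod-Role" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::<bucket-name>",
        "arn:aws:s3:::<bucket-name>/*"
      ]
    }
  ]
}
```

Note that Pod-Role in Account A must also carry a matching s3:GetObject/s3:ListBucket policy of its own, since cross-account S3 access requires an allow on both sides.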

Is Heptio Authenticator deployed automatically when creating an EKS cluster?

I have done the below steps.
Created an EKS Cluster
Installed aws-iam-authenticator client binary
Execute "aws eks update-kubeconfig --name <cluster_name>"
Execute "kubectl get svc"
I am able to view the services available in my cluster. Looking at the ~/.kube/config file, it uses an external command called "aws-iam-authenticator".
My understanding is that "aws-iam-authenticator" uses my ~/.aws/credentials, retrieves a token from AWS (aws-iam-authenticator token -i cluster-1), and uses that token for the "kubectl get svc" command. Is my understanding correct?
If my understanding is correct, where does Heptio come into the picture in this flow? Is Heptio Authenticator deployed automatically when creating the EKS cluster?
Basically, Heptio Authenticator = aws-iam-authenticator.
You can check the details here. If your aws-iam-authenticator is working fine, then you don't need to care about Heptio separately. They just renamed it.
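Your understanding of the flow matches what aws eks update-kubeconfig writes into ~/.kube/config: nothing is deployed into the cluster; kubectl simply shells out to the binary for a fresh token on each request. Roughly (region, account ID, and cluster name are placeholders):

```yaml
# Sketch of the user entry generated in ~/.kube/config.
users:
- name: arn:aws:eks:<region>:<account-id>:cluster/cluster-1
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws-iam-authenticator
      args:
        - token    # emit a pre-signed STS token for the cluster
        - -i
        - cluster-1
```

The EKS control plane validates that token server-side, which is why there is no authenticator pod to find in the cluster itself.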