How to access an Amazon S3 bucket from Kubernetes pods using IAM roles instead of access keys & secret keys? - amazon-s3

I am trying to mount an S3 bucket into a Kubernetes pod using s3fs-fuse. My S3 bucket is protected by IAM roles and I don't have access keys and secret keys to access it. I know how to access an S3 bucket from a Kubernetes pod using access & secret keys, but how do we access an S3 bucket using IAM roles?
Does anyone have a suggestion on how to do this?

Use IAM Roles for Service Accounts (IRSA): attach an IAM role to a Kubernetes service account, then attach that service account to your pod. See https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html for a starting point.
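Once IRSA is configured this way, SDK clients running in the pod pick up the injected web identity credentials automatically, so no access key or secret key is needed. A minimal Java sketch to verify access from inside the pod (a recent AWS SDK for Java v1 is assumed; the region and the bucket name my-bucket are placeholders):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class IrsaS3Check {
    public static void main(String[] args) {
        // With IRSA, the SDK's default credential chain finds the web identity token
        // that EKS injects into the pod -- no access key or secret key is configured here.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion("us-east-1") // placeholder: use your bucket's region
                .build();
        s3.listObjectsV2("my-bucket")    // placeholder bucket name
          .getObjectSummaries()
          .forEach(o -> System.out.println(o.getKey()));
    }
}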

Related

Getting AWS Credentials as configured on POD

Context:
We are using AWS Secrets Manager for storing secrets.
AWS credentials were loaded from the EC2 instance as below:
public AWSSecretsManager getDefaultSecretsManagerClient(String region) {
    return AWSSecretsManagerClientBuilder.standard()
            .withCredentials(new InstanceProfileCredentialsProvider(false)) // <== Loads credentials from the EC2 instance
            .withRegion(region)
            .build();
}
Current: We are planning to move to Amazon EKS. While running the container, the AWS credentials are picked up from the EC2 instance rather than the pod. Can someone please guide me on which credential provider to use here so that the AWS credentials get picked up from the pod rather than the underlying EC2 instance?
You might look at using IAM Roles for Service Accounts (IRSA) to associate an IAM role with the Kubernetes service account used for the pod. Once you set up IRSA it will behave much like roles for EC2, though you will likely want to use DefaultAWSCredentialsProviderChain.getInstance() instead of InstanceProfileCredentialsProvider to retrieve the credentials.
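A minimal sketch of that change against the method above, assuming IRSA is already configured for the pod's service account:

public AWSSecretsManager getDefaultSecretsManagerClient(String region) {
    return AWSSecretsManagerClientBuilder.standard()
            // The default chain tries the web identity token injected by IRSA (among
            // other sources) before falling back to the EC2 instance profile.
            .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
            .withRegion(region)
            .build();
}

Omitting withCredentials(...) entirely has the same effect, since the builder falls back to the default provider chain when no provider is supplied.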
You can also use the secrets-store-csi-driver-provider-aws plugin for the Secrets Store CSI Driver to retrieve secrets from Secrets Manager (also using IRSA) as mounted files or synced Kubernetes Secrets. Note that the README for this project also has simplified instructions for setting up IRSA.
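If you go the mounted-file route, the application simply reads the secret from the pod's filesystem. A small sketch; the mount path and file name are hypothetical and come from your SecretProviderClass and the pod's volumeMount:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class MountedSecretReader {
    public static void main(String[] args) throws IOException {
        // Hypothetical path: determined by the SecretProviderClass objects and
        // the volumeMount configured for the Secrets Store CSI Driver volume.
        String dbPassword = Files.readString(Path.of("/mnt/secrets-store/db-password"));
        System.out.println("read secret of length " + dbPassword.length());
    }
}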

AWS Glue and S3 Access Points

Does AWS Glue support S3 Access Points?
Suppose I create an IAM role and assign it to the AWS Glue service.
(https://docs.aws.amazon.com/glue/latest/dg/create-an-iam-role.html)
Later I want to use this IAM role in S3 Access Point policies.
(https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-points-policies.html#access-points-policy-examples)
Is this supported?

Docker registry on EKS using service account to store data on S3

I'm trying to configure Docker Registry v2 on an EKS cluster. I'd like to use S3 as the storage backend, with credentials managed by a service account, but it seems that doesn't work.
I logged into the running pod to check permissions using:
aws sts get-caller-identity
aws s3 ls s3://BUCKET_NAME
aws s3 cp s3://BUCKET_NAME/FILENAME
aws s3api put-object --bucket BUCKETNAME --key KEY
and everything seems to work properly, but if I try to perform a "docker push" I get this error log:
s3aws: AccessDenied: Access Denied\n\tstatus code: 403
If I set ACCESS_KEY and SECRET_KEY it works, but I'd like to use the service account.
Any idea?
If I set ACCESS_KEY and SECRET_KEY it works, but I'd like to use the service account.
Yes, in Kubernetes you use Service Accounts. But the AWS API requires IAM Permissions for authorization.
You can set up IAM Roles for Service Accounts (IRSA) to associate a Kubernetes ServiceAccount with an IAM role. You also need to add the needed IAM permissions to that IAM role. Using the aws-cli or an AWS SDK should work with that solution from a Pod.
The fact that you can run aws CLI commands without getting any error messages means that your service account is set up properly and can use those permissions, but it doesn't mean all applications running on that pod can use them too. Your application (in your case the Docker registry) should use an AWS SDK version that supports assuming an IAM role via an OIDC web identity token file. You can see the list of supported SDK versions here.
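For reference, the aws sts get-caller-identity check above can also be done programmatically; with an SDK version that supports web identity tokens it reports the IRSA role rather than the node's instance role. A sketch assuming a recent AWS SDK for Java v1:

import com.amazonaws.services.securitytoken.AWSSecurityTokenService;
import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder;
import com.amazonaws.services.securitytoken.model.GetCallerIdentityRequest;

public class WhoAmI {
    public static void main(String[] args) {
        // IRSA injects AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE into the pod;
        // an SDK that supports the web identity provider resolves credentials from them.
        AWSSecurityTokenService sts = AWSSecurityTokenServiceClientBuilder.defaultClient();
        System.out.println("Caller ARN: "
                + sts.getCallerIdentity(new GetCallerIdentityRequest()).getArn());
    }
}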

EC2 instance launched in S3 endpoint subnet unable to list bucket objects with endpoint bucket policy

I have created an S3 VPC endpoint and added it to the route table of a subnet.
The subnet has a route to the internet, and I am able to open the AWS console.
Next, a bucket is created with a bucket policy limiting access to it to traffic through the VPC endpoint.
I have an IAM user which has full permissions on this bucket.
When I access the S3 bucket through the S3 console webpage there is an 'Access Denied' error, but I am able to upload files to the bucket.
Does the S3 endpoint imply that access will only be through the AWS CLI/SDKs, and that console access is limited?
Does the S3 endpoint imply that access will only be through the AWS CLI/SDKs, and that console access is limited?
My understanding is that any calls made in the AWS Console will not use the endpoint set up within the VPC, even if you're accessing it via an EC2 instance within the VPC. This is because the UI within the AWS Console does not access the S3 API endpoint directly, but instead goes through a proxy to reach it.
If you need to access the S3 bucket via the AWS Console, you'll need to amend your bucket policy.
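One common way to amend it is to exclude the console user (or role) from the VPC-endpoint restriction, so the deny only applies to requests that come neither through the endpoint nor from that principal. A sketch applying such a policy with the SDK; the bucket name, endpoint ID, and user ARN are placeholders, and the statement should be adapted to your existing policy:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class AmendBucketPolicy {
    public static void main(String[] args) {
        // Placeholders: replace the bucket name, VPC endpoint ID, and IAM user ARN.
        // Both condition keys must fail for the Deny to apply, so requests coming
        // through the endpoint OR from the console user are still allowed.
        String policy = "{\n"
                + "  \"Version\": \"2012-10-17\",\n"
                + "  \"Statement\": [{\n"
                + "    \"Sid\": \"DenyUnlessEndpointOrConsoleUser\",\n"
                + "    \"Effect\": \"Deny\",\n"
                + "    \"Principal\": \"*\",\n"
                + "    \"Action\": \"s3:*\",\n"
                + "    \"Resource\": [\"arn:aws:s3:::example-bucket\", \"arn:aws:s3:::example-bucket/*\"],\n"
                + "    \"Condition\": {\"StringNotEquals\": {\n"
                + "      \"aws:SourceVpce\": \"vpce-1234567890abcdef0\",\n"
                + "      \"aws:PrincipalArn\": \"arn:aws:iam::111122223333:user/console-user\"\n"
                + "    }}\n"
                + "  }]\n"
                + "}";
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        s3.setBucketPolicy("example-bucket", policy);
    }
}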

How to protect Amazon S3 via Basic Authentication

I am new to S3 and am wondering how I could protect access to S3 or CloudFront via Basic Authentication, while installing a private certificate into Chrome that allows access. Is there anything like this?
It is not possible to use Basic Authentication with either Amazon S3 or Amazon CloudFront.
Amazon S3 access can be controlled via one or more of:
Access Control List on the object level
Amazon S3 Bucket Policy
AWS Identity and Access Management (IAM) Policy
Amazon CloudFront has its own method of controlling access via signed URLs and signed cookies.
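For example, a CloudFront signed URL can be generated with the SDK's URL signer; the distribution domain, object path, key pair ID, and private key file below are placeholders (this sketch assumes the CloudFront module of the AWS SDK for Java v1 and a PEM-readable key):

import java.io.File;
import java.security.PrivateKey;
import java.util.Date;

import com.amazonaws.services.cloudfront.CloudFrontUrlSigner;
import com.amazonaws.services.cloudfront.util.SignerUtils;

public class SignedUrlExample {
    public static void main(String[] args) throws Exception {
        // Placeholders: your CloudFront key pair's private key, distribution domain,
        // object path, and key pair ID.
        PrivateKey key = SignerUtils.loadPrivateKey(new File("cf-private-key.pem"));
        Date expires = new Date(System.currentTimeMillis() + 3600_000L); // valid for 1 hour
        String url = CloudFrontUrlSigner.getSignedURLWithCannedPolicy(
                "https://d111111abcdef8.cloudfront.net/private/report.pdf",
                "APKAEXAMPLEKEYID",
                key,
                expires);
        System.out.println(url);
    }
}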