Getting AWS Credentials as configured on POD - amazon-eks

Context:
We are using AWS Secrets Manager for storing secrets.
AWS credentials were loaded from the EC2 instance as shown below:
import com.amazonaws.auth.InstanceProfileCredentialsProvider;
import com.amazonaws.services.secretsmanager.AWSSecretsManager;
import com.amazonaws.services.secretsmanager.AWSSecretsManagerClientBuilder;

public AWSSecretsManager getDefaultSecretsManagerClient(String region) {
    return AWSSecretsManagerClientBuilder.standard()
            .withCredentials(new InstanceProfileCredentialsProvider(false)) // <== loads credentials from the EC2 instance
            .withRegion(region)
            .build();
}
Current: We are planning to move to Amazon EKS. While running the container, the AWS credentials are picked up from the EC2 instance rather than from the pod. Can someone please guide me on which credential provider to use here so that the AWS credentials get picked up from the pod rather than from the underlying EC2 instance?

You might look at using IAM Roles for Service Accounts (IRSA) to associate an IAM role with the Kubernetes service account used by the pod. Once you set up IRSA it behaves much like roles for EC2, though you will likely want to use DefaultAWSCredentialsProviderChain.getInstance() instead of InstanceProfileCredentialsProvider to retrieve the credentials.
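As a minimal sketch, the client factory from the question could be adapted roughly like this (assuming a version of the AWS SDK for Java v1 recent enough to include WebIdentityTokenCredentialsProvider in the default chain, roughly 1.11.704 or later):

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.services.secretsmanager.AWSSecretsManager;
import com.amazonaws.services.secretsmanager.AWSSecretsManagerClientBuilder;

public AWSSecretsManager getDefaultSecretsManagerClient(String region) {
    return AWSSecretsManagerClientBuilder.standard()
            // The default chain includes WebIdentityTokenCredentialsProvider,
            // which picks up the IRSA web identity token mounted into the pod.
            .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
            .withRegion(region)
            .build();
}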
You can also use the secrets-store-csi-driver-provider-aws plugin for the CSI Secrets Store driver to retrieve secrets from Secrets Manager (also using IRSA) as mounted files or Kubernetes Secrets. Note that the README for this project also has simplified instructions for setting up IRSA.

Related

Vault Hashicorp: Passing aws dynamic secret to a script

1/ Every day at 3am, we run a script alpha.sh on server A in order to send some backups to AWS (an S3 bucket).
As a requirement we had to configure AWS (aws configure) on the server, which means the secret key and access key are stored on this server. We would now like to use short-TTL credentials valid only from 3am to 3:15am. HashiCorp Vault does that very well.
2/ On server B we have HashiCorp Vault installed and we managed to generate short-TTL dynamic secrets for our S3 bucket (access key / secret key).
3/ We would now like to pass the daily generated dynamic secrets to our alpha.sh. Any idea how to achieve this?
4/ Since we are generating a new secret key and access key, I understand that a new AWS configuration (aws configure) will have to be performed on server A in order to be able to perform the backup. Any experience with this?
DISCLAIMER: I have no experience with aws configure so someone else may have to answer this part of the question. But I believe it's not super relevant to the problem here, so I'll give my partial answer.
First things first - solve your "secret zero" problem. If you are using the AWS secrets engine, it seems unlikely that your server is running on AWS, as you could otherwise skip the middleman and just give your server an IAM policy that allowed direct access to the S3 resource. So find the best Vault auth method for your use case. If your server is in a cloud like AWS, Azure or GCP, runs on a container platform like Kubernetes or Cloud Foundry, or has a JWT token delivered along with a JWKS endpoint Vault can trust, target one of those; if all else fails, use AppRole authentication, delivering a wrapped token via a trusted CI solution.
Then, log into Vault in your shell script using those credentials. The login will look different depending on the auth method chosen. You can also leverage Vault Agent to automatically handle the login for you, and cache secrets locally.
#!/usr/bin/env bash
## Dynamic Login
vault login -method="${DYNAMIC_AUTH_METHOD}" role=my-role
## OR AppRole Login
resp=$(vault write -format=json auth/approle/login role-id="${ROLE_ID}" secret-id="${SECRET_ID}")
VAULT_TOKEN=$(echo "${resp}" | jq -r .auth.client_token)
export VAULT_TOKEN
Then, pull down the AWS dynamic secret. Each time you read a creds endpoint you will get a new credential pair, so it is important not to make multiple API calls here, and instead cache the entire API response, then parse the response for each necessary field.
#!/usr/bin/env bash
resp=$(vault read -format=json aws/creds/my-role)
AWS_ACCESS_KEY_ID=$(echo "${resp}" | jq -r .data.access_key)
export AWS_ACCESS_KEY_ID
# AWS tooling expects AWS_SECRET_ACCESS_KEY (not AWS_SECRET_KEY_ID)
AWS_SECRET_ACCESS_KEY=$(echo "${resp}" | jq -r .data.secret_key)
export AWS_SECRET_ACCESS_KEY
This is a very general answer establishing a pattern. The particulars of your environment will determine how you execute it. You can improve this pattern by leveraging features like CIDR binds, limits on the number of uses of auth credentials, token wrapping, and delivery via a CI solution, as sketched below.
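As a rough illustration of those hardening options (the role name my-role matches the snippets above; the policy name and CIDR are placeholders, and the exact parameters may vary by Vault version):

#!/usr/bin/env bash
# Hypothetical hardening of the AppRole: bind the secret ID to a known CIDR
# and limit how many times the credentials can be used.
vault write auth/approle/role/my-role \
    token_policies="aws-backup" \
    secret_id_bound_cidrs="10.0.0.0/24" \
    secret_id_num_uses=1 \
    token_num_uses=2 \
    token_ttl=15m

# Deliver the secret ID as a wrapped token so only the final consumer can unwrap it.
vault write -wrap-ttl=60s -f auth/approle/role/my-role/secret-id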

Docker registry on EKS using service account to store data on S3

I'm trying to configure Docker registry v2 on an EKS cluster. I'd like to use S3 as the storage backend with credentials managed by a service account, but it doesn't seem to work.
I logged into the running pod to check permissions using:
aws sts get-caller-identity
aws s3 ls s3://BUCKET_NAME
aws s3 cp s3://BUCKET_NAME/FILENAME .
aws s3api put-object --bucket BUCKET_NAME --key KEY
and all seems to work properly but if I try to perform a "docker push" I get this error log:
s3aws: AccessDenied: Access Denied\n\tstatus code: 403
If I set ACCESS_KEY and SECRET_KEY it works but I'd like to use service account.
Any idea?
If I set ACCESS_KEY and SECRET_KEY it works but I'd like to use service account.
Yes, in Kubernetes you use Service Accounts. But the AWS API requires IAM Permissions for authorization.
You can set up IAM Roles for Service Accounts to associate a Kubernetes ServiceAccount with an IAM Role. You also need to add the needed IAM permissions to that IAM Role. Using the aws-cli or an AWS SDK should work with that solution from a Pod.
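As a rough sketch of the setup with eksctl (cluster, namespace, service account and policy names are placeholders; in practice you would scope the policy down to your bucket):

# Create an IAM role, attach the policy, and bind it to a Kubernetes service account (IRSA).
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace registry \
  --name docker-registry \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess \
  --approve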
The fact that you can run aws CLI commands without getting any error messages means that your service account is set up properly and can use those permissions, but it doesn't mean all applications running on that pod can use them too. Your application (in your case the Docker registry) should use an AWS SDK version that supports assuming an IAM role via an OIDC web identity token file. You can see the list of supported SDK versions here.
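As a quick sanity check, you could inspect the credential environment that the EKS pod identity webhook injects when the service account is annotated for IRSA (the deployment name docker-registry below is just an assumed example):

# These two variables are what a web-identity-aware SDK looks for inside the pod.
kubectl exec deploy/docker-registry -- env | grep -E 'AWS_ROLE_ARN|AWS_WEB_IDENTITY_TOKEN_FILE'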

How to access Amazon S3 bucket to the Kubernetes pods using IAM roles instead of Access key & secret keys?

I am trying to mount an S3 bucket using s3fs-fuse into a Kubernetes pod. My S3 bucket is protected by IAM roles and I don't have access keys and secret keys to access it. I know how to access an S3 bucket from a Kubernetes pod using access and secret keys, but how do we access an S3 bucket using IAM roles?
Does anyone have a suggestion on how to do this?
You use the IRSA system, attaching an IAM role to a Kubernetes service account and then attaching that K8s SA to your pod. See https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html for a starting point.
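As a hedged sketch (the role ARN, account ID, and service account name are placeholders), associating the role with a service account from the command line could look like:

# Annotate an existing service account with the IAM role to assume via IRSA.
kubectl annotate serviceaccount s3fs-sa \
  eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/s3fs-access-role

# The pod (or its Deployment spec) must then reference that service account,
# e.g. spec.serviceAccountName: s3fs-sa, so the identity token is mounted into it.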

Add Github Identity Provider to AWS Cognito

I have created a GitHub OAuth app and I am trying to add the app as an OIDC application to AWS Cognito.
However, I cannot find a proper overview of the endpoints and data to fill in anywhere in the GitHub docs.
The following fields are required:
Issuer -> ?
Authorization endpoint => https://github.com/login/oauth/authorize (?)
Token endpoint => https://github.com/login/oauth/access_token (?)
Userinfo endpoint => https://api.github.com/user (?)
Jwks uri => ?
I couldn't find the Jwks uri anywhere. Any help would be highly appreciated.
Seems like there is no way to get this working out of the box.
https://github.com/TimothyJones/github-cognito-openid-wrapper seems to be a way to get this working.
If any Cognito dev sees this, please add Github/Gitlab/Bitbucket support.
GitLab 14.7 (January 2022) might help:
OpenID Connect support for GitLab CI/CD
Connecting GitLab CI/CD to cloud providers using environment variables works fine for many use cases.
However, it doesn’t scale well if you need advanced permissions management or would prefer a signed, short-lived, contextualized connection to your cloud provider.
GitLab 12.10 shipped initial support for JWT token-based connection (CI_JOB_JWT) to enable HashiCorp Vault users to safely retrieve secrets. That implementation was restricted to Vault, while the logic we built JWT upon opened up the possibility to connect to other providers as well.
In GitLab 14.7, we are introducing a CI_JOB_JWT_V2 environment variable that can be used to connect to AWS, GCP, Vault, and likely many other cloud services.
Please note that this is an alpha feature and not ready for production use. Your feedback is welcomed in this epic.
For AWS specifically, with the new CI_JOB_JWT_V2 variable, you can connect to AWS to retrieve secrets, or to deploy within your account. You can also manage access rights to your cluster using AWS IAM roles.
You can read more on setting up OIDC connection with AWS.
The new variable is automatically injected into your pipeline but is not backward compatible with the current CI_JOB_JWT.
Until GitLab 15.0, the CI_JOB_JWT will continue to work normally but this will change in a future release. We will notify you about the change in time.
The secrets stanza today uses the CI_JOB_JWT_V1 variable. If you use the secrets stanza, you don’t have to make any changes yet.
See Documentation and Issue.
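For the AWS case mentioned in the release note above, a job step might exchange the GitLab-issued token for temporary AWS credentials roughly like this (the role ARN and account ID are placeholders, and the IAM role's trust policy must accept GitLab's OIDC provider):

# Exchange the GitLab OIDC token (CI_JOB_JWT_V2) for temporary AWS credentials.
creds=$(aws sts assume-role-with-web-identity \
  --role-arn arn:aws:iam::111122223333:role/gitlab-ci-role \
  --role-session-name "gitlab-${CI_JOB_ID}" \
  --web-identity-token "${CI_JOB_JWT_V2}" \
  --duration-seconds 3600 \
  --query 'Credentials' --output json)

export AWS_ACCESS_KEY_ID=$(echo "${creds}" | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "${creds}" | jq -r .SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "${creds}" | jq -r .SessionToken)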

How can I create IAM Roles for Amazon EC2?

I am exploring IAM Roles. I am wondering how roles can be accessed on behalf of a user on EC2.
Any help is highly appreciated.
Thanks
You usually do not have to do anything special after launching an EC2 instance with an IAM Role for Amazon EC2 (I figure from your duplicate questions that you've already done this); conceptually, all you have to do are the following steps:
create an IAM role for EC2
configure IAM policies for that role to match your use case
launch an EC2 instance with your IAM role
use IAM role aware tools, which will pick up the credentials from the role automatically
Let me stress the last aspect again: you do not need to do anything but configure the required IAM permissions on the role, and IAM role aware tools will pick up the resulting credentials automatically from the EC2 instance metadata!
If you really must (but you shouldn't, see next paragraph), you could do the same yourself as explained in Retrieving Security Credentials from Instance Metadata.
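For illustration, the raw lookup that document describes boils down to something like this (the role name is whatever you attached to the instance; note that IMDSv2 additionally requires a session token header):

# List the role attached to the instance, then fetch its temporary credentials
# from the instance metadata service (IMDS).
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/${ROLE}"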
In particular, for everything but highly specialized use cases, you should use AWS only through one of the following means:
Command Line Usage
Unix/Linux/Windows - use the AWS Command Line Interface, which is a unified tool to manage your AWS services.
see Option #3 within AWS Credentials regarding the IAM role support
Windows only - use the AWS Tools for Windows PowerShell, which lets developers and administrators manage their AWS services from the Windows PowerShell scripting environment.
see section IAM Roles for EC2 Instances and the AWS Tools for Windows PowerShell within AWS Credentials regarding the IAM role support
Programmatic Usage
Use the appropriate AWS SDK for your language of choice; see Tools for Amazon Web Services for an extensive listing of what's available.
see each SDK's documentation for details regarding the IAM role support (again, it will just work once you have implemented steps 1-3 above correctly).