Why is there an aws command embedded in the EKS kube config file? - amazon-eks

I'm curious about something in the kube config file generated by the aws eks update-kubeconfig command. At the bottom of the file, there is this construct:
- name: arn:aws:eks:us-west-2:redacted:cluster/u62d2e14b31f0270011485fd3
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - u62d2e14b31f0270011485fd3
      command: aws
It is clearly an invocation of the aws eks get-token command. Why is this here? Does this command get automatically called?

Why is this here?
The command obtains a token using your IAM credentials and passes it to the EKS API server in the HTTP header Authorization: Bearer <token> for authentication.
Does this command get automatically called?
Yes, by kubectl.
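You can run the same command yourself to see what kubectl receives (values taken from the kubeconfig above):

# The same invocation kubectl performs under the hood
aws eks get-token --region us-west-2 --cluster-name u62d2e14b31f0270011485fd3
# The JSON response contains .status.token; kubectl sends that value as
# "Authorization: Bearer <token>" with every request to the EKS API server.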

Related

GitLab pipeline fails to use AWS service

The GitLab CI pipeline shows this error: "An HTTP Client raised an unhandled exception: Invalid header value b'AWS4-HMAC-SHA256 Credential=********************************************"
I have already created the AWS variables in GitLab CI and created an S3 bucket in the AWS console. My GitLab CI config is:
upload to s3:
  image:
    name: banst/awscli
    entrypoint: [""]
  script:
    - aws configure set region us-east-1
    - aws s3 ls
please answer me!
How are you doing?
Next I will present the steps to list the buckets through GitLab CI.
1- Create a repository in GitLab.
2- In your GitLab project, go to Settings > CI/CD. Set the following CI/CD variables:
AWS_ACCESS_KEY_ID: Your Access key ID.
AWS_SECRET_ACCESS_KEY: Your secret access key.
AWS_DEFAULT_REGION: Your region code.
3- Create a file called .gitlab-ci.yml in your repository. Below is an example pipeline that lists and creates buckets in S3. AWS requires bucket names to be unique across all accounts, so be creative or specific with the name.
4- When you commit to the repository, a pipeline will start. I left the steps manual, so you need to trigger each step yourself.
I hope this helps; if you have any questions, I am at your disposal.
stages:
  - build_s3
  - create_s3

create-s3:
  stage: create_s3
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  when: manual
  script:
    - aws s3api create-bucket --bucket my-bucket-stackoverflow-mms --region us-east-1

build-s3:
  stage: build_s3
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-base:latest
  when: manual
  script:
    - aws s3 ls
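As an extra check (not part of the original pipeline), you could add a line like this to any job's script section to confirm the CI/CD credential variables are being picked up before touching S3:

# Prints the AWS account and IAM identity the job authenticates as;
# it fails fast if AWS_ACCESS_KEY_ID or AWS_SECRET_ACCESS_KEY is missing or malformed
aws sts get-caller-identity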

Need to collect logs from an application running inside an AWS EKS Pod

I would like to achieve something similar to the below, but in AWS EKS.
In my kubeadm k8s cluster, if I have to collect application logs from a pod, I run the below command:
kubectl cp podName:/path/to/application/logs/logFile.log /location/on/master/
But with AWS I am not sure how to collect logs like the above.
One workaround is to persist the logs on S3 with a PV and PVC and then get them from there.
volumeMounts:
  - name: logs-sharing
    mountPath: /opt/myapp/AppServer/logs/container
volumes:
  - name: logs-sharing
    persistentVolumeClaim:
      claimName: logs-sharing-pvc
Another way would be to use a sidecar container with a logging agent.
But I would appreciate a workaround that is as easy as the one I mentioned above that I follow with kubeadm.
...as easy as the one I mentioned above that I follow with kubeadm
kubectl cp is a standard k8s command; you can use it with EKS, AKS, GKE, etc.
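For example (the cluster and pod names below are placeholders, not from the question):

# Point kubectl at the EKS cluster, then copy the log file exactly as with kubeadm
aws eks update-kubeconfig --region us-west-2 --name my-cluster
kubectl cp my-app-pod:/path/to/application/logs/logFile.log ./logFile.log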

Custom path for Hashicorp Vault Kubernetes Auth Method does not work using CLI

When I enable the Kubernetes auth method at the default path (-path=kubernetes) it works. However, if it is enabled at a custom path, the Vault init and sidecar containers don't start.
Kubernetes auth method enabled at auth/prod:
vault auth enable -path=prod/ kubernetes

vault write auth/prod/config \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
    token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
    kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

vault write auth/prod/role/internal-app \
    bound_service_account_names=internal-app \
    bound_service_account_namespaces=default \
    policies=internal-app \
    ttl=24h
What could be wrong with these auth configurations?
Not sure how you have deployed Vault, but if your injector is enabled:
injector:
  enabled: true
Vault will be injecting the sidecar and init containers. You should check the logs of the sidecar or init container that is failing.
If you are using the K8s method to authenticate, check out the annotation example below and use it:
annotations:
  vault.hashicorp.com/agent-image: registry.gitlab.com/XXXXXXXXXXX/vault-image/vault:1.4.1
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/agent-inject-secret-secrets: kv/secret-path-location
  vault.hashicorp.com/auth-path: auth/<K8s-cluster-auth-name>
  vault.hashicorp.com/role: app
You can also keep multiple auth paths for different K8s clusters authenticating against a single Vault instance.
If Vault is injecting the sidecar, you should check its logs.
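For example (the pod name is a placeholder; vault-agent-init and vault-agent are the container names the injector uses by default):

# Logs of the injected init container, which performs the initial login and secret fetch
kubectl logs <your-pod> -c vault-agent-init
# Logs of the injected sidecar, which keeps the secrets renewed
kubectl logs <your-pod> -c vault-agent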
https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar

What are the differences between tokens generated by `aws-iam-authenticator` and `aws eks get-token` when authenticate to kubernetes-dashboard?

kubectl is using aws eks get-token and works perfectly.
But when I try to log in to kubernetes-dashboard with the token generated below, I get Unauthorized (401): Invalid credentials provided:
AWS_PROFILE=MYPROFILE aws eks get-token --cluster-name myclustername | jq -r '.status.token'
But if I use the token generated with:
AWS_PROFILE=MYPROFILE aws-iam-authenticator -i myclustername token --token-only
then I can login to kubernetes-dashboard.
So in which way are those tokens different? I thought they were equivalent.
There should be no difference between the tokens generated by aws-iam-authenticator and aws eks get-token.
Make sure that you spelled the cluster name correctly in both commands, as you can generate tokens for clusters that do not exist.
Double check that both commands authenticate:
kubectl --token=`AWS_PROFILE=MYPROFILE aws-iam-authenticator -i MYCLUSTERNAME token --token-only` get nodes
kubectl --token=`AWS_PROFILE=MYPROFILE aws --region eu-north-1 eks get-token --cluster-name MYCLUSTERNAME | jq -r '.status.token'` get nodes
It is very easy to misspell the cluster name, and the tools will happily generate a token for it without producing any visible error or warning.
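As a quick sanity check (profile and cluster names as in the commands above), both tools should emit a bearer token with the same k8s-aws-v1. prefix, which wraps a presigned STS request:

# Print the first characters of each token; both should start with "k8s-aws-v1."
AWS_PROFILE=MYPROFILE aws-iam-authenticator -i MYCLUSTERNAME token --token-only | cut -c1-20
AWS_PROFILE=MYPROFILE aws eks get-token --cluster-name MYCLUSTERNAME | jq -r '.status.token' | cut -c1-20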

Why do I get Kubernetes error request: Forbidden from the IntelliJ Kubernetes plugin?

I have a ~/.kube/config that is working in the command line. I can run any kubectl command with no problem. The config points to an AWS EKS cluster and it follows the aws guide to create kubeconfig.
I can see that the Kubernetes plugin is able to parse the ~/.kube/config because the cluster name shows up in the Kubernetes service view.
But any attempt to get any information from this view results in a Kubernetes Request Error: Forbidden.
Any idea on what is the cause or how to troubleshoot?
The ~/.kube/config for a AWS EKS cluster usually includes a section like this:
- name: arn:aws:eks:eu-west-1:xxxxx:cluster/yyyyyy
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-1
      - eks
      - get-token
      - --cluster-name
      - yyyyyy
      command: aws
      env:
      - name: AWS_PROFILE
        value: eks
This works in the CLI because you probably have the aws executable in your shell PATH, but when IntelliJ IDEA tries to find aws it will fail. An easy fix is to modify ~/.kube/config so that it points to the absolute path of aws, like so:
command: /usr/local/bin/aws
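To find the right absolute path on your machine (it will vary per installation):

# Prints the full path of the aws executable your shell resolves
which aws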
To troubleshoot problems with the Kubernetes plugin, you can go to Help > Debug Log Settings... and add
#com.intellij.kubernetes
Restart the IDE; then Help > Show Log in Finder will take you to the log.