What are the differences between tokens generated by `aws-iam-authenticator` and `aws eks get-token` when authenticating to kubernetes-dashboard? - amazon-eks

kubectl is using aws eks get-token and works perfectly.
But when I try to log in to kubernetes-dashboard with the token generated below, I get Unauthorized (401): Invalid credentials provided:
AWS_PROFILE=MYPROFILE aws eks get-token --cluster-name myclustername | jq -r '.status.token'
But if I use the token generated with:
AWS_PROFILE=MYPROFILE aws-iam-authenticator -i myclustername token --token-only
then I can log in to kubernetes-dashboard.
So in which way are those tokens different? I thought they were equivalent.

There should be no difference between the tokens generated by aws-iam-authenticator and aws eks get-token.
Make sure that you spelled the cluster name right in both commands, as you can generate tokens for clusters that do not exist.
Double check that both commands authenticate:
kubectl --token=`AWS_PROFILE=MYPROFILE aws-iam-authenticator -i MYCLUSTERNAME token --token-only` get nodes
kubectl --token=`AWS_PROFILE=MYPROFILE aws --region eu-north-1 eks get-token --cluster-name MYCLUSTERNAME | jq -r '.status.token'` get nodes
Sometimes it is very easy to misspell the cluster name, and the tools will happily generate a token for it without producing any visible error or warning.
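Both tools wrap a presigned STS GetCallerIdentity URL in a k8s-aws-v1. prefix, so one quick sanity check is to decode each token's payload and compare the credentials it was signed with. A rough sketch (profile and cluster name are placeholders; base64 padding warnings can be ignored for this quick look):
for t in \
  "$(AWS_PROFILE=MYPROFILE aws eks get-token --cluster-name MYCLUSTERNAME | jq -r '.status.token')" \
  "$(AWS_PROFILE=MYPROFILE aws-iam-authenticator -i MYCLUSTERNAME token --token-only)"
do
  # strip the prefix, convert base64url to standard base64, decode, and show the signing credential
  echo "${t#k8s-aws-v1.}" | tr '_-' '/+' | base64 -d 2>/dev/null | grep -o 'X-Amz-Credential=[^&]*'
done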


Why is there an aws command embedded in the EKS kube config file?

I'm curious about something in the kube config file generated by the aws eks update-kubeconfig command. At the bottom of the file, there is this construct:
- name: arn:aws:eks:us-west-2:redacted:cluster/u62d2e14b31f0270011485fd3
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - us-west-2
      - eks
      - get-token
      - --cluster-name
      - u62d2e14b31f0270011485fd3
      command: aws
It is clearly an invocation of the aws eks get-token command. Why is this here? Does this command get automatically called?
Why is this here?
The command gets an IAM token using your IAM account and passes it along to EKS via the HTTP header Authorization: Bearer <token> for authentication. See here for details.
Does this command get automatically called?
Yes, by kubectl.
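You can reproduce what kubectl does by running the exec command yourself; it prints an ExecCredential object whose status.token kubectl sends as a bearer token. A rough sketch (the cluster endpoint is a placeholder):
# Run the exec plugin manually; the output is a client.authentication.k8s.io ExecCredential
aws --region us-west-2 eks get-token --cluster-name u62d2e14b31f0270011485fd3
# Use the token directly against the API server, roughly as kubectl does under the hood
TOKEN=$(aws --region us-west-2 eks get-token --cluster-name u62d2e14b31f0270011485fd3 | jq -r '.status.token')
curl -sk -H "Authorization: Bearer $TOKEN" https://<cluster-endpoint>/api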

Custom path for Hashicorp Vault Kubernetes Auth Method does not work using CLI

When I enable the Kubernetes auth method at the default path (-path=kubernetes) it works. However, if it is enabled at a custom path, the Vault init and sidecar containers don't start.
Kubernetes auth method enabled at auth/prod:
vault auth enable -path=prod/ kubernetes
vault write auth/prod/config \
kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
vault write auth/prod/role/internal-app \
bound_service_account_names=internal-app \
bound_service_account_namespaces=default \
policies=internal-app \
ttl=24h
What could be wrong with these auth configurations?
Not sure how you have deployed Vault, but if your injector is enabled:
injector:
  enabled: true
Vault will inject the sidecar and init containers. You should check the logs of whichever sidecar or init container is failing.
If you are using the K8s method to authenticate, check out the annotation example below and use it:
annotations:
  vault.hashicorp.com/agent-image: registry.gitlab.com/XXXXXXXXXXX/vault-image/vault:1.4.1
  vault.hashicorp.com/agent-inject: "true"
  vault.hashicorp.com/agent-inject-secret-secrets: kv/secret-path-location
  vault.hashicorp.com/auth-path: auth/<K8s-cluster-auth-name>
  vault.hashicorp.com/role: app
You can also keep multiple auth paths so that different K8s clusters can authenticate against a single Vault instance.
If Vault is injecting the sidecar, check its logs.
https://www.hashicorp.com/blog/injecting-vault-secrets-into-kubernetes-pods-via-a-sidecar
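With the custom mount from the question, the annotations should point at auth/prod, and the failing init container's logs usually show the exact auth error. A minimal sketch (the pod name is a placeholder; the role name is taken from the question):
# Pod/deployment template annotations, using the custom mount path from the question:
#   vault.hashicorp.com/agent-inject: "true"
#   vault.hashicorp.com/auth-path: auth/prod
#   vault.hashicorp.com/role: internal-app
# Inspect the injected init container if the pod does not start:
kubectl logs <your-app-pod> -c vault-agent-init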

How to connect to an MSK cluster from an EKS cluster

I am having difficulties connecting to my MSK cluster from my EKS cluster even though both clusters share the same VPC and the same subnets.
The security group used by the MSK cluster has the following inbound rules
type          protocol   port range   source
all traffic   all        all          custom (SG_ID)
all traffic   all        all          anywhere IPv4 (0.0.0.0/0)
where SG_ID is the EKS cluster security group (the one labeled "EKS created security group applied...").
In the EKS cluster, I am using the following commands to test connectivity:
kubectl run kafka-consumer \
-ti \
--image=quay.io/strimzi/kafka:latest-kafka-2.8.1 \
--rm=true \
--restart=Never \
-- bin/kafka-topics.sh --create --topic test --bootstrap-server b-1.test.z35y0w.c4.kafka.us-east-1.amazonaws.com:9092 --replication-factor 2 --partitions 1 --if-not-exists
With the following result
Error while executing topic command : Call(callName=createTopics, deadlineMs=1635906680860, tries=1, nextAllowedTryMs=1635906680961) timed out at 1635906680861 after 1 attempt(s)
[2021-11-03 02:31:20,865] ERROR org.apache.kafka.common.errors.TimeoutException: Call(callName=createTopics, deadlineMs=1635906680860, tries=1, nextAllowedTryMs=1635906680961) timed out at 1635906680861 after 1 attempt(s)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: createTopics
(kafka.admin.TopicCommand$)
pod "kafka-consumer" deleted
pod default/kafka-consumer terminated (Error)
Sadly, the second bootstrap server displayed on the MSK Page gives the same result.
nc eventually times out
kubectl run busybox -ti --image=busybox --rm=true --restart=Never -- nc b-2.test.z35y0w.c4.kafka.us-east-1.amazonaws.com
nslookup fails as well
kubectl run busybox -ti --image=busybox --rm=true --restart=Never -- nslookup b-2.test.z35y0w.c4.kafka.us-east-1.amazonaws.com
If you don't see a command prompt, try pressing enter.
*** Can't find b-2.test.z35y0w.c4.kafka.us-east-1.amazonaws.com: No answer
Could anyone please give me a hint?
Thanks
I needed to connect to MSK from my EKS pod, so I searched this doc. I want to share my solution in the hope that it helps others.
This is my config file:
root@kain:~/work# cat kafkaconfig
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
This is my command:
./kafka-topics.sh --list --bootstrap-server <My MSK bootstrap server>:9098 --command-config ./kafkaconfig
For this command, there are two preconditions to satisfy.
First, you need access to AWS MSK (I access MSK from my EKS pod, and my EKS pod has an OIDC-based IAM role to access AWS).
Second, you need the AWS IAM auth jar file, aws-msk-iam-auth.jar (https://github.com/aws/aws-msk-iam-auth/releases); put it in the Kafka client libs directory or export CLASSPATH=/aws-msk-iam-auth-1.1.4-all.jar.
reference doc: https://aws.amazon.com/blogs/big-data/securing-apache-kafka-is-easy-and-familiar-with-iam-access-control-for-amazon-msk/
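For the first precondition (the pod reaching AWS via OIDC), the pod's service account needs an IAM role attached through IRSA. A minimal sketch, with the service account name and role ARN as placeholders:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: msk-client   # hypothetical name; use whatever service account your pod runs as
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/<role-with-msk-access>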

How to remove a certificate from Traefik acme storage when saved to Consul KV

I have Traefik running with a Consul KV store. How do I remove a record from the acme certificate storage in Consul, or force a renewal for just one domain/frontend?
Problem:
Somehow one of the frontend domains has saved with the wrong certificate. It's referencing a certificate from a different domain (which is also a frontend in Traefik).
I was able to inspect the acme json by getting the consul value for the traefik/acme/account/object key, decode and unzip it and this is the record from the Certs array:
{
  "Domains": {
    "Main": "my.domain1.com",
    "SANs": null
  },
  "Certificate": {
    "Domain": "my.domain2.com",
    "CertURL": "https://acme-v02.api.letsencrypt.org/acme/cert/idfordomain2",
    "CertStableURL": "https://acme-v02.api.letsencrypt.org/acme/cert/idfordomain2",
    "PrivateKey": "...",
    "Certificate": "..."
  }
}
As you can see, the cert for my.domain2.com has somehow been saved against the record for my.domain1.com, which results in an invalid certificate warning in the browser. I want to clear out the whole record so Traefik will get a fresh cert. I'm using Consul and the value is saved in binary, so I can't just edit the JSON.
Here is how I solved this issue:
Your traefik network should be marked as attachable: true
Run on host:
docker run -it --rm --name consul-client --network traefik_traefik consul sh
Then run in created container:
export CONSUL_HTTP_ADDR=consul:8500
# get value from consul and store it to acme.json
consul kv get traefik/acme/account/object | gzip -dc > acme.json
# remove invalid domain and store it to acme-fixed.json
cat acme.json | jq -r 'del (.DomainsCertificate.Certs[] | select(.Domains.Main=="'yourdomain.com'"))' > acme-fixed.json
# gzip it
cat acme-fixed.json | gzip -c > acme-fixed.json.gz
# upload fixed and gzipped json back to consul
consul kv put traefik/acme/account/object @acme-fixed.json.gz
The simplest way is to use the consul CLI utility. The same binary is also used to run the server, and ideally you should use the same version as your servers. Make sure you export the environment variables CONSUL_HTTP_ADDR (points to the Consul server; default is http://127.0.0.1:8500) and CONSUL_HTTP_TOKEN (the ACL token, if ACLs are enabled on your server, as they should be in production environments).
Then you just run the following command:
consul kv put traefik/acme/account/object @traefik.json
where traefik.json is a JSON file with the updated values you wish to store in the Consul KV store.
Or you can use HTTP API: Consul Create/Update Key
curl -X PUT --data @traefik.json http://<your-server-url>:<port>/v1/kv/traefik/acme/account/object
If your server is ACL-enabled, you need to add the following header to the curl request, with the <your-acl-token> that was issued to you: -H "X-Consul-Token: <your-acl-token>"
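Putting those two together, a request against an ACL-enabled server would look roughly like this (host, port, and token are placeholders):
curl -X PUT -H "X-Consul-Token: <your-acl-token>" \
     --data @traefik.json \
     http://<your-server-url>:<port>/v1/kv/traefik/acme/account/object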

How can I set up the kube API server to allow kubectl from outside the cluster

I have a single master, multinode kubernetes going. It works great. However I want to allow kubectl commands to be run from outside the master server. How do I run kubectl get node from my laptop for example?
If I install kubectl on my laptop I get the following error:
error: client-key-data or client-key must be specified for kubernetes-admin to use the clientCert authentication method
How do I go about this? I have read through the Kubernetes authorisation documentation, but I must say it's a bit Greek to me. I am running version 1.10.2.
Thank you.
To extend @sfgroups' answer:
Configurations of all the Kubernetes clusters you are managing are stored in the $HOME/.kube/config file. If you have that file on the master node, the easy way is to copy it to $HOME/.kube/config on your local machine.
You can choose another location and then point to it with the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/config
or use the --kubeconfig command-line parameter instead.
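For example, a one-off invocation with the flag, using the same path as above:
kubectl --kubeconfig /etc/kubernetes/config get nodes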
Cloud providers often give you the possibility to download the config to your local machine from the web interface or via the cloud management command.
For GCP:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
For Azure:
az login -u yourazureaccount -p yourpassword
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
If the cluster was created using the kops utility, you can get the config file with:
kops export kubeconfig ${CLUSTER_NAME}
From your master, copy the /root/.kube directory to the C:\Users\.kube location on your laptop.
kubectl will pick up the certificate from the config file automatically.
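If you prefer the command line, a minimal sketch of copying the admin kubeconfig from a kubeadm-style master (the host and paths are assumptions about your setup):
# Copy the cluster admin config from the master to your laptop, then point kubectl at it
scp root@<master-ip>:/etc/kubernetes/admin.conf $HOME/.kube/config
kubectl get nodes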