How to access an EKS cluster from a local machine - amazon-eks

I have created an EKS cluster and am able to run kubectl commands from my EC2 instance. I then downloaded the config file from ~/.kube/config to my local machine. However, I am not able to run kubectl commands there and get an authentication error.
What is the right way to access an EKS cluster from a local machine?

Try looking into the users section in ~/.kube/config: check the user under the name of the cluster, and make sure your local machine has the same working AWS profile as the EC2 instance.
...
command: aws
env:
- name: AWS_PROFILE
  value: <make sure this entry is valid on your local machine>
If this doesn't work, could you briefly describe in your question how you configured the kubeconfig on the EC2 instance?
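As a side note, rather than copying the file from the instance, the AWS CLI can generate a kubeconfig entry directly on the local machine, provided the local credentials map to an identity that the cluster's aws-auth ConfigMap allows. A minimal sketch (region, cluster name, and profile are placeholders):
# generate/refresh the kubeconfig entry for the cluster
aws eks update-kubeconfig --region eu-central-1 --name my-cluster --profile my-profile
# confirm which identity the cluster will see
aws sts get-caller-identity --profile my-profile
# then test access
kubectl get nodes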

Related

AWS EKS custom AMI managed node group: bootstrap file does not exist

Below are the steps I performed to use a custom AMI with an EKS managed node group.
The bootstrap_user_data file was created and converted to base64 format as per the standard:
#!/bin/bash
set -ex
B64_CLUSTER_CA=<my EKS cluster certificate authority value>
API_SERVER_URL=<my EKS cluster API server URL>
/etc/eks/bootstrap.sh <cluster-name> --b64-cluster-ca $B64_CLUSTER_CA --apiserver-endpoint $API_SERVER_URL
cat bootstrap_user_data | base64
A launch template was created via the custom-configuration.json file with the data below.
cat config_custom_ami.json
{
  "LaunchTemplateData": {
    "EbsOptimized": false,
    "ImageId": "ami-0e00c1f097aff7fe8",
    "InstanceType": "t3.small",
    "UserData": "bootstrap_user_data",
    "SecurityGroupIds": [
      "sg-0e9b58499f42bcd4b"
    ]
  }
}
The security group selected is the EKS cluster security group that was created automatically when the EKS cluster was first created.
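For reference, the UserData field in the launch template JSON is expected to contain the base64-encoded script content itself rather than a file name. One way to wire the encoded output in, sketched with jq and GNU base64 (file names follow the ones used above):
# encode the bootstrap script and substitute it into the launch template JSON
ENCODED_USER_DATA=$(base64 -w0 bootstrap_user_data)
jq --arg ud "$ENCODED_USER_DATA" '.LaunchTemplateData.UserData = $ud' config_custom_ami.json > custom.config.json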
Creating the launch template using the AWS CLI:
aws ec2 create-launch-template --region eu-central-1 --launch-template-name my-template-name --version-description "first version " --cli-input-json file://custom.config.json
Creating the node group using the AWS CLI:
aws eks create-nodegroup --region eu-central-1 --cluster-name my-cluster --nodegroup-name my-node-group --subnets subnet-<subnet1> subnet-<subnet2> --node-role 'arn:aws:iam::123456789:role/EKSNODEGROUP' --launch-template name=my-template-name
After executing the node group creation command, it took about 20 minutes; the desired VM was created as part of the auto scaling group, but the node group was not able to join the cluster.
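As an aside, when a managed node group fails to join, the health issues that EKS reports for it are often a useful first check; a sketch using the same names as the commands above:
# list any health issues EKS reports for the node group
aws eks describe-nodegroup --region eu-central-1 --cluster-name my-cluster --nodegroup-name my-node-group --query 'nodegroup.health.issues'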
Connect to the Amazon EKS worker node instance with SSH and check the kubelet agent logs:
ssh -i my.key ec2-user@1.2.3.4
sudo -i
cd /etc/eks/bootstrap.sh
-bash: cd: /etc/eks: No such file or directory
Could someone please help me understand why the bootstrap.sh file does not exist under /etc/eks? On the other hand, in the AWS console, under the launch template's Advanced tab, I can see my user data in decoded format.
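As a debugging aside, the cloud-init logs on the node usually show whether the user data ran at all; from the same SSH session (assuming an Amazon Linux based AMI) a quick check might look like:
# cloud-init logs record whether the user data script was executed and what it printed
sudo cat /var/log/cloud-init-output.log
sudo less /var/log/cloud-init.log
# the kubelet unit logs are also worth a look
sudo journalctl -u kubelet --no-pager | tail -n 50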

Why do I get Kubernetes error request: Forbidden from the IntelliJ Kubernetes plugin?

I have a ~/.kube/config that works on the command line. I can run any kubectl command with no problem. The config points to an AWS EKS cluster and follows the AWS guide for creating a kubeconfig.
I can see that the Kubernetes plugin is able to parse the ~/.kube/config because the cluster name shows up in the Kubernetes service view.
But any attempt to get any information from this view results in a Kubernetes Request Error: Forbidden.
Any idea what the cause is or how to troubleshoot it?
The ~/.kube/config for an AWS EKS cluster usually includes a section like this:
- name: arn:aws:eks:eu-west-1:xxxxx:cluster/yyyyyy
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-1
      - eks
      - get-token
      - --cluster-name
      - yyyyyy
      command: aws
      env:
      - name: AWS_PROFILE
        value: eks
This works in the CLI because you probably have the aws executable in your shell PATH, but when IntelliJ IDEA tries to find aws it will fail. An easy fix is to modify ~/.kube/config so that it points to the absolute path of aws, like so:
command: /usr/local/bin/aws
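If you are unsure where the binary lives, a quick way to find the absolute path on a Unix-like shell is:
# print the full path of the aws executable your shell resolves
which aws
# or the POSIX-portable form
command -v aws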
In order to troubleshoot problems with the Kubernetes plugin, you can open Help > Debug Log Settings... and add
#com.intellij.kubernetes
Restart the IDE; then Help > Show Log in Finder will take you to the log.

How to SSH from a k8s pod to other computers

Edit: Before you downvote, please comment on why, so I can improve next time. Thank you.
I tried to SSH from a pod in Kubernetes to another VM in GCE, mainly because I want to use rsync between the two. At the moment, I use gcloud compute scp to copy files to my local computer and then kubectl cp.
I used kubectl exec to access the pod, set up SSH with ssh-keygen, then copied rsa_id.pub to /home/user/.ssh/ on the designated VM, but when I try ssh -v user@ip it just says the connection timed out.
I tried setting up gcloud inside the pod to be able to use gcloud compute ssh, and I also tried gcloud compute config-ssh; the results are the same.
When I SSH from my own computer it works fine.
I think a firewall or network configuration is causing this problem, but I'm not really sure how to fix it. Should I expose the SSH port with a k8s LoadBalancer service, or should I edit my firewall rules in the VPC network?
Beginning with Kubernetes version 1.9.x, automatic firewall rules have changed such that workloads in your Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network, but outside the cluster. This change was made for security reasons.
The solution is as follows.
First, find your cluster's network:
gcloud container clusters describe [CLUSTER_NAME] --format=get"(network)"
Then get the cluster's IPv4 CIDR used for the containers:
gcloud container clusters describe [CLUSTER_NAME] --format=get"(clusterIpv4Cidr)"
Finally create a firewall rule for the network, with the CIDR as the source range, and allow all protocols:
gcloud compute firewall-rules create "[CLUSTER_NAME]-to-all-vms-on-network" --network="[NETWORK]" --source-ranges="[CLUSTER_IPV4_CIDR]" --allow=tcp,udp,icmp,esp,ah,sctp
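Putting it together, a sketch with a placeholder cluster name (add --zone or --region to the describe calls if they are not set in your gcloud config):
# capture the cluster's network and pod CIDR, then open the firewall from pods to VMs
NETWORK=$(gcloud container clusters describe my-cluster --format='get(network)')
CLUSTER_IPV4_CIDR=$(gcloud container clusters describe my-cluster --format='get(clusterIpv4Cidr)')
gcloud compute firewall-rules create "my-cluster-to-all-vms-on-network" --network="$NETWORK" --source-ranges="$CLUSTER_IPV4_CIDR" --allow=tcp,udp,icmp,esp,ah,sctp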

How can I set up the kube-apiserver to allow kubectl from outside the cluster

I have a single-master, multi-node Kubernetes cluster running. It works great. However, I want to allow kubectl commands to be run from outside the master server. How do I run kubectl get node from my laptop, for example?
If I install kubectl on my laptop I get the following error:
error: client-key-data or client-key must be specified for kubernetes-admin to use the clientCert authentication method
How do I go about this? I have read through the Kubernetes authorization documentation, but I must say it's a bit Greek to me. I am running version 1.10.2.
Thank you.
To extend @sfgroups' answer:
Configurations for all the Kubernetes clusters you are managing are stored in the $HOME/.kube/config file. If you have that file on the master node, the easiest way is to copy it to $HOME/.kube/config on your local machine.
You can choose another location and then specify it with the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/config
or use the --kubeconfig command line parameter instead.
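For example, to point a single kubectl invocation at the file from the export above:
kubectl --kubeconfig=/etc/kubernetes/config get nodes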
Cloud providers often give you the possibility to download the config to your local machine from the web interface or with a cloud management command.
For GCP:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
For Azure:
az login -u yourazureaccount -p yourpassword
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
If the cluster was created using the kops utility, you can get the config file with:
kops export kubeconfig ${CLUSTER_NAME}
From your master, copy the /root/.kube directory to the .kube directory under your user profile on your laptop (for example C:\Users\<username>\.kube).
kubectl will pick up the certificate from the config file automatically.
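A sketch of that copy step from a laptop with OpenSSH, assuming root SSH access to the master (the IP is a placeholder):
# copy the kubeconfig from the master into the local user's .kube directory
mkdir -p ~/.kube
scp root@<master-ip>:/root/.kube/config ~/.kube/config
kubectl get nodes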

SSH into Kubernetes cluster running on Amazon

I created a 2-node Kubernetes cluster with:
KUBERNETES_PROVIDER=aws NUM_NODES=2 kube-up.sh
This shows the output as:
Found 2 node(s).
NAME STATUS AGE
ip-172-20-0-226.us-west-2.compute.internal Ready 57s
ip-172-20-0-227.us-west-2.compute.internal Ready 55s
Validation output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at https://52.33.9.1
Elasticsearch is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Grafana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
I can see the instances in the EC2 console. How do I SSH into the master node?
Here is the exact command that worked for me:
ssh -i ~/.ssh/kube_aws_rsa admin@<masterip>
kube_aws_rsa is the default key generated; otherwise this is controlled with the AWS_SSH_KEY environment variable. For AWS, it is specified in the file cluster/aws/config-default.sh.
More details about the cluster can be found using kubectl.sh config view.
"Creates an AWS SSH key named kubernetes-. Fingerprint here is the OpenSSH key fingerprint, so that multiple users can run the script with different keys and their keys will not collide (with near-certainty). It will use an existing key if one is found at AWS_SSH_KEY, otherwise it will create one there. (With the default Ubuntu images, if you have to SSH in: the user is ubuntu and that user can sudo"
https://github.com/kubernetes/kubernetes/blob/master/docs/design/aws_under_the_hood.md
You should see the SSH key fingerprint locally in your SSH config, or set the environment variable and recreate the cluster.
If you are bringing up your cluster on AWS with kops and use CoreOS as your image, then the login name would be "core".
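To summarise, the login user depends on the node image; a sketch with the default key mentioned above and a placeholder IP (the key path may differ for kops-created clusters):
ssh -i ~/.ssh/kube_aws_rsa admin@<master-ip>    # default Debian-based images
ssh -i ~/.ssh/kube_aws_rsa ubuntu@<master-ip>   # Ubuntu images
ssh -i ~/.ssh/kube_aws_rsa core@<master-ip>     # CoreOS images, e.g. created with kops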