Below are the steps I performed to use a custom AMI with an EKS managed node group.
A bootstrap_user_data file was created and converted to base64, as required.
#!/bin/bash
set -ex
# Values copied from the EKS cluster (no space after '=' in shell assignments)
B64_CLUSTER_CA=<My EKS cluster certificate authority value>
API_SERVER_URL=<My EKS cluster API server URL>
/etc/eks/bootstrap.sh <cluster-name> --b64-cluster-ca $B64_CLUSTER_CA --apiserver-endpoint $API_SERVER_URL
cat bootstrap_user_data | base64
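An aside, not one of the original steps: saving the encoded output to a file keeps it handy for the launch template JSON (the file name is illustrative).
# Illustrative only: -w0 disables line wrapping (GNU coreutils base64)
base64 -w0 bootstrap_user_data > bootstrap_user_data.b64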
A launch template was then created from a JSON configuration file with the data below.
cat config_custom_ami.json
{
  "LaunchTemplateData": {
    "EbsOptimized": false,
    "ImageId": "ami-0e00c1f097aff7fe8",
    "InstanceType": "t3.small",
    "UserData": "bootstrap_user_data",
    "SecurityGroupIds": [
      "sg-0e9b58499f42bcd4b"
    ]
  }
}
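For reference, here is a sketch of one way to splice the actual base64 string into the UserData field with jq (the file names bootstrap_user_data.b64 and config_custom_ami_with_userdata.json are assumptions, not part of the original steps).
# Hedged sketch: replace the UserData placeholder with the encoded script
jq --arg ud "$(cat bootstrap_user_data.b64)" '.LaunchTemplateData.UserData = $ud' config_custom_ami.json > config_custom_ami_with_userdata.json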
The security group selected is the EKS cluster security group, which was created automatically when the EKS cluster was first created.
Creating the launch template using the AWS CLI:
aws ec2 create-launch-template --region eu-central-1 --launch-template-name my-template-name --version-description "first version" --cli-input-json file://config_custom_ami.json
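As a hedged sanity check (not part of the original steps), the user data stored in the template can be read back and decoded:
aws ec2 describe-launch-template-versions --region eu-central-1 --launch-template-name my-template-name --versions 1 --query 'LaunchTemplateVersions[0].LaunchTemplateData.UserData' --output text | base64 -d | head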
Creating the node group using the AWS CLI:
aws eks create-nodegroup --region eu-central-1 --cluster-name my-cluster --nodegroup-name my-node-group --subnets subnet-<subnet1> subnet-<subnet2> --node-role 'arn:aws:iam::123456789:role/EKSNODEGROUP' --launch-template name=my-template-name
After executing the node group creation command, it took about 20 minutes; the desired VMs were created as part of the Auto Scaling group, but the node group was not able to join the cluster after those 20 minutes.
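An aside that may help narrow this down (not one of the original steps): the node group itself reports health issues that often explain a join failure.
aws eks describe-nodegroup --region eu-central-1 --cluster-name my-cluster --nodegroup-name my-node-group --query 'nodegroup.health.issues'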
I connected to the Amazon EKS worker node instance over SSH to check the kubelet agent logs:
ssh -i my.key ec2-user@1.2.3.4
sudo -i
cd /etc/eks/bootstrap.sh
-bash: cd: /etc/eks: No such file or directory
Could someone please help me understand why the bootstrap.sh file does not exist under /etc/eks? On the other hand, in the AWS console, under the launch template's Advanced tab, I can see my user data in decoded format.
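For completeness, a quick way to confirm from inside the instance whether any user data was delivered at all (an aside; the IMDSv2 token step is shown because newer AMIs often require it):
TOKEN=$(curl -sX PUT http://169.254.169.254/latest/api/token -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/user-data | head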
Related
I have created an EKS cluster and am able to run kubectl commands from my EC2 instance. I then downloaded the config file from the ~/.kube/config location to my local machine, but I am not able to run kubectl commands there and am getting an authentication error.
What is the right way to access an EKS cluster from a local machine?
Try looking into the users section in ~/.kube/config: check the user under the name of the cluster, and make sure your local machine has the same working AWS profile as the EC2 instance.
...
      command: aws
      env:
      - name: AWS_PROFILE
        value: <make sure this entry is valid on your local machine>
If this doesn't work, could you briefly describe in your question how you configured kubeconfig on the EC2 instance?
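Alternatively, if the profile is valid, regenerating the kubeconfig locally is often simpler than copying the file from the instance (cluster name, region, and profile are placeholders):
aws eks update-kubeconfig --name <cluster-name> --region <region> --profile <profile>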
I installed Spinnaker on my AWS EC2 instance and logged into the dashboard the first time, but immediately after, when I log out and log in again using the same base URL, I am directed to a different person's GitHub account. What might have happened? Does it mean my account is hacked? Could somebody please advise?
I am being directed to the link attached below instead of the IP address taking me to the Spinnaker dashboard, even though I am using the correct base address.
These are the instructions I follow for Minnaker on EC2 (ap-southeast-2):
Pre-requisites
Obtain an AWS Elastic IP
From the AWS EC2 console, choose a Region (preferably ap-southeast-2) and launch an EC2 instance with at least 16 GB of memory, 4 CPUs, and a 60 GB disk.
An initial deployment can be performed using an m4.xlarge instance.
Attach the AWS Elastic IP to the Spinnaker Instance
Access the instance through SSH
Get minnaker
curl -LO https://github.com/armory/minnaker/releases/latest/download/minnaker.tgz
Untar
tar -xzvf minnaker.tgz
Go to minnaker directory
cd minnaker
Use the public IP value from the Elastic IP as $PUBLIC_IP.
Obtain the private IP of the instance with hostname -I and add both to local environment variables:
export PRIVATE_IP=$(hostname -I | awk '{print $1}')
export PUBLIC_IP=<your AWS Elastic IP>
Execute the command below to install Open Source Spinnaker
./scripts/install.sh -o -P $PRIVATE_IP
Validate installation
UI
Validate the installation by going to the generated URL: https://<PUBLIC_IP>
Use the user admin and get the password from /etc/spinnaker/.hal/.secret/spinnaker_password
The UI should load
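For convenience (assuming the path above), the generated password can be printed directly:
sudo cat /etc/spinnaker/.hal/.secret/spinnaker_password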
Kubernetes Deployment
Minnaker is deployed inside the EC2 instance as a lightweight Kubernetes (K3s) cluster.
Run kubectl version
Get info from the cluster with kubectl cluster-info
Tweak bash completion and enable a simple alias.
source <(kubectl completion bash)
echo 'source <(kubectl completion bash)' >>~/.bashrc
kubectl completion bash >/etc/bash_completion.d/kubectl
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -F __start_kubectl k' >>~/.bashrc
Validate Spinnaker is running
k -n spinnaker get pods -o wide
Halyard Config
Validate that a default Halyard config has been set up:
sudo chmod 755 /usr/local/bin/hal
#!/bin/bash
set -x
# Resolve the Halyard pod name and forward hal commands into it
HALYARD=$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f2)
kubectl -n spinnaker exec -it ${HALYARD} -- hal "$@" config
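For a one-off check without the wrapper (same pod label as in the script above), something like this should print the current Halyard deployment config:
kubectl -n spinnaker exec -it "$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f2)" -- hal config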
Minnaker repo
Clone the repository:
git clone https://github.com/armory/minnaker
Go to the scripts directory: cd minnaker/scripts
Add permissions to the installation script: chmod 775 all.sh
References
armory/minnaker
I have a ~/.kube/config that is working in the command line. I can run any kubectl command with no problem. The config points to an AWS EKS cluster and it follows the aws guide to create kubeconfig.
I can see that the Kubernetes plugin is able to parse the ~/.kube/config because the cluster name shows up in the Kubernetes service view.
But any attempt to get information from this view results in a Kubernetes Request Error: Forbidden.
Any idea what the cause is or how to troubleshoot it?
The ~/.kube/config for an AWS EKS cluster usually includes a section like this:
- name: arn:aws:eks:eu-west-1:xxxxx:cluster/yyyyyy
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-1
      - eks
      - get-token
      - --cluster-name
      - yyyyyy
      command: aws
      env:
      - name: AWS_PROFILE
        value: eks
This works in the CLI because you probably have the aws executable in your shell PATH, but when IntelliJ IDEA tries to find aws it will fail. An easy fix is to modify ~/.kube/config so that it points to the absolute path of aws, like so:
command: /usr/local/bin/aws
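To find the right absolute path on your machine:
command -v aws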
To troubleshoot problems with the Kubernetes plugin, you can go to Help > Debug Log Settings... and add
#com.intellij.kubernetes
Restart the IDE; then Help > Show Log in Finder will take you to the log.
I have a single-master, multi-node Kubernetes cluster running. It works great. However, I want to allow kubectl commands to be run from outside the master server. How do I run kubectl get node from my laptop, for example?
If I install kubectl on my laptop I get the following error:
error: client-key-data or client-key must be specified for kubernetes-admin to use the clientCert authentication method
How do I go about this? I have read through the Kubernetes authorization documentation, but I must say it's a bit Greek to me. I am running version 1.10.2.
Thank you.
To extend @sfgroups' answer:
The configurations of all the Kubernetes clusters you are managing are stored in the $HOME/.kube/config file. If you have that file on the master node, the easy way is to copy it to $HOME/.kube/config on your local machine.
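A minimal sketch of that copy step (user, host, and key file are placeholders):
scp -i <key.pem> <user>@<master-ip>:~/.kube/config ~/.kube/config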
You can choose another location and then specify it via the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/config
or use the --kubeconfig command-line parameter instead.
Cloud providers often give you the ability to download the config to your local machine from the web interface or via a cloud management command.
For GCP:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
For Azure:
az login -u yourazureaccount -p yourpassword
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
If the cluster was created using Kops utility, you could get the config file by:
kops export kubeconfig ${CLUSTER_NAME}
From your master node, copy the /root/.kube directory to the C:\Users\<username>\.kube location on your laptop.
kubectl will pick up the certificate from the config file automatically.
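A quick check from the laptop that the copied config works:
kubectl get nodes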
I created a 2-node Kubernetes cluster with:
KUBERNETES_PROVIDER=aws NUM_NODES=2 kube-up.sh
This shows the output as:
Found 2 node(s).
NAME STATUS AGE
ip-172-20-0-226.us-west-2.compute.internal Ready 57s
ip-172-20-0-227.us-west-2.compute.internal Ready 55s
Validate output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at https://52.33.9.1
Elasticsearch is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Grafana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
I can see the instances in the EC2 console. How do I SSH into the master node?
Here is the exact command that worked for me:
ssh -i ~/.ssh/kube_aws_rsa admin@<masterip>
kube_aws_rsa is the default key generated; otherwise it is controlled with the AWS_SSH_KEY environment variable. For AWS, it is specified in the file cluster/aws/config-default.sh.
More details about the cluster can be found using kubectl.sh config view.
"Creates an AWS SSH key named kubernetes-. Fingerprint here is the OpenSSH key fingerprint, so that multiple users can run the script with different keys and their keys will not collide (with near-certainty). It will use an existing key if one is found at AWS_SSH_KEY, otherwise it will create one there. (With the default Ubuntu images, if you have to SSH in: the user is ubuntu and that user can sudo"
https://github.com/kubernetes/kubernetes/blob/master/docs/design/aws_under_the_hood.md
You should see the SSH key fingerprint locally in your SSH config, or set the environment variable and recreate the cluster.
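A hedged example of pointing the scripts at an existing key before recreating the cluster (the key path is illustrative):
export AWS_SSH_KEY=$HOME/.ssh/my_existing_key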
If you are bringing up your cluster on AWS with kops and use CoreOS as your image, then the login name would be "core".