SSH into Kubernetes cluster running on Amazon

Created a 2 node Kubernetes cluster as:
KUBERNETES_PROVIDER=aws NUM_NODES=2 kube-up.sh
This shows the output as:
Found 2 node(s).
NAME STATUS AGE
ip-172-20-0-226.us-west-2.compute.internal Ready 57s
ip-172-20-0-227.us-west-2.compute.internal Ready 55s
Validate output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at https://52.33.9.1
Elasticsearch is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Grafana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
I can see the instances in the EC2 console. How do I SSH into the master node?

Here is the exact command that worked for me:
ssh -i ~/.ssh/kube_aws_rsa admin@<masterip>
kube_aws_rsa is the default key that gets generated; a different key can be used by setting the AWS_SSH_KEY environment variable. For AWS, it is specified in the file cluster/aws/config-default.sh.
More details about the cluster can be found using kubectl.sh config view.
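If you are not sure of the master's public address, one way to find it is to read the API server endpoint out of the generated kubeconfig and SSH to that host. A minimal sketch, assuming the default key and admin user described above (the jsonpath query is my addition, not part of the original answer):
# Print the API server endpoint of the current context, e.g. https://52.33.9.1
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# SSH to that address with the generated key and the default admin user
ssh -i ~/.ssh/kube_aws_rsa admin@<master-ip>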

"Creates an AWS SSH key named kubernetes-. Fingerprint here is the OpenSSH key fingerprint, so that multiple users can run the script with different keys and their keys will not collide (with near-certainty). It will use an existing key if one is found at AWS_SSH_KEY, otherwise it will create one there. (With the default Ubuntu images, if you have to SSH in: the user is ubuntu and that user can sudo"
https://github.com/kubernetes/kubernetes/blob/master/docs/design/aws_under_the_hood.md
You should see the SSH key fingerprint locally in your SSH config, or set the environment variable and recreate the cluster.
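For example, to make the cluster scripts reuse an existing key pair instead of generating one, you can export the variable before bringing the cluster up again. A sketch, with the key path as a placeholder:
# Assumption: ~/.ssh/my_existing_key is the private key kube-up.sh should use
export AWS_SSH_KEY=~/.ssh/my_existing_key
KUBERNETES_PROVIDER=aws NUM_NODES=2 kube-up.sh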

If you bring up your cluster on AWS with kops and use CoreOS as your image, then the login name will be "core".
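For instance, something along these lines, with the key path and master address as placeholders for whatever you registered with kops:
# SSH to a kops-provisioned CoreOS master as the "core" user
ssh -i ~/.ssh/id_rsa core@<master-public-ip>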

Related

How to access eks cluster from local machine

I have created an EKS cluster and am able to run kubectl commands from my EC2 instance. I then copied the config file from the ~/.kube/config location to my local machine, but I am not able to run kubectl commands there and am getting an authentication error.
What is the right way to access an EKS cluster from a local machine?
Look into the users section in ~/.kube/config, check the user under the name of the cluster, and make sure your local machine has the same working AWS profile as the EC2 instance.
...
command: aws
env:
- name: AWS_PROFILE
  value: <make sure this entry is valid on your local machine>
If this doesn't work, can you briefly describe in your question how you configured kubeconfig on the EC2 instance?
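For reference, the users entry that aws eks update-kubeconfig generates looks roughly like the sketch below. The cluster name, region, and profile are placeholders, and the exec apiVersion may differ depending on your CLI version:
users:
- name: arn:aws:eks:<region>:<account-id>:cluster/<cluster-name>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
        - eks
        - get-token
        - --cluster-name
        - <cluster-name>
      env:
        - name: AWS_PROFILE
          value: <profile-that-can-access-the-cluster>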

How to ssh from k8s pod to other computers

Edit: Before you downvote, please comment on why, so I can improve next time. Thank you.
I tried to SSH from a pod in Kubernetes to another VM in GCE, mainly because I want to use rsync between the two. At the moment, I use gcloud compute scp to copy files to my local computer and then kubectl cp.
I used kubectl exec to access the pod, set up SSH with ssh-keygen, and copied id_rsa.pub to /home/user/.ssh/ on the designated VM, but when I try ssh -v user@ip it just says the connection timed out.
I tried setting up gcloud inside the pod so I could use gcloud compute ssh, and I also tried gcloud compute config-ssh; the results are the same.
When I SSH from my own computer it works fine.
I think a firewall or network configuration is causing this problem, but I'm not really sure how to fix it. Should I expose the SSH port with a Kubernetes LoadBalancer service, or should I edit my firewall rules in the VPC network?
Beginning with Kubernetes version 1.9.x, automatic firewall rules have changed such that workloads in your Kubernetes Engine cluster cannot communicate with other Compute Engine VMs that are on the same network, but outside the cluster. This change was made for security reasons.
The solution is as follows:
First, find your cluster's network:
gcloud container clusters describe [CLUSTER_NAME] --format='get(network)'
Then get the cluster's IPv4 CIDR used for the containers:
gcloud container clusters describe [CLUSTER_NAME] --format='get(clusterIpv4Cidr)'
Finally create a firewall rule for the network, with the CIDR as the source range, and allow all protocols:
gcloud compute firewall-rules create "[CLUSTER_NAME]-to-all-vms-on-network" --network="[NETWORK]" --source-ranges="[CLUSTER_IPV4_CIDR]" --allow=tcp,udp,icmp,esp,ah,sctp
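Putting the three steps together, a small shell sketch; the cluster name and zone are placeholders, and the --zone flag is my assumption for a zonal cluster:
CLUSTER_NAME=my-cluster   # placeholder
ZONE=us-central1-a        # placeholder
# Look up the cluster's network and pod CIDR
NETWORK=$(gcloud container clusters describe "$CLUSTER_NAME" --zone "$ZONE" --format='get(network)')
CLUSTER_IPV4_CIDR=$(gcloud container clusters describe "$CLUSTER_NAME" --zone "$ZONE" --format='get(clusterIpv4Cidr)')
# Allow traffic from the cluster's pods to all VMs on the network
gcloud compute firewall-rules create "${CLUSTER_NAME}-to-all-vms-on-network" \
  --network="$NETWORK" \
  --source-ranges="$CLUSTER_IPV4_CIDR" \
  --allow=tcp,udp,icmp,esp,ah,sctp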

How can I setup kubeapi server to allow kubectl from outside the cluster

I have a single-master, multi-node Kubernetes cluster up and running, and it works great. However, I want to allow kubectl commands to be run from outside the master server. How do I run kubectl get node from my laptop, for example?
If I install kubectl on my laptop I get the following error:
error: client-key-data or client-key must be specified for kubernetes-admin to use the clientCert authentication method
How do I go about this? I have read through the Kubernetes authorization documentation, but I must say it's a bit Greek to me. I am running version 1.10.2.
Thank you.
To extend @sfgroups' answer:
Configurations of all the Kubernetes clusters you are managing
are stored in the $HOME/.kube/config file. If you have that file on the master node,
the easy way is to copy it to the $HOME/.kube/config file on your local machine.
You can choose another location and then specify it with the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/config
or use the --kubeconfig command-line parameter instead.
Cloud providers usually give you a way to download the config to your local machine from the
web interface or with a cloud management command.
For GCP:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
For Azure:
az login -u yourazureaccount -p yourpassword
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
If the cluster was created using the kops utility, you can get the config file with:
kops export kubeconfig ${CLUSTER_NAME}
From your master, copy the /root/.kube directory to the C:\Users\<username>\.kube location on your laptop.
kubectl will pick up the certificate from the config file automatically.
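On a Linux or macOS laptop the same idea looks roughly like this; a sketch that assumes a kubeadm-style master whose admin kubeconfig lives at /root/.kube/config, with the master IP as a placeholder:
# Copy the kubeconfig from the master to your laptop
scp root@<master-ip>:/root/.kube/config ~/.kube/config
# Verify that kubectl can reach the cluster from outside
kubectl get nodes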

Redis Monitor using Prometheus and Grafana

I have installed Redis on a server, and I wish to monitor it via Prometheus and Grafana.
I installed redis_exporter on the Redis server using Docker:
$ docker pull oliver006/redis_exporter
$ docker run -d --name redis_exporter -p 9121:9121 oliver006/redis_exporter
I checked that redis_exporter is running on that server.
I then added the IP of the server running Redis and redis_exporter to the prometheus.yml file on the Grafana server:
- job_name: 'redis_exporter'
  target_groups:
  - targets: ['IP:9121']
    labels:
      alias: redis
I restarted Prometheus on the Grafana server and checked the Prometheus status page; it shows UP for the Redis server IP:9121 mentioned in prometheus.yml.
In Grafana:
I have imported the Prometheus Redis dashboard (https://grafana.com/dashboards/763), but data is not loading in the dashboard, and the IP is not listed in the dashboard either.
Two things to check here:
Try this URL and see if you're able to get the metrics:
curl -s "<redis_exporter>:9121/scrape?target=redis://<redis_instance>:6379"
Update the Grafana dashboard variable from label_values(redis_up, addr) to label_values(redis_up, instance).
In case you have set up password authentication for Redis, you need to supply the Redis password to redis_exporter:
sudo docker run -d --name redis_exporter -p 9121:9121 oliver006/redis_exporter --redis.addr=redis://10.0.0.175:6379 --redis.password=redis_password_here
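As a side note, recent Prometheus releases use static_configs instead of the older target_groups key, so the equivalent scrape job would look roughly like this sketch (the IP is a placeholder):
scrape_configs:
  - job_name: 'redis_exporter'
    static_configs:
      - targets: ['<redis-exporter-ip>:9121']
        labels:
          alias: redis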

Google DataProc Spark - getting "permission denied (publickey)" error when trying to SSH to a worker node

Small cluster: 1 master, 2 workers. I can access all nodes (master and workers) just fine using the gcloud SDK. However, once I access the master node and try to SSH to a worker node, I get a "permission denied (publickey)" error. Note that I can ping the node successfully, but SSH does not work.
Dataproc does not install SSH keys between the master and worker nodes, so that is working as intended.
You may be able to use SSH agent forwarding. With something like:
# Add Compute Engine private key to SSH agent
ssh-add ~/.ssh/google_compute_engine
# Forward key to SSH agent of master
gcloud compute ssh --ssh-flag="-A" [CLUSTER]-m
# SSH into worker
ssh [CLUSTER]-w-0
You could also configure SSH keys using an initialization action, or use gcloud compute ssh from the master node (if you gave the cluster the compute.rw scope).
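For the gcloud route, assuming the cluster has the compute.rw scope, something along these lines should work from the master; the cluster name and zone are placeholders:
# From the master node, SSH to the first worker via gcloud (uses project-level SSH keys)
gcloud compute ssh <cluster-name>-w-0 --zone=<zone>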