Infinispan clustering is not working on EKS cluster - amazon-eks

I deploy the Infinispan cluster using the following Helm chart:
https://github.com/infinispan/infinispan-helm-charts
On Minikube it works fine: when I run the commands below, the Infinispan servers are created and join the cluster.
On EKS, clustering is not happening: the servers are created but do NOT join the cluster.
helm lint ./infinispan-helm-charts
helm install -n qa infinispan-server ./infinispan-helm-charts
Then I port-forward to access it:
kubectl port-forward service/infinispan-server 11222:11222 -n qa
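One quick way to see whether the nodes actually found each other (a sketch; the pod name is a placeholder and the grep pattern assumes the usual Infinispan "received new cluster view" log line) is to inspect the server logs after the pods start:
kubectl get pods -n qa
kubectl logs -n qa <infinispan-pod-name> | grep -i "cluster view"
On a healthy cluster the view lists every pod; if each pod only ever sees itself, the difference between Minikube and EKS is usually in how JGroups discovery resolves the other members.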

Related

Why there are no logs in /var/log/spinnaker

Our Spinnaker is deployed on Ubuntu 18; the Spinnaker version is 1.20.3. The only way we can view the logs is to run journalctl -u $microservice; there are no logs in /var/log/spinnaker.
Is this normal?
Yes. The preferred way to install Spinnaker is on Kubernetes. A quick and easy way to get started and migrate is to back up all your config with Halyard, export the pipelines as JSON, and run Minnaker on any Ubuntu 18 compute box.
Then import your old Spinnaker data and pipelines.
The Ubuntu 18 Debian deploy flavor that you are running can be useful for debugging Clouddriver issues or for development purposes.
I suggest that you perform the migration to a Kubernetes cluster.
The reason none of the Spinnaker microservices write logs to their directories under /var/log/spinnaker is that the preferred installation method for Spinnaker is Kubernetes.
If the microservices created log files in /var/log/spinnaker, there would be a good chance of the Kubernetes pods dying from running out of storage, so they all write their logs to STDOUT, from where they can be retrieved by running:
kubectl -n spinnaker logs POD_NAME > my_logfile_name.log
If you prefer to run Spinnaker on a VM rather than in Kubernetes and want to enable log files so that you can debug a specific issue without using journalctl, you can edit the systemd service file for the particular microservice, for example Clouddriver, and add the following line to the [Service] section:
StandardOutput=append:/var/log/spinnaker/clouddriver/clouddriver.log
Then reload the systemd daemon and restart the service; it will then write its logs to the specified file instead of STDOUT. For example:
sudo systemctl daemon-reload
sudo systemctl restart clouddriver.service
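Alternatively (a sketch using a drop-in override instead of editing the unit file in place, assuming your systemd version supports the append: directive), you can let systemctl create the override for you:
sudo systemctl edit clouddriver.service
# in the editor that opens, add these two lines, then save and exit:
#   [Service]
#   StandardOutput=append:/var/log/spinnaker/clouddriver/clouddriver.log
sudo systemctl restart clouddriver.service
A drop-in keeps your change separate from the packaged unit file, so it survives upgrades.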

How can I set up the kube-apiserver to allow kubectl from outside the cluster

I have a single-master, multi-node Kubernetes cluster going. It works great. However, I want to allow kubectl commands to be run from outside the master server. How do I run kubectl get node from my laptop, for example?
If I install kubectl on my laptop I get the following error:
error: client-key-data or client-key must be specified for kubernetes-admin to use the clientCert authentication method
How do I go about this? I have read through the Kubernetes authorisation documentation, but I must say it's a bit Greek to me. I am running version 1.10.2.
Thank you.
To extend @sfgroups' answer:
The configurations of all the Kubernetes clusters you manage are stored in the $HOME/.kube/config file. If you have that file on the master node,
the easy way is to copy it to the $HOME/.kube/config file on your local machine.
You can also keep it elsewhere and point to it with the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/config
or use the --kubeconfig command-line parameter instead.
Cloud providers often give you the possibility to download the config to your local machine from the
web interface or via their management CLI.
For GCP:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
For Azure:
az login -u yourazureaccount -p yourpassword
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
If the cluster was created using the kops utility, you can get the config file with:
kops export kubeconfig ${CLUSTER_NAME}
From your master, copy the /root/.kube directory to your laptop (e.g. to C:\Users\<username>\.kube on Windows).
kubectl will pick up the certificates from the config file automatically.
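For example (a minimal sketch, assuming you can SSH to the master as the ubuntu user and already have kubectl installed locally), copying the file and testing access looks like this:
mkdir -p ~/.kube
scp ubuntu@<master-ip>:~/.kube/config ~/.kube/config
kubectl get nodes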

Redis Monitor using Prometheus and Grafana

I have installed Redis on a server.
I wish to monitor Redis via Prometheus and Grafana.
I installed redis_exporter on the Redis server using Docker:
$ docker pull oliver006/redis_exporter
$ docker run -d --name redis_exporter -p 9121:9121 oliver006/redis_exporter
Checked that redis_exporter is running on the server.
Added the IP of the host running Redis and redis_exporter to the prometheus.yml file on the Grafana server:
- job_name: 'redis_exporter'
  target_groups:
    - targets: ['IP:9121']
      labels:
        alias: redis
Restarted Prometheus on the Grafana server.
Checked the status on the Prometheus status page.
It shows UP for the Redis server IP:9121 mentioned in prometheus.yml.
In Grafana:
I have imported the Prometheus Redis dashboard (https://grafana.com/dashboards/763).
But no data is loading in the dashboard, and the IP is not listed in the dashboard either.
Two things to check here:
Try this URL and see if you're able to get the metrics:
curl -s "<redis_exporter>:9121/scrape?target=redis://<redis_instance>:6379"
Update the Grafana dashboard variable from label_values(redis_up, addr) to label_values(redis_up, instance).
If you have set up password authentication for Redis, you need to supply the Redis password to redis_exporter:
sudo docker run -d --name redis_exporter -p 9121:9121 oliver006/redis_exporter --redis.addr=redis://10.0.0.175:6379 --redis.password=redis_password_here
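One more thing worth checking (an assumption about your Prometheus version): target_groups is an older spelling of what current Prometheus releases call static_configs, so on a recent Prometheus the equivalent job would look roughly like this:
- job_name: 'redis_exporter'
  static_configs:
    - targets: ['IP:9121']
      labels:
        alias: redis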

Azure ACS - Kubernetes inter-pod communication

I've made an ACS instance.
az acs create --orchestrator-type=kubernetes \
--resource-group $group \
--name $k8s_name \
--dns-prefix $kubernetes_server \
--generate-ssh-keys
az acs kubernetes get-credentials --resource-group $group --name $k8s_name
I then ran helm init and it provisioned the Tiller pod fine. I then ran helm install stable/redis and got a Redis deployment up and running (seemingly).
I can kubectl exec -it into the Redis pod, and can see it's binding on 0.0.0.0; I can log in with redis-cli -h localhost and redis-cli -h <pod_ip>, but not redis-cli -h <service_ip> (from kubectl get svc).
If I run up another pod (which is how I ran into this issue) I can ping redis.default, and the DNS resolves to the correct service IP, but I get no response. When I telnet <service_ip> 6379 or run redis-cli -h <service_ip>, it hangs indefinitely.
I'm at a bit of a loss as to how to debug further. I can't ssh into the node to see what docker is doing.
Also, I'd initially tried this with a standard Alpine-Redis image, so the Helm chart was a fallback. I tried it yesterday and the Helm one worked but the manual one didn't. Today (on a newly built ACS cluster) neither is working.
I'm going to spin up the cluster again to see if it reproduces consistently, but I'm pretty confident something fishy is going on.
PS - I have a VNet with an overlapping subnet 10.0.0.0/16 in a different region; when I go into the address range I do get a warning there that there is a clash. Could that affect it?
<EDIT>
Some new insight... It's something to do with Alpine-based images (which we've been aiming to use):
kubectl run a --image=nginx (which is Ubuntu-based), and I can shell in, install telnet, and connect to the Redis service.
But with, e.g., kubectl run c --image=rlesouef/alpine-redis, when I shell in, telnet does not work to the same Redis service.
</EDIT>
There was a similar issue, https://github.com/Azure/acs-engine/issues/539, that has been fixed recently. One thing to verify is whether nslookup works in the container.
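For example (a quick sketch; the pod name dnscheck is arbitrary), you can run that check from a throwaway Alpine pod:
kubectl run dnscheck --rm -it --restart=Never --image=alpine -- nslookup redis.default
If the lookup fails from the Alpine pod but works from an Ubuntu-based image such as nginx, that matches the Alpine-specific behaviour described in the question's edit.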

SSH into Kubernetes cluster running on Amazon

I created a 2-node Kubernetes cluster with:
KUBERNETES_PROVIDER=aws NUM_NODES=2 kube-up.sh
This shows the output as:
Found 2 node(s).
NAME STATUS AGE
ip-172-20-0-226.us-west-2.compute.internal Ready 57s
ip-172-20-0-227.us-west-2.compute.internal Ready 55s
Validate output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at https://52.33.9.1
Elasticsearch is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Grafana is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://52.33.9.1/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
I can see the instances in EC2 console. How do I ssh into the master node?
Here is the exact command that worked for me:
ssh -i ~/.ssh/kube_aws_rsa admin@<masterip>
kube_aws_rsa is the default key that gets generated; otherwise it is controlled with the AWS_SSH_KEY environment variable. For AWS, it is specified in the file cluster/aws/config-default.sh.
More details about the cluster can be found using kubectl.sh config view.
"Creates an AWS SSH key named kubernetes-. Fingerprint here is the OpenSSH key fingerprint, so that multiple users can run the script with different keys and their keys will not collide (with near-certainty). It will use an existing key if one is found at AWS_SSH_KEY, otherwise it will create one there. (With the default Ubuntu images, if you have to SSH in: the user is ubuntu and that user can sudo"
https://github.com/kubernetes/kubernetes/blob/master/docs/design/aws_under_the_hood.md
You should see the SSH key fingerprint locally in your SSH config, or set the AWS_SSH_KEY environment variable and recreate the cluster.
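For example (a sketch; the key path shown is hypothetical), setting the variable before recreating the cluster looks like this:
export AWS_SSH_KEY=$HOME/.ssh/my_cluster_key
KUBERNETES_PROVIDER=aws NUM_NODES=2 kube-up.sh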
If you bring up your cluster on AWS with kops and use CoreOS as your image, then the login name would be "core".