How is the Kubernetes API set up within Rancher?

As a proof of concept we are trying Kubernetes with Rancher.
Currently we have 10 machines in total for the environment:
3 machines for etcd (labels: etcd=true)
3 machines for the API (labels: orchestration=true)
4 machines as K8s worker nodes (labels: compute=true)
I need to evaluate how the Kubernetes API is set up in a Rancher environment. From the K8s Dashboard there is only the service "kubernetes" running in the "default" namespace on port 443.
I want to know which containers Rancher uses to run the API.
What HA model is used on the hosts with the label orchestration=true (master-master, master-slave)? What does the API communication flow look like? What can an external user get from it?
I would be grateful for any tips, links, and docs.
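For reference, one way to start this evaluation (a minimal sketch, assuming kubectl-level access and the Python kubernetes client; nothing here is Rancher-specific) is to read the endpoints behind that default "kubernetes" Service: each address it lists is an API server instance behind the port-443 Service, which also hints at whether the masters answer active-active.

```python
# Minimal sketch: list the endpoints behind the default "kubernetes" Service.
# Each address/port pair is a kube-apiserver instance; several addresses
# usually means the API is served active-active behind the Service VIP.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for the Rancher-managed cluster
v1 = client.CoreV1Api()

ep = v1.read_namespaced_endpoints(name="kubernetes", namespace="default")
for subset in ep.subsets or []:
    ports = [(p.name, p.port) for p in (subset.ports or [])]
    for addr in subset.addresses or []:
        print(addr.ip, ports)
```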

Related

How does Apache Ignite deploy in K8S?

On the Ignite website, I see deployment guides for Amazon EKS, Microsoft Azure Kubernetes Service, and Google Kubernetes Engine, showing how to deploy Ignite on each of the three platforms. If I run my own self-managed K8s cluster, can I still deploy Ignite? Is it the same as deploying Ignite on those three platforms?
Sure, just skip the initial EKS/Azure initialization steps, since you don't need them, and move directly to the K8s configuration.
Alternatively, you might try the Apache Ignite and GridGain k8s operator, which simplifies the deployment.

Should I register pods or a Kubernetes Service with Consul on a Kubernetes cluster?

I have deployed Ocelot and Consul on the Kubernetes cluster. Ocelot acts as the API gateway, which distributes requests to internal services, and Consul is in charge of service discovery and health checks. (BTW, I deployed Consul on the Kubernetes cluster following Consul's official documentation.)
My service (an ASP.NET Core Web API) is also deployed to the Kubernetes cluster with 3 replicas. I didn't create a Kubernetes Service object, as those pods will only be consumed by Ocelot, which is in the same cluster.
The architecture is something like below:
      ocelot
        |
      consul
      /    \
webapi1    webapi2 ...
 (pod)      (pod)  ...
Also, IMO, Consul can deregister a pod (webapi) when the pod dies, so I don't see any need to create a Kubernetes Service object.
Now my question: is it right to register each pod (webapi) with Consul when the pod starts up? Or should I create a Kubernetes Service object in front of those pods (webapi) and register that Service object with Consul?
A headless Service is the answer.
The Kubernetes environment is dynamic in nature.

"deregister a service when the pod is dead"

Yes.
Kubernetes Pods are mortal. They are born and when they die, they are not resurrected. While each Pod gets its own IP address, even those IP addresses cannot be relied upon to be stable over time. A Kubernetes Service is an abstraction which defines a logical set of Pods and provides a stable IP.
That's why it is recommended to use a headless Service, which fits this situation well. As mentioned in the first line of the docs:

Sometimes you don't need or want load-balancing and a single service IP. In this case, you can create "headless" services by specifying "None" for the cluster IP (.spec.clusterIP).
A headless Service doesn't get a ClusterIP. If you do an nslookup on the headless Service, it resolves to the IPs of all pods behind it; K8s takes care of adding and removing pod IPs under the headless Service as pods come and go. And I believe you can register/provide this headless Service name in Consul.
Please refer to this blog for more details.
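As a rough illustration of that (not from the original answer), here is a minimal sketch with the Python kubernetes client: it creates a headless Service over the webapi pods and then resolves the Service's cluster DNS name, which yields the individual pod IPs that a Consul registration would have to track. The name webapi-headless, the label app=webapi, and the default namespace are assumptions; adjust them to your deployment.

```python
# Sketch: a headless Service (clusterIP: None) over the webapi pods.
# The name webapi-headless and the label selector app=webapi are assumptions.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

headless = client.V1Service(
    metadata=client.V1ObjectMeta(name="webapi-headless"),
    spec=client.V1ServiceSpec(
        cluster_ip="None",            # "None" is what makes the Service headless
        selector={"app": "webapi"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)
v1.create_namespaced_service(namespace="default", body=headless)

# Inside the cluster, the Service's DNS name resolves to every ready pod IP,
# i.e. the same set of addresses you would otherwise register one by one.
import socket
ips = {info[4][0] for info in
       socket.getaddrinfo("webapi-headless.default.svc.cluster.local", 80)}
print(ips)
```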
UPDATE 1:
Please refer to this YouTube video; it may give you some idea. (Even I have yet to watch it!)

Kubernetes cluster internal load balancing

Playing a bit with Kubernetes (v1.3.2), I'm checking its ability to load-balance calls inside the cluster (3 on-premise CentOS 7 VMs).
If I understand the documentation at http://kubernetes.io/docs/user-guide/services/ ("Virtual IPs and service proxies" paragraph) correctly, and as I see in my tests, the load balancing is per node (VM). I.e., if I have a cluster of 3 VMs and deploy a service with 6 pods (2 per VM), the load balancing will only be between the pods on the same VM, which is somewhat disappointing.
At least this is what I see in my tests: calling the service from within the cluster using the service's ClusterIP load-balances between the 2 pods that reside on the same VM the call was sent from.
(BTW, the same goes when calling the service from outside the cluster (using NodePort): the request is load-balanced between the 2 pods that reside on the VM which was the request's target IP address.)
Is the above correct?
If yes, how can I make internal cluster calls load-balance among all 6 replicas? (Must I employ a load balancer like nginx for this?)
No, the statement is not correct. The load balancing should be across nodes (VMs). This demo demonstrates it: I have run it on a k8s cluster with 3 nodes on GCE. It first creates a service with 5 backend pods, then SSHes into one GCE node and hits the service's ClusterIP, and the traffic is load-balanced to all 5 pods.
I see you have another question, "not unique ip per pod", open; it seems you haven't set up your cluster network properly, which might have caused what you observed.
In your case, each node will be running a copy of the service proxy and will load-balance across the nodes.
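To check this on your own cluster, a small sketch along these lines (Python kubernetes client; my-service and default are placeholders for your Service and namespace) shows which node hosts each endpoint behind the ClusterIP. With a healthy cluster network you should see endpoints on all three VMs, and kube-proxy on any node picks among all of them, not only the local pods.

```python
# Sketch: group a Service's endpoints by the node that hosts each backing pod.
# "my-service" / "default" are placeholders for your Service and namespace.
from collections import defaultdict
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

endpoints = v1.read_namespaced_endpoints("my-service", "default")
pods_per_node = defaultdict(list)
for subset in endpoints.subsets or []:
    for addr in subset.addresses or []:
        pods_per_node[addr.node_name].append(addr.ip)  # node hosting this endpoint

for node, ips in pods_per_node.items():
    print(f"{node}: {ips}")
# Expected with 6 replicas on 3 VMs: two endpoint IPs listed under each node.
```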

ActiveMQ cluster discovery on Openshift v3 / Kubernetes

ActiveMQ's built-in cluster discovery mechanisms are basically based on multicast (LDAP excepted).
OpenShift v3 / Kubernetes does not support multicast well, as it can be unreliable or non-functional on public cloud infrastructure.
Is there any existing option to enable discovery for a network of ActiveMQ brokers within OpenShift v3?
I saw the jboss-openshift/openshift-ping project, which enables discovery for JGroups members on OpenShift. I am looking for an equivalent for ActiveMQ.
fabric8 is a project that has a number of value-adds for OpenShift 3 / Kubernetes platforms:
http://fabric8.io/
There is clustered ActiveMQ out of the box:
http://fabric8.io/guide/fabric8MQ.html
As the project is in development, you may get the best help on IRC in #fabric8 on Freenode; all the developers hang out there.

Are there any Ansible modules to manage OpenStack load balancers (OpenStack LBaaS)?

I want to define a pool in OpenStack LBaaS (Load Balancer as a Service) and then assign a VIP to it in order to create a load-balanced cluster of servers. I want to automate this using Ansible, and I am looking for Ansible modules that could help achieve it.
Ansible doesn't provide a core module for Neutron management yet, and it doesn't appear in the openstack-ansible GitHub project.
Checking the TODO for the openstack-ansible project shows that they are still planning to add Neutron LBaaS configuration.
Ansible 2.7 now provides what you need, if you have Octavia installed and enabled on your OpenStack cloud:
Add/delete a load balancer from an OpenStack cloud:
https://docs.ansible.com/ansible/latest/modules/os_loadbalancer_module.html#os-loadbalancer-module
Add/delete a listener for a load balancer from an OpenStack cloud:
https://docs.ansible.com/ansible/latest/modules/os_listener_module.html#os-listener-module
Add/delete a pool in the load-balancing service from an OpenStack cloud:
https://docs.ansible.com/ansible/latest/modules/os_pool_module.html#os-pool-module
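If you prefer to see what those modules drive underneath, here is a rough Python sketch against the same Octavia API using openstacksdk (which the OpenStack Ansible modules build on). The cloud name mycloud, the subnet private-subnet, and every resource name are placeholders, and attribute names can vary slightly between SDK releases, so treat this as a sketch rather than a reference.

```python
# Rough sketch: create the Octavia resources that os_loadbalancer, os_listener,
# and os_pool manage. "mycloud" and "private-subnet" are placeholders.
# In a real run, wait for each resource to reach provisioning_status ACTIVE
# before creating the next one.
import openstack

conn = openstack.connect(cloud="mycloud")          # credentials from clouds.yaml
subnet = conn.network.find_subnet("private-subnet")

lb = conn.load_balancer.create_load_balancer(
    name="web-lb", vip_subnet_id=subnet.id)

listener = conn.load_balancer.create_listener(
    name="web-listener", protocol="HTTP", protocol_port=80,
    load_balancer_id=lb.id)

pool = conn.load_balancer.create_pool(
    name="web-pool", protocol="HTTP", lb_algorithm="ROUND_ROBIN",
    listener_id=listener.id)

print(lb.vip_address)  # the VIP that fronts the pool members
```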