Playing around with Kubernetes (v1.3.2), I'm checking its ability to load-balance calls inside the cluster (3 on-premise CentOS 7 VMs).
If I understand the 'Virtual IPs and service proxies' section of http://kubernetes.io/docs/user-guide/services/ correctly, and as I see in my tests, the load balancing is per node (VM). I.e., if I have a cluster of 3 VMs and deploy a service with 6 pods (2 per VM), the load balancing will only be between the pods on the same VM, which is somewhat disappointing.
At least this is what I see in my tests: calling the service from within the cluster using the service's ClusterIP load-balances between the 2 pods that reside on the same VM the call was sent from.
(BTW, the same goes when calling the service from outside the cluster using a NodePort: the request load-balances between the 2 pods that reside on the VM whose IP address was the request's target.)
Is the above correct?
If yes, how can I make internal cluster calls load-balance across all 6 replicas? (Must I employ a load balancer like nginx for this?)
No, the statement is not correct. The load balancing should be across nodes (VMs). This demo demonstrates it: I ran it on a k8s cluster with 3 nodes on GCE. It first creates a service with 5 backend pods, then SSHes into one GCE node and hits the service's ClusterIP, and the traffic is load-balanced to all 5 pods.
I see you have another question, "not unique ip per pod", open; it seems you hadn't set up your cluster network properly, which might have caused what you observed.
In your case, each node will be running a copy of the service proxy, and it will load-balance across pods on all the nodes.
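To illustrate the cross-node behaviour, here is a rough Python sketch of what the iptables proxy mode does conceptually: every node programs rules for the full endpoint list of the Service, and a backend is picked at random regardless of which node its pod runs on. The pod IPs and node names below are made up for illustration.

```python
import random

# Hypothetical endpoint list for a Service with 6 pods spread over 3 nodes.
# kube-proxy on EVERY node programs rules for ALL of these endpoints,
# so a request entering any one node can reach a pod on any node.
endpoints = [
    ("10.244.1.2", "node-1"), ("10.244.1.3", "node-1"),
    ("10.244.2.2", "node-2"), ("10.244.2.3", "node-2"),
    ("10.244.3.2", "node-3"), ("10.244.3.3", "node-3"),
]

def pick_backend(endpoints):
    """Random selection, roughly what the iptables proxy mode does."""
    return random.choice(endpoints)

# Simulate many requests arriving at ONE node and count which nodes served them.
counts = {}
for _ in range(6000):
    ip, node = pick_backend(endpoints)
    counts[node] = counts.get(node, 0) + 1

print(counts)  # all three nodes receive a share of the traffic
```

If you only ever see two local pods answering, that points at a broken pod network rather than at the service proxy's design.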
Related
We have Infinispan with 7 nodes in the cluster.
We have a set of clients connecting to three nodes, as configured in hotrodclient.properties, and another set of clients connecting to the remaining nodes in the cluster.
Our objective is to distribute the load on the cluster. Is it OK to do it like this?
The Hot Rod client already performs load balancing by default. Based on the configured intelligence, it either performs round-robin balancing or directly contacts the server that owns the data to be accessed.
You have more information on the documentation page (Section 2.3 and 2.3.1).
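As a rough illustration, the round-robin strategy can be sketched like this (the server addresses are hypothetical, not taken from your hotrodclient.properties):

```python
from itertools import cycle

# Hypothetical server list a client was configured with.
servers = ["node1:11222", "node2:11222", "node3:11222"]

# Round-robin: each request goes to the next server in the list, wrapping around.
rotation = cycle(servers)

requests = [next(rotation) for _ in range(6)]
print(requests)
```

With topology-aware intelligence the client skips this rotation for keyed operations and goes straight to the owning node instead.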
I am using Redis Cluster on 3 Linux servers (CentOS 7). I have the standard configuration, i.e. 6 nodes: 3 master instances and 3 slave instances (each master has one slave) distributed across these 3 Linux servers. I am using this setup in my web application for data caching and HTTP response caching. My aim is to read from the primary and write to the secondary, i.e. read operations should not fail or be delayed.
Now I would like to ask: is it necessary to configure a load balancer in front of my 3 Linux servers so that my web application's requests can be distributed properly across the Redis servers? Or is the Redis cluster itself able to handle the load distribution?
If yes, please mention a reference link for configuring it. I have checked the official Redis Cluster documentation but it does not specify anything regarding load balancer setup.
If you're running Redis in "Cluster Mode" you don't need a load balancer. Your Redis client (assuming it's any good) should ask Redis for a map of which slots are on which nodes when your application starts up. It will then hash keys locally (in your application) and send each request directly to the node that owns the slot for that key, which avoids the extra call to Redis that would result in a MOVED response.
You should be able to configure your client to do reads on slaves and writes on masters, or to do both reads and writes on masters only. In addition to configuring your client, if you want to do reads on slaves, check out the READONLY command: https://redis.io/commands/readonly .
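To illustrate the local hashing, here is a sketch of the key-to-slot mapping Redis Cluster clients use: CRC16 (XMODEM variant) of the key modulo 16384, with hash-tag handling so keys sharing a `{...}` tag land in the same slot:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the checksum Redis Cluster uses for slot hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    """Map a key to one of the 16384 hash slots.

    If the key contains a non-empty {tag}, only the tag is hashed,
    so e.g. {user}.name and {user}.email share a slot.
    """
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key) % 16384

print(key_slot(b"foo"))                                  # some slot in 0..16383
print(key_slot(b"{user}.a") == key_slot(b"{user}.b"))    # True: same tag, same slot
```

A cluster-aware client does exactly this computation, then looks the slot up in its cached slot-to-node map and talks to that node directly.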
I have one application whose architecture is as per below:

users ----> HAProxy load balancer (TCP) ----> Application server 1
                                        ----> Application server 2
                                        ----> Application server 3

I am now able to scale the application servers out and in. But due to the high number of TCP connections (around 10,000), my HAProxy load balancer node's RAM gets full, so I am not able to accept any more TCP connections afterwards. So I need to add one more node for HAProxy itself. My question is: how do I scale out the HAProxy node itself, or is there any other solution for this?
I am deploying the application on AWS. Your solution is highly appreciated.
Thanks,
Mayank
Use AWS Route53 and create CNAMEs that point to your HAProxy instances.
For example:
Create a CNAME haproxy.example.com pointing to HAProxy instance 1.
Create a second CNAME haproxy.example.com pointing to HAProxy instance 2.
Both of your CNAMEs should use some sort of routing policy. The simplest is probably round robin, which simply rotates over your list of CNAMEs: when you look up haproxy.example.com you get the addresses of both instances, and the order of the IPs returned changes with each request. This distributes load evenly between your two instances.
There are many more options depending on your needs: http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
Overall this is pretty easy to configure.
A few side notes: you can also set up health checks and route traffic to the remaining healthy instance(s) if needed. If you're running at capacity with two instances, you might want to add a few more to be able to cope with an instance failing.
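A toy simulation of how round-robin DNS spreads lookups across instances (the addresses are hypothetical):

```python
from collections import deque

# Hypothetical addresses of two HAProxy instances behind haproxy.example.com.
records = deque(["54.0.0.10", "54.0.0.11"])

def resolve():
    """Return the full record set, rotating the order on each lookup.

    Clients typically use the first address in the answer, so rotating
    the order spreads new connections across the instances.
    """
    answer = list(records)
    records.rotate(-1)
    return answer

print(resolve())  # ['54.0.0.10', '54.0.0.11']
print(resolve())  # ['54.0.0.11', '54.0.0.10']
```

Note this only balances new lookups; clients that cache DNS answers will keep hitting the same instance until the TTL expires.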
We are working on a Mule HA cluster PoC with two separate server nodes. We were able to create the cluster. We developed a small dummy application with an HTTP endpoint, implementing the reliability pattern, which loops for a period and prints a value. When we deploy the application to the Mule HA cluster, even though it deploys successfully in the cluster and an application log file is generated on both servers, it runs on only one server. In the application we can point the HTTP endpoint at only one server IP. Could anyone please clarify the following queries?
In our case, why is the application running on only one server (whichever server's IP we point at is the one that runs it)?
Will Mule HA cluster create virtual IP?
If not, which IP do we need to configure in the application for HTTP endpoints?
Do we need a load balancer for HTTP endpoint requests? If so, which IP should be configured for the HTTP endpoint in the application, given that we don't have a virtual IP for the Mule HA cluster?
Really appreciate any help on this.
Environment: Mule EE ESB v 3.4.2 & Private cloud.
1) You are seeing one server processing requests because you are sending them to the same server each time.
2) Mule HA will not create a virtual IP.
3/4) You need to place a load balancer in front of the Mule nodes in order to distribute the load when using HTTP inbound endpoints. You do not need to decide which IP to place in the HTTP connector within the application; the load balancer will route each request to one of the nodes.
Creating a Mule cluster just allows your Mule applications to share information through shared memory (the VM transport and Object Store) and makes polling endpoints poll from only a single node. In the case of HTTP, the application will listen on each of the nodes, but you need to put a load balancer in front of your Mule nodes to distribute load. I recommend you read the High Availability documentation. But the more important question is: why do you need to create a cluster at all? You can have two separate Mule servers with your application deployed and have a load balancer send requests to them.
I'm trying to configure a master/slave setup using Apache ZooKeeper. I have only 2 application servers, on which I'm running ActiveMQ. As per the tutorial at
[1]: http://activemq.apache.org/replicated-leveldb-store.html we should have at least 3 ZooKeeper servers running. Since I have only 2 machines, can I run 2 ZooKeeper servers on 1 machine and the remaining one on the other? Also, can I run just 2 ZooKeeper servers and 2 ActiveMQ servers, respectively, on my 2 machines?
I will answer the ZooKeeper parts of the question.
You can run two zookeeper nodes on a single server by specifying different port numbers. You can find more details at http://zookeeper.apache.org/doc/r3.2.2/zookeeperStarted.html under Running Replicated ZooKeeper header.
Remember to use this for testing purposes only, as running two zookeeper nodes on the same server does not help in failure scenarios.
You can have just 2 zookeeper nodes in an ensemble. This is not recommended as it is less fault tolerant. In this case, failure of one zookeeper node makes the zookeeper cluster unavailable since more than half of the nodes in the ensemble should be alive to service requests.
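The majority rule behind this can be sketched in a few lines:

```python
def quorum_size(ensemble_size: int) -> int:
    """Minimum number of live nodes for the ensemble to serve requests:
    a strict majority of the configured ensemble."""
    return ensemble_size // 2 + 1

def survives_failures(ensemble_size: int, failed: int) -> bool:
    """True if the ensemble still has a quorum after `failed` nodes die."""
    return ensemble_size - failed >= quorum_size(ensemble_size)

# A 3-node ensemble tolerates 1 failure; a 2-node ensemble tolerates none,
# because 2 nodes still need 2 alive to form a majority.
print(survives_failures(3, 1))  # True
print(survives_failures(2, 1))  # False
```

This is also why stacking 2 of 3 ZooKeeper nodes on one machine doesn't help: losing that machine takes out 2 nodes at once, and the surviving node alone is below the quorum of 2.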
If you just want a PoC of ActiveMQ, one ZooKeeper server is enough:
zkAddress="192.168.1.xxx:2181"
You need at least 3 AMQ servers to validate your HA configuration. Yes, you can create 2 AMQ instances on the same node: http://activemq.apache.org/unix-shell-script.html
bin/activemq create /path/to/brokers/mybroker
Note: don't forget to change the port numbers in the activemq.xml and jetty.xml files.
Note: when stopping one broker, I noticed that all of them stop.