GlassFish 3 + Apache + Octopus LB

I am trying to build a GlassFish 3 cluster of 6 nodes, fronted by 6 Apache web servers that are balanced by Octopus LB. The load balancer lets me route requests according to the load reported by each node.
My setup is as follows:
client -> Octopus load balancer -> Apache web server -> GlassFish server.
GlassFish communicates with Apache via AJP.
The problem I have is that the sessions do not seem to be replicated across the entire cluster as they should be.
I have found some documentation about clustering with GlassFish v2 which says that in a cluster of 3 nodes, node 1 replicates its sessions to n2, n2 to n3, and n3 to n1, so that one or two nodes may fail and the sessions will still be there.
Is it the same for GlassFish 3?
When I started to build this cluster I assumed that each node would replicate its sessions to every other node in the cluster.
If session replication works as in version 2, I guess my setup will never work, because one request may be served by n1 and the next one by n5 (n1 does not replicate its sessions directly to n5), so I will lose my session data.
Any advice?

OK, since nobody has answered yet, I came back with some conclusions regarding my questions.
The sessions are replicated as I read: every node backs up its sessions to another node. Unlike GlassFish v2, where replication was performed in a ring (node1 -> node2 -> node3 -> node1), in version 3 each node replicates its sessions to another node chosen by a hash algorithm.
You also need a sticky-session load balancer, because if you get a session from node 1, node 1 replicates its sessions to node 3, and the second request is forwarded to node 2, then you have a problem. Sticky sessions fix that.
The setup I have implemented now is like this:
Octopus LB (I cannot make it work with sticky sessions) -> Apache HTTP (mod_jk load balancer with sticky sessions) -> GlassFish node. So even if Octopus sends me to the wrong Apache node, the Apache load balancer will route me to the correct GlassFish node based on the session cookie.
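To give an idea of what that mod_jk layer looks like, here is a minimal sketch; the worker names, host names and the /myapp context are made up, and the AJP port and each instance's jvmRoute (in GlassFish usually set as a system property) have to match your own setup for stickiness to work:

    # workers.properties (only two of the six instances shown)
    worker.list=gfbalancer
    worker.instance1.type=ajp13
    worker.instance1.host=gf-node1.example.com
    worker.instance1.port=8009
    worker.instance2.type=ajp13
    worker.instance2.host=gf-node2.example.com
    worker.instance2.port=8009
    worker.gfbalancer.type=lb
    worker.gfbalancer.balance_workers=instance1,instance2
    worker.gfbalancer.sticky_session=true

    # httpd.conf
    LoadModule jk_module modules/mod_jk.so
    JkWorkersFile conf/workers.properties
    JkLogFile logs/mod_jk.log
    JkMount /myapp/* gfbalancer

With sticky_session=true, mod_jk reads the jvmRoute suffix of the JSESSIONID cookie and keeps sending each client back to the instance that created its session.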
If you wonder why I use Octopus LB as the first balancer, it is because it also balances other services and PHP applications.
Hope this helps someone...

Related

Redirecting redis client to slave if master is under a large transaction (redis cluster) and vice versa

I am trying to implement a 3-master, 3-slave architecture with Redis Cluster. I want to redirect my client to a slave if the master is blocked (e.g. executing a MULTI/EXEC transaction), or redirect it to the master if the slave is synchronising the MULTI/EXEC transaction. Is there any way I can achieve this through Redis configuration, or do I need to implement this logic manually with the client library (redis-rb) I am using?
Thanks in advance.
As far as I know, there isn't any proxy or balancing in Redis Cluster that you can control. Redis Cluster nodes don't proxy commands to the node in charge of a given key; instead, they redirect clients to the right node serving that portion of the keyspace. So you can't control this from the configuration.
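You can see that redirection behaviour with redis-cli (ports and slot number here are only illustrative):

    $ redis-cli -p 7000
    127.0.0.1:7000> set foo bar
    (error) MOVED 12182 127.0.0.1:7002

    $ redis-cli -c -p 7000          # -c makes redis-cli follow the redirection
    127.0.0.1:7000> set foo bar
    -> Redirected to slot [12182] located at 127.0.0.1:7002
    OK

The -c flag mimics what a cluster-aware client library does for you transparently.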
Maybe your MULTI/EXEC case will be handled by the client library, because it knows about the Redis master node configuration.

Is it required to put a load balancer before a Redis cluster

I am using Redis Cluster on 3 Linux servers (CentOS 7). I have the standard configuration, i.e. 6 nodes: 3 master instances and 3 slave instances (each master has one slave), distributed across these 3 Linux servers. I am using this setup for my web application for data caching and HTTP response caching. My aim is to read from the primary and write to the secondary, i.e. read operations should not fail or be delayed.
Now I would like to ask: is it necessary to configure a load balancer in front of my 3 Linux servers so that my web application's requests to the Redis Cluster instances are distributed properly across these Redis servers? Or is Redis Cluster itself able to handle the load distribution?
If yes, please point me to a reference for configuring it. I have checked the official Redis Cluster documentation, but it does not specify anything about a load balancer setup.
If you're running Redis in "Cluster Mode" you don't need a load balancer. Your Redis client (assuming it's any good) should contact Redis for a list of which slots are on which nodes when your application starts up. It will hash keys locally (in your application) and send requests directly to the node which owns the slot for that key, which avoids the extra round trip to Redis that results in a MOVED response.
You should be able to configure your client to do reads on slaves and writes on masters - or to do both reads and writes only on masters. In addition to configuring your client, if you want to do reads on slaves, check out the READONLY command: https://redis.io/commands/readonly.
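As a quick illustration with redis-cli (ports, slot and key are illustrative): by default a replica redirects even read commands to its master, and READONLY tells it to serve reads for the slots its master owns:

    $ redis-cli -p 7004              # 7004 is assumed to be a replica here
    127.0.0.1:7004> get foo
    (error) MOVED 12182 127.0.0.1:7002
    127.0.0.1:7004> readonly
    OK
    127.0.0.1:7004> get foo
    "bar"

A cluster-aware client that supports replica reads typically issues READONLY on its replica connections for you.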

Kubernetes cluster internal load balancing

Playing a bit with Kubernetes (v1.3.2) I’m checking the ability to load balance calls inside the cluster (3 on-premise CentOS 7 VMs).
If I understand the 'Virtual IPs and service proxies' section of http://kubernetes.io/docs/user-guide/services/ correctly, and as I see in my tests, the load balancing is per node (VM). I.e., if I have a cluster of 3 VMs and deploy a service with 6 pods (2 per VM), the load balancing happens only between the pods on the same VM, which is somewhat disappointing.
At least this is what I see in my tests: calling the service from within the cluster using the service's ClusterIP load-balances between the 2 pods that reside on the same VM the call was sent from.
(BTW, the same goes for calling the service from outside the cluster (using NodePort): the request is then load-balanced between the 2 pods that reside on the VM whose IP address was the request target.)
Is the above correct?
If yes, how can I make internal cluster calls load-balance between all 6 replicas? (Must I employ a load balancer like nginx for this?)
No, the statement is not correct. The load balancing should be across nodes (VMs). This demo demonstrates it: I ran it on a k8s cluster with 3 nodes on GCE. It first creates a service with 5 backend pods, then SSHes into one GCE node and hits the service's ClusterIP, and the traffic is load-balanced to all 5 pods.
I see you have another question, "not unique ip per pod", open; it seems you haven't set up your cluster network properly, which might have caused what you observed.
In your case, each node will be running a copy of the service proxy (kube-proxy), and it will load-balance across the nodes.
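A quick way to check this yourself is to look at the service's endpoints: kube-proxy on every node programs forwarding rules for all of them, not just for the pods on that node. For example (service name, label and pod IPs below are made up):

    # service definition
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      selector:
        app: my-app
      ports:
      - port: 80
        targetPort: 8080

    $ kubectl get endpoints my-service
    NAME         ENDPOINTS                                                   AGE
    my-service   10.244.1.5:8080,10.244.1.6:8080,10.244.2.7:8080 + 3 more... 1m

If the endpoint list is incomplete, or the pod IPs are not unique across nodes, the pod network (e.g. flannel) is what needs fixing, as the answer above suggests.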

Apache tomcat deployment with load balancer

I am trying to come up with a simple procedure for production deployments. I have 2 Tomcat nodes, fronted by 2 Apache nodes, with a load balancer on top of the Apache nodes. For some reason I won't be able to do parallel deployments on the Tomcats. I am trying to use balancer-manager during deployment, in which I make sure I drain Tomcat node 1 before applying the application changes. I want to validate the changes on the Tomcat node before I put it back into the live state. I know that, at this point, I can take Apache node 1 offline from the load balancer, change balancer-manager to route requests only to Tomcat node 1, and point all my requests at Apache node 1 to validate before going live. I see this as a complex procedure to implement and I want to know if there is a better way to achieve this. Just an FYI: we load-balance requests between the two Apache nodes at the F5, and we load-balance requests between the 2 Tomcat nodes using Apache.
Any help?
There are three ways I'm aware of:
Use a service registry/service discovery tool like consul.io
Implement a health check in your application that you can control at runtime. The F5 will then poll the health check resource and decide whether the node is healthy. Right before the deployment you flip the node's health state to unhealthy, and the node is removed from the load balancing after a couple of seconds.
Use blue/red deployments: every host carries two Tomcats (the red and the blue one), and your Apache points either to the red or to the blue one. Say Apache currently points to the blue Tomcat: you deploy to the red Tomcat and make sure your app has started, then switch the Apache config to point to the red one and do a graceful restart - no requests are dropped. The blue Tomcat is now inactive, and the next time you deploy, you deploy to the blue one and repeat the procedure (a minimal sketch of the Apache switch follows below).
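A minimal sketch of the blue/red switch on the Apache side, assuming mod_proxy/mod_proxy_ajp with the blue Tomcat on AJP port 8009 and the red one on 8010 (ports, paths and file names are made up):

    # httpd.conf
    Include conf/active-colour.conf      # symlink to either blue.conf or red.conf

    # conf/blue.conf
    ProxyPass /app ajp://localhost:8009/app

    # conf/red.conf
    ProxyPass /app ajp://localhost:8010/app

    # after deploying to the inactive colour, switch and reload gracefully:
    ln -sfn red.conf conf/active-colour.conf && apachectl graceful

The graceful restart lets requests that are in flight on the old colour finish before new requests go to the freshly deployed Tomcat.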
I have used all three methods in production at large ISPs. It depends on your infrastructure, your application, and how you want to deal with the HA issue.
HTH, Mark

Failing over with single Replication Group on ElastiCache Redis

I'm testing out ElastiCache backed by Redis with the following specs:
Using Redis 2.8, with Multi-AZ
Single replication group
1 master node in us-east-1b, 1 slave node in us-east-1c, 1 slave node in us-east-1d
The writing part of the application uses the master node's endpoint directly (primary-node.use1.cache.amazonaws.com)
The read-only part of the application points to a custom endpoint (readonly.redis.mydomain.com) configured in HAProxy, which in turn points to the two read slave endpoints (readslave1.use1.cache.amazonaws.com and readslave2.use1.cache.amazonaws.com)
Now let's say the primary node (master) fails in us-east-1b.
From what I understand, if the master instance fails, I won't have to change the URL of the endpoint used for writing to Redis (primary-node.use1.cache.amazonaws.com), but from there I still have the following questions:
Do I have to change the endpoint names for the read only slaves?
How long until the missing slave is added into the pool?
If there's anything else I'm missing, I'd appreciate the advice/information.
Thanks!
If you are using ElastiCache, you should make use of the "Primary Endpoint" provided by AWS.
That endpoint is actually backed by Route 53: if the primary (master) Redis goes down, then since you have Multi-AZ enabled, it will automatically fail over to one of the read replicas (slaves).
In that case, you don't need to modify your Redis endpoint at all.
I don't know why you have such a design; it seems you only want to write to the master but always read from the slaves.
For the HAProxy part, you should include a TCP check for ALL 3 Redis nodes, using their "Read Endpoint".
In HAProxy you can check whether a node is currently a SLAVE; if it is, HAProxy should direct the read traffic to it.
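A hedged haproxy.cfg sketch of that role check (the host names are the ones from the question and should be the individual node endpoints; the port and the check sequence are the common pattern for detecting a Redis replica, nothing ElastiCache-specific):

    listen redis-readonly
        bind *:6379
        mode tcp
        balance roundrobin
        option tcp-check
        tcp-check send PING\r\n
        tcp-check expect string +PONG
        tcp-check send info\ replication\r\n
        tcp-check expect string role:slave
        tcp-check send QUIT\r\n
        tcp-check expect string +OK
        server node1 primary-node.use1.cache.amazonaws.com:6379 check inter 2s
        server node2 readslave1.use1.cache.amazonaws.com:6379 check inter 2s
        server node3 readslave2.use1.cache.amazonaws.com:6379 check inter 2s

Only the nodes currently reporting role:slave pass the check, so after a failover the demoted old master rejoins the read pool automatically and the newly promoted master drops out of it.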
Note that at the application layer, if your Redis driver doesn't support auto-reconnect, your application will fail to connect to the new master node.
In addition to auto-reconnect, since AWS uses Route 53 DNS to do the failover, some libraries will NOT do a DNS lookup again, which means they keep pointing at the OLD IP, i.e. the old master.
Using HAProxy can solve this problem.