Apache Ignite server node health check

I am working on launching an Apache Ignite (v2.13.0) cluster in AWS. I am targeting Amazon ECS for container management, with the container nodes running on EC2 instances.
I am fronting these instances with an Application Load Balancer (ALB) and using the Apache Ignite aws-ext module's TcpDiscoveryAlbIpFinder to find the other nodes in the cluster. As part of setting up an ALB in AWS, you add a listener that routes traffic to registered healthy targets, which are grouped into a target group. The nodes in a target group are periodically tested via a health check: a request is sent to a configured port and path, and health is determined from the returned status codes.
My question: is there an out-of-the-box path on an Apache Ignite server that I should use for health checks?
I looked online for documentation on how others have set this up, but came up dry.
Cheers!

You can use the REST API's PROBE and VERSION commands to implement these checks.
Example usage: https://www.gridgain.com/docs/latest/installation-guide/kubernetes/amazon-eks-deployment
https://www.gridgain.com/docs/latest/developers-guide/restapi#probe

Most people use the REST API for health checks.

readinessProbe:
    with auth: http://localhost:8080/ignite?cmd=probe&ignite.login=ignite&ignite.password=ignite
    without auth: http://localhost:8080/ignite?cmd=probe
livenessProbe:
    with auth: http://localhost:8080/ignite?cmd=version&ignite.login=ignite&ignite.password=ignite
    without auth: http://localhost:8080/ignite?cmd=version
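For the ALB setup in the question, you can point the target group's health check at the same probe endpoint. A minimal CloudFormation sketch of the resource, assuming the ignite-rest-http module is on the classpath so the REST API listens on port 8080 (the VpcId and resource name are placeholders):

    IgniteTargetGroup:
      Type: AWS::ElasticLoadBalancingV2::TargetGroup
      Properties:
        VpcId: vpc-0123456789abcdef0       # placeholder
        Protocol: HTTP
        Port: 8080
        TargetType: instance
        HealthCheckProtocol: HTTP
        HealthCheckPort: "8080"
        HealthCheckPath: /ignite?cmd=probe  # returns 200 once the local node has started
        HealthCheckIntervalSeconds: 15
        HealthyThresholdCount: 2
        UnhealthyThresholdCount: 2
        Matcher:
          HttpCode: "200"

The Matcher pins success to HTTP 200, which is what cmd=probe returns once the node has started, per the REST API docs linked above.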

Related

Docker Swarm - load-balancing to closest node first

I'm trying to optimize Docker Swarm load balancing so that it routes requests to services with the following priority:
1) Same machine
2) Same DC
3) Anywhere else
Given the following setup:
DataCenter-I
    Server-I
        Nginx:80
    Server-II
        Nginx:80
        Worker
DataCenter-II
    Server-I
        Nginx:80
        Worker
If DataCenter-I::Server-II::Worker issues an API request over port 80, the desired behavior is:
1) Check whether any tasks (containers) are mapped to port 80 on the local server (DataCenter-I::Server-II).
2) Fall back to the local data center (i.e. DataCenter-I::Server-I).
3) Fall back to anywhere else in the cluster (i.e. DataCenter-II::Server-I).
This is very useful when using workers, where response time doesn't matter but bandwidth does.
Please advise,
Thanks!
According to a question I asked before, Docker Swarm currently only uses round-robin, with no indication that it will become pluggable yet.
However, Nginx Plus supports the least_time load-balancing method, and I think a similar open-source module exists; it is close to what you need, with perhaps the least effort.
P.S.: Don't run Nginx as a Swarm service. Instead, run Nginx with regular Docker or docker-compose in the same Docker network as your app (a sketch of that setup follows). You don't want Docker Swarm load-balancing your load balancer.
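A minimal docker-compose sketch of that P.S. (the image and network names are placeholders; it assumes the app's overlay network was created with the --attachable flag so a non-Swarm container can join it):

    # docker-compose.yml: a standalone Nginx, NOT a Swarm service,
    # joined to the app's overlay network so it can reach tasks directly
    version: "3.8"
    services:
      nginx:
        image: nginx:alpine
        ports:
          - "80:80"          # published on the host, outside Swarm's routing mesh
        networks:
          - app_net
    networks:
      app_net:
        external: true       # created beforehand, e.g.:
                             # docker network create -d overlay --attachable app_net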

Kubernetes cluster internal load balancing

Playing a bit with Kubernetes (v1.3.2), I'm checking its ability to load-balance calls inside the cluster (3 on-premise CentOS 7 VMs).
If I understand the 'Virtual IPs and service proxies' section of http://kubernetes.io/docs/user-guide/services/ correctly, and as I see in my tests, the load balancing is per node (VM). That is, if I have a cluster of 3 VMs and deploy a service with 6 pods (2 per VM), requests are balanced only between the pods on the same VM, which is somewhat disappointing.
At least this is what I see in my tests: calling the service from within the cluster using the service's ClusterIP load-balances between the 2 pods that reside on the same VM the call was sent from.
(BTW, the same goes when calling the service from outside the cluster (using NodePort): the request is load-balanced between the 2 pods that reside on the VM that was the request's target IP address.)
Is the above correct?
If yes, how can I make internal cluster calls load-balance between all 6 replicas? (Must I employ a load balancer like nginx for this?)
No, the statement is not correct. The load balancing should be across nodes (VMs). This demo demonstrates it: I ran it on a k8s cluster with 3 nodes on GCE. It first creates a service with 5 backend pods, then SSHes into one GCE node and hits the service's ClusterIP, and the traffic is load-balanced to all 5 pods.
I see you have another question, "not unique ip per pod", open; it seems you hadn't set up your cluster network properly, which may have caused what you observed.
In your case, each node will be running a copy of the service proxy (kube-proxy), which load-balances across the nodes.
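To make that concrete, here is a minimal sketch of such a Service (all names are placeholders). The kube-proxy on each node programs the ClusterIP so that traffic is balanced across every pod matching the selector, on whichever VM it runs:

    apiVersion: v1
    kind: Service
    metadata:
      name: demo-svc
    spec:
      selector:
        app: demo            # matches all 6 replicas, across all 3 VMs
      ports:
        - port: 80           # the port callers use via the ClusterIP
          targetPort: 8080   # the container port on each pod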

Mule HA Cluster - Application configuration issue

We are working on a Mule HA cluster PoC with two separate server nodes, and we were able to create a cluster. We developed a small dummy application with an HTTP endpoint and a reliability-pattern implementation that loops for a period and prints a value. When we deploy the application to the Mule HA cluster, it deploys successfully and an application log file is generated on both servers, but it runs on only one server. In the application we can point the HTTP endpoint at only one server's IP. Could anyone please clarify the following queries?
1) In our case, why is the application running on only one server (whichever server the configured IP points to)?
2) Will the Mule HA cluster create a virtual IP?
3) If not, which IP do we need to configure in the application for HTTP endpoints?
4) Do we need a load balancer for HTTP-based endpoint requests? If so, which IP should be configured for the HTTP endpoint in the application, given that there is no virtual IP for the Mule HA cluster?
Really appreciate any help on this.
Environment: Mule EE ESB v3.4.2, private cloud.
1) You are seeing one server processing requests because you are sending them to the same server each time.
2) Mule HA will not create a virtual IP.
3/4) You need to place a load balancer in front of the Mule nodes to distribute the load when using HTTP inbound endpoints. You do not need to decide which IP to place in the HTTP connector within the application; the load balancer will route each request to one of the nodes.
Creating a Mule cluster just allows your Mule applications to share information through its shared memory (the VM transport and Object Store) and makes polling endpoints poll from only a single node. In the case of HTTP, the application listens on each of the nodes, but you need to put a load balancer in front of your Mule nodes to distribute the load. I recommend reading the High Availability documentation. But the more important question is why you need to create a cluster at all: you can have two separate Mule servers with your application deployed and a load balancer sending requests to them.

Dispatching requests to a specific JBoss instance in a cluster with Apache mod_cluster 1.1?

I am trying to implement a JBoss AS7 clustered environment with mod_cluster as the load balancer. Can anyone explain how to dispatch requests to a specific JBoss node based on URL parameters using mod_cluster?
Could you elaborate on what you actually need to achieve?
mod_cluster is a smart load balancer that dynamically registers worker nodes and their contexts, so, for instance, if I had this cluster:
worker_0, contexts: houby/, ocet/, houbyaocet/
worker_1, contexts: houby/, ocet/, devtest/
worker_2, contexts: houby/, ocet/, devtest/
and this mod_cluster httpd load balancer:
http://mycompany.example.com
then requests to either houby/ or ocet/ could be balanced among all workers, whereas devtest/ could be handled only by worker_1 and worker_2. Any request to houbyaocet/ will end up on worker_0, because that is the only worker that registered this context.
You don't configure this logic yourself; it's done automatically by mod_cluster. Worker nodes send special configuration messages to the balancer saying which applications they have deployed, whether they are overloaded, whether they are shutting down or undeploying, etc.
HTH
Karm

Glassfish failover without load balancer

I have a Glassfish v2u2 cluster with two instances and I want to fail over between them. Every document I have read on this subject says that I should use a load balancer in front of Glassfish, like Apache httpd. In that scenario failover works, but I again have a single point of failure.
Is Glassfish able to do that failover without a load balancer in front?
The way we solved this is to have two IP addresses that both respond to the URL. The DNS provider (DNS Made Easy) round-robins between the two. Setting the TTL low ensures that if one server fails, the other will answer: when one server stops responding, DNS Made Easy only returns the other host for this URL. You have to trust the DNS provider, but you can buy service with extremely high availability of the DNS lookup.
As for high availability, you can have a cluster setup that allows session replication, so the user won't lose more than, potentially, the one request that fails.
Hmm, JBoss can do failover without a load balancer according to the docs (http://docs.jboss.org/jbossas/jboss4guide/r4/html/cluster.chapt.html), Chapter 16.1.2.1, "Client-side interceptor".
As far as I know, a Glassfish cluster provides in-memory session replication between nodes. If I use Sun's Glassfish Enterprise Application Server, I can use HADB, which promises 99.999% availability.
No, you can't do it at the application level.
Your options are:
Round-robin DNS: expose both your servers to the internet and let the client do the load balancing. This is quite attractive, as it will definitely enable failover.
Use a different layer-3 load-balancing system, such as "Windows Network Load Balancing", "Linux Network Load Balancing", or the one I wrote called "Fluffy Linux cluster".
Use a separate load balancer that has a failover hot spare.
In any of these cases you still need to ensure that your database, session data, etc. are available and in sync between the members of your cluster, which in practice is much harder.