Load balancing in JBoss Fuse 6.2 - fuseesb

I have installed JBoss Fuse 6.2 on two Unix boxes and deployed the camel-cxf-contract-first application bundle on both servers.
The first server is the master box and the second one is the slave box. How do you configure this setup? If the master node
goes down, requests should automatically be forwarded to the slave box. How do we set this up in the latest JBoss Fuse?

You need to create a Fabric for this. Once you have a Fabric and you have a CXF application deployed twice or more, you can use the HTTP gateway component from Fuse to load-balance any incoming requests over the available endpoints.
See here for documentation about the HTTP gateway: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_Fuse/6.2/html/Fabric_Guide/Gateway.html
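As a minimal sketch of the Karaf console steps, assuming your CXF application profile is already deployed to two containers (the container name below is hypothetical, and the ZooKeeper URL placeholder must be replaced with your master's actual URL):

# On the master box: create the fabric
fabric:create --clean --wait-for-provisioning

# On the slave box: join it to the fabric
fabric:join <zookeeper-url-of-master>

# Run a container with the gateway-http profile; it discovers the CXF
# endpoints registered in the fabric and load-balances requests across them
fabric:container-create-child --profile gateway-http root gateway1

Clients then call the gateway's address instead of either box directly; if one node goes down, the gateway routes requests to the remaining endpoint.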

Related

Apache Ignite server node health check

I am working on launching an Apache Ignite (v2.13.0) cluster in AWS. I am targeting Amazon ECS for container management and running these container nodes on EC2 instances.
I am fronting these instances with an Application Load Balancer and using the Apache Ignite aws-ext module's TcpDiscoveryAlbIpFinder to find other nodes in the cluster. As part of setting up an ALB in AWS, you add a listener that routes traffic to a registered healthy target. These targets are represented by a target group, and the nodes in the target group are tested periodically via a health check: it sends a request to a configured port and path and determines health from the returned status codes.
My question is whether there is an out-of-the-box path on an Apache Ignite server that I should use for health checks.
I looked online for documentation on how others have set this up, but came up dry.
Cheers!
You can use the PROBE/VERSION commands to implement these checks.
Example usage: https://www.gridgain.com/docs/latest/installation-guide/kubernetes/amazon-eks-deployment
https://www.gridgain.com/docs/latest/developers-guide/restapi#probe
Most people use the REST API for health checks.
readinessProbe:
- with auth: http://localhost:8080/ignite?cmd=probe&ignite.login=ignite&ignite.password=ignite
- without auth: http://localhost:8080/ignite?cmd=probe
livenessProbe:
- with auth: http://localhost:8080/ignite?cmd=version&ignite.login=ignite&ignite.password=ignite
- without auth: http://localhost:8080/ignite?cmd=version
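For an ALB target group, you would point the health check at the same path and the REST port (8080 by default, once the ignite-rest-http module is enabled). A quick manual verification, with a hypothetical host name (the exact JSON body may vary by version):

# Readiness-style check; a healthy, started node returns HTTP 200
# with "successStatus":0 in the JSON body
curl "http://ignite-node.example.com:8080/ignite?cmd=probe"

# Liveness-style check; returns the node's Ignite version
curl "http://ignite-node.example.com:8080/ignite?cmd=version"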

Mule HA Cluster - Application configuration issue

We are working on a Mule HA cluster PoC with two separate server nodes, and we were able to create the cluster. We developed a small dummy application with an HTTP endpoint and a reliability-pattern implementation, which loops for a period and prints a value. When we deploy the application to the Mule HA cluster, it deploys successfully and an application log file is generated on both servers, but it runs on only one server. In the application we can point the HTTP endpoint at only one server IP. Could anyone please clarify the following queries?
1) In our case, why is the application running on only one server (whichever server the IP points to gets executed)?
2) Will the Mule HA cluster create a virtual IP?
3) If not, which IP do we need to configure in the application for HTTP endpoints?
4) Do we need a load balancer for HTTP endpoint requests? If so, which IP should be configured for the HTTP endpoint in the application, given that we don't have a virtual IP for the Mule HA cluster?
Really appreciate any help on this.
Environment: Mule EE ESB v3.4.2, private cloud.
1) You are seeing one server processing requests because you are sending them to the same server each time.
2) Mule HA will not create a virtual IP.
3/4) You need to place a load balancer in front of the Mule nodes in order to distribute the load when using HTTP inbound endpoints. You do not need to decide which IP to place in the HTTP connector within the application; the load balancer will route each request to one of the nodes.
Creating a Mule cluster will just allow your Mule applications to share information through shared memory (the VM transport and Object Store) and make polling endpoints poll from only a single node. In the case of HTTP, the application will listen on each of the nodes, but you need to put a load balancer in front of your Mule nodes to distribute load. I recommend you read the High Availability documentation. But the more important question is why you need to create a cluster at all: you can have two separate Mule servers with your application deployed and a load balancer sending requests to them.
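As a sketch of that load-balancer layer, a minimal HAProxy configuration for two hypothetical Mule nodes could look like the following (inside the application, bind the HTTP inbound endpoint to 0.0.0.0 so it listens on every node):

frontend mule_http
    bind *:80
    mode http
    default_backend mule_nodes

backend mule_nodes
    mode http
    balance roundrobin
    # Hypothetical node addresses; 8081 is the app's HTTP endpoint port
    server mule1 10.0.0.11:8081 check
    server mule2 10.0.0.12:8081 check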

Creating AMQ network of broker clusters on JBoss Fuse 6.2, without fabric

I want to create two broker clusters connected by a network of brokers in JBoss Fuse 6.2; each cluster has 2 master/slave pairs.
It's a small cluster, so we don't intend to use Fabric/Zookeeper; everything will be statically configured, no auto discovery.
Questions
Is it possible to use fabric profiles to build the topology, but avoid using fabric at runtime?
Can we use Git, or something similar, for centrally managing container config files, again without fabric?
We tried creating profiles using fabric:mq-create, but the command is not available unless a fabric is first created, which defeats the purpose.
No, fabric profiles require using fabric. You can use Git to store files, but you cannot have JBoss Fuse automatically use it the way it does with fabric; you would need to use Git manually.
The AMQ broker in JBoss Fuse is just standard Apache ActiveMQ, so you can configure it manually/statically as a network of brokers. It is just not very easy to do if you haven't done it before.
See the JBoss A-MQ documentation, as that covers the broker: http://www.jboss.org/products/amq/overview/
For example: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_A-MQ/6.2/html/Using_Networks_of_Brokers/index.html
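As a rough sketch of the static approach, each broker's activemq.xml in cluster A could carry a network connector pointing at cluster B's master/slave pair (broker names and hosts here are hypothetical; the masterslave: URI connects to whichever of the listed brokers is currently the active master):

<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA1">
  <networkConnectors>
    <!-- Static link to cluster B's master/slave pair; no discovery needed -->
    <networkConnector name="A-to-B"
        uri="masterslave:(tcp://brokerB1:61616,tcp://brokerB2:61616)"
        duplex="true"/>
  </networkConnectors>
  <transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
  </transportConnectors>
</broker>

Each master/slave pair itself would share a persistence store (for example a shared file system kahadb) so that only one of the pair is active at a time.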

How does a GlassFish cluster find active IIOP endpoints?

I have a question that I have been researching without any result. The GlassFish documentation says:
If the GlassFish Server instance on which the application client is
deployed participates in a cluster, the GlassFish Server finds all
currently active IIOP endpoints in the cluster automatically. However,
a client should have at least two endpoints specified for
bootstrapping purposes, in case one of the endpoints has failed.
but I am asking myself how this list is created.
I have done some tests with a stand-alone client that runs in its own JVM and makes RMI calls to an application deployed in a GlassFish cluster. I can see from the logs that the IIOP endpoint list is completed automatically and set as the com.sun.appserv.iiop.endpoints system property, but if I stop a server instance or start another one during the execution of the client, the list remains the one that was created when the JVM started.
GlassFish clustering is managed by the GMS (Group Management Service), which usually uses UDP multicast but can use TCP where that is not available.
See section 4 "Administering GlassFish Server Clusters" in the HA Administration Guide (PDF)
The Group Management Service (GMS) enables instances to participate in a cluster by
detecting changes in cluster membership and notifying instances of the changes. To
ensure that GMS can detect changes in cluster membership, a cluster's GMS settings
must be configured correctly.
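To illustrate the bootstrapping side, a stand-alone client can seed the ORB with two endpoints itself. A minimal sketch, with hypothetical host names and JNDI name (3700 is the default IIOP port):

import javax.naming.InitialContext;

public class ClusterClient {
    public static void main(String[] args) throws Exception {
        // Two bootstrap endpoints, in case one has failed; the full
        // endpoint list is then discovered from the cluster, but (as
        // observed above) it is not refreshed while this JVM runs.
        System.setProperty("com.sun.appserv.iiop.endpoints",
                "node1.example.com:3700,node2.example.com:3700");

        InitialContext ctx = new InitialContext();
        Object bean = ctx.lookup("java:global/myapp/MyRemoteBean"); // hypothetical lookup
        System.out.println("Looked up: " + bean);
    }
}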

WSO2 AM 1.6: Publish service to cluster

We create and publish a service through the web console (/publisher). The environment is a 2-node WSO2 AM cluster with a load balancer (HAProxy) on top of it.
When we invoke the service via this load balancer (using SoapUI), one request succeeds, the next one fails, and so on.
IMHO the cluster configuration should be correct: I can see the published service on both nodes if I open the /publisher app on each node.
axis2.xml:
- Hazelcast clustering is enabled
- Using multicast
master-datasources.xml:
- pointing to an Oracle database
api-manager.xml:
- pointing to the JDBC string in master-datasources.xml
Does anyone have some tips?
It seems that the Synapse artifact is not deployed on the two gateways correctly. Can you go to /repository/deployment/server/synapse-configs/default/api and see whether both nodes have an XML file for the published API? If you haven't enabled the Deployment Synchronizer, the artifact will only be created on one node.
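For reference, the Deployment Synchronizer is switched on in each node's carbon.xml. A minimal sketch assuming an SVN-backed repository (the URL and credentials below are hypothetical):

<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>       <!-- true on the node that creates artifacts -->
    <AutoCheckout>true</AutoCheckout>   <!-- workers pick up committed artifacts -->
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>http://svn.example.com/wso2/depsync</SvnUrl>
    <SvnUser>wso2user</SvnUser>
    <SvnPassword>wso2pass</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>

With this in place, an API published on one node is committed to the repository and checked out by the other node, so both gateways end up with the same Synapse artifact.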