Mule HA Cluster - Application configuration issue

We are working on a Mule HA cluster PoC with two separate server nodes and were able to create the cluster. We developed a small dummy application with an HTTP endpoint and a reliability pattern implementation, which loops for a period and prints a value. When we deploy the application to the Mule HA cluster, it deploys successfully and an application log file is generated on both servers, but it runs on only one server. In the application we can point the HTTP endpoint to only one server's IP. Could anyone please clarify the following queries?
1) In our case, why is the application running on only one server (whichever server's IP the endpoint points to is the one that executes)?
2) Will the Mule HA cluster create a virtual IP?
3) If not, which IP do we need to configure in the application for HTTP endpoints?
4) Do we need a load balancer for HTTP endpoint requests? If so, which IP should be configured for the HTTP endpoint in the application, given that we don't have a virtual IP for the Mule HA cluster?
Really appreciate any help on this.
Environment: Mule EE ESB v3.4.2 and private cloud.

1) You are seeing one server processing requests because you are sending them to the same server each time.
2) Mule HA will not create a virtual IP.
3/4) You need to place a load balancer in front of the Mule nodes in order to distribute the load when using HTTP inbound endpoints. You do not need to decide which node's IP to place in the HTTP connector within the application; the load balancer will route each request to one of the nodes.
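As a minimal sketch (Mule 3.x HTTP transport syntax; the port and path are made-up values), you can bind the inbound endpoint to all interfaces so the same application listens on whichever IP each cluster node has, and point the load balancer at the nodes' real IPs:

    <flow name="httpListener">
        <!-- 0.0.0.0 binds to all interfaces, so no node-specific IP is hardcoded -->
        <http:inbound-endpoint host="0.0.0.0" port="8081" path="poc"
                               exchange-pattern="request-response"/>
        <logger level="INFO" message="#[message.payload]"/>
    </flow>

Clients then use the load balancer's single address, and it forwards each request to node1:8081 or node2:8081.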

Creating a Mule cluster just allows your Mule applications to share information through shared memory (the VM transport and Object Store) and makes polling endpoints poll from only a single node. In the case of HTTP, the application will listen on each of the nodes, but you need to put a load balancer in front of your Mule nodes to distribute the load. I recommend reading the High Availability documentation. But the more important question is: why do you need to create a cluster at all? You can have two separate Mule servers with your application deployed to both and a load balancer sending requests to them.

Related

Websocket connection over Microsoft network load balancer (NLB)

I have an on-premises NLB with 4 nodes, and on each of those nodes there is an app that communicates with the client over websockets using SignalR.
I have noticed through testing that at some point the client stops receiving messages over the websocket; there are no errors, the socket just stops receiving messages.
My suspicion is that the client connects to node 1 through the NLB, but when node 1 has too much traffic the NLB switches the client to node 2, and since the client isn't registered on node 2 the messages stop coming and the client doesn't notice the change.
My questions are:
Am I correct to assume that, to keep the default NLB configuration, I would have to add a Redis backplane that keeps the list of all connections and allows each node to communicate with the client regardless of which node first received the connection? (Microsoft's suggested solution: https://learn.microsoft.com/en-us/aspnet/core/signalr/scale?view=aspnetcore-6.0)
Is there a way to change the NLB configuration to allow specific apps to behave differently than others? For example, can I set up the NLB so that all of the SignalR websocket traffic goes to a specific node, while the rest of the apps on the servers behave as they do with the default configuration?
If I were to move to the cloud, would I face the same issue, or is this something that happens only with Microsoft's on-premises NLB? Could it happen in Kubernetes or AWS as well? Does the cloud NLB operate at L7 and the on-premises one at L3?

spring.rabbitmq.addresses: load balancer address vs. list of node addresses

I can't find in the recent documentation the pros and cons of using a load balancer instead of a list of node addresses when setting
spring.rabbitmq.addresses
Why is it better to use the list of nodes instead of a load balancer?
My RabbitMQ cluster (3 nodes) is consumed by 40 Spring Boot applications, with around 500 queues.
By default, only a single connection is opened, so a load balancer won't really do much for you unless you change the CachingConnectionFactory cache mode to CONNECTION.
The list of addresses is there so the client can fail over to another broker without the need for a load balancer.
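For example, a sketch of the corresponding Spring Boot application.properties (the hostnames are hypothetical):

    # List every cluster node; the client tries the next address when one is unreachable
    spring.rabbitmq.addresses=rabbit1:5672,rabbit2:5672,rabbit3:5672
    # Optional: cache connections rather than channels, so more than one connection is opened
    spring.rabbitmq.cache.connection.mode=CONNECTION

With the default CHANNEL cache mode, the second property can be omitted and the single connection simply fails over down the address list.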

Mule: Backend Microservice Load Balancing

I have an application with an Angular-based UI and Spring-based microservices in the backend. Mule sits between these two and orchestrates traffic between them.
While I know a lot about front-end load balancing with HAProxy to direct traffic between Mule nodes, I was interested in learning whether Mule can direct traffic to the backend without having to use an external load balancer. I want the Mule-to-backend interaction to be as seamless and simple as possible; is there some kind of software load balancing Mule can do in this regard?
For on-premises deployments there is no built-in or embedded load balancer.
You can still go through the link below, which could help improve performance using clustering:
Mule Runtime High Availability (HA) Cluster Overview
For CloudHub Mule deployments there are certain provisions:
CloudHub Dedicated Load Balancer
I think this can be achieved using the "first-successful" router, but it goes to the next route within it only if the first one fails, and so on, so it gives you failover rather than balanced traffic.
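For illustration, a hedged Mule 3 sketch (the backend hostnames and path are hypothetical):

    <flow name="callBackend">
        <first-successful>
            <!-- Tried first on every request -->
            <http:outbound-endpoint address="http://backend-a:8080/api/orders"
                                    exchange-pattern="request-response"/>
            <!-- Only tried when backend-a fails -->
            <http:outbound-endpoint address="http://backend-b:8080/api/orders"
                                    exchange-pattern="request-response"/>
        </first-successful>
    </flow>

Since every request that succeeds on the first route never reaches the second, this is failover, not round-robin distribution, which is why an external load balancer is still the usual answer.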

Best Practice for setting up RabbitMQ cluster in production with NServiceBus

Currently we have 2 load-balanced web servers. We are just starting to expose some functionality over NSB. If I create two "app" servers, would I create a cluster between all 4 servers, or should I create 2 clusters?
i.e.
Cluster1: Web Server A, App Server A
Cluster2: Web Server B, App Server B
It seems like, if it is one cluster, how do I keep a published message from being handled more than once by the same logical subscriber when that subscriber is deployed to both app server A and B?
Is the only reason I would put RabbitMQ on the web servers message durability (assuming I didn't have any of the app services running on the web servers as well)? In that case my assumption is that I am then using the cluster's queue mirroring to get the message to the app server. Is this correct?
Endpoints vs Servers
NServiceBus uses the concept of endpoints. An endpoint is related to a queue on which it receives messages. If this endpoint is scaled out for either high availability or performance, then you still have one queue (with RabbitMQ). So if you had an instance running on server A and one on server B, they would both (with RabbitMQ) get their messages from the same queue.
I wouldn't think in terms of app servers but in terms of endpoints and their non-functional requirements regarding deployment, availability, and performance.
Availability vs Performance vs Deployment
It is not required to host all endpoints on both server A and B. You can also run services X and Y on server A and services U and V on server B. You then scale out for performance but not for availability; however, availability is already less of an issue because of the async nature of messaging. This can make deployment easier.
Pubsub vs Request Response
If the same logical endpoint has multiple instances deployed, then it should not matter which instance processes an event. If it does matter, then it probably isn't pub/sub but async request/response. NServiceBus handles this by creating a queue for each instance (with RabbitMQ) on which the response can be received, when that response requires affinity to the requesting instance.
Topology
You have:
a load-balanced web farm cluster,
a load-balanced RabbitMQ cluster,
and NServiceBus endpoints, which can be:
    highly available as multiple instances on different machines,
    spread across various machines (could even be a machine per endpoint),
    or a combination of both.
Infrastructure
You could choose to run the RabbitMQ cluster on the same infrastructure as your web farm or keep it separate. It depends on your requirements and available resources. If the web farm and Rabbit cluster are separate, then you can more easily scale them out independently.

How to Load Balance Mule ESB without using Mule Management Console

I am working with Mule ESB and want to expose it as a service with load balancing, but without using the Mule Management Console (MMC). I also don't want an external load balancer in front of my Mule ESB, because if every request has to come through the load balancer, it is a single point of failure in case it goes down. So I need a use case for how to expose Mule as a service with optimized load balancing without using MMC.
For load balancing incoming HTTP requests over multiple Mule instances, you will need an external load balancer. Neither Mule ESB Enterprise Edition nor MMC will help you with that.
You can use a commercial one, such as an F5 BIG-IP, or set up HAProxy. To avoid the load balancer becoming a single point of failure, you can set up a redundant HAProxy pair.
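A minimal HAProxy sketch (the IPs and ports are made up) that fronts two Mule nodes:

    frontend mule_http
        bind *:80
        default_backend mule_nodes

    backend mule_nodes
        balance roundrobin
        # One line per Mule node; "check" enables health checks
        server mule1 10.0.0.11:8081 check
        server mule2 10.0.0.12:8081 check

For redundancy, the same configuration runs on a second HAProxy machine, with a mechanism such as keepalived floating a shared address between the two.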
For JMS, make sure to set up an external message broker cluster and connect to it using the normal jms:inbound-endpoint; that way Mule will act as a competing consumer and you will achieve load balancing of messages.
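As a rough sketch (Mule 3 syntax with ActiveMQ; the broker URLs and queue name are hypothetical), every Mule node runs the same flow and competes for messages on the shared queue:

    <jms:activemq-connector name="amqConnector"
        brokerURL="failover:(tcp://broker1:61616,tcp://broker2:61616)"/>

    <flow name="consumeOrders">
        <!-- Each node consumes from the same queue; the broker spreads messages across them -->
        <jms:inbound-endpoint queue="orders" connector-ref="amqConnector"/>
        <logger level="INFO" message="#[message.payload]"/>
    </flow>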
I would also advise you to have a look at the "MuleSoft Blueprint: Load Balancing Mule for Scalability and Availability", which covers this. It is a bit dated, but most of the information in it is still valid.
It's unclear what transport you are using; in any case, you have just a limited number of options:
Use the Mule EE clustering feature for the VM transport.
Use a load balancer.
Use a transport that supports competing consumers, such as JMS or AMQP.
Could you provide a more detailed explanation of your deployment so I can give more exact info?