Cluster-wide singleton in a WebSphere cluster

I need to run a component using Apache Camel (or Spring Integration) under a WAS ND 8.0 cluster. Both frameworks start some threads on startup and stop them cleanly on shutdown; supplying a WAS-managed thread pool is not a problem. However, those threads must run on only one cluster node at a time. The component must also be highly available, i.e. switch to another node when the active node fails.
The only solution I have found is the WAS Partitioning Facility, which requires additional Extended Deployment licenses. Is that the only way, or is there some way to implement this with a Network Deployment license only?
Thanks in advance.

I don't think there is a built-in feature that addresses this interesting requirement.
I can imagine a "trick":
A timer EJB sends a message to a queue (say, one per minute).
Configure a Service Integration Bus (SIB) with high availability and no scalability, so that the HA manager ensures only one messaging engine (ME) is alive.
Create a non-reliable queue for high performance and low resource consumption.
The activation spec should be configured to listen to the local ME only.
An MDB implements the following logic: when the message arrives, it checks whether the singleton thread is alive and, if not, starts it (a rough sketch follows below).
Does it make sense?
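For illustration, here is a rough sketch of that trick using Java EE 6 APIs as available on WAS 8.0. The JNDI names and the SingletonWorker holder are assumptions, and the binding of the MDB to the heartbeat queue is whatever you configure in the activation spec:

    import javax.annotation.Resource;
    import javax.ejb.MessageDriven;
    import javax.ejb.Schedule;
    import javax.ejb.Singleton;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.Queue;
    import javax.jms.Session;

    // --- HeartbeatTimerBean.java: sends one heartbeat message per minute ---
    @Singleton
    public class HeartbeatTimerBean {

        @Resource(lookup = "jms/heartbeatCF")              // hypothetical connection factory
        private ConnectionFactory connectionFactory;

        @Resource(lookup = "jms/singletonHeartbeatQueue")  // hypothetical queue
        private Queue heartbeatQueue;

        @Schedule(minute = "*", hour = "*", persistent = false)
        public void sendHeartbeat() throws JMSException {
            Connection connection = connectionFactory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                session.createProducer(heartbeatQueue)
                       .send(session.createTextMessage("heartbeat"));
            } finally {
                connection.close();
            }
        }
    }

    // --- SingletonWatchdogMDB.java: bound to the heartbeat queue via its activation spec ---
    @MessageDriven
    public class SingletonWatchdogMDB implements MessageListener {

        @Override
        public void onMessage(Message heartbeat) {
            // SingletonWorker is a hypothetical wrapper that owns the Camel /
            // Spring Integration threads.
            if (!SingletonWorker.isRunning()) {
                SingletonWorker.start();
            }
        }
    }

The timer fires on every cluster member, but because only one messaging engine is alive and the activation spec listens to the local ME only, just one node's MDB receives the heartbeats and keeps the singleton thread running there.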

Related

Can the clients of a RabbitMQ cluster reconnect to another Node if the one they are connected to fails?

I feel like I am missing something very fundamental here.
I can bring up a RabbitMQ cluster with three nodes (rabbit1, rabbit2 and rabbit3) without an issue. Then when I start writing my microservices it seems like each client connects to only one rabbit instance. So let's say I have all my services connect to rabbit1.
If rabbit1 then goes down will my entire infrastructure blow up? Do the services have a way of switching to another rabbit node? It seems like they cannot, in which case, what is the point of having a cluster?
In case someone else runs into this and has trouble (like myself) finding this in the documentation, RabbitMQ does not manage client connection auto-recovery. From the docs:
Some client libraries provide a mechanism for automatic recovery from
network connection failures... Other clients may consider network
failure recovery to be a responsibility of the application.
So first check whether your library offers auto-recovery; if not, you'll have to implement it yourself.
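With the official Java client, for example, you can enable its built-in recovery and hand it the whole list of cluster nodes so it can reconnect to a surviving one (host names and port are placeholders; other client libraries have their own equivalents):

    import com.rabbitmq.client.Address;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;

    public class ResilientClient {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            // Recover the connection and its channels/consumers after a network failure.
            factory.setAutomaticRecoveryEnabled(true);
            factory.setNetworkRecoveryInterval(5000); // retry every 5 seconds

            // Pass every cluster node so the client can fall back if one goes down.
            Address[] nodes = {
                new Address("rabbit1", 5672),
                new Address("rabbit2", 5672),
                new Address("rabbit3", 5672)
            };
            Connection connection = factory.newConnection(nodes);
            System.out.println("Connected to " + connection.getAddress());
        }
    }

Note that automatic recovery applies to an already-established connection; for the initial connect the client simply picks a reachable address from the list.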

One Mule app server in a cluster polling most of the messages from MQ

My Mule application consists of 2 nodes running in a cluster, and it listens to an IBM MQ cluster (basically connecting to 2 queue managers). There are situations where one Mule node pulls more than 80% of the messages from the MQ cluster while the other node picks up the remaining 20%. This is causing CPU performance issues.
We have double-checked that the load balancing is configured properly, and only occasionally do we hit the CPU performance problem. Can anybody give some ideas about the possible reason for it?
Example: in the last scenario there were 200,000 messages in the queue, and the node2 Mule server picked up 92% of the messages from the queue within a few minutes.
This issue has been fixed now. The root cause: our Mule application running on MULE_NODE01 reads from/writes to WMQ_NODE01, and similarly for node 2. One of the Mule nodes (say MULE_NODE02) reads from the Linux/Windows file system and puts huge messages onto its corresponding WMQ_NODE02. IBM MQ then tries to push the maximum load to the other WMQ node to balance the workload. That's why MULE_NODE01 reads all those loaded files from WMQ_NODE01 and triggers the CPU usage alerts.
@JoshMc, your clue helped a lot in understanding the issue; thanks a lot for helping.
It's the WMQ node in a cluster that tries to push the maximum load to the other WMQ node; this seems to be how MQ works internally.
To solve this, we are now connecting our Mule nodes to an MQ gateway rather than using 1-to-1 connectivity.
This could be solved by avoiding the race condition caused by multiple listeners:
Configure the listener in the cluster to run on the primary node only.
Republish the message to a persistent VM queue.
Move the logic to another flow that is triggered via a VM listener, and let the Mule cluster do the load balancing.

RabbitMQ as Message Broker used by Spring Websocket dies under load

I am developing an application that needs to handle 160k concurrent users, which are connected to the backend via a WebSocket connection.
We decided to use the Spring WebSocket implementation and RabbitMQ as the message broker.
In our application every user needs to subscribe to its own user queue /exchange/amq.direct/update as well as to another queue that other users can potentially subscribe to as well, /topic/someUniqueName.
In our first performance test we took the naive approach where every user subscribes to two new queues.
When running the test, RabbitMQ dies silently when around 800 users are connected at the same time, so around 1600 queues are active (see the graph of all RabbitMQ objects here).
I have read, though, that you should be careful about opening many connections to RabbitMQ.
Now I wonder whether the approach anticipated by Spring WebSocket, opening one queue per user, is a conceptual problem for systems under high load, or whether there is another error in my system.
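For reference, the broker relay setup described above looks roughly like this with Spring 5's WebSocketMessageBrokerConfigurer (older versions extended AbstractWebSocketMessageBrokerConfigurer instead); the host name and endpoint path are placeholders, and the relay assumes RabbitMQ's STOMP plugin is enabled:

    import org.springframework.context.annotation.Configuration;
    import org.springframework.messaging.simp.config.MessageBrokerRegistry;
    import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
    import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
    import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

    @Configuration
    @EnableWebSocketMessageBroker
    public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

        @Override
        public void configureMessageBroker(MessageBrokerRegistry registry) {
            // Relay /topic and /exchange destinations to RabbitMQ's STOMP adapter
            // (rabbitmq_stomp plugin, default port 61613).
            registry.enableStompBrokerRelay("/topic", "/exchange")
                    .setRelayHost("rabbitmq-host")   // placeholder host name
                    .setRelayPort(61613);
            registry.setApplicationDestinationPrefixes("/app");
        }

        @Override
        public void registerStompEndpoints(StompEndpointRegistry registry) {
            // Every user connects through this endpoint and then subscribes
            // to /exchange/amq.direct/update and /topic/someUniqueName.
            registry.addEndpoint("/ws").withSockJS();  // "/ws" is a placeholder path
        }
    }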
Limiting factors for RabbitMQ are usually:
memory (can be checked in the dashboard), which grows with the number of messages and the number of queues (if you don't use lazy queues, which go directly to disk);
the maximum number of file descriptors (at least 1 per connection), which often defaults to a value that is too low on many distributions (ref: https://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2012-April/019615.html);
CPU for routing the messages.
I did find the issue: I had misconfigured the RabbitMQ service and given it a file descriptor limit of only 1024. Increasing it solved the problem.

WebLogic migratable JMS consumer doesn't follow the service to the new managed server if the old server remains running

I have a JMS service targeted at a migratable target (using the Auto-Migrate Exactly-Once policy) in a cluster consisting of 2 managed servers. At any point in time the service is hosted on one of them, and the consumer (which is targeted at the cluster) is supposed to receive messages seamlessly no matter where the service is hosted.
When I manually switch the host of the migratable target (by clicking Migrate) without turning the hosting managed server off, the consumer fails to receive messages sent to the queues, unless I turn off the previous hosting managed server, forcing the consumer over to the new host.
I can rule out sender problems; I can see the messages in the queue right after they are sent.
I'd be grateful if anyone can advise on how to configure either the consumer or the migratable service so that it works seamlessly when migration happens.
I think that may just be a misunderstanding of how migration works. The docs state that Auto-Migrate Exactly-Once:
indicates that if at least one Managed Server in the candidate list
is running, then the JMS service will be active somewhere in the
cluster if servers should fail or are shut down (either gracefully or
forcibly). For example, a migratable target hosting a path service
should use this option so if its hosting server fails or is shut down,
the path service will automatically migrate to another server and so
will always be active in the cluster. Note that this value can lead to
target grouping. For example, if you have five exactly-once migratable
targets and only one server member is started, then all five
migratable targets will be activated on that server member.
The docs also state:
Manual Service Migration—the manual migration of pinned JTA and
JMS-related services (for example, JMS server, SAF agent, path
service, and custom store) after the host server instance fails
Your server/service has neither failed nor shut down; you are forcing it to migrate while a healthy host is still running, so it has not met the criteria for migration.
See more here as well.
I have some experience that sounds reminiscent of what you're looking at. There was some WLS-specific capability around recognizing reconfiguration of JMS destinations as part of their clustered server design.
In one case I had to call a WLS-specific method, weblogic.jms.extensions.WLSession.setExceptionListener(), on their implementation of the JMS Session interface. It is analogous to the standard JMS Connection.setExceptionListener().
With this WLS-specific capability, the WLSession.setExceptionListener() callback occurs at the point where the consuming client should tear down and re-establish the connection / session / consumer in reaction to a reconfiguration (migration) that has happened.
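A rough sketch of how that callback can be wired up (rebuildConsumer() is a hypothetical helper; the point is only that the listener is the place to tear down and re-create the JMS objects):

    import javax.jms.Connection;
    import javax.jms.ExceptionListener;
    import javax.jms.JMSException;
    import javax.jms.Session;
    import weblogic.jms.extensions.WLSession;

    public class MigrationAwareConsumer {

        // connection is an existing javax.jms.Connection to the cluster.
        void registerMigrationCallback(Connection connection) throws JMSException {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            if (session instanceof WLSession) {
                ((WLSession) session).setExceptionListener(new ExceptionListener() {
                    @Override
                    public void onException(JMSException e) {
                        // Called when WLS signals a session-level problem such as a
                        // destination migration; rebuild the JMS objects so the
                        // consumer re-resolves the destination on its new host.
                        rebuildConsumer();
                    }
                });
            }
        }

        void rebuildConsumer() {
            // hypothetical helper: close and re-create connection / session / consumer
        }
    }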

Using Apache Camel for Load Balancing

Can I access a SEDA or VM queue from another machine or JVM?
I actually want to implement load balancing with the help of Camel but do not want to introduce another messaging framework for this. I just want to distribute load to different consumers from a producer using some built-in queue.
Is it possible? If not, what are my options?
Another approach (pull approach):
I'm not sure how optimal this new approach is, or what its advantages and disadvantages are, so please help me analyze it.
Messages will be put into a master queue and all the worker systems will listen to that queue. Let's say 100,000 messages are put into the master queue and 5 worker systems are listening to it. The worker systems process the messages one by one from the master queue. There are two big benefits to this approach:
I don't need to worry about registering my worker systems with the producer. A sixth system can just boot up and start listening to the master queue.
I don't need to worry about sending a message to a consumer system that is free. When a worker system is done processing a message, it picks up another one from the master queue.
Let me know your thoughts on it.
SEDA and VM:// work only within the same JVM.
Load balancing in Java messaging is usually achieved using JMS and the Competing Consumers pattern. You send messages to a queue and multiple consumers compete to process them.
If the broker with its queue becomes a bottleneck, consider using the fan-out pattern and a network of brokers.
SEDA and VM endpoints are valid within the hosting CamelContext and JVM respectively. To facilitate JVM-to-JVM messaging you will need to use an over-the-wire protocol component such as, but not limited to, Mina, HTTP or JMS.
The easiest way is to use JMS. If you have n routes listening on the same JMS queue, they will automatically load-balance; if one goes away, the load will be balanced over the remaining ones. I recommend starting with ActiveMQ as it is very easy to set up and well integrated with Camel. To make the broker highly available you can either set up two standalone brokers or set up one embedded broker per Camel instance.
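A rough sketch of one such worker, assuming Camel 2.x with the activemq-camel component (the broker URL and queue name are placeholders); run the same code on each worker JVM and the broker spreads the messages across them:

    import org.apache.activemq.camel.component.ActiveMQComponent;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class Worker {
        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();

            // Every worker points at the same broker; the queue then
            // load-balances messages across all connected workers
            // (competing consumers).
            context.addComponent("activemq",
                    ActiveMQComponent.activeMQComponent("tcp://broker-host:61616"));

            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    from("activemq:queue:masterQueue?concurrentConsumers=5")
                        .log("Worker picked up: ${body}");
                }
            });

            context.start();
            Thread.sleep(Long.MAX_VALUE); // keep the worker running
        }
    }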