JMS queue not synchronized in instances of GlassFish cluster

I have a problem using message-driven beans in a clustered GlassFish 3.1.1 setup. The problem is with the queue in GlassFish: it is not synchronized between the instances. I will try my best to explain the scenario below.
I created 2 instances in a GlassFish cluster, created a JMS QueueConnectionFactory, and created a JMS Queue, all targeted to the cluster. Then I deployed the web application and the message-driven bean (MDB) module to the cluster. The web application sends a TextMessage to the JMS queue.
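For reference, the send side looks roughly like this (a minimal sketch; the JNDI names jms/MyQueueConnectionFactory and jms/MyQueue are placeholders for whatever resources you created):

    import javax.annotation.Resource;
    import javax.jms.*;

    public class MessageSender {

        @Resource(lookup = "jms/MyQueueConnectionFactory") // placeholder JNDI name
        private ConnectionFactory connectionFactory;

        @Resource(lookup = "jms/MyQueue") // placeholder JNDI name
        private Queue queue;

        public void send(String text) throws JMSException {
            Connection connection = connectionFactory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                // The TextMessage that the MDBs on the cluster consume.
                producer.send(session.createTextMessage(text));
            } finally {
                connection.close();
            }
        }
    }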
Everything works well up to this point: the message is sent to the queue and consumed by the message-driven beans on both instances.
Then I disable the MDB module and request the web application, which sends a message to the JMS queue on both instances. Then I shut down myInstance2 and re-deploy the MDB module to the cluster. Now here is the problem: the MDB only receives the messages of myInstance1, not the messages sent to the queue of myInstance2. The messages in myInstance2's queue are only consumed once myInstance2 is started again. Can anyone help me with the settings GlassFish uses to synchronize the queue across instances, so that when one instance is down and there are messages in its queue, the other instance takes over that queue's messages and serves them?
I am using OpenMQ with GlassFish 3.1.1 and have turned on the HA (high availability) option in GlassFish, but it still does not work.
Thanks

The high-availability options for GlassFish and the high-availability options for Message Queue are configured separately. You need to configure your message queue cluster to be an "enhanced cluster" rather than a "conventional cluster". This is described in the GlassFish 3.1 High Availability Administration Guide.
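For example, switching a (stopped) cluster to an enhanced broker cluster looks roughly like this; the database values here are illustrative, and the exact options should be checked against the guide:

    asadmin configure-jms-cluster --clustertype=enhanced \
        --dbvendor=mysql --dbuser=admin \
        --dburl=jdbc:mysql://dbhost:3306/hadb \
        --passwordfile=/tmp/mq-password.txt myCluster

With an enhanced cluster, the brokers share a JDBC-based message store, which is what allows a surviving instance to take over the messages of an instance that goes down.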

Related

Configure MassTransit for both Kafka and RabbitMQ

I want to use MassTransit with RabbitMQ and Kafka as message brokers, and decide at deployment time which one is used for transporting messages.
I know what ConsumeContext, ISendEndpointProvider, and IPublishEndpoint are, and according to the linked documentation these interfaces are shared between RabbitMQ and Kafka for publishing and consuming messages.
I configured both RabbitMQ and Kafka in the startup of my .NET Core app, and an appSettings entry decides at deployment time which one is registered.
Is this approach correct for having and configuring both Kafka and RabbitMQ in my app?
Using MassTransit, Kafka is a Rider which is configured along with a bus – and the bus still requires a supported transport. If you have RabbitMQ, that would be your bus transport, and the Kafka Rider would exist alongside the bus using the bus for publishing and sending messages.
With Kafka, messages can only be consumed or produced – they cannot be published, nor can they be sent. The ITopicProducer<T> interface is used to produce messages to Kafka topics.
Calling Publish or Send on the ConsumeContext will publish or send those messages to the rider's bus (which may be RabbitMQ, or any supported transport, including the InMemory transport). Producing messages to Kafka topics must be done using the ITopicProducer<T> (which may be injected as a dependency into the consumer).
You can't "switch out" RabbitMQ with Kafka as the two services are very different.

How can we store failed messages in the VM Connector in Mule

Assume it is a transient flow. The scenario is: whenever the Mule server is down, messages are sent to the publish connector at that same time.
What would be the best way to store those failed messages? I hope I am clear; please bear with me for any confusion.
Thanks
The VM connector works like an in-memory queue, but it is not an external message broker such as ActiveMQ or IBM MQ; its implementation lives inside the Mule Runtime. It cannot be used to send messages to other Mule servers, nor to non-Mule applications. Also, if the Mule Runtime instance is down, it will not work at all, so there is no way to publish or receive messages. If you want that kind of reliability you need to use an external JMS message broker.
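As a rough sketch of that alternative, here is plain JMS publishing a persistent message to an external ActiveMQ broker (the broker URL and queue name are made up); because the broker stores the message on disk outside the Mule Runtime, it survives a Mule server outage:

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ReliablePublish {
        public static void main(String[] args) throws JMSException {
            // Connect to the external broker (URL is illustrative).
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://broker-host:61616");
            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(session.createQueue("failed.messages"));
                // PERSISTENT delivery: the broker writes the message to its store,
                // so it is not lost if Mule (or the broker) restarts.
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                producer.send(session.createTextMessage("payload"));
            } finally {
                connection.close();
            }
        }
    }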

Can RabbitMQ cluster be used as a single endpoint by application?

There are three nodes in a RabbitMQ cluster, set up as follows.
Within RabbitMQ, there are two queues, q1 and q2.
The master replica of q1 and q2 are distributed on different nodes. Both queues are mirrored by other nodes.
There is a load balancer in front of three nodes.
AMQP (node port 5672) and the Management HTTP API (node port 15672) are exposed by the load balancer.
When the application establishes a connection through the load balancer, it may reach any RabbitMQ node behind it, and this is invisible to the application.
Questions:
Is it OK for the application to consume both queues on a single AMQP channel over a single connection, no matter which RabbitMQ node it reaches?
Is it OK for the application to call the management HTTP API no matter which RabbitMQ node its request hits?
When RabbitMQ is set up as a cluster and your queues are mirrored across the nodes, it doesn't matter which node you are connected to, because operations on a queue are automatically routed to the node holding that queue's master replica; RabbitMQ handles this internally. So if a request to publish to or consume from queue q1 comes in, it will be routed to the node hosting q1's master.
Answers to your questions:
It is not advisable to consume more than one queue over a single AMQP connection: an exception raised while consuming one queue may cause the connection to close, which would interrupt the other consumer.
It is OK for the application to call the management HTTP API no matter which RabbitMQ node its request hits. Once the management plugin is enabled in a RabbitMQ cluster, all the nodes will accept Management HTTP API requests.
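To follow the first point with the RabbitMQ Java client, you would give each queue its own connection (or at least its own channel). A minimal sketch, assuming the load balancer is reachable at the hypothetical host lb.example.com:

    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class TwoQueueConsumer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("lb.example.com"); // the load balancer, not a specific node
            factory.setPort(5672);

            // One connection per queue, so a failure while consuming q1
            // cannot tear down the consumer of q2.
            for (String queue : new String[] {"q1", "q2"}) {
                Connection connection = factory.newConnection();
                Channel channel = connection.createChannel();
                DeliverCallback onDeliver = (consumerTag, delivery) ->
                        System.out.println(queue + ": " + new String(delivery.getBody(), "UTF-8"));
                channel.basicConsume(queue, true, onDeliver, consumerTag -> { });
            }
        }
    }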
Reference: https://www.rabbitmq.com/clustering.html

Consumer Proxy unable to pick up messages from queue due to service configuration in flux

The consumer proxy is not picking up messages from the queue. We have redeployed the service and restarted the servers, but it did not help. I am attaching the logs here.
<01-Mar-2019 10:39:53 o'clock GMT>
<01-Mar-2019 10:39:53 o'clock GMT>
According to Oracle support document 1573359.1:
CAUSE
The service has been re-deployed/changed while there were messages being processed. Review Doc ID 1571958.1, "OSB SBConsole Activation - Limitations for configuration or deployment changes in production", for other reasons this error can occur.
SOLUTION
Stop consumption on the JMS queue, then delete and re-deploy the service:
Log in to Weblogic Console
Expand Services -> Messaging -> JMS Modules, then select the queue your service is interacting with.
Select the Control tab
For both production and consumption, select pause.
Wait a short while (5 minutes) and restart the queue
Re-deploy your Proxy Services
If messages still persist, check config.xml and make sure that there is the correct number of applications with names starting with "ALSB". The correct number depends on the kinds of services you have deployed: JMS request-response, plain JMS request, JMS topic, etc.
The easiest way to make sure that config.xml is correct is to do the following:
Delete all the JMS proxies from OSB configuration
Open the WLS console, go to "Deployments", and make sure that there are no applications named "_ALSB_xyz" deployed. If any are present, delete them.
Re-deploy JMS proxies
Alternatively, check Note 1382976.1 to locate the related deployments. Delete any application deployments starting with "ALSB" which are not related to an actively deployed JMS proxy service.

ActiveMQ initializer in OpenEJB/TomEE

I need to start a queue in OpenEJB in a "paused" state so that no messages are processed by the consumer until some related data is available. I can programmatically pause the queue as shown here, so if there were some initializer function called when a queue is created, I could use that method. The queue configuration documentation does not seem to support setting a paused state. Any ideas on how to configure the queue upon creation?
If you read the thread you linked, you will see that it is not the queue that is paused but the broker.
In TomEE the broker is created from a factory through an SPI (loaded from the TomEE classloader, so tomee/lib by default), so if that's an option you can write your own factory and start the broker programmatically when you are ready.
Now, I suspect you don't want to start the connectors with the container, but starting the broker itself is not an issue. Put differently: you don't want to be connected to any other machine through JMS, so that you don't receive anything, but having JMS started and deployed is fine.
In that case you can simply not configure any connector on the broker, and add the connectors when you are ready. You can find the brokers by calling:
new org.apache.openejb.resource.activemq.ActiveMQ5Factory().getBrokers()
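For example (a hypothetical sketch: the return type of getBrokers() differs between OpenEJB versions, and the connector URI is made up):

    import org.apache.activemq.broker.BrokerService;
    import org.apache.openejb.resource.activemq.ActiveMQ5Factory;

    public class BrokerStarter {
        // Call this once the application is ready to receive messages:
        // it attaches a transport connector to each embedded broker and starts it,
        // so external JMS traffic is only accepted from this point on.
        // Assumes getBrokers() yields BrokerService instances; error handling omitted.
        public static void openConnectors() throws Exception {
            for (BrokerService broker : new ActiveMQ5Factory().getBrokers()) {
                broker.addConnector("tcp://0.0.0.0:61616").start(); // illustrative bind URI
            }
        }
    }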