GlassFish - How to broadcast a JMS message to all instances in a cluster?

I am using Glassfish 3.1.2, and I set up a cluster with one node and two instances.
I have a message-driven bean (MDB) in my application that subscribes to a topic, and the application is deployed to the cluster.
When I publish a message to the topic I want both instances to receive the message.
However, in practice I am finding that only one instance receives the message.
I believe I am running into a feature called "shared subscriptions":
http://docs.oracle.com/cd/E18930_01/html/821-2438/gjzpg.html#MQAGgjzpg
With this feature (which is enabled by default), MDBs in the cluster that use the same client ID share what is effectively a single subscription.
It also says that by default the client ID of an MDB is its name, which means that both of my instances are using the same client ID.
So, other than completely disabling this feature, is it possible to set up an MDB so that each instance subscribes with a different client ID? This seems a bit tricky, since both instances are deployed from the same WAR file. I think you can set the client ID in an annotation, but I'm not sure whether that can be changed at runtime...

I'm not sure why you would completely disable this feature. The link you provided clearly states that you configure this per ActivationSpec/MDB, so as far as I understand it, it would affect only the MDB at hand.
For an MDB, set the ActivationSpec property useSharedSubscriptionInClusteredContainer to false. Do this in exactly the same way as with other ActivationSpec properties, using annotations in the MDB itself or in the deployment descriptor ejb-jar.xml or glassfish-ejb-jar.xml.
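For example, a rough sketch using annotations (the topic resource name jms/MyTopic and the class name are placeholders; only the useSharedSubscriptionInClusteredContainer property is the point here):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Sketch: disable shared subscriptions for this one MDB only.
@MessageDriven(mappedName = "jms/MyTopic", activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Topic"),
    @ActivationConfigProperty(propertyName = "useSharedSubscriptionInClusteredContainer",
                              propertyValue = "false")
})
public class BroadcastMDB implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // Runs on every instance once the subscription is no longer shared.
    }
}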
But you can, of course, set the client ID on a connection dynamically at runtime. Note that you would then probably have to manage the JMS connection yourself to some extent, rather than relying on the container-managed features.
http://docs.oracle.com/javaee/6/api/javax/jms/Connection.html#setClientID(java.lang.String)
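A minimal sketch of that, assuming you obtain the connection yourself and derive a unique suffix per instance (for example from a system property you set on each instance):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;

public class PerInstanceClientId {

    // instanceName is assumed to be unique per cluster instance.
    public Connection connect(ConnectionFactory factory, String instanceName) throws JMSException {
        Connection connection = factory.createConnection();
        // The client ID must be set before the connection is otherwise used.
        connection.setClientID("myApp-" + instanceName);
        connection.start();
        return connection;
    }
}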

Related

Why can't a message-driven bean on JBoss AS 7.1 with HornetQ use a pooled connection?

We are trying to use HornetQ for messaging on JBoss AS 7.1, and the documentation at
https://docs.jboss.org/author/display/AS71/Messaging+configuration
says
There is also a pooled-connection-factory which is special in that it leverages the outbound adapter of the HornetQ JCA Resource Adapter. It is also special because:
* It is only available to local clients, although it can be configured to point to a remote server.
* As the name suggests, it is pooled and therefore provides superior performance to the clients which are able to use it. The pool size can be configured via the max-pool-size and min-pool-size attributes.
* It should only be used to send (i.e. produce) messages.
* It can be configured to use specific security credentials via the user and password attributes. This is useful if the remote server to which it is pointing is secured.
Everything made sense except the third bullet, which says:
* It should only be used to send (i.e. produce) messages.
My MDB uses the pooled connection factory and is consuming messages (not sending).
My understanding is that the pooled connection factory is what an MDB should use for better performance. Also, the HornetQ authors say:
http://hornetq.blogspot.com/2011/06/hornetq-on-jboss-as7.html
The pooled connection factories also define the incoming connection factory for MDB's,
the name of the connection factory refers to the resource adapter name used by the MDB,
Can some gurus shed some light on this?
Thanks
Rama
This is something we have tried to make easier for users, but there is still some confusion.
The JCA adapter specifies inbound connections and outbound connections.
Inbound connections are used by MDBs; outbound connections are obtained by looking up a connection factory through JNDI and instantiating connections yourself.
Inbound connections don't need pooling, for instance, as they just instantiate the consumers for the MDBs and stay up as long as you have an MDB.
We keep the definitions on the pooled connection factories for defining the MDBs, but underneath there are a few different things happening, as I said.
So we could perhaps reword the item you mentioned to explain that better.
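To illustrate the two sides, here is a rough sketch (not a definitive setup: the destination names queue/inbound and queue/outbound are placeholders, and java:/JmsXA is the JNDI entry the default pooled connection factory is typically bound to in AS 7.1). The MDB rides on the resource adapter's inbound side, while the pooled connection factory is only used on the outbound side to produce messages.

import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// Inbound side: the MDB's consumers come from the HornetQ resource adapter.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
                              propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
                              propertyValue = "queue/inbound")
})
public class InboundMDB implements MessageListener {

    // Outbound side: the pooled connection factory, used only to send.
    @Resource(mappedName = "java:/JmsXA")
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "queue/outbound")
    private Queue outboundQueue;

    @Override
    public void onMessage(Message message) {
        Connection connection = null;
        try {
            connection = connectionFactory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(outboundQueue);
            producer.send(message); // forward the consumed message
        } catch (JMSException e) {
            throw new RuntimeException(e);
        } finally {
            if (connection != null) {
                try { connection.close(); } catch (JMSException ignore) { }
            }
        }
    }
}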

Is it still possible to configure a vm queue as persistent in Mule 3.4.X?

I am trying to configure a vm connector like so:
<vm:connector name="recordDeletedActivityDLQStore">
<vm:queue-profile maxOutstandingMessages="500" >
<file-queue-store/>
</vm:queue-profile>
</vm:connector>
Mule Studio complains that file-queue-store is not a permitted child element of vm:queue-profile, and this will not build and run either. I've tried other possible inputs for defining the nature of the queue store without any luck, and I can't find any working documentation on how to configure persistent VM queues. Specifically, I have tried adding the attribute persistent="true" to the queue-profile element, as described in the VM Transport Reference (http://www.mulesoft.org/documentation/display/34X/VM+Transport+Reference), but that doesn't seem to be supported anymore either.
Is it still possible to configure a vm queue as persistent in Mule 3.4.X?
Your configuration is correct and works: you can see that messages are persisted on disk under .mule/${app_name}/queuestore/${queue_name}/.
Persistence only occurs for one-way VM queues, not request-response ones. For the latter, no queueing occurs whatsoever.
Also, disregard Studio's complaints about your configuration not being valid. Mule has the final word on configuration validity, and yours is just fine.

Wanted to close an ActiveMQ connection

I need help with embedded ActiveMQ and the Spring Framework.
The problem is this: I am using embedded ActiveMQ with Spring Framework JMS support.
As part of development I have a component which publishes messages to a virtual topic, and another component which subscribes to messages from that topic.
Now the problem is that the subscribing application runs in a clustered environment, i.e. one master instance and one slave instance. Events which I publish go to either the master instance or the slave instance, but I want messages to be consumed only by the master instance. Is there any way I can block the slave instance from subscribing to events?
We have a system property set to differentiate the master and slave instances. I have tried to add a condition by overriding the createConnection method of the ActiveMQConnectionFactory class:
@Override
public Connection createConnection() throws JMSException {
    if (master) {
        return super.createConnection();
    }
    return null; // slave: no connection is created
}
In this case, Spring's DefaultMessageListenerContainer, which we configured to listen for events, keeps trying to refresh the connection. Since I am returning null for the slave, it fails to create a connection and the thread ends up in an endless retry loop with a 5000 ms interval.
Is there any way I can tell the message listener container to stop refreshing the connection?
Please help me resolve this issue.
If the cluster is master/slave, then only one instance should be active at a given time; it sounds like you have dual masters from an AMQ client perspective.
Regardless, you can always use ActiveMQ security to control access to a given topic/queue based on connection credentials (see http://activemq.apache.org/security.html)
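As a rough sketch of that idea (assuming an embedded broker configured in code; the user names, passwords, group names, and virtual-topic naming below are made-up placeholders), only members of a "masters" group are allowed to consume from the virtual topic's consumer queues:

import java.util.Arrays;

import org.apache.activemq.broker.BrokerPlugin;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.filter.DestinationMapEntry;
import org.apache.activemq.security.AuthenticationUser;
import org.apache.activemq.security.AuthorizationEntry;
import org.apache.activemq.security.AuthorizationPlugin;
import org.apache.activemq.security.DefaultAuthorizationMap;
import org.apache.activemq.security.SimpleAuthenticationPlugin;

public class SecuredEmbeddedBroker {

    public static BrokerService start() throws Exception {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("embedded");
        broker.setPersistent(false);

        // The master instance connects as "master" in the "masters" group;
        // the slave gets credentials without that group.
        SimpleAuthenticationPlugin authentication = new SimpleAuthenticationPlugin(Arrays.asList(
                new AuthenticationUser("master", "masterPwd", "masters"),
                new AuthenticationUser("slave", "slavePwd", "slaves")));

        // Virtual-topic consumers read from Consumer.*.VirtualTopic.* queues,
        // so restrict those to the "masters" group.
        AuthorizationEntry consumerQueues = new AuthorizationEntry();
        consumerQueues.setQueue("Consumer.*.VirtualTopic.>");
        consumerQueues.setRead("masters");
        consumerQueues.setWrite("masters");
        consumerQueues.setAdmin("masters");

        DefaultAuthorizationMap authorizationMap = new DefaultAuthorizationMap();
        authorizationMap.setAuthorizationEntries(
                Arrays.<DestinationMapEntry>asList(consumerQueues));

        broker.setPlugins(new BrokerPlugin[] { authentication, new AuthorizationPlugin(authorizationMap) });
        broker.start();
        return broker;
    }
}

The same users and authorization entries can equally be declared in the broker's XML configuration; the master and slave instances would then simply connect with different credentials on their ActiveMQConnectionFactory.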

Multiple WARs in Tomcat 7 using a shared embedded ActiveMQ

I'm working on a project where several WAR files inside a Tomcat 7 instance have to communicate with a single embedded ActiveMQ (5.5.1) broker inside the same Tomcat.
I'm wondering what the best practice is for managing this, and how to start and stop the broker properly.
Currently I am trying to use a global JNDI entry in server.xml, and each WAR obtains its ActiveMQ connection with a lookup. The first connection to the broker implicitly starts it. But with this method I run into various problems, such as the broker instance already existing or locks on the data store.
Should I instead use an additional WAR which uses a BrokerFactory to start the broker explicitly? In that case, how do I make sure this WAR is started first in Tomcat? And how and where do I stop my broker?
Thanks for the help.
From the docs:
If you are using the VM transport and wish to explicitly configure an embedded broker, there is a chance that you could create the JMS connections first, before the broker starts up. Currently ActiveMQ will auto-create a broker if you use the VM transport and there is not one already configured. (In 5.2 it is possible to use the waitForStart and create=false options for the connection URI.)
So to work around this, if you are using Spring you may wish to use the depends-on attribute so that your JMS ConnectionFactory depends on the embedded broker, to avoid this happening. E.g.:
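The XML example from the docs is not reproduced here, but a rough Spring Java-config equivalent might look like this (the bean names, broker name, and persistence settings are assumptions):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.DependsOn;

@Configuration
public class EmbeddedBrokerConfig {

    // The embedded broker is started with the context and stopped when it closes.
    @Bean(initMethod = "start", destroyMethod = "stop")
    public BrokerService broker() {
        BrokerService broker = new BrokerService();
        broker.setBrokerName("embedded");
        broker.setPersistent(false);
        broker.setUseJmx(false);
        return broker;
    }

    // The depends-on equivalent: the connection factory waits for the broker bean.
    @Bean
    @DependsOn("broker")
    public ActiveMQConnectionFactory connectionFactory() {
        // create=false ensures the VM transport never auto-creates a second broker.
        return new ActiveMQConnectionFactory("vm://embedded?create=false");
    }
}

With create=false the connection factory fails fast if the broker has not started, instead of silently spinning up a second broker instance.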
See these pages for more information:
http://activemq.apache.org/vm-transport-reference.html
http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html
http://activemq.apache.org/how-do-i-restart-embedded-broker.html

Why are my WebLogic clustered MDB app deployments in warning state?

I have a WebLogic cluster on which I've deployed numerous topics and applications that use them. My applications uniformly show themselves in a Warning status. Looking at Monitoring on the deployment, I see the MDB application connects to Server #1, but on server #2 it shows this:
MDB application appName is NOT connected to messaging system.
My JMS server is targeted to a migratable target, which is in turn targeted to server #1 and has a cluster identified. Messages sent to either server all flow as expected. I just don't know why these deployments show a Warning state.
WebLogic 11g
This can be avoided by using the parameter below
<start-mdbs-with-application>false</start-mdbs-with-application>
In weblogic-application.xml, setting start-mdbs-with-application to false forces MDBs to defer starting until after the server instance opens its listen port, near the end of the server boot-up process.
If you want to perform startup tasks after JMS and JDBC services are available, but before applications and modules have been activated, you can select the Run Before Application Deployments option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppActivation attribute to “true”).
If you want to perform startup tasks before JMS and JDBC services are available, you can select the Run Before Application Activations option in the Administration Console (or set the StartupClassMBean’s LoadBeforeAppDeployments attribute to “true”).
Refer to: http://docs.oracle.com/cd/E13222_01/wls/docs81/ejb/message_beans.html
This is applicable to versions up to 12c and to later releases as well.
I don't like unanswered questions, so I'm going to answer this one.
The problem is resolved, though I was not involved in its resolution. At present the problem only exists for the length of time it takes the JMS subsystem to fully initialize. During that period (with many queues, it can take a while) the JNDI system throws errors and the apps are truly in warning state. Once the JMS is fully initialized, everything goes green.
My belief is that someone corrected something in the JMS Server / Cluster config. I'll never know what it was.