HornetQ pooled-connection-factory to receive messages via a JNDI lookup - effects? - JBoss 7.x

The HornetQ documentation mentions that:
A pooled-connection-factory looked up in JNDI or injected which is
referencing an in-vm-connector is suitable to be used by a local
client only to send messages to a local server. A
pooled-connection-factory used by an MDB which is referencing an
in-vm-connector is suitable only to consume messages from a local
server.
https://docs.jboss.org/author/display/WFLY8/Messaging+configuration#Messagingconfiguration-JMSConnectionFactories
Question - Why does the WildFly team recommend that a pooled-connection-factory not be used to receive messages via JNDI? What would be the expected impact if it is used that way?
I have an implementation that uses a pooled-connection-factory to receive messages via a JNDI lookup, and I am trying to understand what impact to look for (no issues have been observed so far).
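For context, a minimal sketch of the pattern in question (assuming the pooled-connection-factory is bound at java:/JmsXA and a queue at java:/queue/example, which are illustrative names, not taken from the actual setup) looks roughly like this:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.naming.InitialContext;

    public class JndiPooledConsumer {
        public void receiveOne() throws Exception {
            InitialContext ctx = new InitialContext();
            // Look up the pooled-connection-factory via JNDI (name is an assumption).
            ConnectionFactory cf = (ConnectionFactory) ctx.lookup("java:/JmsXA");
            Queue queue = (Queue) ctx.lookup("java:/queue/example");
            Connection connection = cf.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageConsumer consumer = session.createConsumer(queue);
                connection.start();
                // Synchronous receive through the pooled factory - the usage the docs advise against.
                Message message = consumer.receive(5000);
                // ... process message ...
            } finally {
                connection.close();
            }
        }
    }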

Related

How do I find the connection information of a RabbitMQ server that is bound to a SCDF stream deployed on Tanzu (Pivotal/PCF) environment?

This is a follow-up question to How to implement HTTP request/reply when the response comes from a rabbitMQ reply queue using Spring Integration DSL?.
We were able to build the Spring Integration application and the SCDF stream successfully locally. We could send an HTTP request to the RabbitMQ request queue bound to the SCDF stream rabbit source, and we could receive the response back from the RabbitMQ response queue bound to the SCDF stream rabbit sink.
We have deployed the SCDF stream into a PCF environment which has a binding to an internal RabbitMQ broker. Now we need to specify the Spring RabbitMQ connection information in the Spring Integration application properties - currently it is using the default localhost:5672, which is no longer valid. Does anyone know how to get these RabbitMQ connection properties? We already checked the SCDF stream rabbit source/sink log files but couldn't find the information. I know we probably need to check internally with whoever set up SCDF/RabbitMQ in the PCF environment, but so far we haven't heard back from them.
Also, it appears we could take a different approach and bind both the SCDF stream and the integration application to a separate RabbitMQ instance (instead of using the one bundled with the SCDF configuration). Is that a recommended solution?
Thanks,
It is unclear whether you're using the SCDF tile or SCDF OSS (via manifest.yml) on PCF.
If you're using the OSS version and you provide the right RMQ service-instance configuration (one that you pre-created) in the manifest.yml, then SCDF will automatically propagate that RMQ service instance and bind it to the apps it deploys to your org/space. You don't need to muck around with connection credentials manually.
On the other hand, if you are using the SCDF Tile, the SCDF service broker will auto-create the RMQ SI and automatically bind it to the apps it deploys.
In summary, there's no reason to manually pass the connection credentials or pack them as application properties inside your apps. All of this can be automated, provided it is configured correctly.
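As a hedged illustration of the OSS route only: a manifest.yml for the SCDF server could bind a pre-created RMQ service instance and ask the deployer to propagate it to the stream apps. The service name, jar path, and even the exact property key below are assumptions and may differ by SCDF version.

    # Sketch of an SCDF OSS manifest.yml on PCF; all names here are illustrative.
    applications:
    - name: dataflow-server
      path: spring-cloud-dataflow-server.jar   # assumed server artifact
      services:
        - rabbitmq-scdf                        # pre-created RMQ service instance
      env:
        # Ask the Cloud Foundry deployer to bind the same RMQ service instance
        # to every stream app it deploys (property key may differ by version).
        SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES: rabbitmq-scdf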

Why can't a Message Driven Bean on JBoss AS 7.1 with HornetQ use a pooled connection?

We are trying to use HornetQ for messaging on JBoss AS 7.1, and the documentation at
https://docs.jboss.org/author/display/AS71/Messaging+configuration
says
There is also a pooled-connection-factory which is special in that it leverages the outbound adapter of the HornetQ JCA Resource Adapter. It is also special because:
* It is only available to local clients, although it can be configured to point to a remote server.
* As the name suggests, it is pooled and therefore provides superior performance to the clients which are able to use it. The pool size can be configured via the max-pool-size and min-pool-size attributes.
* It should only be used to send (i.e. produce) messages.
* It can be configured to use specific security credentials via the user and password attributes. This is useful if the remote server to which it is pointing is secured.
Everything made sense except the third bullet, which says:
* It should only be used to send (i.e. produce) messages.
My MDB uses a pooled connection factory and is consuming messages (not sending).
My understanding is that the pooled connection factory is what the MDB should use for better performance. Also, the HornetQ authors say:
http://hornetq.blogspot.com/2011/06/hornetq-on-jboss-as7.html
The pooled connection factories also define the incoming connection factory for MDB's,
the name of the connection factory refers to the resource adapter name used by the MDB,
Can some gurus throw some light on this?
Thanks
Rama
This is something we try to make easier for users, but there is still some confusion.
The JCA adapter specifies inbound connections and outbound connections.
Inbound connections are used by MDBs; outbound connections are made by looking the factory up in JNDI and instantiating connections.
Inbound connections don't need pooling, for instance, as they just instantiate the consumers for the MDBs and stay up for as long as you have an MDB.
We keep definitions on the pooled connection factories for defining the MDBs, but underneath there are a few things happening, as I said.
So, we could maybe reword this item you mentioned to explain that better.
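To make the inbound/outbound split concrete, here is a rough sketch (the queue name and the java:/JmsXA binding are assumed for illustration): the MDB is activated through the resource adapter that the pooled-connection-factory entry names, while actual sending would go through the pooled factory itself.

    import javax.annotation.Resource;
    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.ConnectionFactory;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // Inbound side: the HornetQ JCA resource adapter activates this MDB and
    // delivers messages to it; no connection pooling is involved on this path.
    @MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
        @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/example") // assumed name
    })
    public class ExampleMDB implements MessageListener {

        // Outbound side: sending would go through the pooled factory, assumed here
        // to be bound at java:/JmsXA (the default pooled-connection-factory entry).
        @Resource(mappedName = "java:/JmsXA")
        private ConnectionFactory pooledFactory;

        @Override
        public void onMessage(Message message) {
            // Consume the message; use pooledFactory only if a reply has to be produced.
        }
    }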

Glassfish - How to broadcast JMS message to all instances in a cluster?

I am using Glassfish 3.1.2, and I set up a cluster with one node and two instances.
I have a message driven bean in my application that subscribes to a topic, which I deployed to the cluster.
When I publish a message to the topic I want both instances to receive the message.
However, in practice I am finding that only one instance receives the message.
I believe I am running into a feature called "shared subscriptions"
http://docs.oracle.com/cd/E18930_01/html/821-2438/gjzpg.html#MQAGgjzpg
The feature (which is enabled by default) says that beans in the cluster with the same client id are shared, and are effectively only one subscription.
It says that by default the client id of an MDB is its name, which means that both my instances are using the same client id.
So other than completely disabling this feature, I would like to know if it is possible to setup an MDB so that each instance subscribes with a different client ID? This seems a bit tricky since both instances are using the same WAR file. I think you can set the client ID in an annotation, but I'm not sure if that can be changed at runtime...
I'm not sure why you would completely disable this feature. In the link you provided, it states clearly that you configure this per ActivationSpec/MDB, so as far as I understand it, it would affect only the MDB you have at hand.
For an MDB, set the ActivationSpec property useSharedSubscriptionInClusteredContainer to false. Do this in exactly the same way as with other ActivationSpec properties, using annotations in the MDB itself or in the deployment descriptor ejb-jar.xml or glassfish-ejb-jar.xml.
But you can of course set the client ID on a connection dynamically at runtime. Note that you would then probably have to manage the JMS connection yourself rather than relying on the features managed by the container.
http://docs.oracle.com/javaee/6/api/javax/jms/Connection.html#setClientID(java.lang.String)
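A minimal sketch of the annotation route, assuming a topic at jms/exampleTopic (the property name comes from the quoted documentation; the destination and other details are illustrative):

    import javax.ejb.ActivationConfigProperty;
    import javax.ejb.MessageDriven;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // Disables shared subscriptions for this one MDB, so each clustered instance
    // keeps its own subscription and receives its own copy of every published message.
    @MessageDriven(mappedName = "jms/exampleTopic", activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Topic"),
        @ActivationConfigProperty(propertyName = "subscriptionDurability", propertyValue = "NonDurable"),
        @ActivationConfigProperty(propertyName = "useSharedSubscriptionInClusteredContainer",
                                  propertyValue = "false")
    })
    public class BroadcastMDB implements MessageListener {
        @Override
        public void onMessage(Message message) {
            // Each instance handles the broadcast message independently here.
        }
    }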

Where is the RabbitMQ internal API reference for plugin development?

I am developing a custom RabbitMQ plugin, but I cannot find the internal API reference (in Erlang, of course) that is solely internal to RabbitMQ. Is this documented somewhere? Note that it is not the Erlang client API that I am looking for, just the internal API reference for use from within RabbitMQ plugins.
As an example: I want to identify the listening port from within the plugin without looking at the config file. I assume the config file is loaded internally by RabbitMQ and is accessible through some internal API such as rabbitmq:getenv("port"), etc. This is not specific to my problem; I simply need to know where the whole internal API reference is.

MQ With WLS Foreign Server

I am facing two issues when I try to connect from WebLogic Server (WLS), via a Foreign Server, to MQ which is deployed on a remote server.
1. When I try to connect to the MQ queue manager in bindings mode (after importing the .bindings file) I keep getting the below error in the WLS console:
java.lang.UnsatisfiedLinkError: no mqjbnd05 in java.library.path
2. If I switch the transport to client mode I keep getting:
JMSWMQ0018: Failed to connect to queue manager '' with connection mode 'Client' and host name 'localhost'. Check the queue manager is started and if running in client mode, check there is a listener running. Please see the linked exception for more information.
Has anyone seen this, and are there any performance implications which dictate the use of client over bindings and vice versa?
TIA
Finally I was able to resolve this. I had to recreate the .bindings file in client mode, with changes to IVTsetup.bat, which is most likely present in
C:\Program Files\IBM\WebSphere MQ\java\bin. I had to run this:
def qcf(psQCF) TRANSPORT(CLIENT) HOST(SMEKA) PORT(1415) CHANNEL(ps_SRV_CHANNEL) QMGR(psQM)
to generate the .bindings file.
Refer to this link for more details:
http://publib.boulder.ibm.com/infocenter/wbihelp/v6rxmx/index.jsp?topic=/com.ibm.wbia_adapters.doc/doc/peoplesoft/peopleso103.htm
Where the question states "I try to connect to MQ which is deployed on a Remote Server from Weblogic Server", I assume this means that WLS and WMQ are on two different hosts. If that is the case, then a bindings mode connection (which relies on shared memory segments) won't work.
The client mode connection appears to be using a CF that is pointed to localhost rather than the IP or hostname of the WMQ server. This would work for an application on the same host as the queue manager but not when the app and QMgr are on separate servers.
As far as choosing between client and bindings mode, the answer is that if the QMgr is local, use bindings. This provides the highest reliability, the best performance, and XA transactionality. When using client mode, two-phase XA commit is not supported without the Extended Transactional Client. Per the JMS specification, there is an ambiguity that can exist if an app loses the connection during a COMMIT call. Depending on how the app handles this, it's possible to end up with duplicate messages. (The JMS spec refers to these as "functionally duplicate.") This ambiguity is much less likely to occur with a bindings mode connection since there is no network latency and not even any traversal of the IP stack or interface. So use bindings mode where possible.
UPDATE:
Removed note about Extended Transactional Client being a chargeable component. As of April 24th, XTC is free of charge for all versions of WMQ on all platforms.