Connecting to ActiveMQ broker using IBM client - activemq

In my test I run an in-memory ActiveMQ broker, instantiate an ActiveMQConnectionFactory, and do whatever I need in order to test against it. I used this approach because it seemed to be the easiest way to create an integration test. I thought the switch from ActiveMQConnectionFactory to com.ibm.mq.jms.MQTopicConnectionFactory would be straightforward, but apparently it is not. What would be the mapping from this snippet
<bean id="activeMqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<constructor-arg value="vm://localhost:61616"/>
</bean>
to that one:
<bean id="ibmConnectionFactory" class="com.ibm.mq.jms.MQTopicConnectionFactory">
<property name="hostName" value="??"/>
<property name="port" value="??"/>
<property name="queueManager" value="??"/>
<property name="channel" value="??"/>
<property name="transportType" value="?"/>
</bean>
Would that even be possible without some kind of bridge, like the ones Camel has?

This is not possible. The JMS specification covers the API and behavior, but vendors are free to implement whatever wire formats and communication protocols they wish. WebSphere MQ uses its own formats and protocols, and ActiveMQ has its own as well.
Bridge applications work by reading messages into memory from one transport and then writing them to the other transport. Although this works at a basic level, the two transports have different destination namespaces and security realms, so these interfaces tend to be hard-coded point-to-point routes. This is usually the best you can expect when mixing JMS transport providers.
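For what it's worth, such a point-to-point bridge is often wired up with Camel. A rough sketch (the WMQ connection details and queue name are placeholders; transportType 1 means a TCP client connection):
<bean id="wmqConnectionFactory" class="com.ibm.mq.jms.MQQueueConnectionFactory">
<property name="hostName" value="wmq.example.com"/>
<property name="port" value="1414"/>
<property name="queueManager" value="QM1"/>
<property name="channel" value="SYSTEM.DEF.SVRCONN"/>
<property name="transportType" value="1"/> <!-- 1 = client mode over TCP -->
</bean>
<!-- Camel components, one per broker -->
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="brokerURL" value="tcp://localhost:61616"/>
</bean>
<bean id="wmq" class="org.apache.camel.component.jms.JmsComponent">
<property name="connectionFactory" ref="wmqConnectionFactory"/>
</bean>
<camelContext xmlns="http://camel.apache.org/schema/spring">
<route>
<!-- consume from ActiveMQ, republish to WebSphere MQ -->
<from uri="activemq:queue:ORDERS.IN"/>
<to uri="wmq:queue:ORDERS.IN"/>
</route>
</camelContext>
Note that each destination must be routed explicitly, which is exactly the hard-coded point-to-point limitation described above.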

You cannot do that, either with Camel or with the ActiveMQ JMS bridge, since you need a WebSphere MQ broker to connect to if you are using the IBM JMS classes (e.g. com.ibm.mq.jms.MQTopicConnectionFactory).
I have, however, done exactly what you are trying to do in one project. The core idea is not to use the vendor-specific classes, but the JMS interfaces, in the code. Then you could store the configuration in JNDI (one entry for integration testing and one for production/acceptance testing).
If you do not want to use JNDI, you could use a different Spring context for each scenario (that was my approach).
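If you did go the JNDI route, the lookup can be a drop-in replacement for the factory bean; a minimal sketch (the JNDI name here is an assumption):
<bean id="connectionFactory" class="org.springframework.jndi.JndiObjectFactoryBean">
<!-- resolves to whatever ConnectionFactory the environment binds under this name -->
<property name="jndiName" value="jms/ConnectionFactory"/>
</bean>
The rest of the application only sees javax.jms.ConnectionFactory, so the environment decides which vendor implementation is used.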
Let's take a simple example:
You need two separate applicationContext.xml files (one for the embedded test and one for production):
int-test:
<beans>
<import resource="jmsTest.xml"/>
<import resource="mainApplication.xml"/>
</beans>
Prod:
<beans>
<import resource="jmsProd.xml"/>
<import resource="mainApplication.xml"/>
</beans>
Then create your jms contexts:
jmsTest.xml:
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<constructor-arg value="vm://localhost:61616"/>
</bean>
jmsProd.xml
<bean id="connectionFactory" class="com.ibm.mq.jms.MQConnectionFactory">
<property name="hostName" value=".."/>
...
</bean>
mainApplication.xml (JMS listeners etc.) stays the same:
<bean id="myJmsHandlingClass" class="some.custom.Class">
<property name="connectionFactory" ref="connectionFactory"/>
</bean>
Then just make sure to follow the JMS spec and do nothing vendor-specific, since both WMQ and AMQ have extensions to the JMS standard that might be tempting to use.
One tricky part, if you are doing topics, is that AMQ and WMQ use different topic separators by default.
In WMQ: root/subtopic/#
In AMQ: root.subtopic.*
So you might want to inject destinations via Spring as well; it's similar to the connection factory above.
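For example, a per-environment destination bean might look like this (topic names are made up; I believe com.ibm.mq.jms.MQTopic accepts a topic URI):
In jmsTest.xml:
<bean id="priceTopic" class="org.apache.activemq.command.ActiveMQTopic">
<constructor-arg value="prices.equities"/>
</bean>
In jmsProd.xml:
<bean id="priceTopic" class="com.ibm.mq.jms.MQTopic">
<constructor-arg value="topic://prices/equities"/>
</bean>
The listener code then only references the javax.jms.Topic bean by id and never sees the separator difference.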

Related

Apache Ignite Persistence and Services

Within my Ignite XML IgniteConfiguration, I have services defined that perform multiple functions, including reading/writing JMS messages and interacting with the corresponding caches. I now want to add native persistence for the cached data, but once I turn on persistence within any DataRegionConfiguration, my services no longer start up:
<bean class="org.apache.ignite.configuration.DataRegionConfiguration">
<property name="persistenceEnabled" value="true"/>
</bean>
I'm sure there is a step I'm missing in the configuration. Any help, or even better, example configurations, would be greatly appreciated.

Spring AMQP Listener Timeout

I have a requirement for handling a Spring AMQP listener timeout: we send a message from the producer, the consumer listener thread of Spring AMQP receives the message, but it takes a long time to execute and hangs, which eventually renders the listener thread unusable.
So is there any consumer timeout setting provided by Spring AMQP so that the listener thread is freed again after a given timeout?
Indeed you can configure the timeout with Spring AMQP; here is how. Note that the concurrency and recovery-interval settings belong on the listener container, not on the connection factory:
<bean id="connectionFactory" class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
<property name="connectionTimeout" value="1000" /> <!-- in milliseconds -->
</bean>
<bean id="listenerContainer" class="org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer">
<property name="connectionFactory" ref="connectionFactory" />
<property name="concurrentConsumers" value="16" />
<property name="recoveryInterval" value="5000" /> <!-- in milliseconds -->
</bean>
NOTE: If you have a limited consumer count, use manual acks, and fail to send the ack back for some reason, a timeout might occur; it means you are holding the thread and not releasing it, which will also hurt your performance.
More in the Spring AMQP reference documentation and API docs.
If the thread is stuck in your code, there is nothing the container can do to free it up. If it's in interruptible code, you could interrupt the thread.
If it's stuck in uninterruptible code, you are out of luck.

Using 'jms.useAsyncSend=true' for some messages and verfying it in the consumer side

We are using the embedded ActiveMQ broker within Fuse ESB (7.1.0), with consumers co-located.
The producer client is deployed on a remote GlassFish server. It uses the ActiveMQ resource adapter (5.6.0) along with Spring's JmsTemplate. The client publishes messages to different queues. I want some of the messages (going to a given queue) to use jms.useAsyncSend=true, whereas the other messages should use the default. I see the options below:
1) I can't append the option 'jms.useAsyncSend=true' to the resource adapter URL because that would enforce it for all messages.
2) The JNDI look-up for the connection factory returns an instance of org.apache.activemq.ra.ActiveMQConnectionFactory. I was actually expecting an instance of org.apache.activemq.ActiveMQConnectionFactory, which would have allowed me to call setUseAsyncSend() for the corresponding JmsTemplate. So I can't use this option either.
3) I have multiple connection factories configured under the GlassFish connectors (one for each queue). I am trying to pass the property 'jms.useAsyncSend=true' as an additional property to a particular connection factory. I am expecting this to be used only for the connections created in that particular connection pool. Now, having done this I want to verify if it really worked.
Question 1) Is there a way to check on the consumer side whether the property 'useAsyncSend' was set on an inbound message? This is to verify that what I have done on the producer side has actually worked. Note that I am using a camel-context to route messages to the end consumers. Is there a way to check this within the camel-context? Is there a header or any such thing corresponding to it?
Question 2) Is there a better way to set 'useAsyncSend' on the producer side, where one resource adapter is used for sending messages to different queues with different values for 'useAsyncSend'?
I understand that 'useAsyncSend' is an ActiveMQ-specific configuration and hence not available in the JmsTemplate interface.
Appreciate any help on this. Thanks.
I don't know Fuse ESB, but I have created different ActiveMQ connections with different properties (in my case it was the RedeliveryPolicy). You then direct your producers and consumers to use the appropriate connection.
So, if using Spring xml to define your connections, it would look something like this:
<!-- Connection factory with useAsyncSend=true -->
<bean id="asyncCF" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="${broker.url}" />
<property name="useAsyncSend" value="true" />
</bean>
<!-- Connection factory with useAsyncSend=false -->
<bean id="syncCF" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="${broker.url}" />
<property name="useAsyncSend" value="false" />
</bean>
Or specify useAsyncSend as a parameter on the URL value.
This might help you with Question 2.
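For example, the URL-parameter form of the asyncCF bean above would look like this (the broker address is a placeholder):
<bean id="asyncCF" class="org.apache.activemq.ActiveMQConnectionFactory">
<!-- jms.* parameters on the URL are applied to the connection factory -->
<property name="brokerURL" value="tcp://localhost:61616?jms.useAsyncSend=true"/>
</bean>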

Does c3p0 connection pooling ensure max pool size?

I've gone through several questions; this one is somewhat related but doesn't answer my question.
Does the c3p0 connection pooling maxPoolSize ensure that the number of connections at a certain time never exceeds this limit? What if maxPoolSize=5 and 10 users start using the app at exactly the same time?
My app configuration:
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
<property name="driverClass"><value>${database.driverClassName}</value></property>
<property name="jdbcUrl"><value>${database.url}</value></property>
<property name="user"><value>${database.username}</value></property>
<property name="password"><value>${database.password}</value></property>
<property name="initialPoolSize"><value>${database.initialPoolSize}</value></property>
<property name="minPoolSize"><value>${database.minPoolSize}</value></property>
<property name="maxPoolSize"><value>${database.maxPoolSize}</value></property>
<property name="idleConnectionTestPeriod"><value>200</value></property>
<property name="acquireIncrement"><value>1</value></property>
<property name="maxStatements"><value>0</value></property>
<property name="numHelperThreads"><value>3</value></property>
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource"/>
</bean>
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory"/>
<property name="dataSource" ref="dataSource"/>
</bean>
It is important to distinguish between DataSources and Connection pools.
maxPoolSize is enforced by c3p0 on a per-pool basis, but a single DataSource may own multiple Connection pools, as there is (and must be) a distinct pool for each set of authentication credentials. If only the default dataSource.getConnection() method is ever called, then maxPoolSize will be the maximum number of Connections that the pool acquires and manages. However, if Connections are acquired using dataSource.getConnection( user, password ), then the DataSource may hold up to (maxPoolSize * num_distinct_users) Connections.
To answer your specific question: if maxPoolSize is 5 and 10 clients hit a c3p0 DataSource simultaneously, no more than 5 of them will get Connections at first. The remaining clients will wait() until Connections are returned (or c3p0.checkoutTimeout has expired).
Some caveats: c3p0 enforces maxPoolSize as described above, but there is no guarantee that, even if only a single per-auth pool is used, you won't occasionally see more than maxPoolSize Connections checked out. For example, c3p0 expires and destroys Connections asynchronously. As far as c3p0 is concerned, a Connection is gone once it has been made unavailable to clients and marked for destruction, not when it has actually been destroyed. So it is possible, if maxPoolSize is 5, that you'd occasionally observe 6 open Connections at the database: 5 Connections would be active in the pool, while the 6th is queued for destruction but not yet destroyed.
Another circumstance where you might see unexpectedly many Connections open is if you modify Connection pool properties at runtime. In actual fact, the configuration of interior Connection pools is immutable. When you "change" a pool parameter at runtime, what actually happens is that a new pool is started with the new configuration and the old pool is put into a "wind-down" mode. Connections checked out of the old pool remain live and valid, but when they are checked in, they are destroyed. Only when all old-pool Connections have been checked back in is the old pool truly dead.
So, if you have a pool with maxPoolSize Connections checked out and then alter a configuration parameter, you might transiently see a spike of up to (2 * maxPoolSize) Connections if the new pool is hit with lots of traffic before the Connections checked out of the old pool have been returned. In practice this is rarely an issue, as dynamic reconfiguration is not so common, and Connection checkouts ought to be (and usually are) very brief, so the old-pool Connections disappear rapidly. But it can happen!
I hope this helps.
P.S. acquireIncrement is best set to something larger than 1. An acquireIncrement of 1 means no Connections are prefetched ahead of demand, so whenever load increases, some Thread will directly experience the latency of Connection acquisition.
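Concretely, those two points could be reflected in the bean above roughly like this (the values are illustrative, not recommendations):
<property name="maxPoolSize"><value>5</value></property>
<!-- prefetch a few Connections ahead of demand -->
<property name="acquireIncrement"><value>3</value></property>
<!-- fail checkouts after 30 s instead of waiting forever when the pool is exhausted -->
<property name="checkoutTimeout"><value>30000</value></property>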

Configuring a duplex connector for linking with Apollo Broker

I have an Apollo broker configured as a STOMP server. Now I want to configure an ActiveMQ broker that links to the Apollo broker and enables message propagation in both directions.
That is, I want the Apollo broker and the ActiveMQ broker to work both as consumers and producers.
Will this networkConnector configuration at the ActiveMQ broker meet my requirement?
<networkConnectors>
<networkConnector name="linkToApolloBroker"
uri="static:(stomp://apollo_broker_ip:61000)"
networkTTL="3"
duplex="true" />
</networkConnectors>
<persistenceAdapter>
<kahaDB directory="${activemq.data}/dynamic-broker1/kahadb"/>
</persistenceAdapter>
...
<transportConnectors>
<transportConnector name="stomp" uri="stomp://0.0.0.0:61613"/>
</transportConnectors>
Actually, I need Apollo to provide services for the web while passing messages to and from the ActiveMQ broker. If I have two brokers talking to each other, their local clients can have direct access to the locally persisted queues and, to an extent, remain immune to network fluctuations.
There is no network-of-brokers interoperability between ActiveMQ and Apollo, so the configuration you have won't work. You might, however, be able to configure a bridge between the two using the JMS bridge feature of ActiveMQ, since Apollo does support OpenWire.
Have a look at the JMS to JMS bridge documentation.
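A sketch of what such a bridge might look like inside the ActiveMQ broker configuration (the Apollo OpenWire port and the queue names are assumptions):
<!-- connects to Apollo over OpenWire -->
<bean id="apolloConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://apollo_broker_ip:61616"/>
</bean>
<jmsBridgeConnectors>
<jmsQueueConnector outboundQueueConnectionFactory="#apolloConnectionFactory">
<!-- pull messages from Apollo into the local broker -->
<inboundQueueBridges>
<inboundQueueBridge inboundQueueName="from.apollo"/>
</inboundQueueBridges>
<!-- push local messages out to Apollo -->
<outboundQueueBridges>
<outboundQueueBridge outboundQueueName="to.apollo"/>
</outboundQueueBridges>
</jmsQueueConnector>
</jmsBridgeConnectors>
Unlike a network connector, each bridged destination has to be listed explicitly.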
Apache Camel is also a potential solution to your problem; you can probably create a Camel route that does what you want.