How to monitor PooledConnectionFactory (via JMX?) - activemq

I have a client app that is consuming from a queue in an activemq cluster. The app is running in tomcat 7 and uses camel (v2.10.3) and spring 3.1.2. I use a PooledConnectionFactory to connect.
Everything works for a while (sometimes days), but then all of the connections in the pool go away (the ActiveMQ broker web console shows no consumers). I figured it was the idleTimeout issue, but adding the suggested config didn't help. I also upgraded to activemq-pool-5.10.0.jar, but no luck.
So I'm trying to find out what is going on and was hoping to use JMX, but I cannot find any related MBeans (via JConsole) that the pool registers. Is there a way to monitor/control the pool via JMX (or another/better way)?
My config fyi:
<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMWSslConnectionFactory">
<property name="brokerURL" value="failover://ssl://...."/>
</bean>
<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory" init-method="start" destroy-method="stop">
<property name="connectionFactory" ref="jmsConnectionFactory"/>
<property name="idleTimeout" value="0"/>
</bean>

As simple as it sounds, I don't see any option other than turning on TRACE level logging for that class. Check out the logs from this question.
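If log scraping is not enough, another option is to poll the pool yourself and log (or expose via your own MBean) the connection count. A minimal sketch, assuming the getNumConnections()/getMaxConnections() accessors available on the 5.10.x pool classes:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.activemq.pool.PooledConnectionFactory;

// Periodically logs pool statistics so a vanishing pool shows up without TRACE logging.
// getNumConnections()/getMaxConnections() are assumed to be present (activemq-pool 5.10.x).
public class PoolMonitor {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start(final PooledConnectionFactory pool) {
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                System.out.println("Pooled connections: " + pool.getNumConnections()
                        + " of max " + pool.getMaxConnections());
            }
        }, 0, 60, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdown();
    }
}
Wiring this up as a Spring bean with init-method="start" and destroy-method="stop" next to the pooledConnectionFactory bean would be one way to hook it in.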

Related

Ignite Thin Client in Kubernetes

I'm trying to set up a distributed cache using Ignite and my Java app through a thin client in a Kubernetes environment.
In my Kubernetes cluster, I have 2 pods with the Java app and 2 Ignite pods. For the Java pods to communicate with the Ignite pods, I have configured a thin client to connect to the Ignite Kubernetes service. With this configuration, I was expecting the load balancing to be handled on the Kubernetes side. Here's what I have done in Java code:
ClientConfiguration cfg = new ClientConfiguration()
.setAddresses("ignite-service.default.svc.cluster.local:10800")
.setUserName("user")
.setUserPassword("password");
IgniteClient igniteClient = Ignition.startClient(cfg);
While storing and getting objects from Ignite, I deleted one of the Ignite pods and, after a while, started getting errors saying "Ignite cluster is unavailable":
org.apache.ignite.client.ClientConnectionException: Ignite cluster is unavailable
With this behavior, I assume that the setAddresses method in the ClientConfiguration class stores the IP of one of the pods and channels all communication to that pod.
Is this what's happening in this method?
Ignite version 2.7
Kubernetes version 1.12.3
You need to pass several IP addresses to enable failover (i.e. automatic reconnect) on the thin client side. Find more details here.
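For example (a sketch; the pod/service hostnames below are placeholders, not the actual addresses from the question), listing each server endpoint lets the thin client retry the remaining nodes when one pod disappears:
import org.apache.ignite.Ignition;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientFailoverExample {
    public static void main(String[] args) throws Exception {
        // Listing every reachable server endpoint enables client-side failover;
        // the hostnames below are placeholders for the individual Ignite pod addresses.
        ClientConfiguration cfg = new ClientConfiguration()
                .setAddresses("ignite-0.ignite-service:10800", "ignite-1.ignite-service:10800")
                .setUserName("user")
                .setUserPassword("password");

        try (IgniteClient client = Ignition.startClient(cfg)) {
            client.getOrCreateCache("demo").put(1, "value"); // normal cache work
        }
    }
}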
You might have resolved the issue since the question was posted a long time back, but I'm still putting an answer here for others.
With Apache Ignite version 2.7+, you can modify your deployment to use the Kubernetes IP finder. With this, Kubernetes will take care of discovering and connecting all server and client nodes.
The TcpDiscoveryKubernetesIpFinder module will help you achieve this.
This is the discovery SPI that needs to be added to your configuration (replace with the appropriate namespace and service name):
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<constructor-arg>
<bean class="org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration">
<property name="namespace" value="default" />
<property name="serviceName" value="ignite" />
</bean>
</constructor-arg>
</bean>
</property>
</bean>
</property>
Official documentation can be found here - https://ignite.apache.org/docs/latest/installation/kubernetes/amazon-eks-deployment
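For completeness, a programmatic equivalent of that XML could look like the sketch below (the namespace/service name are assumptions, and the constructor-based KubernetesConnectionConfiguration shown here matches the newer ignite-kubernetes module used in the XML above):
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.kubernetes.configuration.KubernetesConnectionConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class KubernetesDiscoveryConfig {
    public static void main(String[] args) {
        // Point the IP finder at the Kubernetes service that fronts the Ignite pods.
        KubernetesConnectionConfiguration k8sCfg = new KubernetesConnectionConfiguration();
        k8sCfg.setNamespace("default");
        k8sCfg.setServiceName("ignite");

        TcpDiscoverySpi discoverySpi = new TcpDiscoverySpi()
                .setIpFinder(new TcpDiscoveryKubernetesIpFinder(k8sCfg));

        Ignition.start(new IgniteConfiguration().setDiscoverySpi(discoverySpi));
    }
}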

ActiveMQ connection in Fabric8 using Blueprint instead of DS

In Fabric8, the preferred way to obtain an ActiveMQ connection is via the mq-fabric profile, which provides an ActiveMQConnection object via Declarative Services. An example of this is given on GitHub, which works just fine.
However, I've yet to find a way for Declarative Services and Blueprint services to collaborate in Fabric8 (or any OSGi environment, really); thus, my OSGi application must use either DS or Blueprint. Mixing both doesn't seem to be an option.
If you want to use Blueprint (which I do), you must first create a broker through the web UI, then go back to the console, type cluster-list, find the port that Fabric8 assigned to the broker, and then configure a connection in Blueprint like so:
<bean id="activemqConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="tcp://mydomain:33056" />
<property name="userName" value="admin" />
<property name="password" value="admin" />
</bean>
While this does work, it's not exactly deployment-friendly, as it involves a few manual steps that I'd like to avoid if possible. The main issue is that I don't know what that port is going to be. I've combed through the config files and couldn't find it anywhere.
Is there a cleaner, more automated way to obtain an ActiveMQ connection in Fabric8 via blueprint, or must we use Declarative Services?
Stumbled across a solution to this issue in the fabric-camel-demo, which illustrates how to instantiate an ActiveMQConnectionFactory bean in Fabric8 via Blueprint.
<!-- use the fabric protocol in the brokerURL to connect to the ActiveMQ broker registered as default name -->
<!-- notice we could have used amq as the component name in Camel, and avoid any configuration at all,
as the amq component is provided out of the box when running in fabric -->
<bean id="jmsConnectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="discovery:(fabric:default)"/>
<property name="userName" value="admin"/>
<property name="password" value="admin"/>
</bean>
Hope this helps!

Properly Shutting Down ActiveMQ and Spring DefaultMessageListenerContainer

Our system will not shutdown when a "Stop" command is issued from the Tomcat Manager. I have determined that it is related to ActiveMQ/Spring. I have even figured out how to get it to shutdown, however my solution is a hack (at least I hope this isn't the "correct" way to do it). I would like to know the proper way to shutdown ActiveMQ so that I can remove my hack.
I inherited this component and have no information about why certain architectural decisions were made. After a lot of digging I think I understand the original author's thinking, but I could be missing something. In other words, the real problem could be in the way that we are trying to use ActiveMQ/Spring.
We run in a servlet container (Tomcat 6/7) and use ActiveMQ 5.9.1 and Spring 3.0.0. Multiple instances of our application can run in a "group", with each instance running on its own server. ActiveMQ is used to facilitate communication between the multiple instances. Each instance has its own embedded broker and its own set of queues. Every queue on every instance has exactly one org.springframework.jms.listener.DefaultMessageListenerContainer listening to it, so 5 queues = 5 DefaultMessageListenerContainers, for example.
Our system shut down properly until we fixed a bug by adding queuePrefetch="0" to the ConnectionFactory. At first I assumed that this change was incorrect in some way, but now that I understand the situation, I am confident that we should not be using the prefetch functionality.
I have created a test application to replicate the issue. Note that the information below makes no mention of message producers. That is because I can replicate the issue without ever sending/processing a single message. Simply creating the broker, ConnectionFactory, queues and listeners during boot is enough to keep the system from stopping properly.
Here is my sample configuration from my Spring XML. I will be happy to provide my entire project if someone wants it:
<amq:broker persistent="false" id="mybroker">
<amq:transportConnectors>
<amq:transportConnector uri="tcp://0.0.0.0:61616"/>
</amq:transportConnectors>
</amq:broker>
<amq:connectionFactory id="ConnectionFactory" brokerURL="vm://localhost?broker.persistent=false" >
<amq:prefetchPolicy>
<amq:prefetchPolicy queuePrefetch="0"/>
</amq:prefetchPolicy>
</amq:connectionFactory>
<amq:queue id="lookup.mdb.queue.cat" physicalName="DogQueue"/>
<amq:queue id="lookup.mdb.queue.dog" physicalName="CatQueue"/>
<amq:queue id="lookup.mdb.queue.fish" physicalName="FishQueue"/>
<bean id="messageListener" class="org.springframework.jms.listener.DefaultMessageListenerContainer" abstract="true">
<property name="connectionFactory" ref="ConnectionFactory"/>
</bean>
<bean parent="messageListener" id="cat">
<property name="destination" ref="lookup.mdb.queue.dog"/>
<property name="messageListener">
<bean class="com.acteksoft.common.remote.jms.WorkerMessageListener"/>
</property>
<property name="concurrentConsumers" value="200"/>
<property name="maxConcurrentConsumers" value="200"/>
</bean>
<bean parent="messageListener" id="dog">
<property name="destination" ref="lookup.mdb.queue.cat"/>
<property name="messageListener">
<bean class="com.acteksoft.common.remote.jms.WorkerMessageListener"/>
</property>
<property name="concurrentConsumers" value="200"/>
<property name="maxConcurrentConsumers" value="200"/>
</bean>
<bean parent="messageListener" id="fish">
<property name="destination" ref="lookup.mdb.queue.fish"/>
<property name="messageListener">
<bean class="com.acteksoft.common.remote.jms.WorkerMessageListener"/>
</property>
<property name="concurrentConsumers" value="200"/>
<property name="maxConcurrentConsumers" value="200"/>
</bean>
My hack involves using a ServletContextListener to manually stop the objects. The hacky part is that I have to create additional threads to stop the DefaultMessageListenerContainers. Perhaps I'm stopping the objects in the wrong order, but I've tried everything that I can imagine. If I attempt to stop the objects in the main thread, then they hang indefinitely.
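For illustration only, a stripped-down version of the kind of listener described above (bean lookups and class names here are assumptions, not the actual code) might look like this:
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.springframework.context.ApplicationContext;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.web.context.support.WebApplicationContextUtils;

// Rough sketch of the shutdown hack: stop each listener container from its own
// thread so the main shutdown thread doesn't hang (broker shutdown omitted).
public class JmsShutdownListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do at startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        ApplicationContext ctx =
                WebApplicationContextUtils.getWebApplicationContext(sce.getServletContext());
        for (final DefaultMessageListenerContainer container
                : ctx.getBeansOfType(DefaultMessageListenerContainer.class).values()) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    container.stop();    // hangs if called from the main shutdown thread
                    container.destroy();
                }
            }).start();
        }
    }
}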
Thank you in advance!
UPDATE
I have tried the following based on boday's recommendation but it didn't work. I have also tried to specify the amq:transportConnector uri as tcp://0.0.0.0:61616?transport.daemon=true
<amq:broker persistent="false" id="mybroker" brokerName="localhost">
<amq:transportConnectors>
<amq:transportConnector uri="tcp://0.0.0.0:61616?daemon=true"/>
</amq:transportConnectors>
</amq:broker>
<amq:connectionFactory id="connectionFactory" brokerURL="vm://localhost" >
<amq:prefetchPolicy>
<amq:prefetchPolicy queuePrefetch="0"/>
</amq:prefetchPolicy>
</amq:connectionFactory>
At one point I tried to add similar properties to the brokerURL parameter in the amq:connectionFactory element and the shutdown worked properly; however, after further testing I learned that the properties resulted in an exception being thrown from VMTransportFactory. This resulted in improper initialization and basic message functionality didn't work.
In case anyone else is wondering, as far as I can see it's not possible to have a daemon ListenerContainer using ActiveMQ.
When the ActiveMQConnection is started, it creates a ThreadPoolExecutor with a non-daemon thread. This is seemingly to avoid issues when failing over the connection from one broker to another.
https://issues.apache.org/jira/browse/AMQ-796
executor = new ThreadPoolExecutor(1, 1, 5, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>(), new ThreadFactory() {
@Override
public Thread newThread(Runnable r) {
Thread thread = new Thread(r, "ActiveMQ Connection Executor: " + transport);
//Don't make these daemon threads - see https://issues.apache.org/jira/browse/AMQ-796
//thread.setDaemon(true);
return thread;
}
});
Try setting daemon=true on your TCP transport; this allows it to run as a daemon thread, which won't block the shutdown of your container.
see http://activemq.apache.org/tcp-transport-reference.html

Restarting activemq route in camel when receiving a "Connection.start()" message

I have an ActiveMQ Camel route that stops receiving messages at some point and requires a restart. I am not sure how to programmatically detect and fix this situation.
My route looks like so:
from("activemq:queue:Consumer.app.VirtualTopic.msg?messageConverter=#convertMsg")
It's all configured in Spring like so:
<!-- Configure the Message Bus Factory -->
<bean id="jmsFactory" class="com.local.messaging.activemq.SpringSslContextConnectionFactory">
<property name="brokerURL" value="${jms.broker.url}" />
<property name="sslContext" ref="sslContext" />
</bean>
<!-- Connect the Message Bus Factory to Camel. The 'activemq' bean
name is necessary for Camel to pick it up automatically -->
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent" depends-on="jmsFactory">
<property name="usePooledConnection" value="true" />
<property name="connectionFactory">
<bean class="org.apache.activemq.pool.PooledConnectionFactory">
<property name="maxConnections" value="20" />
<property name="maximumActive" value="10" />
<property name="connectionFactory" ref="jmsFactory" />
</bean>
</property>
</bean>
Finally, the broker URL is configured like so:
jms.broker.url=failover://(ssl://amq1:61616,ssl://amq1:61616)
This starts up just fine and works like a champ most of the time. Every so often, though, I see this message in the logs:
Received a message on a connection which is not yet started. Have you forgotten to call Connection.start()? Connection: ActiveMQConnection {<details>}
I strongly suspect that this happens after the message bus has restarted, but as I have no direct access to the message bus, I don't know that for certain. I'm also not sure whether that matters.
The keys for me are:
How do I programmatically detect this situation? There doesn't appear to be any exception thrown or the like and the only way I've seen this is by parsing through log files.
After detecting it, how do I fix it? Do I need to start() and stop() the route or is there a cleaner way?
Finally, I did see some suggestions that this case should be handled by activemq, using the failover scheme. As shown above, I am using failover and this still happens.
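On the second point, one way to bounce a route programmatically in Camel 2.x is via the CamelContext API. A minimal sketch, assuming the route was given an id with .routeId("msgRoute") (not shown in the original route):
import org.apache.camel.CamelContext;

// Sketch: restart a suspect route by id so its JMS consumer is re-created.
// Assumes the route definition was given .routeId("msgRoute").
public class RouteRestarter {

    private final CamelContext camelContext;

    public RouteRestarter(CamelContext camelContext) {
        this.camelContext = camelContext;
    }

    public void restart() throws Exception {
        camelContext.stopRoute("msgRoute");   // stops the consumer on the queue
        camelContext.startRoute("msgRoute");  // re-creates it on a fresh connection
    }
}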

How to consume message from the endpoint in Apache Camel?

I created a message with a topic name, set some information as key/value pairs, and sent the message to the message bus (i.e., produced the message to an endpoint; in my case the endpoint is a message bus).
How can I consume the message from that endpoint? I know the URI and the endpoint. What configuration needs to be done for my consumer (any Camel XML changes to be made)?
Please help.
See the camel-jms page for details, but you basically need some basic Spring XML to configure the ActiveMQ connection and then establish your route...
from("activemq:queue:inboundQueue").bean(MyConsumerBean.class);
<bean id="activemq" class="org.apache.activemq.camel.component.ActiveMQComponent">
<property name="connectionFactory">
<bean class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="vm://localhost?broker.persistent=false&broker.useJmx=false"/>
</bean>
</property>
</bean>
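The bean referenced in the route can be a plain POJO; Camel's bean binding will call its method with the converted message body. A minimal sketch (the class name just matches the placeholder used above, and a String body is assumed):
// Plain POJO consumer: Camel invokes the single public method with the message body.
public class MyConsumerBean {

    public void handle(String body) {
        System.out.println("Received: " + body);
    }
}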
See these unit tests for more information...
https://svn.apache.org/repos/asf/camel/trunk/components/camel-jms/src/test/java/org/apache/camel/component/jms/JmsRouteTest.java
https://svn.apache.org/repos/asf/camel/trunk/components/camel-jms/src/test/resources/org/apache/camel/component/jms/jmsRouteUsingSpring.xml