Spring AMQP Listener Timeout - rabbitmq

I have a requirement for a Spring AMQP listener timeout capability. We send a message from a producer; the Spring AMQP consumer listener thread receives the message but takes a long time to execute and hangs, which eventually renders the listener thread unusable.
So is there any consumer timeout setting provided by Spring AMQP, so that the listener thread is freed again after a given timeout?

You can indeed configure the timeout using spring-amqp; here is how.
<bean id="connectionFactory" class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
<property name="connectionTimeout" value="1000" /> <!-- in milliseconds -->
</bean>
<!-- concurrency and recovery settings belong on the listener container, not the connection factory -->
<bean id="listenerContainer" class="org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer">
<property name="connectionFactory" ref="connectionFactory" />
<property name="concurrentConsumers" value="16" />
<property name="recoveryInterval" value="5000" /> <!-- in milliseconds -->
</bean>
NOTE: If you have a limited consumer count, use manual acks, and for some reason never send the ack back, a timeout can occur; it means you are holding the thread and not releasing it, which will also hurt your performance.
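The thread-holding effect described in this note can be sketched with plain Java, independent of any broker (the class name and the numbers are illustrative only):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class StuckConsumerDemo {
    // Returns how many queued "messages" are handled within 500 ms when one
    // listener task hangs for hangMs on a fixed pool of poolSize threads.
    public static int processed(int poolSize, int messages, long hangMs)
            throws InterruptedException {
        ExecutorService consumers = Executors.newFixedThreadPool(poolSize);
        CountDownLatch handled = new CountDownLatch(messages);
        // the "hung" listener: occupies one consumer thread indefinitely
        consumers.submit(() -> {
            try {
                Thread.sleep(hangMs);
            } catch (InterruptedException ignored) {
            }
        });
        for (int i = 0; i < messages; i++) {
            consumers.submit(handled::countDown); // quick messages
        }
        handled.await(500, TimeUnit.MILLISECONDS);
        consumers.shutdownNow(); // interrupt the hung task so the JVM can exit
        return (int) (messages - handled.getCount());
    }
}
```

With a pool of one consumer thread, a single hung listener starves every later message; a second thread keeps the queue draining.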
More detail in the Spring AMQP reference documentation and API javadocs.

If the thread is stuck in your code, there is nothing the container can do to free it up. If it's in interruptible code, you could interrupt the thread.
If it's stuck in uninterruptible code, you are out of luck.
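For the interruptible case, a minimal sketch of a self-interrupting listener (the class, method names, and timeout handling here are illustrative, not Spring AMQP API):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class TimeoutListener {
    // daemon watchdog thread so it never keeps the JVM alive
    private final ScheduledExecutorService watchdog =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "listener-watchdog");
                t.setDaemon(true);
                return t;
            });

    // Interrupt the listener thread if handling exceeds timeoutMs.
    public void onMessage(String body, long timeoutMs) {
        final Thread worker = Thread.currentThread();
        ScheduledFuture<?> timeout =
                watchdog.schedule(worker::interrupt, timeoutMs, TimeUnit.MILLISECONDS);
        try {
            process(body); // may block in interruptible code (sleep/wait/IO)
        } catch (InterruptedException e) {
            // thread is freed; restore the interrupt flag for the container
            Thread.currentThread().interrupt();
        } finally {
            timeout.cancel(false); // processing finished in time
        }
    }

    private void process(String body) throws InterruptedException {
        Thread.sleep(5); // stand-in for real work
    }
}
```

This only helps when the work is stuck in interruptible calls; a thread spinning in uninterruptible code will ignore the interrupt, as the answer says.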

Related

How to monitor ActiveMQ queue in Apache James

We're using Apache James 3.0-beta4, which uses an embedded ActiveMQ 5.5.0 for its FIFO message queue, and sometimes messages get stuck. Therefore, we need to monitor it. Is there any way to monitor an ActiveMQ queue, e.g. its queue size and (if possible) the most recent message-id?
In the JAMES spring-server.xml I found that:
<amq:broker useJmx="true" persistent="true" brokerName="james" dataDirectory="filesystem=file://var/store/activemq/brokers" useShutdownHook="false" schedulerSupport="false" id="broker">
<amq:destinationPolicy>
<amq:policyMap>
<amq:policyEntries>
<!-- Support priority handling of messages -->
<!-- http://activemq.apache.org/how-can-i-support-priority-queues.html -->
<amq:policyEntry queue=">" prioritizedMessages="true"/>
</amq:policyEntries>
</amq:policyMap>
</amq:destinationPolicy>
<amq:managementContext>
<amq:managementContext createConnector="false"/>
</amq:managementContext>
<amq:persistenceAdapter>
<amq:amqPersistenceAdapter/>
</amq:persistenceAdapter>
<amq:plugins>
<amq:statisticsBrokerPlugin/>
</amq:plugins>
<amq:transportConnectors>
<amq:transportConnector uri="tcp://localhost:0" />
</amq:transportConnectors>
</amq:broker>
There is also an old note from the README:
- Telnet Management has been removed in favor of JMX with client shell
- More metrics counters available via JMX
...
* Monitor via JMX (launch any JMX client and connect to URL=service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi)
which leaves me confused about how to use it.
This is part of a bigger "monolith" project which is now being recreated as microservices but still needs to be supported ;) All was fine until mid-March.
It looks like remote ActiveMQ management and monitoring is not possible here because the broker's JMX connector is disabled (createConnector="false" in the managementContext).
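If a remote JMX connector is reachable, the queue attributes can be read with a plain JMX client. A sketch, assuming the README's service URL and ActiveMQ 5.5's object-name layout (the QueueMonitor class and the "spool" queue name are made up; QueueSize is a real attribute of ActiveMQ's QueueViewMBean):

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueMonitor {
    // Object-name layout used by ActiveMQ 5.5.x; newer versions use
    // "type=Broker,brokerName=...,destinationType=Queue,destinationName=..."
    static ObjectName queueName(String broker, String queue) throws Exception {
        return new ObjectName("org.apache.activemq:BrokerName=" + broker
                + ",Type=Queue,Destination=" + queue);
    }

    public static void main(String[] args) throws Exception {
        // URL taken from the README quote; requires a running JMX connector
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = jmxc.getMBeanServerConnection();
            ObjectName queue = queueName("james", "spool");
            System.out.println("QueueSize = "
                    + conn.getAttribute(queue, "QueueSize"));
        }
    }
}
```

Any generic JMX client (jconsole, for instance) can browse the same MBeans interactively.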

Issue while sending messages in loop to windows rabbitmq broker

My Setup
I have a situation where I am sending some 15 messages in a loop from one machine to another via RabbitMQ.
There is a NAT setup between the sending and the receiving machine.
I am using Spring AMQP (spring-rabbit) for all RabbitMQ operations.
On the receiving machine, I sometimes lose 2 messages, which are never received even after waiting for a long time.
I also don't see any messages accumulated in the queue (on either the sending or the receiving machine).
There is also only one listener for the queue on the receiving machine.
My Question
If I send messages in a loop to RabbitMQ, is there any chance that it rejects some messages if it can't handle them? The overall size of the 15 messages is close to 8 MB.
I don't see any exceptions even after I send the messages to RabbitMQ.
SENDING MACHINE CODE
private RabbitTemplate rabbitTemplate; // injected

@Override
public boolean send(final Message message, final String routingKey)
        throws SinecnmsMessagingException {
    rabbitTemplate.send(routingKey, message);
    return true;
}
RECEIVING MACHINE CODE
<rabbit:listener-container
connection-factory="connectionFactory">
<rabbit:listener ref="onMessageCommand"
queue-names="TestQueue" />
</rabbit:listener-container>
<bean id="onMessageCommand"
class="com.test.OnMessageListner">
<property name="callBackObject" ref="callbackEvent" />
<property name="template" ref="amqpTemplate" />
</bean>
<bean id="callbackEvent" class="com.test.SettingsListener"></bean>
OnMessageListner implements MessageListener.
I receive the messages in the SettingsListener class. This works fine for me in all the other code I have developed; I am observing this issue only in the use case described here.
So does it mean that the publisher-confirms concept was introduced because RabbitMQ may sometimes "reject/not accept" messages? With publisher confirms we could know that the first message was received by the RabbitMQ broker before sending the second message.
Can we conclude this?
No, you cannot; waiting for each confirmation would slow down publishing. Confirms are designed so that you send a batch of messages and then wait for the confirms.
They were not introduced because RabbitMQ may sometimes "reject/not accept" messages. Publishing with RabbitMQ is asynchronous, so a publish is generally "successful" - but anything can happen between sending the message and it arriving at the broker. If the connection is lost, the client is told, but that is too late for the publisher, whose send has already completed successfully.
NAT should make no difference but, perhaps, some flaky network router might be the problem.
You can use a network monitor (e.g. Wireshark) to see what's happening.
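For completeness, confirms can be switched on at the connection factory. A sketch in the same XML style as the question, assuming a Spring AMQP 1.x-era CachingConnectionFactory (the publisherConfirms property; recent versions use publisherConfirmType instead), with a placeholder host:

```xml
<bean id="connectionFactory" class="org.springframework.amqp.rabbit.connection.CachingConnectionFactory">
<property name="host" value="localhost" />
<property name="publisherConfirms" value="true" />
</bean>
<bean id="amqpTemplate" class="org.springframework.amqp.rabbit.core.RabbitTemplate">
<property name="connectionFactory" ref="connectionFactory" />
</bean>
```

A ConfirmCallback registered on the RabbitTemplate (via setConfirmCallback) then receives the asynchronous acks/nacks, so the send loop itself never blocks.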

Unable to undeploy mule flows if it has JMS Retry connection

I am using Mule Community 3.8. I have a scenario where I need to connect to MQ via JMS with a connection retry strategy in forever mode. It works fine in the happy path, e.g. when an MQ/channel restart happens.
But when the queue manager is down and Mule keeps trying to connect forever, I am not able to undeploy the flows; even if I remove the flow directory and the anchor file, it still retries.
I feel this is an open bug in Mule. Can anyone confirm whether this is the existing behaviour?
Code snippet
<spring:beans>
<spring:bean id="ConnectionFactory" name="ConnectionFactory" class="com.ibm.mq.jms.MQConnectionFactory">
</spring:bean>
</spring:beans>
<jms:connector name="jms-conn" username="xxxx" password="xxxx" specification="1.1" connectionFactory-ref="ConnectionFactory" validateConnections="true" numberOfConsumers="1" persistentDelivery="true">
<reconnect-forever frequency="30000" />
</jms:connector>
Moreover, I am able to undeploy the flow if I use blocking=false on the jms:connector, but I really don't want that feature in my use case.
The JMS retry option works in a single-threaded model, so it can hold the new thread until the connection is successfully re-established.

Using 'jms.useAsyncSend=true' for some messages and verifying it on the consumer side

We are using the embedded ActiveMQ broker within Fuse ESB (7.1.0), with consumers co-located.
The producer client is deployed on a remote GlassFish server. It uses the ActiveMQ resource adapter (5.6.0) along with Spring's JmsTemplate. The client publishes messages to different queues. I want some of the messages (those going to a given queue) to use jms.useAsyncSend=true, whereas the other messages should use the default. I see the options below:
1) I can't append the option 'jms.useAsyncSend=true' to the resource adapter URL because that would enforce it for all messages.
2) The JNDI look-up for the connection factory returns an instance of 'org.apache.activemq.ra.ActiveMQConnectionFactory'. I was actually expecting an instance of org.apache.activemq.ActiveMQConnectionFactory, which would have allowed me to call setUseAsyncSend() for the corresponding JmsTemplate. So I can't use this option either.
3) I have multiple connection factories configured under the GlassFish connectors (one for each queue). I am trying to pass 'jms.useAsyncSend=true' as an additional property to one particular connection factory, expecting it to be used only for the connections created in that connection pool. Having done this, I want to verify that it really worked.
Question 1) Is there a way to check on the consumer side whether 'useAsyncSend' was set for an inbound message? This is to verify that what I have done on the producer side has actually worked. Note that I am using a Camel context to route messages to the end consumers. Is there a way to check this within the camel-context? Is there a header or anything similar corresponding to this?
Question 2) Is there a better way to set 'useAsyncSend' on the producer side when one resource adapter is used for sending messages to different queues with different values for 'useAsyncSend'?
I understand that 'useAsyncSend' is an ActiveMQ-specific setting, hence not available on the JmsTemplate interface.
Appreciate any help on this.
thanks
I don't know Fuse ESB, but I have created different ActiveMQ connections with different properties (in my case it was the RedeliveryPolicy). You then direct your producers and consumers to use the appropriate connection.
So, if using Spring xml to define your connections, it would look something like this:
<!-- Connection factory with useAsyncSend=true -->
<bean id="asyncCF" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="${broker.url}" />
<property name="useAsyncSend" value="true" />
</bean>
<!-- Connection factory with useAsyncSend=false -->
<bean id="syncCF" class="org.apache.activemq.ActiveMQConnectionFactory">
<property name="brokerURL" value="${broker.url}" />
<property name="useAsyncSend" value="false" />
</bean>
Or specify useAsyncSend as a query parameter on the brokerURL value itself, e.g. tcp://localhost:61616?jms.useAsyncSend=true (host and port being whatever your broker uses).
This might help you with Question 2.

Does c3p0 connection pooling ensures max pool size?

I've gone through several questions; this one is somewhat related but doesn't answer my question.
Does c3p0's maxPoolSize ensure that the number of connections at any given time never exceeds this limit? What happens if maxPoolSize=5 and 10 users start using the app at exactly the same time?
My app. configurations
<bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
<property name="driverClass"><value>${database.driverClassName}</value></property>
<property name="jdbcUrl"><value>${database.url}</value></property>
<property name="user"><value>${database.username}</value></property>
<property name="password"><value>${database.password}</value></property>
<property name="initialPoolSize"><value>${database.initialPoolSize}</value></property>
<property name="minPoolSize"><value>${database.minPoolSize}</value></property>
<property name="maxPoolSize"><value>${database.maxPoolSize}</value></property>
<property name="idleConnectionTestPeriod"><value>200</value></property>
<property name="acquireIncrement"><value>1</value></property>
<property name="maxStatements"><value>0</value></property>
<property name="numHelperThreads"><value>3</value></property>
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource"/>
</bean>
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
<property name="entityManagerFactory" ref="entityManagerFactory"/>
<property name="dataSource" ref="dataSource"/>
</bean>
It is important to distinguish between DataSources and Connection pools.
maxPoolSize is enforced by c3p0 on a per-pool basis, but a single DataSource may own multiple Connection pools, as there is (and must be) a distinct pool for each set of authentication credentials. If only the default dataSource.getConnection() method is ever called, then maxPoolSize will be the maximum number of Connections that the pool acquires and manages. However, if Connections are acquired using dataSource.getConnection( user, password ), then the DataSource may hold up to (maxPoolSize * num_distinct_users) Connections.
To answer your specific question: if maxPoolSize is 5 and 10 clients hit a c3p0 DataSource simultaneously, no more than 5 of them will get Connections at first. The remaining clients will wait() until Connections are returned (or c3p0.checkoutTimeout has expired).
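To bound that wait, c3p0's checkoutTimeout property could be added to the dataSource bean above (the property and its semantics are real; the 5000 is only an illustrative value, in milliseconds, with 0 meaning wait indefinitely):

```xml
<property name="checkoutTimeout"><value>5000</value></property>
```

With this set, the waiting clients in the example would get an SQLException after 5 seconds instead of blocking indefinitely.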
Some caveats: c3p0 enforces maxPoolSize as described above, but there is no guarantee that, even if only a single per-auth pool is used, you won't occasionally see more than maxPoolSize Connections checked out. For example, c3p0 expires and destroys Connections asynchronously. As far as c3p0 is concerned, a Connection is gone once it has been made unavailable to clients and marked for destruction, not when it has actually been destroyed. So it is possible, if maxPoolSize is 5, that you'd occasionally observe 6 open Connections at the database: 5 would be active in the pool, while the 6th is queued for destruction but not yet destroyed.
Another circumstance where you might see unexpectedly many open Connections is if you modify Connection pool properties at runtime. In actual fact, the configuration of interior Connection pools is immutable. When you "change" a pool parameter at runtime, what actually happens is that a new pool is started with the new configuration and the old pool is put into a "wind-down" mode: Connections checked out of the old pool remain live and valid, but when they are checked in, they are destroyed. Only when all old-pool Connections have been checked back in is the old pool truly dead.
So, if you have a pool with maxPoolSize Connections checked out and then alter a configuration parameter, you might transiently see a spike of up to (2 * maxPoolSize) Connections, if the new pool is hit with lots of traffic before the Connections checked out of the old pool have been returned. In practice this is very rarely an issue, as dynamic reconfiguration is not so common, and Connection checkouts ought to be (and usually are) very brief, so old-pool Connections disappear rapidly. But it can happen!
I hope this helps.
PS: acquireIncrement is best set to something larger than 1. An acquireIncrement of 1 means no Connections are prefetched ahead of demand, so whenever load increases, some Thread will directly experience the latency of Connection acquisition.