ActiveMQ broker error "Setup of JMS message listener invoker failed for destination trying to recover. Cause: The Consumer is closed"

We have observed in our environment that consumers disappear from the ActiveMQ UI. Traffic is very low: we start with 3 consumers, and each of them gets removed after an interval of a couple of hours. Once we restart our consumer application it re-establishes the connection for a couple more hours. We don't see any errors in the logs except:
Setup of JMS message listener invoker failed for destination 'queue-1' - trying to recover. Cause: The Consumer is closed
I use the AWS ActiveMQ broker and don't see any errors in the broker logs.
We use PooledConnectionFactory with ActiveMQConnectionFactory to create a pool of connections for our consumers, as recommended. We are using ActiveMQ 5.15:
@Bean
public PooledConnectionFactory pooledConnectionFactory() {
    ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory();
    activeMQConnectionFactory.setBrokerURL(brokerUrl);
    activeMQConnectionFactory.setUserName(username);
    activeMQConnectionFactory.setPassword(password);
    activeMQConnectionFactory.setTrustAllPackages(true);
    ActiveMQPrefetchPolicy activeMQPrefetchPolicy = new ActiveMQPrefetchPolicy();
    activeMQPrefetchPolicy.setQueuePrefetch(100);
    activeMQConnectionFactory.setPrefetchPolicy(activeMQPrefetchPolicy);
    PooledConnectionFactory pooledConnectionFactory = new PooledConnectionFactory(activeMQConnectionFactory);
    pooledConnectionFactory.setMaxConnections(poolSize);
    return pooledConnectionFactory;
}
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(pooledConnectionFactory());
    factory.setMessageConverter(jacksonJmsMessageConverter());
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    factory.setConcurrency("1-1");
    factory.setErrorHandler(ActiveMQErrorHandler());
    return factory;
}
@Bean
public JmsTemplate jmsTemplate() {
    JmsTemplate jmsTemplate = new JmsTemplate(pooledConnectionFactory());
    jmsTemplate.setMessageConverter(jacksonJmsMessageConverter());
    return jmsTemplate;
}

@Bean
public Queue queue() {
    return new ActiveMQQueue(queueName);
}
@Bean
public ErrorHandler ActiveMQErrorHandler() {
    // Log the throwable itself, not just a marker string, so the actual
    // cause of listener failures is visible.
    return t -> LOGGER.error("JMS_LISTENER_ERROR", t);
}

Given the information provided, it sounds as though either the connection is dropping and the client isn't reporting it, or the remote peer is closing the consumer on its end, which the pool will likely not notice until some user action is performed.
This is one of the gotchas of using a JMS pool: the pool doesn't have complete insight into what is going on with the client, so checking out a connection that has been sitting in the pool can yield a stale, no-longer-active connection, because the I/O interruption doesn't bubble up to the pool layer. One way to work around this is the ActiveMQ client's failover transport, which automatically reconnects to the broker if the connection is dropped.
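A minimal sketch of that approach; the broker host, port, and failover options below are illustrative placeholders, not values from the question:

```java
import org.apache.activemq.ActiveMQConnectionFactory;

// Sketch only: the broker endpoint and failover options are placeholders.
public class FailoverFactory {
    public static ActiveMQConnectionFactory create(String user, String pass) {
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory();
        // failover: makes the client reconnect transparently when the socket
        // drops; maxReconnectAttempts=-1 retries forever.
        cf.setBrokerURL("failover:(ssl://broker-host:61617)"
                + "?maxReconnectAttempts=-1&initialReconnectDelay=100");
        cf.setUserName(user);
        cf.setPassword(pass);
        return cf;
    }
}
```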
Another option you could try is the PooledJMS connection pooling library, which does some additional work to validate failed connections and/or closed resources sooner; pair that with a ConnectionFactory that creates ActiveMQ connections using failover, so that remotely closed resources such as consumers can be caught in some cases.
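A sketch of that combination, assuming the PooledJMS artifact (org.messaginghub:pooled-jms) is on the classpath; the broker URL and pool size are placeholders:

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.messaginghub.pooled.jms.JmsPoolConnectionFactory;

// Sketch: URL and pool size are illustrative.
public class PooledJmsFactory {
    public static JmsPoolConnectionFactory create() {
        ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory(
                "failover:(ssl://broker-host:61617)");
        JmsPoolConnectionFactory pool = new JmsPoolConnectionFactory();
        pool.setConnectionFactory(cf);
        pool.setMaxConnections(10);
        return pool;
    }
}
```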
Ultimately, though, your code will still need to handle potential failure cases from the JMS resources and retry where needed, for example senders seeing security exceptions. The pooling bits don't make all your troubles go away; in some cases they just introduce new ones you hadn't thought of yet.

Related

Handling Custom exception for rabbitmq listener connectivity issues

I have a listener which consumes messages from a third-party producer. I need to send a custom error message in case of queue connection issues.
I'm new to RabbitMQ. May I know what the ways are to handle a custom exception here?
You can use a ConnectionListener callback and inject it into a org.springframework.amqp.rabbit.connection.ConnectionFactory configuration:
void addConnectionListener(ConnectionListener listener);
https://docs.spring.io/spring-amqp/docs/current/reference/html/#connection-channel-listeners
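A sketch of wiring such a listener into a CachingConnectionFactory (the host name is a placeholder); the onClose callback is where you could emit your custom error:

```java
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.Connection;
import org.springframework.amqp.rabbit.connection.ConnectionListener;

// Sketch: the host is a placeholder.
public class RabbitConnectionConfig {
    public static CachingConnectionFactory connectionFactory() {
        CachingConnectionFactory cf = new CachingConnectionFactory("rabbit-host");
        cf.addConnectionListener(new ConnectionListener() {
            @Override
            public void onCreate(Connection connection) {
                // connection (re)established
            }

            @Override
            public void onClose(Connection connection) {
                // connection dropped: raise/log your custom error message here
            }
        });
        return cf;
    }
}
```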

Create RabbitMQ queue using spring cloud stream without a consumer or without an active consumer

Is there a way to create a RabbitMQ queue using spring cloud stream without having a consumer for the queue.
Our scenario is that we want to use the delay messaging strategy, so messages coming to the first queue would be held until expired and moved to a DLQ.
The application would be consuming the messages from the DLQ.
Wanted to check if there is a way we can use spring cloud stream to configure the queues, when we do not have a consumer for the first queue and it's just there to hold messages till expiry.
Yes; simply add a Queue bean (and binding if needed).
Boot auto configures a RabbitAdmin which will detect such beans when the connection is first established.
https://docs.spring.io/spring-amqp/docs/current/reference/html/#broker-configuration
@Bean
public Queue queue() {
    return QueueBuilder.nonDurable("foo")
            .autoDelete()
            .exclusive()
            .withArgument("foo", "bar")
            .build();
}
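For the delay pattern described in the question, a sketch of a TTL queue that dead-letters into the queue the application actually consumes; the queue names, TTL value, and exchange are all placeholders:

```java
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.QueueBuilder;

// Sketch: queue names, TTL, and dead-letter settings are illustrative.
public class DelayQueueConfig {
    public static Queue delayQueue() {
        return QueueBuilder.durable("delay-queue")
                .withArgument("x-message-ttl", 60000)          // hold for 60s
                .withArgument("x-dead-letter-exchange", "dlx") // then route here
                .withArgument("x-dead-letter-routing-key", "work-queue")
                .build();
    }
}
```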

Any reason why RabbitMQ doesn't create exchange on start application?

I'm seeing strange RabbitMQ behavior (strange to me, at least).
When I start my Spring Boot web application, my configuration tries to create a Rabbit direct exchange:
@Bean
public DirectExchange exchange() {
    return new DirectExchange(directExchangeName);
}
But when the app starts, I can't find this exchange in the RabbitMQ management UI. Interestingly, I can see this bean in the ApplicationContext.
The exchange only starts to show in the RabbitMQ management UI after the first call to it.
Am I missing something? Or could it be an issue with my configuration?
You need a RabbitAdmin @Bean to auto-declare exchanges, queues, and bindings.
And, even then, the declarations will not occur until some component (listener container, template) opens a connection; the admin registers as a connection listener.
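With Spring Boot the RabbitAdmin is auto-configured; in plain Spring you declare one yourself. A minimal sketch, assuming a ConnectionFactory bean exists elsewhere in your configuration:

```java
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.amqp.rabbit.core.RabbitAdmin;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AdminConfig {
    // The admin registers as a connection listener and declares all
    // Exchange/Queue/Binding beans when the first connection is opened.
    @Bean
    public RabbitAdmin rabbitAdmin(ConnectionFactory connectionFactory) {
        return new RabbitAdmin(connectionFactory);
    }
}
```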

Parallel Listening of messages in rabbit mq

We have a requirement where we create queues in RabbitMQ on application startup with a direct exchange, and then have to assign a single listener to each queue. We implemented that using Spring AMQP with the following configuration:
@Bean(name = {"dispatcherListener"})
public SimpleMessageListenerContainer dispatcherListener() {
    SimpleMessageListenerContainer listenerContainer = new SimpleMessageListenerContainer();
    listenerContainer.setConnectionFactory(connectionFactory());
    listenerContainer.setQueues(participantQueues());
    listenerContainer.setMessageConverter(jsonMessageConverter());
    listenerContainer.setMessageListener(subscriptionListener);
    listenerContainer.setAcknowledgeMode(AcknowledgeMode.AUTO);
    listenerContainer.setAutoStartup(false);
    return listenerContainer;
}
But then we faced a problem with the above configuration: when we publish messages to multiple queues, the listener reads the messages serially from each queue. We expected it to listen to messages from each queue in parallel, independently of the other queues.
Can someone please guide me on where I went wrong?
Any help would be appreciated.
That's the correct behavior, since the default concurrency is 1; therefore there is only one consumer thread serving all the queues.
Consider increasing that value in your configuration.
More info in the Reference Manual.
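As a sketch, raising the concurrency on the container from the question (the values are illustrative):

```java
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

// Sketch: concurrency values are illustrative.
public class ConcurrencyTuning {
    public static void tune(SimpleMessageListenerContainer container) {
        container.setConcurrentConsumers(5);     // start with 5 consumer threads
        container.setMaxConcurrentConsumers(10); // allow scaling up to 10
    }
}
```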

performance about apache activemq using ssl

By default, the activemq uses tcp protocol. But now, I change it to use ssl.
If I deploy the publisher and server on one machine, it makes no difference with regard to the speed. But after I deploy them on different machine, it's much slower to use ssl than to use tcp. Is this normal? If not, what's probably wrong with my code?
Thanks.
It depends on how much slower your application is running.
If you process huge data volumes, SSL will take a decent amount of CPU cycles to encrypt (and decrypt) the data. Is it the ActiveMQ server that is slower, or the client? Profile the system setup to get an overview of where the bottlenecks are.
Another possibility is frequent handshakes. If your client code (can you post it?) sends messages by opening a connection for each message, the latency per message will suffer from the added SSL handshake time compared to plain TCP.
UPDATE:
A speed-up would be to reuse your connection. In your case an SSL handshake has to be done for every message sent, which involves CPU-expensive asymmetric crypto and a few more TCP round trips than plain TCP. Reusing connections is easy with the pooled connection factory provided by ActiveMQ. This example does not alter your code much:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.jms.Topic;

public class MySender {
    private static final ConnectionFactory factory =
            new org.apache.activemq.pool.PooledConnectionFactory("ssl://192.168.0.111:61616");

    public void send() throws JMSException {
        // createConnection() borrows a pooled connection, so the SSL
        // handshake happens once, not per message.
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic(newDataEvent.getDataType().getType());
        MessageProducer producer = session.createProducer(topic);
        TextMessage message = session.createTextMessage();
        message.setText(xstream.toXML(newDataEvent));
        producer.send(message);
        session.close();
        connection.close(); // returns the connection to the pool
    }
}