Behavior of ActiveMQ with N producers and 1 consumer - activemq

In my architecture I have many producers that send messages to an ActiveMQ queue, and a single consumer consumes them from that queue in real time. Even though the messages are produced very quickly, the queue seems able to handle them; no messages are lost.
My goal is to stress-test this architecture, but I cannot find any documentation that explains what kind of problems might occur in this scenario. For example, could messages be lost? If so, when? Can the delivery of messages produced by one producer P1 be starved by a huge volume of messages from another producer P2?
I'm sending persistent JMS messages using this Maven dependency:
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-all</artifactId>
    <version>5.15.15</version>
</dependency>
Here's my producer code:
// producer
import org.apache.activemq.ActiveMQConnectionFactory;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.JMSException;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

// Producer constructor
ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(url);
Connection connection = connectionFactory.createConnection();
connection.start();
session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination destination = session.createQueue(jmsQueue);
producer = session.createProducer(destination);
...
// On each incoming record do this
try {
    stream(message);
} catch (JMSException e) {
    System.out.println("ERROR: " + e);
}

private void stream(LogRecord message) throws JMSException {
    TextMessage toSend = session.createTextMessage(message.getMessage());
    producer.send(toSend);
}

If you're sending persistent messages, then no message loss should occur short of a disk failure of some kind on the broker.
The broker can only sustain its throughput if consumption keeps pace with production. Once messages start accumulating on the broker, they either fill up the heap and increase garbage-collection pressure, or they have to be paged out of memory to disk. In either case performance will suffer. Keep in mind that ActiveMQ brokers are designed to be a conduit through which messages flow. They are not a storage platform like a database. They can buffer messages for a time, but if messages keep accumulating, a tipping point will eventually come where performance degrades.
For what it's worth, if you're looking for the best performance from an ActiveMQ broker, I would recommend taking a look at ActiveMQ Artemis - the next-generation broker from the ActiveMQ project.
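If you do need the broker to buffer for a while, the limits that govern when paging kicks in are set in the broker's activemq.xml. A sketch of the relevant systemUsage section (the limits shown are illustrative placeholders, not recommendations):

```xml
<!-- illustrative activemq.xml fragment -->
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage limit="512 mb"/>   <!-- heap available for buffered messages -->
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="20 gb"/>     <!-- disk for the persistent message store -->
    </storeUsage>
    <tempUsage>
      <tempUsage limit="10 gb"/>      <!-- disk for non-persistent messages paged out of memory -->
    </tempUsage>
  </systemUsage>
</systemUsage>
```

Once memoryUsage is exhausted, producer flow control or paging kicks in, which is where the performance degradation described above becomes visible.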

Related

ActiveMq - Durable Topic - Concurrency

I have multiple durable subscribers listening to a durable topic. All the subscribers are configured with a concurrency of '2-8'.
One of these subscribers cannot process its messages because of a runtime dependency (say, an external service is unavailable), so it throws a custom RuntimeException to make ActiveMQ redeliver the message, up to 7 times by default. What I see in the ActiveMQ administrative console is too many redelivery attempts for this particular subscriber. I also see the dequeue count increase drastically: for one message it increases by more than 36, and the number is not consistent. Why is that? Am I doing anything wrong?
My Listener factory configuration
DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
factory.setMessageConverter(messageConverter());
factory.setConnectionFactory(connectionFactory);
factory.setPubSubDomain(true);
factory.setSubscriptionDurable(true);
factory.setSessionTransacted(true);
factory.setSessionAcknowledgeMode(Session.SESSION_TRANSACTED);
factory.setConcurrency("2-8");
factory.setClientId("topicListener2");
return factory;
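As a side note, the redelivery limit mentioned above is controlled on the client by a RedeliveryPolicy. A minimal sketch, assuming an ActiveMQConnectionFactory underlies the container's connectionFactory (broker URL and values are placeholders):

```java
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

public class RedeliveryConfig {
    public static ActiveMQConnectionFactory configure(String brokerUrl) {
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
        RedeliveryPolicy policy = factory.getRedeliveryPolicy();
        // the ActiveMQ 5.x default is 6 redeliveries (7 delivery attempts in total)
        policy.setMaximumRedeliveries(7);
        policy.setInitialRedeliveryDelay(1000);  // wait 1s before the first redelivery
        policy.setUseExponentialBackOff(true);   // back off between attempts
        policy.setBackOffMultiplier(2.0);
        return factory;
    }
}
```

Tuning the delay and back-off makes repeated redelivery storms, like the one described above, easier to observe and diagnose in the admin console.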

How spring-cloud-stream-rabbit-binder works when RabbitMQ disk or memory alarm is activated?

Versions:
Spring-cloud-stream-starter-rabbit --> 2.1.0.RELEASE
RabbitMQ --> 3.7.7
Erlang --> 21.1
(1) I have created sample mq-publisher-demo & mq-subscriber-demo repositories on GitHub for reference.
When the memory alarm was activated:
Publisher: was able to publish messages.
Subscriber: seemed to receive messages in batches, with some delay.
When the disk alarm was activated:
Publisher: was able to publish messages.
Subscriber: did not receive messages while the disk alarm was active, but once the alarm was deactivated, all the messages were received.
Are the messages getting buffered somewhere?
Is this the expected behavior?
(I was expecting RabbitMQ to stop accepting messages from the publisher, and the subscriber to never get any subsequent messages, once either of the alarms was activated.)
(2) The Spring Cloud Stream documentation says the following.
Does it mean the behaviour above? (avoiding deadlock & keeping the publisher publishing messages)
Starting with version 2.0, the RabbitMessageChannelBinder sets the RabbitTemplate.userPublisherConnection property to true so that the non-transactional producers avoid deadlocks on consumers, which can happen if cached connections are blocked because of a memory alarm on the broker.
(3) Do we have something similar for the disk alarm as well, to avoid deadlocks?
(4) If the producer's message will not be accepted by RabbitMQ, is it possible for spring-cloud-stream to throw a specific exception to the publisher (saying that alarms are activated and the message publish failed)?
I'm kind of new to these alarms in spring-cloud-stream, so please help me understand clearly. Thank you.
Are the messages getting buffered somewhere?
Yes, when a resource alarm is set, published messages accumulate in the network buffers. Small messages take a while to fill the network buffers before publishers are blocked; a smaller network buffer size will block publishers sooner.
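For reference, the thresholds that raise these alarms are configurable on the broker. A sketch of a rabbitmq.conf fragment (the values are illustrative, not recommendations):

```ini
# memory alarm: block publishers when the node uses more than 40% of available RAM
vm_memory_high_watermark.relative = 0.4
# disk alarm: block publishers when free disk space drops below 2 GB
disk_free_limit.absolute = 2GB
```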
It's better to ask questions about the behavior of RabbitMQ itself (and the Java client that Spring uses) on the rabbitmq-users Google group; that's where the RabbitMQ engineers hang out.
(2) The Spring Cloud Stream documentation says the following. Does it mean the behaviour above?
That change was made so that if producers are blocked from producing, consumers can still consume.
(4) If the producer's message will not be accepted by RabbitMQ, is it possible for spring-cloud-stream to throw a specific exception to the publisher (saying that alarms are activated and the message publish failed)?
Publishing is asynchronous by default. You can enable transactions (which can hurt performance considerably), or you can enable publisher confirms and returns together with an error channel on the producer, in which case you'll get an asynchronous message on the error channel when a publish fails.
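A sketch of what that configuration might look like for the Rabbit binder, assuming a Spring Boot application with an output binding named `output` (the binding name is a placeholder):

```properties
# enable publisher confirms and returns on the underlying connection factory
spring.rabbitmq.publisher-confirms=true
spring.rabbitmq.publisher-returns=true
# route failed sends for this binding to an error channel
spring.cloud.stream.bindings.output.producer.error-channel-enabled=true
```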

how to requeue a message using spring ampq

While requeuing the message, we want it to be placed at the start/front of the queue.
This means that if my queue is "D,C,B,A" and I process A and then put it back at the start, the queue should look like this:
"A,D,C,B".
I should then be able to process B, and since A moved to the start of the queue, it would be processed at the end.
Interestingly, when tried with the native AMQP library of RabbitMQ, it works as expected above.
However, when we do it through the Spring AMQP library, the message stays wherever it was and does not go to the front of the queue.
Here is the code which we tried:
public void onMessage(Message message, com.rabbitmq.client.Channel channel) throws Exception {
    if (new String(message.getBody()).equalsIgnoreCase("A")) {
        System.out.println("Message === " + new String(message.getBody()));
        channel.basicReject(message.getMessageProperties().getDeliveryTag(), true);
    } else {
        System.out.println("Message === " + new String(message.getBody()));
        channel.basicAck(message.getMessageProperties().getDeliveryTag(), true);
    }
}
Any idea why it does not work in Spring but works in the RabbitMQ native AMQP library?
Spring AMQP version : spring-amqp-1.4.5.RELEASE
Rabbitmq amqp client version : 3.5.1
RabbitMQ was changed in version 2.7 to requeue messages at the head of the queue. Previous versions requeued at the tail of the queue - see here.
Your observation is because Spring AMQP sets the prefetch to 1 by default (by calling basicQos), which allows only one message to be outstanding at the client. If basicQos is not called, the broker sends all four messages to the client, so the rejected message will appear to go to the back of the queue: the queue itself is empty because all the messages have already been prefetched by the client.
If you set the prefetchCount to at least 4, you will see the same behavior.
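For illustration, a minimal sketch of raising the prefetch on a Spring AMQP listener container (the host and queue name are placeholders):

```java
import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer;

public class PrefetchConfig {
    public static SimpleMessageListenerContainer container() {
        CachingConnectionFactory cf = new CachingConnectionFactory("localhost");
        SimpleMessageListenerContainer container = new SimpleMessageListenerContainer(cf);
        container.setQueueNames("myQueue");  // placeholder queue name
        container.setPrefetchCount(4);       // allow 4 unacknowledged messages at the client
        return container;
    }
}
```

With the prefetch at 4, all four test messages are buffered at the client, reproducing the behavior seen with the native client.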
EDIT
If you really want to queue at the beginning you can use retry together with a RepublishMessageRecoverer.

Consumer allocation RabbitMQ

Need help in designing the RabbitMQ consumer distribution.
For example:
There are 100 queues and 10 threads to consume messages from those 100 queues, each thread consuming messages from 10 queues.
Question 1: How can the threads be dynamically assigned to queues, especially if the threads run on different machines?
No more than one thread should consume from a given queue (to maintain the processing order of the messages in that queue).
Question 2: When the number of consumer threads needs to be increased while the system runs, how can that be done?
There are a lot of posts about message order (FIFO). In a normal situation (one producer, one consumer, no network problems) you don't have any problem. But as you can read here:
In particular note the "unless the redelivered field is set" condition, which means any disconnect by consumers can cause messages pending acknowledgement to be subsequently delivered out of order.
Also, if for example there is an error while publishing a message, you have to re-publish it in the correct order.
This means that if you absolutely need message order, you have to implement it yourself, for example by marking each packet with a sequence number, and you should also implement publisher confirms.
I think, but this is my opinion, that when you use a messaging system you shouldn't worry about message order, because your application should be able to manage the data.
Having said that, if we suppose that the 100 queues have to handle the same kind of messages, you could use a ThreadPoolExecutor shared by all the consumers.
For example:
public class ActualConsumer extends DefaultConsumer {
    public ActualConsumer(Channel channel) {
        super(channel);
    }

    @Override
    public void handleDelivery(String consumerTag, Envelope envelope, BasicProperties properties, byte[] body) throws java.io.IOException {
        MyMessage message = new MyMessage(body);
        myThreadPoolExecutorShared.submit(new MyHandleMessage(message));
    }
}
In this way you can balance the messages between the threads.
Also, for the thread pool you can use different policies, for example static allocation with a fixed number of threads, or dynamic thread allocation.
Please read this post about thread pool resizing (Can you dynamically resize a java.util.concurrent.ThreadPoolExecutor while it still has tasks waiting).
You can apply this pattern to all the nodes; this way you can balance the message dispatching and assign a correct number of threads.
I hope this is useful. I'd like to be more detailed, but your question is a bit generic.
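To make the dynamic-allocation idea concrete, here is a pure-JDK sketch of resizing a shared ThreadPoolExecutor at runtime (the sizes are arbitrary):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ResizablePool {
    public static ThreadPoolExecutor create() {
        // start with 4 threads; the queue holds tasks while all threads are busy
        return new ThreadPoolExecutor(4, 4, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
    }

    public static void resize(ThreadPoolExecutor pool, int newSize) {
        // order matters: when growing, raise the max before the core size
        // so the core never exceeds the max
        if (newSize > pool.getMaximumPoolSize()) {
            pool.setMaximumPoolSize(newSize);
            pool.setCorePoolSize(newSize);
        } else {
            pool.setCorePoolSize(newSize);
            pool.setMaximumPoolSize(newSize);
        }
    }
}
```

The consumers keep submitting to the same pool while an operator (or a monitoring hook) calls resize() as load changes.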

Consumer is not receiving messages from ActiveMQ

We are facing a random issue with ActiveMQ and its consumers. We observe that a few consumers are not receiving messages even though they are connected to the ActiveMQ queue, yet everything works fine after a consumer restart.
We have a queue named testQueue on the ActiveMQ side. A consumer is trying to dequeue messages from this queue; we are using Spring's DefaultMessageListenerContainer for this purpose. The message is being delivered to the consumer node from the ActiveMQ broker. From the tcpdump as well, it was obvious that the message reaches the consumer node, but the actual consumer code never sees it. In other words, the message seems to be stuck either in the ActiveMQ consumer code or in Spring's DefaultMessageListenerContainer.
Refer to the figure below for more clarity on the issue. The message reaches the consumer node but does not reach the "Actual Consumer Class", which means it got stuck either in the AMQ consumer code or in Spring's DMLC.
Below are the details captured from ActiveMQ admin.
Queue-Name /Pending-Message-Count /Consumer-Count /Messages-Enqueued /Messages-Dequeued
testQueue /9 /1 /9 /0
Below are more details.
Connection-ID /SessionId /Selector /Enqueues /Dequeues /Dispatched /Dispatched-Queue /Prefetch
ID:bearsvir52-45176-1375519181268-3:5 /1 / /9 /0 /9 /9 /250
From the second table it is obvious that, messages are being delivered to the consumer, but the consumer is not acknowledging the message. Hence the messages are stuck in Dispatched-Queue at broker side.
A few points to note:
1) There is no time difference between the broker node and the consumer node.
2) We observed the tcpdump on the consumer side. We can see the MessageDispatch (OpenWire) packet being transferred to the consumer node, but could not find the corresponding MessageAck (OpenWire).
3) Sometimes it works on a node, and sometimes it creates the problem on the same node.
One cause of this can be incorrectly using a CachingConnectionFactory (with cached consumers) with a listener container that dynamically adjusts the consumers (max consumers > consumers). You can end up with a cached consumer just sitting in the pool and not being actively used. You never need to cache consumers with a listener container.
For problems like this, I generally recommend running with TRACE logging, which lets you see all the consumer activity.
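With the 5.x client and log4j, that could look like the following fragment (the logger categories are the standard packages; adjust to your setup):

```properties
# verbose client-side logging to trace message dispatch and acknowledgement
log4j.logger.org.apache.activemq=TRACE
log4j.logger.org.springframework.jms=DEBUG
```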
It took a lot of time to figure out the solution. There seems to be an issue with the org.apache.activemq.ActiveMQConnection.java class in the case of an ActiveMQ failover: the connection object does not get started on the consumer side.
Following is the fix I added in the ActiveMQConnection.java file, after which I compiled the sources to create activemq-core-x.x.x.jar:
private final Object startMutex = new Object();

I also added a check in the createSession method:

public Session createSession(boolean transacted, int acknowledgeMode) throws JMSException {
    synchronized (startMutex) {
        if (!isStarted()) {
            start();
        }
    }