I have multiple durable subscribers listening to a durable topic. All the subscribers are configured with a concurrency of '2-8'.
Among these subscribers, one is unable to process messages because of a runtime dependency (say, an external service is unavailable), so it throws a custom RuntimeException to let ActiveMQ redeliver the message 7 times (the default). What I see in the ActiveMQ administrative console is far more redelivery attempts than expected for this subscriber, and the dequeue count increases drastically: for a single message it grows by more than 36, and the number is not consistent. Why is this? Am I doing anything wrong?
My listener container factory configuration:
@Bean
public DefaultJmsListenerContainerFactory topicListenerFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setMessageConverter(messageConverter());
    factory.setConnectionFactory(connectionFactory);
    factory.setPubSubDomain(true);           // topic rather than queue
    factory.setSubscriptionDurable(true);    // durable subscription
    factory.setSessionTransacted(true);      // roll back (and redeliver) on listener exception
    factory.setSessionAcknowledgeMode(Session.SESSION_TRANSACTED);
    factory.setConcurrency("2-8");           // 2 to 8 concurrent consumers
    factory.setClientId("topicListener2");
    return factory;
}
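For reference, the broker's redelivery behaviour can also be pinned down explicitly on the ActiveMQ connection factory instead of relying on the defaults. A minimal sketch, assuming the connection factory is built in the same configuration class (the broker URL, delay, and redelivery cap here are illustrative):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.RedeliveryPolicy;

@Bean
public ActiveMQConnectionFactory connectionFactory() {
    ActiveMQConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
    RedeliveryPolicy policy = new RedeliveryPolicy();
    policy.setMaximumRedeliveries(7);         // cap the redelivery attempts explicitly
    policy.setInitialRedeliveryDelay(1000L);  // wait 1s before the first redelivery
    policy.setUseExponentialBackOff(true);    // back off between attempts
    cf.setRedeliveryPolicy(policy);
    return cf;
}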
Related
In my system I use spring-cloud-stream and RabbitMQ for sending and receiving events. I have RabbitMQ running, service A up, and service B down. Service A sends an event to service B. Then I bring service B up and expect RabbitMQ to deliver the event, but nothing happens. Is this correct behaviour? I'm new to RabbitMQ, but I thought it should guarantee that every event eventually finds its receiver. My application is simple, based on the example on GitHub, with no extra configuration. What am I missing?
If your consumers don't have a group, the queue is an anonymous, auto-delete queue. You need a group for persistence. See consumer groups.
Producers don't bind queues to the exchange, consumers do.
If the producer starts publishing before a new consumer group has bound its queue, those messages will also be lost.
With the RabbitMQ binder, if you know the consumer groups ahead of time, you can set the ...producer.requiredGroups property and the queue(s) will be bound.
See the documentation.
requiredGroups
A comma-separated list of groups to which the producer must ensure message delivery even if they start after it has been created (e.g., by pre-creating durable queues in RabbitMQ).
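As a sketch, assuming a producer binding named output in service A and a consumer binding named input in service B (the binding, destination, and group names here are all illustrative), the configuration would look something like this:

# service A (producer): pre-create and bind the durable queue for the group
spring.cloud.stream.bindings.output.destination=events
spring.cloud.stream.bindings.output.producer.requiredGroups=serviceB

# service B (consumer): join the same group so the queue is durable
spring.cloud.stream.bindings.input.destination=events
spring.cloud.stream.bindings.input.group=serviceB

With this in place, the durable queue for the group exists and is bound even while service B is down, so events published in the meantime are waiting when it comes back up.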
I have an application with RabbitMQ at the backend. I want to develop custom third-party analysis code that connects to the application's queues on RabbitMQ and collects data. My concern is making sure that neither the application nor my code loses any data from RabbitMQ.
If this is possible, how should I configure the RabbitMQ queues? I have administrative access to RabbitMQ.
I hope this doesn't require changes to the producer code, because I don't have access to the application code.
Thanks for your help
Change the current exchange/queue mapping to allow for message replication
Simplifying a bit, at the moment the existing producer sends a message to an existing exchange, which routes the message to some queue, from which the messages are consumed:
[producer-app] ---> existing-exchange ---> existing-queue ---> [existing-consumer]
Now, what you want is the following design, with a new consumer receiving the same messages:
[producer-app] ---> existing-exchange ---> existing-queue ---> [existing-consumer]
                                      \--> new-queue --------> [your-consumer]
You might need to change the configuration of existing-exchange so that messages are replicated: for example, a fanout exchange delivers a copy of each message to every bound queue, and a direct exchange does the same for every queue bound with the matching routing key.
Depending on your application this might be quite easy to do without changes to the producer, but you need to be aware of possible pitfalls (a sketch of the consumer-side setup follows this list):
the producer might re-declare exchanges/queues/bindings from time to time, and throw exceptions if the existing state does not match its declarations (this can happen if you change the exchange's type)
you need to manage new-queue on your own (preferably from your consumer artifact), since it is going to receive all the messages; if your consumer shuts down, the queue will not disappear unless it is declared exclusive or has a TTL set
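Here is that sketch: a minimal setup the new consumer could run on startup, using the RabbitMQ Java client (amqp-client 5.x). The exchange name, queue name, and host are illustrative and should match your actual topology:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class AnalysisQueueSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Declare our own durable, non-exclusive queue; existing-queue is untouched.
            channel.queueDeclare("new-queue", true, false, false, null);
            // Bind it to the existing exchange so it receives a copy of each message.
            // A fanout exchange ignores the routing key; for a direct exchange it must
            // match the key the producer publishes with.
            channel.queueBind("new-queue", "existing-exchange", "");
        }
    }
}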
I need help designing the RabbitMQ consumer distribution.
For example:
There are 100 queues and 10 threads to consume messages from those 100 queues.
Each thread will consume messages from 10 queues.
Question 1: How do I dynamically assign the threads to queues, given that the threads may be running on different machines?
No more than one thread should consume from a given queue (to maintain the processing order of the messages in that queue).
Question 2: When the consumer threads need to be increased while the system is running, how can that be done?
There are a lot of posts about message order (FIFO); in a normal situation (one producer, one consumer, no network problems) you don't have any problem. But as you can read here:
In particular note the "unless the redelivered field is set" condition, which means any disconnect by consumers can cause messages pending acknowledgement to be subsequently delivered out of order.
Also, if there is an error while publishing a message, you have to re-publish it in the correct order.
It means that if you absolutely need message order, you have to implement it yourself, for example by marking each message with a sequence number, and you should also implement publisher confirms, as sketched below.
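A minimal sketch of publisher confirms with the RabbitMQ Java client (the queue name and timeout are illustrative):

import com.rabbitmq.client.Channel;

public class ConfirmPublish {
    // Enable confirm mode, then block until the broker confirms the publish;
    // a nack or timeout raises an exception so the caller can re-publish in order.
    static void publishWithConfirm(Channel channel, byte[] body) throws Exception {
        channel.confirmSelect();
        channel.basicPublish("", "task-queue", null, body);
        channel.waitForConfirmsOrDie(5000);
    }
}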
I think, but this is my opinion, that when you use a messaging system you shouldn't worry about message order, because your application should be able to manage the data regardless.
Having said that, if we suppose that the 100 queues have to handle the same kind of message, you could use a ThreadPoolExecutor shared by all consumers.
For example:
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;
import java.util.concurrent.ExecutorService;

public class ActualConsumer extends DefaultConsumer {
    // One executor shared by all consumers, so work is balanced across the queues
    private final ExecutorService sharedThreadPool;

    public ActualConsumer(Channel channel, ExecutorService sharedThreadPool) {
        super(channel);
        this.sharedThreadPool = sharedThreadPool;
    }

    @Override
    public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] body) throws java.io.IOException {
        // MyMessage and MyHandleMessage are your application classes
        MyMessage message = new MyMessage(body);
        sharedThreadPool.submit(new MyHandleMessage(message));
    }
}
In this way you can balance the messages between the threads.
Also, for the thread pool you can use different policies, for example a static allocation with a fixed number of threads, or dynamic thread allocation.
Please read this post about thread pool resizing (Can you dynamically resize a java.util.concurrent.ThreadPoolExecutor while it still has tasks waiting).
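For what it's worth, a ThreadPoolExecutor can be resized at runtime; a minimal sketch (the pool sizes are illustrative):

import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolResizeDemo {
    public static void main(String[] args) {
        // Start with 10 threads; submitted tasks wait in the unbounded queue.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 10, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
        // Grow the pool at runtime: raise the ceiling first, then the core size.
        pool.setMaximumPoolSize(20);
        pool.setCorePoolSize(20);
        pool.shutdown();
    }
}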
You can apply this pattern to all nodes; in this way you can balance message dispatching and assign an appropriate number of threads.
I hope this is useful. I'd like to be more detailed, but your question is a bit generic.
I'm working for a company where we're considering Mule ESB. We would need to set up Mule in a clustered configuration to get what Mule calls a Mule High Availability (HA) Cluster.
Now, we need to persist incoming messages to a queue in case of power outage or disk failure. As far as I understand, one option is the default Mule Object Store, which "persists" messages to a shared memory grid. My first thought, however, is that this can't be any good if a power outage takes the entire cluster out of action.
Our other option is to use a separate queue product such as RabbitMQ or ActiveMQ. However, do these integrate well with an HA cluster? Is there any mechanism in these products which ensures that the same message won't be picked up by two machines at the same time?
Consider this scenario (based on the observer pattern):
Mule receives a message, puts it on a queue, and responds with an OK to the client which delivered the message.
Mule picks up a message from the queue, and attempts to deliver it to a subscriber.
The subscriber accepts the message, and Mule removes it from the queue.
What happens if another Mule instance in the HA cluster attempts to pick up the message between steps 2 and 3 above? Is there a mechanism by which Mule can mark a message as picked up from the queue for an attempted delivery, and then, if the delivery fails, return it to the queue as not delivered?
Both RabbitMQ and ActiveMQ will give you the once-and-only-once functionality I think you are looking for.
Both platforms ensure that each message in a queue is received by only one subscriber.
In ActiveMQ, to return a message to a queue in the event of a failure, you can use explicit message acknowledgement or JMS transactions. Here's a quick overview.
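A minimal sketch of the JMS-transaction approach with ActiveMQ (the broker URL, queue name, and handler are illustrative):

import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TransactedConsumeExample {
    public static void main(String[] args) throws Exception {
        Connection connection =
                new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        // Transacted session: commit removes the message, rollback returns it.
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(session.createQueue("work.queue"));
        Message message = consumer.receive(5000); // null if nothing arrived in time
        try {
            handleMessage(message);  // hypothetical application logic
            session.commit();        // message is removed from the queue
        } catch (Exception e) {
            session.rollback();      // message goes back and is redelivered
        }
        connection.close();
    }

    static void handleMessage(Message message) { /* application logic */ }
}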
In RabbitMQ, you do it using acknowledgements.
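And a corresponding sketch with the RabbitMQ Java client using manual acknowledgements (the queue name and handler are illustrative):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.GetResponse;

public class AckExample {
    // Pull one message with autoAck=false: ack on success, nack+requeue on failure.
    static void consumeOne(Channel channel) throws java.io.IOException {
        GetResponse response = channel.basicGet("work-queue", false);
        if (response == null) return;            // queue was empty
        long tag = response.getEnvelope().getDeliveryTag();
        try {
            handleMessage(response.getBody());   // hypothetical application logic
            channel.basicAck(tag, false);        // broker removes the message
        } catch (Exception e) {
            channel.basicNack(tag, false, true); // requeue for another attempt
        }
    }

    static void handleMessage(byte[] body) { /* application logic */ }
}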
Also, you might want to consider reliability for your message broker. Both ActiveMQ and RabbitMQ offer highly available broker configuration options.
We are facing a random issue with ActiveMQ and its consumers. We observe that a few consumers are not receiving messages even though they are connected to the ActiveMQ queue, but everything works fine after a consumer restart.
We have a queue named testQueue on the ActiveMQ side. A consumer is trying to dequeue messages from this queue, using Spring's DefaultMessageListenerContainer. The message is being delivered to the consumer node from the ActiveMQ broker; from tcpdump as well it was obvious that the message reaches the consumer node, but the actual consumer class never sees it. In other words, the message seems to be stuck either in the ActiveMQ consumer code or in Spring's DefaultMessageListenerContainer.
Below are the details captured from ActiveMQ admin.
Queue-Name | Pending-Message-Count | Consumer-Count | Messages-Enqueued | Messages-Dequeued
testQueue  | 9                     | 1              | 9                 | 0
Below are more details.
Connection-ID                         | SessionId | Selector | Enqueues | Dequeues | Dispatched | Dispatched-Queue | Prefetch
ID:bearsvir52-45176-1375519181268-3:5 | 1         |          | 9        | 0        | 9          | 9                | 250
From the second table it is obvious that messages are being dispatched to the consumer, but the consumer is not acknowledging them. Hence the messages are stuck in the dispatched queue on the broker side.
A few points to note:
1) There is no time difference between the broker node and the consumer node.
2) We observed tcpdump on the consumer side. We can see the MessageDispatch (OpenWire) packet being transferred to the consumer node, but could not find the corresponding MessageAck (OpenWire).
3) Sometimes it works on a node, and sometimes it creates the problem on the same node.
One cause of this can be incorrectly using a CachingConnectionFactory (with cached consumers) with a listener container that dynamically adjusts the consumers (max consumers > consumers). You can end up with a cached consumer just sitting in the pool and not being actively used. You never need to cache consumers with a listener container.
For problems like this, I generally recommend running with TRACE logging enabled so you can see all the consumer activity.
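If that is the cause, the fix is simply to stop caching consumers. A sketch, assuming Spring's CachingConnectionFactory wraps the ActiveMQ connection factory (the broker URL is illustrative):

import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;

@Bean
public CachingConnectionFactory cachingConnectionFactory() {
    CachingConnectionFactory ccf = new CachingConnectionFactory(
            new ActiveMQConnectionFactory("tcp://localhost:61616"));
    // Cache sessions and producers, but never consumers when a listener
    // container manages consumption: a cached consumer can sit idle in the
    // pool holding prefetched messages that the container never processes.
    ccf.setCacheConsumers(false);
    return ccf;
}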
It took a lot of time to figure out the solution. There seems to be an issue with the org.apache.activemq.ActiveMQConnection class in the case of AMQ failover: the connection object does not get started on the consumer side in such cases.
Following is the fix I added to the ActiveMQConnection.java file; I then compiled the sources to create activemq-core-x.x.x.jar.
private final Object startMutex = new Object();
and added a check in the createSession method:
public Session createSession(boolean transacted, int acknowledgeMode) throws JMSException {
    synchronized (startMutex) {
        // Ensure the connection is started before creating a session; after a
        // failover the connection could otherwise be left unstarted.
        if (!isStarted()) {
            start();
        }
    }
    // ... the rest of the original createSession() method is unchanged
}