Consumer allocation RabbitMQ

Need help designing a RabbitMQ consumer distribution.
For example:
There are 100 queues and 10 threads to consume messages from those 100 queues.
Each thread will consume messages from 10 queues.
Question 1: How can the threads be dynamically assigned to queues, especially when the threads run on different machines?
No more than one thread should consume from any given queue (to preserve the processing order of messages within that queue).
Question 2: When there is a need to increase the number of consumer threads while the system is running, how can that be done?

There are a lot of posts about message order (FIFO); in a normal situation (one producer, one consumer, no network problems) you don't have any problem. But as you can read here:
In particular note the "unless the redelivered field is set" condition,
which means any disconnect by consumers can cause messages pending
acknowledgement to be subsequently delivered out of order.
Also, if for example there is an error while publishing a message, you have to re-publish it in the correct order.
This means that if you absolutely need message ordering you have to implement it yourself, for example by marking each message with a sequence number, and you should also implement publisher confirms.
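As a rough sketch of that idea (the header name x-sequence and the timeout value are my assumptions, not part of any standard), publisher confirms combined with a sequence number could look like this:
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import java.util.HashMap;
import java.util.Map;

// Sketch: stamps each message with a sequence number and waits for the
// broker confirm, so a failed publish can be retried without reordering.
public class OrderedPublisher {
    private final Channel channel;
    private long sequence = 0;

    public OrderedPublisher(Channel channel) throws java.io.IOException {
        this.channel = channel;
        channel.confirmSelect(); // enable publisher confirms on this channel
    }

    public void publish(String exchange, String routingKey, byte[] body) throws Exception {
        Map<String, Object> headers = new HashMap<>();
        headers.put("x-sequence", ++sequence); // assumed header; consumers can re-order by it
        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder().headers(headers).build();
        channel.basicPublish(exchange, routingKey, props, body);
        channel.waitForConfirmsOrDie(5000); // throws if nacked or timed out; the caller re-publishes
    }
}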
I think, though this is just my opinion, that when you use a messaging system you shouldn't rely on message order, because your application should be able to manage the data regardless.
Having said that, if we suppose the 100 queues all handle the same kind of message, you could use a single ThreadPoolExecutor shared by all consumers.
For example:
public class ActualConsumer extends DefaultConsumer {

    public ActualConsumer(Channel channel) {
        super(channel);
    }

    @Override
    public void handleDelivery(String consumerTag, Envelope envelope, BasicProperties properties, byte[] body) throws java.io.IOException {
        MyMessage message = new MyMessage(body);
        mythreadPoolExecutorShared.submit(new MyHandleMessage(message));
    }
}
In this way you can balance the messages between the threads.
Also, for the thread pool you can use different policies, for example a static allocation with a fixed number of threads, or dynamic thread allocation.
Please read this post about thread pool resizing (Can you dynamically resize a java.util.concurrent.ThreadPoolExecutor while it still has tasks waiting).
You can apply this pattern to all nodes; in this way you can balance message dispatching and assign an appropriate number of threads, as in the sketch below.
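A minimal sketch of such a shared, runtime-resizable pool (the sizes here are arbitrary assumptions):
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SharedPoolDemo {
    public static void main(String[] args) {
        // 10 core threads, unbounded work queue. With an unbounded queue the
        // pool never grows past the core size, so setCorePoolSize is the
        // effective resize knob at runtime.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                10, 10, 60, TimeUnit.SECONDS, new LinkedBlockingQueue<>());

        // ... consumers submit MyHandleMessage tasks to this shared pool ...

        pool.setCorePoolSize(20); // grow while tasks are still queued
        pool.setCorePoolSize(5);  // shrink; excess threads retire as they go idle
    }
}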
I hope this is useful. I'd like to be more detailed, but your question is a bit generic.

Related

Implement Redis point-to-point queue to guarantee message processing once by any consumer

Redis has a very good implementation of PUBSUB, where a message published on a channel is received by all receivers registered for the topic.
What is the ideal way to implement point-to-point (i.e. queue) semantics, where multiple receivers are registered with a single queue and, as soon as a message is pushed to the queue, it is processed by exactly one receiver (listener)? Any Java reference example would help.
The idea here is to read a huge file containing transaction records, so each transaction should be processed only once.
I have seen Redis Streams advised, but I have not found a sound Java reference implementation.
Just use LPUSH msgqueue to put items in the list and have all clients do:
while running
    BRPOP msgqueue
    # process popped item
done
Note that there is no direct queue implementation analogous to a PUBSUB topic. Another reason to use a queue here is that publish/subscribe has no durable subscriptions: if a subscriber is not up when the message is sent, the message is lost. The queue implementation below helps with that.
Final Answer
LPUSH TEST_QUEUE "Test message"
LLEN TEST_QUEUE
BRPOP TEST_QUEUE 0
Java Implementation
// Blocking consumer loop: rightPop with a zero timeout is BRPOP, i.e. it
// blocks until an item arrives. LPUSH + BRPOP gives FIFO order, and each
// item is delivered to exactly one consumer.
while (true) {
    String item = redisTemplate.opsForList().rightPop("TEST_QUEUE", 0, TimeUnit.SECONDS);
    if (item != null) {
        process(item); // your processing logic
    }
}
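For completeness, a sketch of the producer side under the same assumptions (a String-typed RedisTemplate named redisTemplate):
// Producer: LPUSH pushes to the head; consumers BRPOP from the tail (FIFO).
redisTemplate.opsForList().leftPush("TEST_QUEUE", "Test message");
Long depth = redisTemplate.opsForList().size("TEST_QUEUE"); // LLEN, useful for monitoring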

MassTransit generates _skipped queues which I want to ignore

Can anyone guess what the problem could be? I'm clueless about how to solve this. MassTransit generates _skipped queues and I don't have a clue why. They are generated when doing a publish request/response.
The request client is created using the following method in MassTransit.RequestClientExtensions:
public static IRequestClient<TRequest, TResponse> CreatePublishRequestClient<TRequest, TResponse>(
    this IBus bus, TimeSpan timeout, TimeSpan? ttl = null,
    Action<SendContext<TRequest>> callback = null)
    where TRequest : class
    where TResponse : class
{
    return (IRequestClient<TRequest, TResponse>)new PublishRequestClient<TRequest, TResponse>(bus, timeout, ttl, callback);
}
And the request is done as follows:
TResponse response = TaskUtil.Await(() => requestClient.Request(request));
As you can see, this is a request/response scenario where the request is published to all consumers. But because we currently have only one consumer, it is only delivered to that consumer. Dead letters appear easily when a publish request/response goes to multiple consumers: once one consumer responds, the others don't know where to respond and a dead letter is generated. But because we have only one consumer here, we can rule out this possibility.
So what other reasons could there be for these skipped queues? Huge thanks for any help on how I can troubleshoot this...
I should add that in the Consume method, under some conditions, we raise a RequestTimeoutException and catch it in the requesting application. This is tested and does not generate skipped queues.
The skipped queue is a dead-letter queue. It means that your endpoint queue has a binding to some message exchange, but there is no longer a consumer for that message type. Maybe you changed the topology and moved the consumer. You can go to the RMQ management UI and check the bindings for your endpoint exchange. If you look at the messages that ended up in the skipped queue, you will find out which message types to look for.
Exchanges are named after message types so it will be easy to find the obsolete binding.
Then, in the management UI, you can manually remove the binding that is obsolete and there will be no more messages coming to the skipped queue.

RabbitMQ Consumer Design for Multiple Exchange-Queue Model

I have a RabbitMQ setup with the following configuration:
Each exchange is of FANOUT type.
Multiple queues are attached to each exchange.
A BlockingConnection is made by the consumer.
A single consumer handles all callbacks.
Problem -
Some payloads take longer to process than others, which leaves the consumer stuck on a long-running message even when there are payloads waiting in other queues.
Question -
How should I implement the consumer to avoid long waits? Should I run a separate consumer for each module? Any experience to share?
Can I configure RabbitMQ to handle these situations? If so, how?
First, it would be nice to know why you have more than one fanout exchange. Do you really need this? A fanout exchange sends messages to all bound queues...
Just have more consumers. Check this example from the RabbitMQ tutorial.
You don't really need to configure RabbitMQ explicitly; everything can be done from the clients (publishers and subscribers). You just need to figure out how many exchanges you need, which type they should be, and so on.
First, what programming language are you using? Most common languages, such as Python, Java, and C#, support creating additional threads for parallel processing.
Let's say you consume the queue like below (pseudocode):
def callback(ch, method, properties, body):
    # start a worker thread per message so the consumer loop is not blocked
    ...

def threaded_function(ch, method, properties, body):
    ...

channel.basic_qos(prefetch_count=3)
channel.basic_consume(callback, queue='task_queue')
channel.start_consuming()
First, setting prefetch_count=3 allows your consumer to have at most 3 unacknowledged messages outstanding at any time.
In the callback method, you should start a thread that executes each message with threaded_function. At the end of the threaded_function body, do:
ch.basic_ack(delivery_tag = method.delivery_tag)
This way, at most 3 messages can be processed concurrently; even if one or two of the threads take a long time to run, the others can still process the next messages.
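If you are on Java, a rough equivalent of the same pattern with the RabbitMQ Java client might look like this (the queue name and the process method are placeholders; note that sharing a Channel across threads needs care):
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PrefetchConsumerSketch {
    public static void start(Channel channel) throws IOException {
        channel.basicQos(3); // at most 3 unacked messages outstanding
        ExecutorService workers = Executors.newFixedThreadPool(3);
        channel.basicConsume("task_queue", false, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) {
                workers.submit(() -> {
                    process(body); // long-running work, off the consumer thread
                    try {
                        channel.basicAck(envelope.getDeliveryTag(), false);
                    } catch (IOException e) {
                        // connection problem; the unacked message will be redelivered
                    }
                });
            }
        });
    }

    private static void process(byte[] body) {
        // placeholder for the actual work
    }
}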

rabbitListener execute task when no message in the queue

We are using @RabbitListener to listen to the queue, and when there are messages we process them. However, I want to run a reporting task when the queue is empty; this happens when our app has just processed tons of messages in the queue and no more arrive for a while. That's when I want to report. How can I do that with @RabbitListener?
Here is my code:
@RabbitListener(queues = "${consumer.queue}", containerFactory = "ListenerContainerFactory")
public void handleMessage(Message message) {
    processEvent(message);
}
As I answered in your other question a few weeks ago, there is no mechanism in AMQP (and hence Spring AMQP) to notify the consumer that there are currently no messages in the queue.
We could modify the container to fire an ApplicationEvent once, when no message is found after messages have been processed, but that would require a modification to the framework. Your listener would have to implement ApplicationListener to get such an event.
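To illustrate what that would mean on your side, here is a sketch assuming such an event existed (ListenerIdleEvent is hypothetical, not a framework class):
import org.springframework.context.ApplicationListener;

// Hypothetical: ListenerIdleEvent stands in for the ApplicationEvent the
// container would fire; it does not exist in the framework today.
public class IdleReporter implements ApplicationListener<ListenerIdleEvent> {

    @Override
    public void onApplicationEvent(ListenerIdleEvent event) {
        runReport(); // the reporting task from the question
    }

    private void runReport() {
        // ...
    }
}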
Feel free to open a New Feature JIRA Issue if you want and we might be able to take a look at it.

In pub/sub model, how to make Subscriber pause processing based on some external state?

My requirement is to make the Subscriber pause processing the messages depending on whether a web service is up or not. So, when the web service is down, the messages should keep coming to the subscriber queue from Publisher and keep piling up until the web service is up again. (These messages should not go to the error queue, but stay on the Subscriber queue.)
I tried to use unsubscribe, but then the publisher stops sending messages, as unsubscribing seems to clear the subscription info in RavenDB. I have also tried setting MaxConcurrencyLevel on the Transport class: if I set the worker threads to 0, the messages arriving at the subscriber go directly to the error queue. Finally, I tried Defer, which seems to put the current message in the audit queue and send a clone of it to the subscriber queue when the timeout completes. Also, since I have to keep checking the status of the service and keep deferring, I cannot control the order of messages, as I cannot predict when the web service will be up.
What is the best way to achieve the behavior I have explained? I am using NServiceBus version 4.5.
It sounds like you want to keep trying to handle a message until it succeeds, and not shuffle it back in the queue (keep it at the top and keep trying it)?
I think your only pure-NSB option is to tinker with the MaxRetries setting, which controls First Level Retries: http://docs.particular.net/nservicebus/msmqtransportconfig. Setting MaxRetries to a very high number may do what you are looking for, but I can't imagine doing so would be a good practice.
Second Level Retries will defer the message for a configurable amount of time, but IIRC will allow other messages to be handled from the main queue.
I think your best option is to put retry logic into your own code. So the handler can try to access the service x number of times in a loop (maybe on a delay) before it throws an exception and NSB's retry features kick in.
Edit:
Your requirement seems to be something like:
"When an MyEvent comes in, I need to make a webservice call. If the webservice is down, I need to keep trying X number of times at Y intervals, at which point I will consider it a failure and handle a failure condition. Until I either succeed or fail, I will block other messages from being handled."
You have some potentially complex logic on handling a message (retry, timeout, error condition, blocking additional messages, etc.). Keep in mind the role that NSB is intended to play in your system: communication between services via messaging. While NSB does have some advanced features that allow message orchestration (e.g. sagas), it's not really intended to be used to replace Domain or Application logic.
Bottom line, you may need to write custom code to handle your specific scenario. A naive solution would be a loop with a delay in your handler, as sketched below, but you may need to create a more robust in-memory collection/queue that holds messages while the service is down and processes them serially when it comes back up.
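A sketch of that naive retry loop (shown in Java for illustration, although the question is C#/NServiceBus; the service interface and the limits are assumptions):
public class RetryingCaller {
    private static final int MAX_ATTEMPTS = 5;
    private static final long DELAY_MS = 10_000;

    // Hypothetical service call; stands in for the web service in the question.
    interface Service { void call(Object message) throws Exception; }

    public void handle(Object message, Service service) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                service.call(message);  // try the web service
                return;                 // success: complete the handler normally
            } catch (Exception e) {
                if (attempt >= MAX_ATTEMPTS) {
                    throw e;            // give up; the bus's retry features take over
                }
                Thread.sleep(DELAY_MS); // naive delay before the next attempt
            }
        }
    }
}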
The easiest way to achieve roughly the required behavior is the following:
Define one message handler that checks whether the service is available and, if not, calls HandleCurrentMessageLater, and another message handler that does the actual message processing. Then specify the message handler order so that the handler that checks service availability is executed first.
public interface ISomeCommand {}

public class ServiceAvailabilityChecker : IHandleMessages<ISomeCommand>
{
    public IBus Bus { get; set; }

    public void Handle(ISomeCommand message)
    {
        try
        {
            // check service
        }
        catch (SpecificException ex)
        {
            this.Bus.HandleCurrentMessageLater();
        }
    }
}

public class ActualHandler : IHandleMessages<ISomeCommand>
{
    public void Handle(ISomeCommand message)
    {
        // actual message processing
    }
}

public class SomeCommandHandlerOrdering : ISpecifyMessageHandlerOrdering
{
    public void SpecifyOrder(Order order)
    {
        order.Specify(First<ServiceAvailabilityChecker>.Then<ActualHandler>());
    }
}
With that design you gain the following:
You can check availability before the actual business code is invoked.
If the service is not available, the message is put back into the queue.
If the service is available when the check runs but becomes unavailable just before ActualHandler is invoked, you get First and Second Level Retries (and the availability check runs again in the pipeline).