RabbitMQ x-delayed plugin: How to access processing data in exchanges?

I've been searching for a few hours now for how to retrieve information from a RabbitMQ exchange.
Let me explain my goal:
I designed a system to avoid burning through Gmail API call limits (per second) in my application. To do so, I set up a cron which spreads the sendings over an hour: basically, I define a delay in my cron and then push my data into a delayed queue, which is itself bound to the x-delayed exchange (type direct). This part is working pretty well.
In addition, I have a consumer which consumes the queue and sends the emails. It's working perfectly too.
My problem comes here: some manual actions coming from my users need to be sent ASAP. So I want to retrieve the next few delayed messages which are going to be sent from my delayed exchange to the queue, and put this new message between the two next delayed messages.
As an example:
my-delayed-exchange holds [message1: will be published in 3000ms, message2: will be published in 6000ms]. I want to insert [messageToSendAsap: will be published in 4500ms]; that way I can be sure that I stay within my API limits.
Does anyone know of a method to achieve this?
Thank you in advance.
PS: I code in NodeJS with the amqp lib.

Based on the example on the GitHub page of the plugin, you can simply set the x-delay value to 1 (I think it cannot be zero). That is, if you are sending message M1 with a delay of X and message M2 with a delay of Y, such that Y < X, then message M2 will be delivered to the queue(s) before M1.
Also, if you want the message to be sent right away (not in between the next two, as you wrote in your example), you can simply use another "classical" direct exchange (without any delays).
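In Node this comes down to publishing with a small x-delay header. A minimal sketch, assuming amqplib and the rabbitmq_delayed_message_exchange plugin with the question's my-delayed-exchange; the helper name is made up:

```javascript
// Build publish options for the delayed exchange. The plugin reads the
// x-delay header (in ms); 1 is effectively "as soon as possible".
function delayedPublishOptions(delayMs) {
  return { persistent: true, headers: { 'x-delay': Math.max(1, delayMs) } };
}

// Usage with amqplib would look roughly like:
// channel.publish('my-delayed-exchange', routingKey,
//                 Buffer.from(body), delayedPublishOptions(1));
```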

Related

ActiveMQ: How do I limit the number of messages being dispatched?

Let's say I have one ActiveMQ broker and an undefined number of consumers.
Problem:
To process a message, consumers need an external service which is either "DATA1" or "DATA2" (specified in the message)
Each server, "DATA1" and "DATA2", can only handle 20 connections
So at most 20 "DATA1" and 20 "DATA2" messages may be dispatched at any time
Because of prioritization, the messages must be enqueued in the same queue
Even if message A has a higher priority than message B, if A can't be processed because the external service has no free slots, message B needs to be processed instead
How can this be solved? As long as I was using message pulling (prefetch of 0), I was able to do this with a BrokerPlugin that, on messagePull, achieved this using semaphores and selectors. If the limits were reached, the pull returned null.
However, due to performance issues I had to set prefetch to 1 and use push instead, so my messagePull hack no longer works (it's never called).
So far I'm considering implementing a custom Cursor, but I was wondering if someone knows a better solution.
Update: the custom cursor worked but broke features like message removal. I tried a custom Queue and QueueDispatchSelector (which is a pain to configure, since there isn't a proper API for it), and it mostly works, but I still have synchronisation issues.
Also, a very suitable API seems to be DispatchPolicy, however, while it is referenced by Queue, it's never used.
Queues give you buffering for system processing time for free, and messages are delivered on demand. prefetch=0 or prefetch=1 should effectively get you there: messages will only be delivered to a consumer when the consumer is ready (i.e. during the consumer.receive() call).
consumer.receive() is a blocking call, so you should not need any custom plugin or other machinery to delay delivery until the consumer process (and its required downstream services) is ready to handle it.
The behavior should work out of the box; if not, there are details to your use case that have not been provided and that would shed more light on the scenario.
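The per-service slot cap itself need not live in the broker at all; it can sit inside the consumer. A language-agnostic sketch of that idea in JavaScript (none of this is an ActiveMQ API; the names are made up): gate each external service behind a 20-slot semaphore so at most 20 messages per service are processed concurrently.

```javascript
// A minimal async semaphore: acquire() resolves immediately while slots
// remain, otherwise the caller waits until someone calls release().
class Semaphore {
  constructor(slots) { this.slots = slots; this.waiters = []; }
  acquire() {
    if (this.slots > 0) { this.slots -= 1; return Promise.resolve(); }
    return new Promise((resolve) => this.waiters.push(resolve));
  }
  release() {
    const next = this.waiters.shift();
    if (next) next(); else this.slots += 1;
  }
}

// One gate per external service, 20 slots each (the limits from the question).
const limits = { DATA1: new Semaphore(20), DATA2: new Semaphore(20) };

// Wrap message handling so at most 20 messages per service run at once.
async function handleMessage(message, handler) {
  const gate = limits[message.service]; // "DATA1" or "DATA2", taken from the message
  await gate.acquire();
  try { return await handler(message); } finally { gate.release(); }
}
```

This only throttles; it does not by itself let a lower-priority message overtake a blocked higher-priority one, which is why the question still needs broker-side dispatch logic.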

How to reschedule messages with a specific time in RabbitMQ

My problem:
I pull a message off a RabbitMQ queue. I try to process it and realize that it can't be processed yet, so I would like to add it back to the queue and have it return only at a specific time + 5000 ms. Unfortunately that is more challenging than I thought.
What I've tried:
RabbitMQ dead-letter attributes -> My issue here is that even though the manual says the default exchange is bound to every queue, the message isn't forwarded according to the routing criteria. I've tried setting expiration = "5000" and x-dead-letter-routing-key = "queuename", as well as x-dead-letter-exchange = "", since the default exchange should work. The only part which works is the expiration: the message disappears and goes into the dark. This also occurs with the dead-letter exchange being amq.direct, including the binding on the targeted queue.
Open gaps for me:
Where I'm a bit left in the dark is whether the receivers have to be dead-letter queues, and whether a dead-letter queue is just a basic queue with extended functionality. It is also not clear whether those parameters (x-dead-letter-...) apply only to DLX queues. I would like this delayed delivery to be persistent and driven purely by message attributes, not by queue configuration (unless required).
I've searched the web and checked many different dead-letter write-ups. I'm trying to build a microservice-like architecture using RabbitMQ as the delivery mechanism (I use processes which take their work from the queue and forward it). I would assume other people here have the same thing running already, but I couldn't find any blog posts about it.
I had to come to the conclusion that on the message level it is not possible.
I've now created, for each queue in use, a separate queue ("name.delayed") where I can add the message with the argument "expiration" = 5000.
That queue itself has to be configured as a dead-letter queue, routing to the queue "name".
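With amqplib, the setup described in this answer might look as follows (a sketch; the queue names follow the answer's "name.delayed"/"name" convention, and the helper is made up):

```javascript
// Arguments for the "name.delayed" queue: expired messages are dead-lettered
// through the default exchange ("") straight to the target queue.
function delayedQueueArgs(targetQueue) {
  return {
    'x-dead-letter-exchange': '',             // default exchange
    'x-dead-letter-routing-key': targetQueue, // default exchange routes by queue name
  };
}

// Declaring the pair and requeueing a message for ~5000 ms would look roughly like:
// await channel.assertQueue('name', { durable: true });
// await channel.assertQueue('name.delayed', { arguments: delayedQueueArgs('name') });
// channel.sendToQueue('name.delayed', payload, { expiration: '5000' });
```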

How to get/know a message that is processing in RabbitMQ?

I need to know if it's possible to know or get a message from RabbitMQ consumers while it is being processed (for example, if it's taking a long time), but I don't want to stop the service. I hope my question is clear enough.
I guess you mean whether RabbitMQ provides any built-in notification or tracking of whether a specific message is being processed. The answer is no, but you can easily implement one on your own. For instance, when it begins to process a message, the consumer could send a notification message to a notification queue.
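A sketch of that suggestion (all names are made up; in a real consumer, notify() would publish to the notification queue): wrap the actual handler so it reports when processing starts and finishes.

```javascript
// Wrap the real message handler so progress is reported out-of-band.
function trackProcessing(handler, notify) {
  return async function (message) {
    notify({ id: message.id, status: 'PROCESSING', at: Date.now() });
    try {
      const result = await handler(message);
      notify({ id: message.id, status: 'DONE', at: Date.now() });
      return result;
    } catch (err) {
      notify({ id: message.id, status: 'FAILED', at: Date.now() });
      throw err;
    }
  };
}

// In a real consumer, notify() would publish to a notification queue, e.g.:
// const notify = (event) =>
//   channel.sendToQueue('notifications', Buffer.from(JSON.stringify(event)));
```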

RabbitMQ status of a message

I'm currently using RabbitMQ (Bunny) at VersionEye to import meta information about GitHub repos via the GitHub API. Depending on the number of GitHub repos, a task can take a few seconds up to a couple of minutes. I create a new message/task like this:
queue.publish( msg, :persistent => true )
One of the workers will get the message and perform the work. The worker sets its status (RUNNING, DONE) in Memcached. That way I know when a task is done!
I would like to get rid of Memcached in that process and instead get the status of a message from RabbitMQ. Something like this would be ideal:
status = queue.publish( msg, :persistent => true )
status = queue.status( msg )
Unfortunately I couldn't find anything like that in the RabbitMQ or Bunny docs.
Does anybody know how to get the status of a message from RabbitMQ?
There is no such feature to get a specific message's status in RabbitMQ out of the box.
As an alternative to the Memcached-based solution for tracking message status, you can use RabbitMQ itself: declare a message-specific queue in the application and track its status. The whole concept is very close to a remote procedure call (RPC). If the queue doesn't exist, the message is still in the original queue; if it exists, check its content (the consumer sends a "message received" message when it picks the task up and a "message processed" message once it has processed it). The biggest con is that these queues take some resources, mainly memory, which matters if you have a huge message flow. You will also have to clean up the response queues (this can be done with the queue TTL extension; don't confuse it with per-queue message TTL).
To use RabbitMQ in the way described above, first make sure you really need it (maybe the old-school RPC pattern is what you want, perhaps with some message TTL and dead-lettering support added to keep the response time reasonable). If you are sure, it is always a good idea to move these many queues to a separate vhost to ease maintenance of the rest of the queues. Also make sure you will not exceed the server's resource limits.
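A sketch of the message-specific status queue (illustrative names only; x-expires is the queue TTL extension mentioned above, which deletes a queue after it has been idle that long):

```javascript
// One short-lived status queue per message.
function statusQueueName(messageId) {
  return `status.${messageId}`;
}

// x-expires deletes the queue itself after ttlMs of inactivity, so stale
// status queues clean themselves up.
function statusQueueOptions(ttlMs) {
  return { arguments: { 'x-expires': ttlMs } };
}

// Roughly, with amqplib:
// await channel.assertQueue(statusQueueName(id), statusQueueOptions(600000));
// worker: channel.sendToQueue(statusQueueName(id), Buffer.from('RUNNING'));
// later:  channel.sendToQueue(statusQueueName(id), Buffer.from('DONE'));
```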

RabbitMQ Message Lifetime Replay Message

We are currently evaluating RabbitMQ, trying to determine how best to implement some of our processes as messaging apps instead of the traditional DB store-and-grab. Here is the scenario: we have a department of users who perform similar tasks. As they submit work to the server applications, we would like the server app to send messages back into a notification window saying what was done, to all the users, not just the one submitting the work. This is all easy to do.
The question is: we would like these messages to live for, say, 4 hours in the queue. If a new user logs in, or say a supervisor, they would get all the messages from the last 4 hours delivered to their notification window. This gives them a quick way to review what has recently happened and what is going on without having to ask others: "Have you talked to John?", "Did you email him his itinerary?", etc.
So, how do we publish messages that have a lifetime of x hours from the time they were published, AND ensure any new consumers that connect will get all of these messages delivered in chronological order? And preferably the messages just disappear after they have expired from the queue.
Thanks
There are Per-Queue Message TTL and Per-Message TTL in RabbitMQ. If I am right, you can use them for your task.
In addition to the above answer, it would be better to have the application/client publish messages to two queues. The consumer would consume from one of the queues, while the other queue can be configured with per-queue message TTL or per-message TTL to retain the messages.
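For the 4-hour window, the per-queue message TTL variant could look like this with amqplib (a sketch; the queue name and helper are made up). Note it only handles expiry; delivering the backlog to each newly connecting user would still need something extra, e.g. one queue per consumer bound to a fanout exchange.

```javascript
// Messages older than 4 hours are dropped from the queue automatically.
const FOUR_HOURS_MS = 4 * 60 * 60 * 1000;

function notificationQueueArgs() {
  return { 'x-message-ttl': FOUR_HOURS_MS };
}

// await channel.assertQueue('notifications', { arguments: notificationQueueArgs() });
```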
You queue messages to get a message from one point to another reliably, so that the sender can work independently from the receiver. What you propose is working with a temporary persistent store.
A SQL database would fit perfectly, but MongoDB would also work nicely: you drop a document into Mongo, give it a TTL, and let the database handle the expiration.
http://docs.mongodb.org/master/tutorial/expire-data/