I am using RabbitMQ to send notifications to users, and a user can read his queue at any time.
The problem I am facing is that the queue fills up with lots of notifications during the night, and when the user returns in the morning, he has to process these messages sequentially. Many of these notifications are duplicates.
I guess it would make sense to improve this on the publisher side: before adding a new notification, we check whether there are already pending notifications in the queue, and only queue the new notification if it is genuinely new, hence avoiding duplicates.
We might go even further and combine notifications: instead of simply queuing a new notification, we could replace the pending notifications in the queue with a single one that holds both the existing notifications and the new one (for example, in an array of inner notifications).
Is this possible with AMQP/RabbitMQ?
The rabbitmq-message-deduplication plugin has been written to tackle exactly this issue.
You can enable de-duplication on a queue by setting its x-message-deduplication argument to true.
Your publishers will then need to provide the x-deduplication-header message header with a value meaningful for de-duplication: a unique message ID, or an MD5/SHA1 hash of the body, for example.
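A minimal sketch in Python with pika, assuming the plugin is installed and enabled on the broker (the queue name and payload are made up for illustration):

import hashlib
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare the queue with de-duplication enabled
channel.queue_declare(
    queue="notifications",
    arguments={"x-message-deduplication": True},
)

body = b'{"user": 42, "event": "order_shipped"}'

# Use a hash of the body as the de-duplication value; while this message
# is pending in the queue, identical publishes are dropped by the plugin
channel.basic_publish(
    exchange="",
    routing_key="notifications",
    body=body,
    properties=pika.BasicProperties(
        headers={"x-deduplication-header": hashlib.sha1(body).hexdigest()}
    ),
)
connection.close()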
As for combining notifications: no, by default you can't replace a message that is already in the queue.
In my app (which runs multiple instances), we occasionally see the case where the connection between the app and RabbitMQ is lost due to network issues (both the app and RabbitMQ are still alive). After the connection is re-established, we receive the messages that were unacked.
This creates an issue for us: the app wasn't dead and is still processing the same message it received before, but now the message is redelivered, causing the app to process it again (which can be fatal for us).
Since the app has multiple instances, it is not easy for one instance to check whether another instance is processing the same message at the same time. We can't simply filter out redelivered messages, because we need redelivery to handle instance/app crashes and re-deployments.
There doesn't seem to be an API to tell RabbitMQ not to redeliver unacked messages.
So what is the recommended practice to handle this situation?
Thanks,
The general solution for such scenarios is to make the consumers handle the messages in an idempotent manner. What I generally do (in case there is no unique identifier in the message body) is add an idempotencyId attribute, a GUID, to the message body on the producer side; on the consumer side, this ID is validated against the stored values in the database for each message, and any duplicates are rejected.
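A rough sketch of that consumer-side check in Python, using SQLite as a stand-in for whatever shared database your instances use (the table and field names here are made up):

import json
import sqlite3

db = sqlite3.connect("processed.db")
db.execute("CREATE TABLE IF NOT EXISTS processed (idempotency_id TEXT PRIMARY KEY)")

def handle_message(body):
    message = json.loads(body)
    try:
        # The PRIMARY KEY makes the insert fail for duplicates, so only
        # one consumer ever gets past this line for a given message
        with db:
            db.execute(
                "INSERT INTO processed (idempotency_id) VALUES (?)",
                (message["idempotencyId"],),
            )
    except sqlite3.IntegrityError:
        return  # duplicate: already processed, reject it
    process(message)  # the actual work, now done at most once

def process(message):
    print("processing", message["idempotencyId"])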
This approach also works for messages that might be shoveled from another cluster, or when multiple consumer instances are listening in the same cluster; in those cases too, it guarantees one-time processing.
I would also suggest going over the RabbitMQ Reliability Guide.
Yeah, exactly-once delivery is not something RabbitMQ is good at. In fact, I'd say you should probably not be using it for these kinds of problems. Honestly, the only way to truly fix this is to use distributed transactions or locking.
Anyway, you could turn the problem on its head by ack'ing the message as soon as the consumer gets it, before it starts working on it. That would avoid the RabbitMQ-related duplication issue at least. This is at-most-once delivery.
Of course, it means that if the consumer crashes, the message is lost forever. So you need to persist the message right before you ack it, so you can recover it later; the consumer should then remove the persisted copy once it's done.
Considering that crashes are rare, you can then have a single dedicated process that just works on those persisted messages. Or for that matter, handle them manually.
Just be aware that this pushes the duplication problem ahead of you: the consumer might still fail to remove the persisted message after it's done working with it. But at least you now have the option to implement the cleanup however you want.
Storage in this case could be anything from files, a RDBMS or something like ZooKeeper or Redis to lock/unlock in-flight messages.
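A sketch of that ack-first (at-most-once) flow with pika, persisting to a local file as a stand-in for whichever store you pick (the queue name and file naming are made up):

import os
import pika

def do_work(body):
    print("working on", body)

def on_message(channel, method, properties, body):
    # Persist, then ack immediately, before doing the work, so
    # RabbitMQ will never redeliver this message (at-most-once)
    in_flight = "inflight-%d.msg" % method.delivery_tag
    with open(in_flight, "wb") as f:
        f.write(body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

    do_work(body)         # a crash here is recovered from the file
    os.remove(in_flight)  # work complete: drop the persisted copy

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_consume(queue="work", on_message_callback=on_message)
channel.start_consuming()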
I'm trying to use RabbitMQ in a somewhat unconventional way (though at this point I could pick any other message queue implementation if needed).
I have one queue (I can have more if needed) from which consumers fetch N messages asynchronously. After they finish their work, the results are sent from the client to the DB.
I have two problems: first, I don't want two consumers to work on the same message; second, I want to guarantee that I won't lose messages if a consumer closes the browser or just stops working.
I looked at the documentation and saw TTL, which would be perfect for me if I could arrange for a message that times out to be moved to another queue instead of being deleted; I can't find a way to do this.
Moreover, I looked at the confirmation option, which at first glance looked like what I wanted. That mechanism works like this: when the consumer gets a message, it sends a confirmation to the queue. I thought I could delay this confirm and send it when the work is done on the client side.
My problem is that I can't program the queue so that any message that doesn't get confirmed is returned to the queue (or to another one).
I also found out how to schedule a message, but that doesn't help either, because I don't want the message to be inserted into the queue in five minutes. I want a message, once received by a consumer, to be locked in the queue for 5 minutes: if a confirm-to-delete arrives it is removed, otherwise it is returned to the queue.
Can I create a temporary queue that enables this mechanism?
If someone can help with one of these problems, or suggest another architecture or a way to do this in another MQ, that would be great.
Resources:
Confirmation:
http://www.rabbitmq.com/blog/2011/02/10/introducing-publisher-confirms/
Post about locks (though his problem was a batcher component):
Locks and batch fetch messages with RabbitMq
TTL:
https://www.rabbitmq.com/ttl.html
Schedule a message:
https://www.rabbitmq.com/blog/2015/04/16/scheduling-messages-with-rabbitmq/
"My problem is that I can't program the queue so that any message that doesn't get confirmed is returned to the queue (or to another one)."
RabbitMQ does this anyhow, so all you have to do is switch off the auto-ack flag. You figured the rest out yourself:
"I thought I could delay this confirm and send it when the work is done on the client side."
So just send the ACK once you've finished processing the message.
All the unacknowledged messages remain in the queue and are re-delivered to the next consumer (or to the same one when it's up again, depending on your setup).
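In pika terms, a minimal sketch (the queue name and the processing step are placeholders):

import pika

def do_work(body):
    print("processing", body)

def on_message(channel, method, properties, body):
    do_work(body)  # do the real work first...
    # ...and only then acknowledge; if this consumer dies before the
    # ack, RabbitMQ requeues the message for another consumer
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_consume(queue="tasks", on_message_callback=on_message, auto_ack=False)
channel.start_consuming()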
In my latest project, I am using MassTransit (2.10.1) with RabbitMQ.
In some scenarios, a producer is allowed to send a bulk of messages to the queue.
For example, the user sends a bulk notification to his list of contacts, and that list can be as large as 100,000 contacts. This puts one message per contact into the queue (I need to keep track of each message). Since, as I understand it, messages are processed in order of arrival, that user clogs up the queue for a long time, while another user who did something simple, such as sending a test message to himself, has to wait for the processing to end.
I have considered separate queues for regular vs. bulk operations, but this still doesn't solve the problem for small bulks (a user with dozens of contacts waiting behind users with hundreds of thousands) and adds extra maintenance.
The ideal solution for me, I think, would involve manipulating the routing in such a way that the consumer handles X messages from one user, then X messages from the next user, and so on, cycling back to the beginning of the queue until all messages are processed.
Is that possible? Is there a better solution?
Thanks in advance.
You will have to write code to manage this yourself. RabbitMQ doesn't really have any built-in mechanism to handle a scenario like this without your code getting involved.
If you want to process a few at a time from bulk, then back to normal, then back to bulk, you'll need 2 queues and code to manage which one is being pulled from, when.
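A rough sketch of that alternation with pika's basic_get (the queue names and batch size are arbitrary):

import pika

def handle(body):
    print("handling", body)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

QUEUES = ["normal", "bulk"]  # one queue per traffic class
BATCH = 10                   # messages to take before switching queues

while True:
    for queue in QUEUES:
        for _ in range(BATCH):
            method, properties, body = channel.basic_get(queue=queue)
            if method is None:
                break  # this queue is currently empty, try the other
            handle(body)
            channel.basic_ack(delivery_tag=method.delivery_tag)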
Just my opinion, seeing as how there is no built-in way that I know of: have you considered using whatever storage already holds the notifications? Publish just one message with a list of notifications, store it in your DB, and then have a "retrieve notifications for user" consumer. The response would be one message. It may have a massive payload, but even if that bogs things down, add skip and take properties to the message and force them to be between 0 and 50 (or whatever). In what scenario would you want to show a user 100,000 notifications at once?
Question
I want to pass data between applications, in a publish-subscribe manner. Data may be produced at a much higher rate than consumed and messages get lost, which is not a problem. Imagine a fast sensor and a slow sensor data processor. For that, I use redis pub/sub and wrote a class which acts as a subscriber, receives every message and puts that into a buffer. The buffer is overwritten when a new message comes in or nullified when the message is requested by the "real" function. So when I ask this class, I immediately get a response (hint that my function is slower than data comes in) or I have to wait (hint that my function is faster than the data).
This works pretty well when data comes in fast. But for data that arrives relatively seldom, say every five seconds, it does not: if my consumer is launched slightly after the producer, the first message is lost, and my consumer has to wait nearly five seconds until it can start working.
I think I have to solve this with Redis tools. Instead of pub/sub, I could simply use the GET/SET methods, thus putting the cache functionality into Redis directly. But then my consumer would have to poll the database instead of the event magic I have at the moment. Keys could look like "key:timestamp", and my consumer would have to get key:* and compare the timestamps constantly, which I think would cause a lot of load. There is no natural opportunity to sleep: although I don't care about dropped messages (there is nothing I can do about them), I do care about delay.
Does someone use Redis for a similar thing and could give me a hint about clever use of Redis tools and data structures?
Edit:
Ideally, my program flow would look like this:
start the program
retrieve key from Redis
tell Redis, "hey, notify me on changes of key".
launch something asynchronously, with a callback for new messages.
While writing this, an idea came up: the publisher not only publishes the message on topic key, but also does SET key message. This way, an application could initially GET the value and then subscribe.
Good idea or not really?
What I did after I got the answer below (the accepted one):
Keyspace notifications are really what I need here. Redis acts as the primary source of information, and my client subscribes to keyspace notifications, which notify subscribers about events affecting specific keys. In the asynchronous part of my client, I subscribe to notifications about my key of interest. Those notifications set a key_has_updates flag. When I need the value, I get it from Redis and unset the flag. With an unset flag, I know there is no new value for that key on the server. Without keyspace notifications, this is where I would have needed to poll the server. The advantage is that I can use all sorts of data structures, not only the pub/sub mechanism, and a slow joiner that misses the first event can always get the initial value, which with pub/sub alone would have been lost.
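A condensed sketch of that pattern with redis-py (the key name and the flag handling are illustrative; notify-keyspace-events must be enabled on the server):

import redis

r = redis.Redis()
# Enable keyspace events for string commands; in production this is
# usually set in redis.conf ("notify-keyspace-events K$")
r.config_set("notify-keyspace-events", "K$")

key = "sensor:latest"
key_has_updates = True  # start True so the first read does a GET

pubsub = r.pubsub()
pubsub.subscribe("__keyspace@0__:" + key)  # fires on SET and friends

def value_if_updated():
    global key_has_updates
    # Drain pending notifications; any message means the key changed
    while pubsub.get_message(ignore_subscribe_messages=True, timeout=0):
        key_has_updates = True
    if not key_has_updates:
        return None  # no new value since the last read
    key_has_updates = False
    return r.get(key)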
One idea is to push the data to a list (LPUSH) and trim it (LTRIM) so it doesn't grow forever if there are no consumers. On the other end, the consumer grabs items from that list and processes them. You can also use keyspace notifications to be alerted each time an item is added to that list.
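For instance, on the producer side (the key name and cap are arbitrary):

import redis

r = redis.Redis()
r.lpush("sensor:readings", "42.7")
r.ltrim("sensor:readings", 0, 99)  # keep only the newest 100 entries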
I pass data between applications using two native Redis commands: RPUSH and BLPOP.
"blpop blocks the connection when there are no elements to pop from any of the given lists".
Data is passed in JSON format between the applications, using a list as a queue.
The application that wants to send data (acting as publisher) does an RPUSH on a list.
The application that wants to receive data (acting as subscriber) does a BLPOP on the same list.
The code would be (in Perl):
Sender (we assume a hash reference is passed in):
use Redis;
use JSON;

# Encode the hash in JSON format
my $json_text = encode_json($hash_ref);

# Connect to Redis and push onto the list
my $r = Redis->new(server => "127.0.0.1:6379");
$r->rpush("shared_queue", $json_text);
$r->quit;
Receiver (in an infinite loop):
use Redis;
use JSON;

# Connect once, outside the loop
my $r = Redis->new(server => "127.0.0.1:6379");
while (1) {
    # BLPOP blocks until an element arrives; returns (list_name, value)
    my @elem = $r->blpop("shared_queue", 0);
    # Decode the JSON payload back into a hash reference
    my $hash_ref = decode_json($elem[1]);
    # ... do some work with $hash_ref ...
}
I find this approach very useful, for several reasons:
The elements are stored in a list, so temporarily disabling the receiver causes no information loss; when the receiver restarts, it can process all the items in the list.
A high send rate can be handled with multiple instances of the receiver.
Multiple senders can push data onto a single list; in this case, a data collector is easily implemented.
Receiver processes that act as daemons can be monitored with standard tools (e.g. pm2).
Since Redis 5, there is a new data type called Streams, an append-only data structure. Redis Streams can be used as a reliable message queue, with both point-to-point and multicast communication via the consumer-group concept (Redis_Streams_MQ).
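A brief sketch of the consumer-group flow with redis-py (the stream, group, and consumer names are made up):

import redis

r = redis.Redis()
r.xadd("sensor:stream", {"value": "42.7"})  # producer appends an entry

# Create the consumer group once; each entry is delivered to exactly
# one consumer within the group
try:
    r.xgroup_create("sensor:stream", "workers", id="0")
except redis.ResponseError:
    pass  # group already exists

entries = r.xreadgroup("workers", "worker-1", {"sensor:stream": ">"},
                       count=10, block=5000)
for stream, messages in entries:
    for message_id, fields in messages:
        print(message_id, fields)
        r.xack("sensor:stream", "workers", message_id)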
We are currently evaluating RabbitMQ, trying to determine how best to implement some of our processes as messaging apps instead of the traditional DB store-and-grab. Here is the scenario: we have a department of users who perform similar tasks. As they submit work to the server applications, we would like the server app to send messages back into a notification window saying what was done, to all the users, not just the one submitting the work. This is all easy to do.
The question is, we would like these messages to live in the queue for, say, 4 hours. If a new user logs in, or say a supervisor, they would get all the messages from the last 4 hours delivered to their notification window. This gives them a quick way to review what has recently happened and what is going on, without having to ask others: "Have you talked to John?", "Did you email him his itinerary?", etc.
So, how do we publish messages that live for X hours from the time they were published, such that any new consumer that connects gets all of these messages delivered in chronological order? And preferably, the messages just disappear once they have expired from the queue.
Thanks
There are Per-Queue Message TTL and Per-Message TTL in RabbitMQ. If I am right, you can utilize them for your task.
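For example, a queue whose messages expire after 4 hours could be declared like this with pika (x-message-ttl is in milliseconds; the queue name is made up):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(
    queue="dept-notifications",
    arguments={"x-message-ttl": 4 * 60 * 60 * 1000},
)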
In addition to the above answer, it would be better to have the application/client publish messages to two queues: consumers consume from one of them, while the other is configured with a per-queue message TTL (or per-message TTL) to retain the messages.
You queue messages to get them from one point to another reliably, so that the sender can work independently of the receiver. What you propose is working with a temporary persistent store.
A SQL database would fit perfectly, but MongoDB would also work nicely: you drop a document into Mongo, give it a TTL, and let the database handle the expiration.
http://docs.mongodb.org/master/tutorial/expire-data/
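A quick sketch of the Mongo variant with pymongo (the database, collection, and field names are made up):

import datetime
import pymongo

client = pymongo.MongoClient()
notifications = client.mydb.notifications

# A TTL index: documents vanish 4 hours after their createdAt timestamp
notifications.create_index("createdAt", expireAfterSeconds=4 * 60 * 60)
notifications.insert_one({
    "createdAt": datetime.datetime.utcnow(),
    "text": "John submitted the itinerary",
})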