Reading a message in Azure storage queue multiple times - azure-storage

Context: I have an Azure storage queue that is used as the input queue for a Queue Trigger Function. So, whenever a message gets added to the queue, some function X is triggered and starts running. I want to test that the message was successfully put in the queue and consumed. How can I do that from the queue only (assuming I do not have visibility into my function X and cannot change the settings of the Queue Trigger Function)? To break this question down further:
After the Queue Trigger Function dequeues the message, would the message still be available for me to read when testing? If yes, how can I access it?
Since there is a race condition here, if my test dequeues the message before the Queue Trigger Function gets to it, how would that interfere with the Queue Trigger Function? Is it possible to dequeue the message in my test and, at the same time, keep it available for the Queue Trigger Function to dequeue and trigger my function X, with no interference at all?
Bottom line: I have a message in an Azure storage queue that I want to read twice, from two different consumers, with no interference between the two operations. Is this possible and supported? If yes, how can I do it?
Thanks!

I don't think the way you're trying to do this will work. You can get part of the way by using Peek Messages to read queue messages without dequeuing them, but if the Function gets to a message before you do, you'll never see it in the first place.
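For illustration, here is a minimal sketch of peeking, assuming the azure-storage-queue Python SDK (v12) and a hypothetical queue name and connection string; peeking returns messages without dequeuing them or changing their visibility:

```python
from azure.storage.queue import QueueClient

# Hypothetical connection string and queue name; adjust for your setup.
queue = QueueClient.from_connection_string(
    conn_str="<your-storage-connection-string>",
    queue_name="input-queue",
)

# peek_messages does NOT dequeue and does not set a visibility timeout,
# so the Queue Trigger Function can still receive these messages.
for msg in queue.peek_messages(max_messages=32):
    print(msg.id, msg.dequeue_count, msg.content)
```

Peeking does not increment the dequeue count or hide the message, but as noted above, once the function has consumed and deleted the message there is nothing left to peek.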
However, you might be able to get the information you need by using Storage Analytics Logging to track queue activity, or by using Service Bus topics instead of a queue so that your messages can have multiple subscribers.
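If you go the Service Bus route, a topic with one subscription per consumer gives each consumer its own copy of every message. A minimal sketch, assuming the azure-servicebus Python SDK (v7) and hypothetical connection-string, topic, and subscription names:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<your-service-bus-connection-string>"  # hypothetical

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Publish once; every subscription on the topic gets its own copy.
    with client.get_topic_sender(topic_name="orders") as sender:
        sender.send_messages(ServiceBusMessage("hello"))

    # The test harness reads from its own subscription, so it never
    # competes with the function's subscription for the same message.
    with client.get_subscription_receiver(
        topic_name="orders", subscription_name="test-harness"
    ) as receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            print(str(msg))
            receiver.complete_message(msg)
```

Because each subscription holds its own copy of the message, completing it on the test-harness subscription has no effect on the copy delivered to the function's subscription.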

Related

ActiveMQ: How do I limit the number of messages being dispatched?

Let's say I have one ActiveMQ broker and an undefined number of consumers.
Problem:
To process a message, consumers need an external service, which is either "DATA1" or "DATA2" (specified in the message).
Each server, "DATA1" and "DATA2", can only handle 20 connections.
So at most 20 "DATA1" messages and 20 "DATA2" messages may be dispatched at any time.
Because of prioritization, the messages must be enqueued in the same queue.
Even if message A has a higher priority than message B, if A can't be processed because the external service has no free slots, message B needs to be processed instead.
How can this be solved? As long as I was using message pulling (prefetch of 0), I was able to do this with a BrokerPlugin that, on messagePull, enforced the limits using semaphores and selectors. If the limits were reached, the pull returned null.
However, due to performance issues I had to set prefetch to 1 and use push instead, so my messagePull hack no longer works (it's never called).
So far I'm considering implementing a custom cursor, but I was wondering whether someone knows a better solution.
Update: the custom cursor worked but broke features like message removal. I tried a custom Queue and QueueDispatchSelector (which is a pain to configure since there isn't a proper API for it), and it mostly works, but I still have synchronisation issues.
Also, DispatchPolicy seems like a very suitable API; however, while it is referenced by Queue, it is never actually used.
Queues give you buffering for system processing time for free, and messages are delivered on demand, so prefetch=0 or prefetch=1 should effectively get you there: a message is only delivered to a consumer when the consumer is ready (i.e., during the consumer.receive() call).
consumer.receive() is a blocking call, so you should not need a custom plugin or any other mechanism to delay delivery until the consumer process (and its required downstream services) is ready to handle the message.
This behavior should work out of the box; if it doesn't, there are probably details of your use case not given here that would shed more light on the scenario.

managing lock on message in RabbitMQ

I'm trying to use RabbitMQ in a somewhat unconventional way (though at this point I can pick any other message queue implementation if needed).
I have one queue (I can have more if needed) from which customers fetch N messages asynchronously. After they do their work, I send the results from the client to the database.
I have two problems: first, I don't want two customers to work on the same message; second, I want to guarantee that I won't lose messages if a customer closes the browser or just stops working.
I looked at the documentation and saw TTL, which would be perfect for me if I could arrange for a message that times out to be moved to another queue instead of being deleted, but I can't find a way to do that.
I also looked at the confirmation option, which at first glance looked like what I wanted. That mechanism works like this: when the consumer gets a message, it sends a confirmation to the queue. I thought I could delay this confirm and send it once the work is done on the client side.
My problem was that I can't configure the queue so that any message that doesn't get confirmed is returned to the queue (or to another one).
I also found how to schedule a message, but that didn't help either, because I don't want the message to be inserted into the queue in five minutes; I want a message, once received by a customer, to be locked in the queue for 5 minutes until a confirm-to-delete is set, and otherwise returned to the queue.
Can I use a temporary queue to enable this mechanism?
If someone can help with one of these problems, or suggest another architecture or a way to do it in another MQ, that would be great.
Resources:
confirmation:
http://www.rabbitmq.com/blog/2011/02/10/introducing-publisher-confirms/
post about locks, although that poster's problem was a batcher component:
Locks and batch fetch messages with RabbitMq
TTL:
https://www.rabbitmq.com/ttl.html
Schedule a message:
https://www.rabbitmq.com/blog/2015/04/16/scheduling-messages-with-rabbitmq/
"My problem was that I can't configure the queue so that any message that doesn't get confirmed is returned to the queue (or to another one)."
RabbitMQ does this anyway, so all you have to do is switch off the auto-ack flag; you've already figured this out.
"I thought I could delay this confirm and send it once the work is done on the client side."
So just send the ACK once you've finished processing the message. All unacknowledged messages remain in the queue and are redelivered to the next consumer (or to the same one when it comes back up, depending on your setup).
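For illustration, a minimal sketch of a consumer that uses manual acknowledgements, assuming RabbitMQ's Python client pika and a hypothetical queue name; anything delivered but never acked (for example because the consumer's connection died) is redelivered by the broker:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="work", durable=True)

def handle(ch, method, properties, body):
    print("processing", body)  # do the real work here before acking
    # Ack only after the work is done; if this consumer dies first,
    # RabbitMQ redelivers the message to another consumer.
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)  # at most one unacked message per consumer
channel.basic_consume(queue="work", on_message_callback=handle, auto_ack=False)
channel.start_consuming()
```

With prefetch_count=1 the broker will not push a second message to this consumer until the first one is acked, and since a delivered-but-unacked message is not handed to anyone else, two consumers never work on the same message at the same time.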

Application that uses its own queues to hold long-term process operations

I want to build a long-term process handler and use NServiceBus for it.
The role of NServiceBus is to hold the operations of such a process (some kind of batch process).
The problem is that I have more than one type of long-term process, and each of them must run in parallel, so pushing all the messages into one queue is not what I should do, I think.
The logic is:
1) Receive an order for a long-term process,
2) Divide it into N operations,
3) Pack each operation into a message and push it onto the queue,
4) Depending on the message type, a particular handler handles the message and performs the operation it holds.
I can't put all of the operations into one queue because my application also has to handle other messages that require a fast response. If the queue were full of operations, those other messages would wait a long time to be processed.
So, does anyone know how to solve this problem?
You should set the number of worker threads appropriately in the queue configuration settings of the long-running-process endpoint.
If you are using MSMQ, check out this and especially the tag <MsmqTransportConfig ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5"/>.
Every idle worker thread pulls a message from the queue even while another thread is still processing a different message. In this way you should achieve the parallel-processing requirement you described in your scenario.

How to write handler for Error queues in NServiceBus Saga?

I have a situation where MaxRetries for my MSMQ transport is 5. After 5 attempts, NServiceBus sends the message to the error queue that I have defined. Now I want to perform some further action when this happens (I have to update the status of some processes to Error).
Is it possible to write a handler in my Saga class to read these error queues?
Thanks in Advance
Haris
If you are using 2.x, you may want to consider writing a separate endpoint whose input queue is the error queue. The downside to this is that the messages will come off the queue; assuming you still want to store them, you'll have to push them to a database or some other type of storage.
You could also write a Saga that polls the error queue to check for messages and updates the appropriate status. After each check of the queue, you would need to request another Timeout.
In 3.0 you have more control over exceptions and can implement your own way of handling errors. If you implement the interface IManageMessageFailures, you can do your work there.
As an alternative to the solutions provided by Adam, you can subscribe to the events raised by ServiceControl when a message is sent to the error queue. See the official documentation here: http://docs.particular.net/servicecontrol/contracts
Another approach would be the notification API described here: http://docs.particular.net/nservicebus/errors/subscribing-to-error-notifications. It allows you to subscribe to certain events (not event messages), like "MessageSentToErrorQueue", directly on the endpoint, so you wouldn't need to consume the error queue.

Removing Message from Queue only if user does some operation

We have an MVC application that reads data from MSMQ.
We are trying to find a way to read a message from the queue and remove it only if the user has performed a successful operation on it.
The message should remain in the queue until the user completes the processing, and it should not be available to anyone else until the user who is processing the message object has finished the operation.
Is there a property on a Message object that can be set to something like "Peeked", which would prevent the message from being read again until it is either put back into the queue or removed from it?
We are not sure whether using MSMQ is a good idea in this case.
It sounds like you need to use your queue(s) in transactional mode. Then, your client can receive a message, process it, and then commit the transaction, at which point the message will be finally dequeued. While the transaction is active, however, other clients will not see the message -- it will be held in reserve until the transaction completes or is aborted.
This MSDN article has a decent overview of usage patterns for reliable messaging with MSMQ:
http://msdn.microsoft.com/en-us/library/ms978430.aspx
The Queue is the right idea. Your approach of "leave it in the queue, locked, but still kind-of-available" is wrong.
You may need multiple queues.
Process A enqueues something in Queue 1
Process B dequeues from Queue 1 and starts work.
If B is successful, that's it.
Otherwise, it gets queued somewhere else (perhaps the same queue, or perhaps Queue 2) for follow-up work.
If it went back into Queue 1, B will find it again, eventually. If it went to another Queue, then another process does cleanup, logging, error fixup or whatever, possibly putting something back in Queue 1.
A Queue isn't a database -- there's nothing stateful (no "don't look at me, I'm being processed").
A Queue is transient storage. Someone writes, someone else reads, and that's it.
If you want reliability, read this: http://msdn.microsoft.com/en-us/library/ms978430.aspx
And this: http://blogs.msdn.com/shycohen/archive/2006/02/20/535717.aspx
And this: http://www.request-response.com/blog/PermaLink,guid,03fb0e40-b446-42b5-ad90-3be9b0260cb5.aspx
Reliability is a feature of the queue, not your application. You can do a "recoverable read". It's a transaction that's part of the queue API.