Can webjob consume related messages before executing? - nservicebus

Is it possible for a webjob to listen to a queue and consume all related messages (based on message id or session id or something else) with a small delay after each new message?
For instance, three users edit and update three schedules, two of which are for the same employee. Each schedule generates a PayrollUpdated event: 1) employee 100, week 1, year 2017; 2) employee 200, week 1, year 2017; 3) employee 100, week 1, year 2017.
I’d like the webjob to listen to the queue and execute two concurrent jobs, one for each (employee, week, year) identifier.
I cannot tell if batches or sessions can accomplish this. I am also unable to wrap my head around NServiceBus to know if that is an option or not.
Note: the webjob task is idempotent.

Is it possible for a webjob to listen to a queue and consume all related messages (based on message id or session id or something else) with a small delay after each new message?
Out of the box, no. To consume all related messages you'd need to use Message Sessions, which are not supported by the ServiceBus trigger. There is an issue for exactly this problem (and also one for Azure Functions, since Functions relies on the WebJobs SDK). Alternatively, you could look into creating a custom WebJobs extension that would use Message Sessions.
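If you drop down to the Service Bus SDK directly, a session-enabled queue can do the grouping for you. Below is a minimal sketch, not a definitive implementation, assuming the Microsoft.Azure.ServiceBus client and a session-enabled queue; queueClient, body, employeeId/week/year and ProcessPayrollUpdateAsync are placeholders:

    using System.Threading.Tasks;
    using Microsoft.Azure.ServiceBus;

    // Sender: stamp each message with a session id built from the
    // employee/week/year identifier so related messages share a session.
    var message = new Message(body) // body: byte[] payload (placeholder)
    {
        SessionId = $"{employeeId}-{week}-{year}" // e.g. "100-1-2017"
    };
    await queueClient.SendAsync(message);

    // Receiver: messages within one session are delivered one at a time,
    // while different sessions (here: different employees) run concurrently.
    queueClient.RegisterSessionHandler(
        async (session, msg, token) =>
        {
            await ProcessPayrollUpdateAsync(msg); // placeholder for the job
            await session.CompleteAsync(msg.SystemProperties.LockToken);
        },
        new SessionHandlerOptions(e => Task.CompletedTask)
        {
            MaxConcurrentSessions = 2,
            AutoComplete = false
        });

MaxConcurrentSessions gives you the two concurrent, per-identifier streams you describe.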
I am also unable to wrap my head around NServiceBus to know if that is an option or not.
With NServiceBus you could achieve it using Sagas.
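To make that concrete, here is a minimal sketch (not a drop-in implementation) of an NServiceBus 6+ saga: it is correlated on a combined employee/week/year key, and each PayrollUpdated event requests a timeout so the actual work runs after a small delay. All type and property names are invented for illustration:

    using System;
    using System.Threading.Tasks;
    using NServiceBus;

    public class PayrollUpdated : IEvent
    {
        public string PayrollKey { get; set; } // e.g. "100-1-2017"
    }

    public class PayrollDelay { } // timeout message

    public class PayrollSagaData : ContainSagaData
    {
        public string PayrollKey { get; set; }
    }

    public class PayrollSaga : Saga<PayrollSagaData>,
        IAmStartedByMessages<PayrollUpdated>,
        IHandleTimeouts<PayrollDelay>
    {
        protected override void ConfigureHowToFindSaga(SagaPropertyMapper<PayrollSagaData> mapper)
        {
            // Events with the same key end up in the same saga instance.
            mapper.ConfigureMapping<PayrollUpdated>(m => m.PayrollKey)
                  .ToSaga(s => s.PayrollKey);
        }

        public Task Handle(PayrollUpdated message, IMessageHandlerContext context)
        {
            // Coalesce bursts of related events behind a short delay.
            return RequestTimeout<PayrollDelay>(context, TimeSpan.FromSeconds(30));
        }

        public Task Timeout(PayrollDelay state, IMessageHandlerContext context)
        {
            // The idempotent payroll work (or a command to trigger it) goes here.
            MarkAsComplete();
            return Task.CompletedTask;
        }
    }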

Related

MassTransit - Prioritize RabbitMQ Message on Routing Slip

RabbitMQ supports message priority: https://www.rabbitmq.com/priority.html
MassTransit allows user to set this up when configuring endpoints and when sending/publishing a message.
Question: Would it be possible to set a message priority when using a Routing Slip in MassTransit?
My Problem: We have a screen that can schedule items or process them right away. If scheduled, items can be processed in batches. When hundreds of items are processed at the same time, saving a record on the screen can take minutes because its message goes to the end of the queue, which can lead to a bad user experience.
So, if it's not possible to set the priority, what is the alternative here?
Thanks!
Your easiest option? Set up your activity services so that they host two endpoints: one for execute (anything, including batch) and one for execute-interactive, which you use when it is an interactive request. When you build the routing slip, use the appropriate queues for the activity execution (see the sketch below), and you're off and running. Batch won't interfere because it's on a separate set of endpoints.
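As a rough sketch of what that looks like when building the slip (queue names, the isInteractive flag, and ProcessItem are made up for illustration):

    using System;
    using System.Threading.Tasks;
    using MassTransit;
    using MassTransit.Courier;

    public static async Task SendWork(IBus bus, bool isInteractive, int itemId)
    {
        var builder = new RoutingSlipBuilder(NewId.NextGuid());

        // Same activity, two execute endpoints: interactive requests skip
        // the batch backlog because they use their own queue.
        var executeAddress = isInteractive
            ? new Uri("rabbitmq://localhost/process-item-interactive_execute")
            : new Uri("rabbitmq://localhost/process-item_execute");

        builder.AddActivity("ProcessItem", executeAddress, new { ItemId = itemId });

        await bus.Execute(builder.Build()); // extension in MassTransit.Courier
    }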
Your other option is a lot harder, and would involve creating send middleware that looks for the RoutingSlip message type, checks some value, and sets the priority.

Nservicebus Sequence

We have a requirement for all our messages to be processed in the order of arrival to MSMQ.
We will be exposing a WCF service to the clients, and this WCF service will post the messages using NServiceBus (Sendonly Bus) to MSMQ.
We are going to develop a Windows service (MessageHandler), which will use NServiceBus to read the message from MSMQ and save it to the database. Our database will not be available for a few hours every day.
During the db downtime we expect the process to retry the first message in MSMQ and halt processing other messages until the database is up. Once the database is up we want NServiceBus to process messages in the order they were sent.
Will setting MaximumConcurrencyLevel="1" MaximumMessageThroughputPerSecond="1" help in this scenario?
What is the best way using NServiceBus to handle this scenario?
We have a requirement for all our messages to be processed in the order of arrival to MSMQ.
See the answer to this question, How to handle message order in nservicebus?, and also this related post.
I agree that while in-order delivery is possible, it is much better to design your system so that order does not matter. The linked article outlines the following solution (a minimal sketch follows the list):
Add a sequence number to all messages.
In the receiver, check that the sequence number is the last-seen number + 1; if not, throw an out-of-sequence exception.
Enable second-level retries (so if messages are out of order they will be retried later, hopefully after the correct message was received).
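Here is what the receiver side of that scheme might look like, assuming an NServiceBus 6+ handler; SequencedMessage, ISequenceStore, and SaveToDatabaseAsync are hypothetical stand-ins for your own message type and persistence:

    using System;
    using System.Threading.Tasks;
    using NServiceBus;

    public class SequencedMessage : IMessage
    {
        public long SequenceNumber { get; set; }
    }

    public class SequencedMessageHandler : IHandleMessages<SequencedMessage>
    {
        readonly ISequenceStore store; // hypothetical: persists the last-seen number

        public SequencedMessageHandler(ISequenceStore store) => this.store = store;

        public async Task Handle(SequencedMessage message, IMessageHandlerContext context)
        {
            var lastSeen = await store.GetLastSeenAsync();
            if (message.SequenceNumber != lastSeen + 1)
            {
                // Retries will re-deliver this message later, hopefully after
                // the missing predecessor has been processed.
                throw new Exception($"Out of sequence: expected {lastSeen + 1}, got {message.SequenceNumber}");
            }

            await SaveToDatabaseAsync(message); // hypothetical persistence call
            await store.SetLastSeenAsync(message.SequenceNumber);
        }
    }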
However, in the interest of answering your specific question:
Will setting MaximumConcurrencyLevel="1" MaximumMessageThroughputPerSecond="1" help in this scenario?
Not really.
Whenever you have a requirement for ordered delivery, the fundamental laws of logic dictate that somewhere along your message processing pipeline you must have a single-threaded process in order to guarantee in-order delivery.
Where this happens is up to you (check out the resequencer pattern), but you could certainly throttle the NServiceBus handler to a single thread (I don't think you need to set MaximumMessageThroughputPerSecond to make it single-threaded, though).
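In NServiceBus 4/5 that throttle is the TransportConfig section the question quotes; in NServiceBus 6 and later the equivalent code-based setting would be something like:

    using NServiceBus;

    var endpointConfiguration = new EndpointConfiguration("MessageHandler");
    // One message at a time: the code equivalent of MaximumConcurrencyLevel="1".
    endpointConfiguration.LimitMessageProcessingConcurrencyTo(1);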
However, even if you did this, and even if you used transactional queues, you could still not guarantee that each message would be dequeued and processed to the database in order, because if there are any permanent failures on any of the messages they will be removed from the queue and the next message processed.
During the db downtime we expect the process to retry the first message in MSMQ and halt processing other messages until the database is up. Once the database is up we want NServiceBus to process messages in the order they were sent.
This is not recommended. The second level retry functionality in NServiceBus is designed to handle unexpected and short-term outages, not planned and long-term outages.
For starters, when your NServiceBus message handler endpoint tries to process a message in its input queue and finds the database unavailable, it will first retry immediately (first-level retries, five attempts by default), then apply its second-level retry policy, which retries with increasing delays before failing permanently and moving the failed message to its error queue. It will then move on to the next message in the input queue.
While this doesn't violate your in-order delivery requirement on its own, it will make life very difficult for two reasons:
The permanently failed messages will need to be re-processed with priority once the database becomes available again, and
there will be a ton of unwanted failure logging, which will obscure any genuine handling errors.
If you have regular planned outages which you know about in advance, then the simplest way to deal with them is to implement a service window, which is another term for a schedule.
However, the Windows service manager does not support the concept of service windows, so you would have to use a scheduled task to stop and then start your service, or look at other options such as Hangfire, Quartz.NET, or some other cron-type library.
It kind of depends on why you need the messages to arrive in order. If, for example, you first receive an Order message and then various OrderLine messages that all belong to a certain order, there are multiple possibilities.
One is to just accept that there can be OrderLine messages without an Order. The Order will come in later anyway. Eventual Consistency.
Another one is to collect messages (and possibly state) in an NServiceBus saga. If normally MessageA needs to arrive first, with MessageB and MessageC only received later, give all three messages the ability to start the saga. All three messages need to have something that ties them together, like a unique GUID. The saga will then make sure it collects them properly, and when all messages have arrived, perhaps store its final state and mark the saga as completed. A rough sketch follows.
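Here is what such a collecting saga might look like, with invented message types (OrderMessage, OrderLinesMessage) and an OrderId acting as the shared key:

    using System;
    using System.Threading.Tasks;
    using NServiceBus;

    public class CollectOrderSagaData : ContainSagaData
    {
        public Guid OrderId { get; set; }
        public bool OrderReceived { get; set; }
        public bool OrderLinesReceived { get; set; }
    }

    public class CollectOrderSaga : Saga<CollectOrderSagaData>,
        IAmStartedByMessages<OrderMessage>,      // invented message types;
        IAmStartedByMessages<OrderLinesMessage>  // either one may arrive first
    {
        protected override void ConfigureHowToFindSaga(SagaPropertyMapper<CollectOrderSagaData> mapper)
        {
            mapper.ConfigureMapping<OrderMessage>(m => m.OrderId).ToSaga(s => s.OrderId);
            mapper.ConfigureMapping<OrderLinesMessage>(m => m.OrderId).ToSaga(s => s.OrderId);
        }

        public Task Handle(OrderMessage message, IMessageHandlerContext context)
        {
            Data.OrderReceived = true;
            return CompleteIfDone();
        }

        public Task Handle(OrderLinesMessage message, IMessageHandlerContext context)
        {
            Data.OrderLinesReceived = true;
            return CompleteIfDone();
        }

        Task CompleteIfDone()
        {
            if (Data.OrderReceived && Data.OrderLinesReceived)
            {
                // Store the final state, then end the saga.
                MarkAsComplete();
            }
            return Task.CompletedTask;
        }
    }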
Another option is to just persist all messages directly into the database and have something else figure out what belongs to what. This is a scenario useful for a data warehouse where the data just needs to be collected, no matter what. Some data might not be 100% accurate (or consistent) but that's okay.
Asynchronous messaging makes it hard to process messages 100% in order, especially when the client calling the WCF service makes mistakes and/or sends them out of order. It wouldn't be the first time I've seen such a requirement combined with out-of-order messages.

NServiceBus Pub/Subscribe using SQLServer transport - can the subscriber scale out?

Using the latest version of NServiceBus, 4.4 I believe.
We are looking to implement NServiceBus and this section is using SQLServer as a transport. We want to pub/subscribe, which is fine but how would it work with scaling out the subscribers?
I have done a PoC where I ran the receiving endpoint of a SQL Server transport multiple times, and when a message came in, the first instance of the running receiver got the message and processed it, resulting in the other process NOT processing it, which is correct.
In a pub/subscribe architecture using SQLServer, would this same method of running multiple instances of the subscriber work and since we are using a common queue (SQLServer) it will just sort itself out and not process the message multiple times?
When using SQL Server persistence, the subscribers for your events and messages are held in the Subscription table within the NServiceBus database, so you can check which endpoints are subscribing to what messages or events by viewing the contents of that table.
It's worth noting that you can only publish "message" classes with NServiceBus that are implementing the IEvent interface (unless you make use of unobtrusive mode).
When you publish a message or event using bus.Publish, all subscribers to that type will receive it, as long as the individual endpoint names are different.
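For example, using the NServiceBus 4.x IBus API (the event type is invented):

    using NServiceBus;

    // The published type must implement IEvent (unless using unobtrusive mode).
    public class OrderPlaced : IEvent
    {
        public int OrderId { get; set; }
    }

    // Every distinct subscribing endpoint receives its own copy; multiple
    // instances of the SAME endpoint compete for that one copy, which is
    // what allows the subscriber to scale out.
    bus.Publish(new OrderPlaced { OrderId = 42 });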

RabbitMQ Message Lifetime Replay Message

We are currently evaluating RabbitMQ. Trying to determine how best to implement some of our processes as Messaging apps instead of traditional DB store and grab. Here is the scenario. We have a department of users who perform similar tasks. As they submit work to the server applications we would like the server app to send messages back into a notification window saying what was done - to all the users, not just the one submitting the work. This is all easy to do.
The question is, we would like these messages to live for say 4 hours in the queue. If a new user logs in, or say a supervisor, they would get all the messages from the last 4 hours delivered to their notification window. This gives them a quick way to review what has recently happened and what is going on without having to ask others, "Have you talked to John?", "Did you email him his itinerary?", etc.
So, how do we publish messages that have a lifetime of x hours from the time they were published AND any new consumers that connect will get all of these messages delivered in chronological order? And preferably the messages just disappear after they have expired from the queue.
Thanks
RabbitMQ has both a Per-Queue Message TTL and a Per-Message TTL. If I am right, you can utilize them for your task.
In addition to the above answer, it would be better to have the application/client publish messages to two queues. The consumer would consume from one of the queues, while the other queue can be configured with a per-queue message TTL or per-message TTL to retain the messages.
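A hedged sketch with the RabbitMQ .NET client showing both TTL flavours; channel, body, and the queue name are placeholders:

    using System;
    using System.Collections.Generic;
    using RabbitMQ.Client;

    // Per-queue TTL: everything in this queue expires after 4 hours.
    var arguments = new Dictionary<string, object>
    {
        { "x-message-ttl", (int)TimeSpan.FromHours(4).TotalMilliseconds }
    };
    channel.QueueDeclare(queue: "notifications",
                         durable: true,
                         exclusive: false,
                         autoDelete: false,
                         arguments: arguments);

    // Per-message TTL: set Expiration (milliseconds, as a string) per message.
    var props = channel.CreateBasicProperties();
    props.Expiration = "14400000"; // 4 hours
    channel.BasicPublish("", "notifications", props, body);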
Queuing is about getting a message reliably from one point to another, so the sender can work independently of the receiver. What you propose is really working with a temporary persistent store.
A SQL database would fit perfectly, but MongoDB would also work nicely: you drop a document into Mongo, give it a TTL, and let the database handle the expiration.
http://docs.mongodb.org/master/tutorial/expire-data/
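With the MongoDB .NET driver that could look like this (database, collection, and field names are illustrative):

    using System;
    using MongoDB.Bson;
    using MongoDB.Driver;

    // TTL index: MongoDB deletes documents roughly when createdAt + 4h passes.
    var client = new MongoClient("mongodb://localhost");
    var collection = client.GetDatabase("notifications").GetCollection<BsonDocument>("messages");

    var keys = Builders<BsonDocument>.IndexKeys.Ascending("createdAt");
    var options = new CreateIndexOptions { ExpireAfter = TimeSpan.FromHours(4) };
    collection.Indexes.CreateOne(new CreateIndexModel<BsonDocument>(keys, options));

    // Each notification just needs a createdAt date field:
    collection.InsertOne(new BsonDocument
    {
        { "text", "John submitted itinerary #42" },
        { "createdAt", DateTime.UtcNow }
    });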

Long running workflow in asp.net mvc

I'm developing an intranet site using asp.net mvc4 to manage some of our data. One important feature of this site is to trigger import/export jobs. These jobs can take anywhere between 5 minutes to 1 hour. Users of the site need to be able to determine whether a job is currently running as well as the status of prior jobs. Many jobs will often include warning messages concerning duplicate data and these warnings need to be visible on the site.
My plan is to implement these long running processes as a WCF Workflow Service that the asp.net site will interact with. I've got much of the business logic implemented via activities and have tested it using a simple console application. I should note I'm using a correlation handle in order to partition the service based on specific "Projects" on the site.
My problem is how do I go about querying the status of an active job (if one exists) as well as the warning messages of previous jobs. I suspect the best way to do this would be to use the AppFabric tracking service and have my asp.net site query a SQL monitoring store and report back on the current status. After setting up AppFabric and adding custom tracking messages, I ran into a few issues. My first issue is that I cannot figure out how to filter out workflow instances that were not using the correct correlation handle, as I'd like to show only workflows for a specific project. The other issue is that the tracking database can be delayed quite a bit, which causes issues when trying to determine if a workflow is currently running.
Another possible solution could be to have the workflow explicitly update a database with its current status and any error messages. I'm leaning towards this solution but could use some expert advice.
TL;DR: I need to know the best way to query the execution status and any warning messages of a WCF Workflow service.
As you want to query workflow status and messages even after the workflow is finished, I would start by creating a table where you can convert the correlation values a client sends to the related workflow ID. I would create a custom activity to do that and drop it right after the Receive that creates the workflow.
Next I would create a regular WCF service that the client app uses to query the status. This WCF service can query the WF persistence store to see if a given workflow is still running. If so, the active bookmarks column will tell you which SOAP messages the workflow is currently waiting for.
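A sketch of that query, assuming the default SqlWorkflowInstanceStore schema (verify the view and column names against your own persistence database; connectionString and workflowInstanceId come from your app):

    using System;
    using System.Data.SqlClient;

    public static string GetActiveBookmarks(string connectionString, Guid workflowInstanceId)
    {
        const string sql = @"
            SELECT ActiveBookmarks
            FROM [System.Activities.DurableInstancing].[Instances]
            WHERE InstanceId = @id";

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@id", workflowInstanceId);
            conn.Open();

            // null => no row => the instance is gone (completed or terminated).
            // Otherwise the XML lists the Receives the workflow is waiting on.
            return cmd.ExecuteScalar() as string;
        }
    }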
As far as messages go, you can either use the AppFabric tracking infrastructure to store and retrieve them, or you could create a custom activity and store them in your own database. It really depends on whether you are also interested in the standard WF tracking messages that are generated.
Update on checking for running workflow instances:
There are several downsides to adding an IsRunning message to your workflow. For one, you would need to make sure one branch keeps looping and waiting for the message but stops as soon as the other, real workflow branch is done. That is certainly possible, but it complicates the workflow and is a possible source of errors. And as it is not part of the business problem, it really has no place in the workflow as far as I am concerned.

It also means that you will have to load a workflow from disk and persist it back just to tell you that it is there. If it was finished, you will need to wait for a fault to indicate there was no workflow instance, and that usually means you get a timeout exception after, by default, 60 seconds. Add throttling to that, and your request might be queued because there are too many other workflow instances or SOAP requests being processed. So a timeout might mean that a workflow instance exists but is unreachable due to system constraints.

Instead I would opt for the simple thing and check if the record in the instance store is still available. The additional info from the active bookmarks column will tell you what the workflow is waiting on, information I have used in the past to dynamically update the UI by enabling/disabling UI elements.