Does the Service Broker activation procedure need to be in a transaction?

I'm setting up Service Broker to do some asynchronous jobs. The procedure I have to receive messages calls another stored procedure that does a lot of work, and the fact that I'm in a transaction is causing some locking issues. The example I've patterned this off of came from https://sqlperformance.com/2014/03/sql-performance/configuring-service-broker but I'm wondering if it is a bad idea to remove the transactions from the procedure that is processing the messages.

You don't need an explicit transaction, but understand that if you remove it, then once the RECEIVE statement completes, the message is removed from the queue forever.
Contrast that with the case where you use a transaction: if the activation procedure aborts after the RECEIVE but before the transaction that contains it commits, the message is put back at the top of the queue.
Of course, you have to be careful about poison messages, but at least you're not dropping messages on the floor.
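A minimal sketch of the transactional activation pattern described above; the queue name and the work procedure are hypothetical:

```sql
-- Hypothetical activation procedure: the transaction ensures a message
-- goes back onto the queue if processing fails before COMMIT.
CREATE PROCEDURE dbo.ProcessJobQueue
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @handle UNIQUEIDENTIFIER,
            @message_type SYSNAME,
            @body VARBINARY(MAX);

    WHILE (1 = 1)
    BEGIN
        BEGIN TRANSACTION;

        WAITFOR (
            RECEIVE TOP (1)
                @handle       = conversation_handle,
                @message_type = message_type_name,
                @body         = message_body
            FROM dbo.JobQueue          -- hypothetical queue name
        ), TIMEOUT 5000;

        IF (@@ROWCOUNT = 0)
        BEGIN
            ROLLBACK TRANSACTION;      -- nothing dequeued; nothing to undo
            BREAK;
        END;

        -- Do the actual work here; an error raised before COMMIT rolls
        -- the message back onto the queue.
        -- EXEC dbo.DoWork @body;      -- hypothetical work procedure

        COMMIT TRANSACTION;
    END;
END;
```

Without the BEGIN TRANSACTION/COMMIT pair, the RECEIVE commits immediately and a failure in the work step loses the message.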

Related

How can I get SQL Service Broker to actually use all available Queue Readers?

I've built a data collection framework around service broker. There are several procs that fill the queue with various jobs. Then a listener (activated procedure) that takes the jobs, decides what needs to be done with that item, and hands it off to the correct collection proc.
The activation queue has a MAX_QUEUE_READERS of 10, but almost never reaches that limit. Instead it will take far longer to process with just 1 or 2 activated tasks as seen from dm_broker_activated_tasks.
How can I incentivize or even force the higher number of workers?
EDIT: This MS doc says it only checks for activation every 5 seconds.
Does that mean that if my tasks take less than 5 seconds I have no way to parallelize them through Service Broker?
Service Broker has a specific concept for parallelism, namely the conversation group. Only messages from different groups can be processed in parallel. How this manifests is that a RECEIVE will lock the conversation group for the dequeued message and no other RECEIVE can dequeue messages from the same conversation group.
So even if you do have more messages in your queue, if they belong to the same conversation group then SQL Server cannot activate more parallel readers.
Even if you don't manage conversation groups explicitly (almost nobody does), they are managed implicitly by the fact that a conversation handle is also a group. Basically, every time you issue a single BEGIN DIALOG followed by several SENDs on the same handle, those messages cannot be processed in parallel. If you issue a separate BEGIN DIALOG for each SEND, they can be processed in parallel, but you lose the ordering guarantee.
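The difference can be sketched as follows; the service, contract, and message-type names are hypothetical:

```sql
-- One dialog, many sends: both messages share a conversation group,
-- so activated readers must process them serially (in order).
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG @h
    FROM SERVICE [//Jobs/Initiator]     -- hypothetical services
    TO SERVICE   '//Jobs/Target'
    ON CONTRACT  [//Jobs/Contract]
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE [//Jobs/Request] (N'job 1');
SEND ON CONVERSATION @h MESSAGE TYPE [//Jobs/Request] (N'job 2');

-- One dialog per send: each message is its own conversation group,
-- so readers can be activated in parallel (ordering not guaranteed).
DECLARE @h1 UNIQUEIDENTIFIER, @h2 UNIQUEIDENTIFIER;
BEGIN DIALOG @h1 FROM SERVICE [//Jobs/Initiator] TO SERVICE '//Jobs/Target'
    ON CONTRACT [//Jobs/Contract] WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h1 MESSAGE TYPE [//Jobs/Request] (N'job 1');
BEGIN DIALOG @h2 FROM SERVICE [//Jobs/Initiator] TO SERVICE '//Jobs/Target'
    ON CONTRACT [//Jobs/Contract] WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h2 MESSAGE TYPE [//Jobs/Request] (N'job 2');
```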

NserviceBus rollback after sending

Hi, I have a problem with this code:
SendMessageToMyQueue();
UpdateStatusInDbThatMessageWasSent();
Sometimes the message is processed before the status is updated, which I would like to avoid.
My question is: if I wrap those two lines in a transaction like this:
using (var tr = new TransactionScope())
{
    SendMessageToMyQueue();
    UpdateStatusInDbThatMessageWasSent();
    tr.Complete();
}
will it be guaranteed that a lock on MyQueue is created, and that this lock will not be released until UpdateStatusInDbThatMessageWasSent has updated the status?
Also, if I add a try/catch with a rollback and updating the status fails, will the message be removed from MyQueue?
There is no such thing as a lock on a queue. The message, however, will be processed transactionally if the following conditions are met. By transactionally, I mean that the message will be returned to the queue if an unhandled exception is thrown. The conditions to make this happen are:
Your database can enlist and take part in a distributed transaction. Not every database out there can. Some document databases have no DTC support (MongoDB) or sketchy support (RavenDB).
Your transport also supports distributed transactions. Among broker-type transports, SQL Server Transport is your best bet, and among bus-type transports MSMQ is a good choice. Transports like Azure Service Bus or RabbitMQ have very limited transaction support and do not support distributed transactions.
You need the Distributed Transaction Coordinator service configured and running.
Two other things to note:
What if you're using a transport that lacks DTC support? Most of the time you are better off if you can design your system to be idempotent. The Outbox feature of NServiceBus allows you to simulate DTC to some extent.
When a message is picked from the queue, processed, and returned to the queue due to an exception, it might end up being in a different place in the queue. You need to design for messages arriving out of order when designing a message-based architecture.
With all that said, exactly-once delivery guarantees are always a hot and disputed topic.

NServiceBus - Long running handler prevents queue from processing any other messages

I am running NSB 5 and I am using NHibernate Persistence and have MaximumConcurrencyLevel set to 10.
I have a handler that calls a stored proc that executes an SSIS package. This package takes a non-trivial amount of time to run. I started to notice that whenever this particular message type is handled, all other message handling stops. I noticed via SQL Profiler that the constant querying of the queue table that NSB does in the background stops, and that any extra messages put into the queue are not handled, even though NSB is only handling one message.
Are there any guidelines or known issues for dealing with handlers that block the queue because database commands take a long time to complete?
It sounds like all 10 threads are busy, so the endpoint is blocked; can you test this?
I would recommend hosting this message handler in its own process
Make sense?

Broker Queue - Move Poisoned Messages to Table

Currently I have a queue that stores merge queries which are run once they are read off the queue. This all works well, and currently if there is an error with the merge, the queue will disable itself and I have to manually remove the message (or fix the merge, as it were).
I was wondering whether it was possible to simply move the poisoned message to a table? The queues run important (and different) merges that must continually run to ensure data is updated. It is not beneficial to me for the queue to, say, become disabled over night and gain a huge backlog.
Is there any way for me to simply push the bad message into a table? I have attempted this myself; however, I wound up having a TRY...CATCH inside a TRANSACTION, which performs a rollback on the error anyway (thus triggering the disable-after-5-rollbacks rule). Most solutions online mention only manually removing the message.
Any suggestions? Is this just a bad idea? If so, why?
Thanks.
The disable-after-5-rollbacks can be switched off by setting POISON_MESSAGE_HANDLING status to OFF in the CREATE/ALTER QUEUE statement. You can then use TRY...CATCH to manually deal with transactions that fail.
Like you I don't find this feature very useful, so almost always turn it off in my applications and deal with problem messages in whatever way seems best.
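A sketch of what that can look like; the queue, work procedure, and dead-letter table names are hypothetical:

```sql
-- Hypothetical: stop the queue from disabling after 5 rollbacks.
ALTER QUEUE dbo.MergeQueue
    WITH POISON_MESSAGE_HANDLING (STATUS = OFF);

-- Inside the activation procedure, after the RECEIVE has populated
-- @handle and @body:
BEGIN TRY
    EXEC dbo.RunMerge @body;        -- hypothetical merge procedure
END TRY
BEGIN CATCH
    -- Don't roll back: keep the RECEIVE committed and dead-letter the
    -- message instead. (If XACT_STATE() = -1 the transaction is doomed
    -- and must be rolled back, which re-queues the message for another
    -- attempt.)
    IF (XACT_STATE() <> -1)
    BEGIN
        INSERT INTO dbo.PoisonMessages
            (conversation_handle, message_body, error_message, logged_at)
        VALUES
            (@handle, @body, ERROR_MESSAGE(), SYSUTCDATETIME());
        END CONVERSATION @handle;
    END;
END CATCH;
```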

How does NServiceBus handle transactions?

Does NServiceBus automatically attempt to redeliver messages if handling fails? And if it does, is there a limit in the number of times delivery can be attempted?
NSB will enlist in a distributed transaction, and if message handling fails it will retry the configured number of times. Look at the MsmqTransport config section.
EDIT: A distributed transaction begins as soon as you peek or receive a message from MSMQ. All of the work you do in a message handler will be included in the transaction and it is governed by the Distributed Transaction Coordinator. The DTC will also include things like DB transactions if you are updating DBs and so on.
If, say, a DB update fails, the whole transaction rolls back and the message is put back on the queue.