SQL Service Broker Internal Activation Questions

I set up internal activation for two stored procedures. One inserts one or more records, the other updates one or more records in the same table. So I have two initiator and two target queues.
It works fine in development so far, but I wonder what types of problems I might encounter when we move it to production, where these two stored procedures are frequently called. We are already experiencing deadlock issues caused by these two stored procedures. Asynchronous execution is my main goal with this implementation.
Questions:
Is there a way to use one target queue for both stored procedures to prevent any chance of deadlocks?
Is there anything I can do to make it more reliable, so that one execution error does not stop incoming requests to the queue?
Tips to improve scalability (a high number of executions per second)?
Can I set RETRY if there is a deadlock?
Here is the partial code of the insert stored procedure:
CREATE QUEUE [RecordAddUsersQueue];
CREATE SERVICE [RecordAddUsersService] ON QUEUE [RecordAddUsersQueue];

ALTER QUEUE [AddUsersQueue] WITH ACTIVATION
    (STATUS = ON,
     MAX_QUEUE_READERS = 1, --or 10?
     PROCEDURE_NAME = usp_SB_AddInstanceUsers,
     EXECUTE AS OWNER);
GO

CREATE PROCEDURE [dbo].[usp_AddInstanceUsers]
    @UsersXml xml
AS
BEGIN
    DECLARE @Handle uniqueidentifier;

    BEGIN DIALOG CONVERSATION @Handle
        FROM SERVICE [RecordAddUsersService]
        TO SERVICE 'AddUsersService'
        ON CONTRACT [AddUsersContract]
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @Handle
        MESSAGE TYPE [ReqAddUsersXML] (@UsersXml);
END
GO
CREATE PROCEDURE [dbo].[usp_SB_AddInstanceUsers]
AS
BEGIN
    DECLARE @Handle uniqueidentifier;
    DECLARE @MessageType sysname;
    DECLARE @UsersXML xml;

    WHILE (1 = 1)
    BEGIN
        BEGIN TRANSACTION;

        WAITFOR
            (RECEIVE TOP (1)
                @Handle = conversation_handle,
                @MessageType = message_type_name,
                @UsersXML = message_body
             FROM [AddUsersQueue]), TIMEOUT 5000;

        IF (@@ROWCOUNT = 0)
        BEGIN
            ROLLBACK TRANSACTION;
            BREAK;
        END

        IF (@MessageType = N'ReqAddUsersXML')
        BEGIN
            --<INSERT>....
            DECLARE @ReplyMsg nvarchar(100);
            SELECT
                @ReplyMsg = N'<ReplyMsg>Message for AddUsers Initiator service.</ReplyMsg>';

            SEND ON CONVERSATION @Handle
                MESSAGE TYPE [RepAddUsersXML] (@ReplyMsg);
        END
        ELSE IF (@MessageType = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog')
        BEGIN
            END CONVERSATION @Handle;
        END
        ELSE IF (@MessageType = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error')
        BEGIN
            END CONVERSATION @Handle;
        END

        COMMIT TRANSACTION;
    END
END
GO
Thank you,
Kuzey

Is there a way to use one target queue for both stored procedures to prevent any chance of deadlocks?
You can and you should. There is no reason for having two target services/queues/procedures. Send, to the same service, two different message types for the two operations you desire. The activated procedure should then execute logic for Add or logic for Update, depending on message type.
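As an illustration only, here is a minimal sketch of a single activated procedure that dispatches on message type; the queue name, second message type, and the two worker procedures (usp_DoAddUsers, usp_DoUpdateUsers) are assumptions, not from the original post:
CREATE PROCEDURE dbo.usp_SB_ProcessUsers
AS
BEGIN
    DECLARE @Handle uniqueidentifier, @MessageType sysname, @Body xml;

    WHILE (1 = 1)
    BEGIN
        BEGIN TRANSACTION;

        WAITFOR
            (RECEIVE TOP (1)
                @Handle = conversation_handle,
                @MessageType = message_type_name,
                @Body = message_body
             FROM [UsersQueue]), TIMEOUT 5000;  -- one target queue for both operations

        IF (@@ROWCOUNT = 0)
        BEGIN
            ROLLBACK TRANSACTION;
            BREAK;
        END

        IF (@MessageType = N'ReqAddUsersXML')
            EXEC dbo.usp_DoAddUsers @Body;      -- insert logic
        ELSE IF (@MessageType = N'ReqUpdateUsersXML')
            EXEC dbo.usp_DoUpdateUsers @Body;   -- update logic
        ELSE IF (@MessageType = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
              OR @MessageType = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error')
            END CONVERSATION @Handle;

        COMMIT TRANSACTION;
    END
END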
Is there anything I can do to make it more reliable, so that one execution error does not stop incoming requests to the queue?
SSB activation will be very reliable, that's not going to be a problem. As long as you adhere strictly to transaction boundaries (do not commit dequeue operations before processing is complete), you'll never lose a message/update.
Tips to improve scalability (a high number of executions per second)?
Read Writing Service Broker Procedures and Reusing Conversations. To achieve high-throughput processing, you will have to dequeue and process in batches (TOP (1000)) into @table variables. See Exception handling and nested transactions for a pattern that can be applied to process a batch of messages. You'll need to read and understand Conversation Group Locks.
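A sketch of the batching idea (queue name and batch handling are illustrative, not from the answer):
DECLARE @batch TABLE (
    conversation_handle uniqueidentifier,
    message_type_name   sysname,
    message_body        varbinary(max));

BEGIN TRANSACTION;

WAITFOR (
    RECEIVE TOP (1000)
        conversation_handle, message_type_name, message_body
    FROM [UsersQueue]
    INTO @batch
), TIMEOUT 5000;

-- Note: a single RECEIVE only returns messages from one conversation group,
-- which is why reusing conversations matters for getting large batches.
-- Process the batch set-based: e.g. one INSERT ... SELECT over all Add
-- messages, one UPDATE ... FROM over all Update messages, then
-- END CONVERSATION for any EndDialog/Error messages in the batch.

COMMIT TRANSACTION;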
Can I set RETRY if there is a deadlock?
No need to; SSB activation will retry for you. As you roll back, the dequeue (RECEIVE) will roll back, making the messages available again for activation, and the procedure will automatically retry. Note that 5 rollbacks in a row will trigger the poison message trap and disable the queue.
MAX_QUEUE_READERS = 1, --or 10?
If 1 cannot handle the load, add more. As long as you understand proper conversation group locking, the parallel activated procedures should handle unrelated business items and never deadlock. If you encounter deadlocks between instances of activated procedure on the same queue, it means you have a flaw in the conversation group logic and you allow messages seen by SSB as uncorrelated (different groups) to modify the same database records (same business entities) and lead to deadlocks.
BTW, you must have an activated procedure on the initiator service queue as well. See How to prevent conversation endpoint leaks.
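A minimal sketch of such an initiator-side activated procedure, assuming the initiator queue name from the question (the procedure name is made up):
CREATE PROCEDURE dbo.usp_SB_AddUsersInitiatorHandler
AS
BEGIN
    DECLARE @Handle uniqueidentifier, @MessageType sysname;

    WHILE (1 = 1)
    BEGIN
        BEGIN TRANSACTION;

        WAITFOR
            (RECEIVE TOP (1)
                @Handle = conversation_handle,
                @MessageType = message_type_name
             FROM [RecordAddUsersQueue]), TIMEOUT 5000;

        IF (@@ROWCOUNT = 0)
        BEGIN
            ROLLBACK TRANSACTION;
            BREAK;
        END

        -- A reply means the dialog is done; EndDialog/Error also end it,
        -- so the conversation endpoint does not leak.
        END CONVERSATION @Handle;

        COMMIT TRANSACTION;
    END
END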

Related

How can we avoid Stored Procedures being executed in parallel?

We have the following situation:
A stored procedure is invoked by a middleware and is given an XML file as a parameter. The procedure then parses the XML file and inserts values into temporary tables inside a loop. After looping, the values in the temporary tables are inserted into physical tables.
The problem is that the stored procedure has a relatively long run time (about 5 minutes). In this period, it is likely to be invoked a second time, which would cause both processes to be suspended.
Now my question:
How can we avoid a second execution of a Stored Procedure if it is already running?
Best regards
I would recommend designing your application layer to prevent multiple instances of this process being run at once. For example, you could move the logic into a queue that is processed 1 message at a time. Another option would be locking at the application level to prevent the database call from being executed.
SQL Server does have a locking mechanism to ensure a block of code is not run multiple times: an "app lock". This is similar in concept to the lock statement in C# or other semaphores you might see in other languages.
To acquire an application lock, call sp_getapplock. For example:
begin tran
exec sp_getapplock @Resource = 'MyExpensiveProcess', @LockMode = 'Exclusive', @LockOwner = 'Transaction'
This call will block if another process has acquired the lock. If a second RPC call tries to run this process, and you would rather have the process return a helpful error message, you can pass in a @LockTimeout of 0 and check the return code.
For example, the code below raises an error if it could not acquire the lock. Your code could return something else that the application interprets as "process is already running, try again later":
begin tran

declare @result int

exec @result = sp_getapplock @Resource = 'MyExpensiveProcess', @LockMode = 'Exclusive', @LockOwner = 'Transaction', @LockTimeout = 0

if @result < 0
begin
    rollback
    raiserror (N'Could not acquire application lock', 16, 1)
end
To release the lock, call sp_releaseapplock:
exec sp_releaseapplock @Resource = 'MyExpensiveProcess'
Note that with @LockOwner = 'Transaction', the lock is also released automatically when the transaction commits or rolls back.
Stored procedures are meant to be run multiple times, and in parallel as well. The idea is to reuse the code.
If you want to avoid multiple runs for the same input, you need to take care of it manually, by implementing a condition check on the input or by using some locking mechanism.
If you don't want your procedure to run in parallel at all (regardless of input), the best strategy is to acquire a lock using an entry in a DB table or using global variables, depending on the DBMS you are using.
You can check whether the stored procedure is already running using exec sp_who2. This may be an approach to consider: in your SP, check this first and simply exit if it is already running. It will run again the next time the job executes.
You would need to filter out the current thread and make sure the count of that SP is 1 (1 is the current process; 2 means it is already running elsewhere), or have a helper SP that is called first.
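If parsing sp_who2 output is awkward, here is a sketch of an alternative using the DMVs; the procedure name is hypothetical, and this requires VIEW SERVER STATE permission:
IF EXISTS (SELECT 1
           FROM sys.dm_exec_requests r
           CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
           WHERE t.objectid = OBJECT_ID('dbo.MyExpensiveProcess')  -- hypothetical proc name
             AND r.session_id <> @@SPID)                           -- ignore the current session
BEGIN
    RETURN;  -- another session is already running it; exit quietly
END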
Here are other ideas: Check if stored procedure is running

SQL Server Service Broker Deadlock

I'm investigating a process that I did not build. It uses Service Broker to create a queue of contacts that then need an action performed against them.
There is then a handler that receives 10k records and passes them to a stored procedure to process.
What happens if that final process fails with a deadlock and there is no error handling? Do these go back into the queue? If not, what would I need to do to get them to go back into the queue?
Service broker queues can be accessed from within a transaction. So if you do something like this in your code (the below is pseudo-code; actual robust service broker code is a little beyond the scope of your question):
begin tran;

declare @messages table (queuing_order bigint, message_body varbinary(max));
declare @order bigint, @message varbinary(max);

receive top(10000) queuing_order, message_body
from dbo.yourQueue
into @messages;

while (1 = 1)
begin
    select top(1) @order = queuing_order, @message = message_body
    from @messages
    order by queuing_order;

    if (@@rowcount = 0)
        break;

    exec dbo.processMessage @message;

    -- remove the processed message so the loop advances
    delete from @messages where queuing_order = @order;
end

commit tran;
… then you're set. What I'm saying is that as long as you're doing your receive and processing in the same transaction, any failure (deadlocks included) will rollback the transaction and put the messages back on the queue. Make sure you read up on poison message handling, though! If you get too many rollbacks, SQL will assume that there's an un-processable message and shut down the queue. That's a bad day when that happens.
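If you would rather the queue not be disabled after five consecutive rollbacks, SQL Server 2008 and later let you opt out of automatic poison message handling; note this is my addition, and it means you take on detecting and quarantining bad messages yourself:
ALTER QUEUE dbo.yourQueue
    WITH POISON_MESSAGE_HANDLING (STATUS = OFF);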

Async calling a stored Proc using service Broker

In my application I need to call a stored proc asynchronously, and for this I am using SQL Service Broker.
These are the steps involved in creating the asynchronous calling:
1) I created the message type, contract, queue, and service, and I am sending messages. I can see my messages in 'ReceiveQueue1'.
2) I created a stored proc and a queue.
When I execute the stored proc (proc_AddRecord), it executes only once: it reads all the records in the queue and adds those records to the table.
Up to this point it works fine.
But when I add some new messages to 'ReceiveQueue1', my stored proc does not automatically add those records to the table. I have to re-execute the stored proc (proc_AddRecord) in order to add the new messages. Why is the stored proc not getting executed?
What am I supposed to do in order to call the stored proc asynchronously?
The whole point of using Service Broker is to call stored procs asynchronously.
I am totally new to SQL Server Service Broker.
Appreciate any help.
Here is my code for the stored proc:
--exec proc_AddRecord
ALTER PROCEDURE proc_AddRecord
AS
DECLARE
    @Conversation uniqueidentifier,
    @msgTypeName nvarchar(200),
    @msg varbinary(max)

WHILE (1 = 1)
BEGIN
    BEGIN TRANSACTION;

    WAITFOR
    (
        RECEIVE TOP (1)
            @Conversation = conversation_handle,
            @msgTypeName = message_type_name,
            @msg = message_body
        FROM dbo.ReceiveQueue1
    ), TIMEOUT 5000

    IF @@ROWCOUNT = 0
    BEGIN
        ROLLBACK TRANSACTION
        BREAK
    END

    PRINT @msg

    IF @msg = 'Sales'
    BEGIN
        INSERT INTO TableCity(deptNo, Manager, [Group], EmpCount) VALUES (101, 'Reeves', 51, 29)
        COMMIT TRANSACTION
        CONTINUE
    END

    IF @msg = 'HR'
    BEGIN
        INSERT INTO TableCity(deptNo, Manager, [Group], EmpCount) VALUES (102, 'Cussac', 55, 14)
        COMMIT TRANSACTION
        CONTINUE
    END

    BEGIN
        PRINT 'Process end of dialog messages here.'
        END CONVERSATION @Conversation
        COMMIT TRANSACTION
        CONTINUE
    END

    ROLLBACK TRANSACTION
END

ALTER QUEUE AddRecorQueue
WITH ACTIVATION (
    PROCEDURE_NAME = proc_AddRecord,
    MAX_QUEUE_READERS = 1,
    STATUS = ON,
    EXECUTE AS 'dbo');
You say you are executing the stored procedure; you shouldn't need to do that, not even once. It should always be done by the activation.
Shouldn't your activation be on 'ReceiveQueue1' instead of 'AddRecorQueue'? I can't see the rest of your code, but the names suggest it.
Where does your stored procedure begin and end? Generally I'd put BEGIN just after the AS statement and END where the stored procedure should end. If you don't have these, you'd need a GO statement to separate off the procedure; otherwise your ALTER QUEUE statement becomes part of the stored procedure.
You also have a final "Rollback Transaction", so even if the activation were working, everything would get rolled back (or an error would be raised saying there was no transaction, had one of the IF statements been triggered).
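To make the second point concrete, the activation would presumably need to move to the receiving queue, something like this (a sketch based only on the names in the question):
ALTER QUEUE dbo.ReceiveQueue1
WITH ACTIVATION (
    STATUS = ON,
    PROCEDURE_NAME = proc_AddRecord,
    MAX_QUEUE_READERS = 1,
    EXECUTE AS 'dbo');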
I suggest you follow this tutorial for service broker in general and this one about internal activation. They should get you started.

Sql Server Service Broker

Currently we are using Service Broker to send messages back and forth, which is working fine. But we want to group those messages by using RELATED_CONVERSATION_GROUP. We want to use our own database-persisted UUID as the RELATED_CONVERSATION_GROUP = @uuid from our database, but even though we use the same UUID every time, the conversation_group_id comes back different each time we receive from the queue.
Do you guys know what is wrong with the way I am creating the broker or with the receive call? I have provided both the broker creation code and the receive call code below. Thanks.
Below is the "Service Broker creation code":
CREATE PROCEDURE dbo.OnDataInserted
    @EntityType NVARCHAR(100),
    @MessageID BIGINT,
    @uuid uniqueidentifier,
    @message_body nvarchar(max)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @conversation UNIQUEIDENTIFIER

    BEGIN DIALOG CONVERSATION @conversation
        FROM SERVICE DataInsertSndService
        TO SERVICE 'DataInsertRcvService'
        ON CONTRACT DataInsertContract
        WITH RELATED_CONVERSATION_GROUP = @uuid;

    SEND ON CONVERSATION @conversation
        MESSAGE TYPE DataInserted
        (CAST(@message_body AS varbinary(max)))
END
Below is the "Receive code":
WHILE 0 < @@TRANCOUNT ROLLBACK;
SET NOCOUNT ON

BEGIN TRANSACTION;

DECLARE
    @cID as uniqueidentifier,
    @conversationHandle as uniqueidentifier,
    @conversationGroupId as uniqueidentifier,
    @tempConversationGroupId as uniqueidentifier,
    @message_body VARBINARY(MAX)

RAISERROR ('Awaiting Message ...', 16, 1) WITH NOWAIT

;WAITFOR (RECEIVE TOP (1)
    @cID = SUBSTRING(CAST(message_body as nvarchar(max)), 4, 36),
    @conversationHandle = [conversation_handle],
    @conversationGroupId = [conversation_group_id],
    @message_body = message_body
FROM DataInsertRcvQueue)

RAISERROR ('Message Received', 16, 1) WITH NOWAIT

SELECT @tempConversationGroupId = conversationGroupID
FROM ConversationGroupMapper
WHERE cID = @cID;

DECLARE @temp as nvarchar(max);
SET @temp = CAST(@tempConversationGroupId as nvarchar(max));

IF @temp <> ''
BEGIN
    MOVE CONVERSATION @conversationHandle TO @tempConversationGroupId;
    RAISERROR ('Moved to Existing Conversation Group', 16, 1) WITH NOWAIT
END
ELSE
BEGIN
    INSERT INTO ConversationGroupMapper VALUES (@cID, @conversationGroupId);
    RAISERROR ('New Conversation Group', 16, 1) WITH NOWAIT
END

WAITFOR DELAY '000:00:10'
COMMIT
RAISERROR ('Committed', 16, 1) WITH NOWAIT
Elaboration
Our situation is that we need to receive items from this Service Broker queue in a loop, blocking on WAITFOR, and hand them off to another system over an unreliable network. Items received from the queue are destined for one of many connections to that remote system. If the item is not successfully delivered to the other system, the transaction for that single item should be rolled back and the item will be returned to the queue. We commit the transaction upon successful delivery, unlocking the sequence of messages to be picked up by a subsequent loop iteration.
Delays in a sequence of related items should not affect delivery of unrelated sequences. Single items are sent into the queue as soon as they are available and are forwarded immediately. Items should be forwarded single-file, though order of delivery even within a sequence is not strictly important.
From the loop that receives one message at a time, a new or existing TcpClient is selected from our list of open connections, and the message and the open connection are passed along though the chain of asynchronous IO callbacks until the transmission is complete. Then we complete the DB Transaction in which we received the Item from the Service Broker Queue.
How can Service Broker and conversation groups be used to assist in this scenario?
Conversation groups are a local concept only, used exclusively for locking: correlated conversations belong in a group so that while you process a message on one conversation, another thread cannot process a correlated message. There is no information about conversation groups exchanged by the two endpoints, so in your example all the initiator endpoints end up belonging to one conversation group, but the target endpoints are each a distinct conversation group (each group having only one conversation). The reason the system behaves like this is that conversation groups are designed to address a problem like, say, a trip booking service: when it receives a message to 'book a trip', it has to reserve a flight, a hotel and a car rental. It must send three messages, one to each of these services ('flights', 'hotels', 'cars'), and then the responses will come back, asynchronously. When they do come back, the processing must ensure that they are not processed concurrently by separate threads, which would each try to update the 'trip' record status. In messaging, this problem is known as the 'message correlation problem'.
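For example, in the trip-booking scenario the initiator would correlate the dialogs locally like this (service and contract names are made up for the sketch):
DECLARE @flights uniqueidentifier, @hotels uniqueidentifier;

BEGIN DIALOG CONVERSATION @flights
    FROM SERVICE [TripService]
    TO SERVICE 'FlightsService'
    ON CONTRACT [BookingContract]
    WITH ENCRYPTION = OFF;

-- The second dialog joins the first dialog's conversation group, so replies
-- on either dialog are serialized by the group lock on the initiator side:
BEGIN DIALOG CONVERSATION @hotels
    FROM SERVICE [TripService]
    TO SERVICE 'HotelsService'
    ON CONTRACT [BookingContract]
    WITH RELATED_CONVERSATION = @flights, ENCRYPTION = OFF;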
However, often conversation groups are deployed in SSB solely for performance reasons: they allow larger RECEIVE results. Target endpoints can be moved together into a group by using MOVE CONVERSATION but in practice there is a much simpler trick: reverse the direction of the conversation. Have your destination start the conversations (grouped), and the source sends its 'updates' on the conversation(s) started by the destination.
Some notes:
Don't use the fire-and-forget pattern of BEGIN/SEND/END. You're making it impossible to diagnose any problem in the future; see Fire and Forget: Good for the military, but not for Service Broker conversations.
Never ever use WITH CLEANUP in production code. It is intended for administrative last-resort action like disaster recovery. If you abuse it you deny SSB any chance to properly track the message for correct retry delivery (if the message bounces on the target, for whatever reason, it will be lost forever).
SSB does not guarantee order across conversations, only within one conversation. Starting a new conversation for each INSERT event does not guarantee to preserve, on target, the order of insert operations.

Transactions within loop within stored procedure

I'm working on a procedure that will update a large number of items on a remote server, using records from a local database. Here's the pseudocode.
CREATE PROCEDURE UpdateRemoteServer
    pre-processing
    get cursor with IDs of records to be updated
    while on cursor
        process the item
No matter how much we optimize it, the routine is going to take a while, so we don't want the whole thing to be processed as a single transaction. The items are flagged after being processed, so it should be possible to pick up where we left off if the process is interrupted.
Wrapping the contents of the loop ("process the item") in a begin/commit tran does not do the trick... it seems that the whole statement
EXEC UpdateRemoteServer
is treated as a single transaction. How can I make each item process as a complete, separate transaction?
Note that I would love to run these as "non-transacted updates", but that option is only available (so far as I know) in 2008.
EXEC procedure does not create a transaction. A very simple test will show this:
create procedure usp_foo
as
begin
    select @@trancount;
end
go

exec usp_foo;
The @@trancount inside usp_foo is 0, so the EXEC statement does not start an implicit transaction. If you have a transaction started when entering UpdateRemoteServer, it means somebody started that transaction; I can't say who.
That being said, using remote servers and DTC to update items is going to perform quite badly. Is the other server also SQL Server 2005 at least? Maybe you can queue the update requests and use messaging between the local and remote server, and have the remote server perform the updates based on the info from the message. It would perform significantly better because both servers only have to deal with local transactions, and you get much better availability due to the loose coupling of queued messaging.
Updated
Cursors actually don't start transactions. Typical cursor-based batch processing groups updates into transactions of a certain size. This is fairly common for overnight jobs, as it allows for better performance (log flush throughput due to larger transaction size), and jobs can be interrupted and resumed without losing everything. A simplified version of a batch processing loop is typically like this:
create procedure usp_UpdateRemoteServer
as
begin
    declare @id int, @batch int;
    set nocount on;

    set @batch = 0;
    declare crsFoo cursor
        forward_only static read_only
        for select object_id
            from sys.objects;
    open crsFoo;

    begin transaction;
    fetch next from crsFoo into @id;
    while @@fetch_status = 0
    begin
        -- process here

        declare @transactionId bigint;
        select @transactionId = transaction_id
        from sys.dm_tran_current_transaction;
        print @transactionId;

        set @batch = @batch + 1;
        if @batch > 10
        begin
            commit;
            print @@trancount;
            set @batch = 0;
            begin transaction;
        end

        fetch next from crsFoo into @id;
    end
    commit;

    close crsFoo;
    deallocate crsFoo;
end
go
exec usp_UpdateRemoteServer;
I omitted the error handling part (begin try/begin catch) and the fancy @@fetch_status checks (static cursors actually don't need them anyway). This demo code shows that during the run there are several different transactions started (different transaction IDs). Many times batches also deploy transaction savepoints at each item processed, so they can safely skip an item that causes an exception, using a pattern similar to the one in my link; but this does not apply to distributed transactions, since savepoints and DTC don't mix.
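The per-item savepoint idea looks roughly like this inside the loop (a sketch only; dbo.ProcessItem is hypothetical, and it assumes the error did not doom the whole transaction):
SAVE TRANSACTION ItemSavepoint;
BEGIN TRY
    EXEC dbo.ProcessItem @id;               -- hypothetical per-item work
END TRY
BEGIN CATCH
    IF (XACT_STATE() = 1)
        ROLLBACK TRANSACTION ItemSavepoint; -- undo just this item, keep the batch
END CATCH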
EDIT: as pointed out by Remus below, cursors do NOT open a transaction by default; thus, this is not the answer to the question posed by the OP. I still think there are better options than a cursor, but that doesn't answer the question.
Stu
ORIGINAL ANSWER:
The specific symptom you describe is due to the fact that a cursor opens a transaction by default; therefore, no matter how you work it, you're gonna have a long-running transaction as long as you are using a cursor (unless you avoid locks altogether, which is another bad idea).
As others are pointing out, cursors SUCK. You don't need them 99.9999% of the time.
You really have two options if you want to do this at the database level with SQL Server:
Use SSIS to perform your operation; very fast, but may not be available to you in your particular flavor of SQL Server.
Because you're dealing with remote servers, and you're worried about connectivity, you may have to use a looping mechanism, so use WHILE instead and commit batches at a time, as sketched below. Although WHILE has many of the same issues as a cursor (looping still sucks in SQL), you avoid creating the outer transaction.
Stu
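A minimal sketch of that WHILE-based batching idea (table and column names are made up):
DECLARE @rows int;
SET @rows = 1;

WHILE (@rows > 0)
BEGIN
    BEGIN TRANSACTION;

    -- Flag (and push to the remote server) up to 1000 unprocessed items,
    -- then commit so no single transaction stays open for long.
    UPDATE TOP (1000) dbo.ItemsToProcess
    SET Processed = 1
    WHERE Processed = 0;

    SET @rows = @@ROWCOUNT;

    COMMIT TRANSACTION;
END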
Are you running this only from within SQL Server, or from an app? If the latter, get the list to be processed, then loop in the app to process only the subsets as required.
Then the transaction would be handled by your app, and should only lock the items being updated / the pages the items are in.
NEVER process one item at a time in a loop when you are doing transactional work. You can loop through records processing groups of them, but never ever do one record at a time. Do set-based inserts instead and your performance will change from hours to minutes or even seconds. If you are using a cursor to insert, update, or delete, and it isn't handling at least 1000 rows in each statement (not one at a time), you are doing the wrong thing. Cursors are an extremely poor practice for such things.
Just an idea:
Only process a few items when the procedure is called (e.g. only get the TOP 10 items to process).
Process those.
Hopefully, this will be the end of the transaction.
Then write a wrapper that calls the procedure as long as there is more work to do (either use a simple count(..) to see if there are items, or have the procedure return true to indicate that there is more work to do).
Don't know if this works, but maybe the idea is helpful.